The FIR Podcast Network is the premier podcast network for PR, organizational communications, marketing, and internal communications content. Each of the FIR Podcast Network's shows can be accessed individually. This is the EVERYTHING Feed, which gets you the latest episodes of every show in the network.
Josh Bernoff has just completed the largest survey yet of writers and AI – nearly 1,500 respondents across journalism, communication, publishing, and fiction.
We interviewed Josh for this podcast in early December 2025. What emerges from both the data and our conversation is not a single, simple story, but a deep divide.
Writers who actively use AI increasingly see it as a powerful productivity tool. They research faster, brainstorm more effectively, build outlines more quickly, and free themselves up to focus on the work only humans can do well – judgement, originality, voice, and storytelling. The most advanced users report not only higher output, but improvements in quality and, in many cases, higher income.
Non-users experience something very different.
For many non-users, AI feels unethical, environmentally harmful, creatively hollow, and a direct threat to their livelihoods. The emotional language used by some respondents in Josh’s survey reflects just how personal and existential these fears have become.
And yet, across both camps, there is striking agreement on key risks. Writers on all sides are concerned about hallucinations and factual errors, copyright and training data, and the growing volume of bland, generic “AI slop” that now floods digital channels.
In our conversation, Josh argues that the real story is not one of wholesale replacement, but of re-sorting. AI is not eliminating writers outright. It is separating those who adapt from those who resist – and in the process reshaping what it now means to be a trusted communicator, editor, and storyteller.
Josh Bernoff is an expert on business books and how they can propel thinkers to prominence. Books he has written or collaborated on have generated over $20 million for their authors.
More than 50 authors have endorsed Josh’s Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters, a comprehensive guide for business authors. His other books include Writing Without Bullshit: Boost Your Career by Saying What You Mean and the BusinessWeek bestseller Groundswell: Winning in a World Transformed by Social Technologies. He has contributed to 50 nonfiction book projects.
Josh’s mathematical and statistical background includes three years of study in the Ph.D. program in mathematics at MIT. As a Senior Vice President at Forrester Research, he created Technographics, a consumer survey methodology, which is still in use more than 20 years later. Josh has advised, consulted on, and written about more than 20 large-scale consumer surveys.
Josh writes and posts daily at Bernoff.com, a blog that has attracted more than 4 million views. He lives in Portland, Maine, with his wife, an artist.
Follow Josh on LinkedIn: https://www.linkedin.com/in/joshbernoff/
Shel Holtz
Hi everybody, and welcome to a For Immediate Release interview. I’m Shel Holtz.
Neville Hobson
And I’m Neville Hobson.
Shel Holtz
And we are here today with Josh Bernoff. I’ve known Josh since the early SNCR days. Josh is a prolific author, professional writer, mostly of business material. But Josh, I’m gonna ask you to share some background on yourself.
Josh Bernoff
Okay, thanks. What people need to know about me, I spent four years in the startup business and 20 years as an analyst at Forrester Research. Since that time, which was in 2015, I have been focused almost exclusively on the needs of authors, professional business authors. So I work with them as a coach, writer, ghostwriter, an editor, and basically anything they need to do to get business books published.
The other thing that’s sort of relevant in this case is that while I was at Forrester, I originated their survey methodology, which is called Technographics. And I have a statistics background, a math background, so fielding surveys and analysing them and writing reports about them is a very comfortable and familiar place for me to be. So when the opportunity arose to write about a survey of authors and AI, I said, all right, I’m in, let’s do this.
Shel Holtz
And you’ve also published your own books. I’ve read your most recent one, Build a Better Business Book.
Josh Bernoff
Mm-hmm, yes. So, this is like, the host has to prod you to promote your own stuff. Yes. So, my two most recent books: I wrote a book called Writing Without Bullshit, which is basically a manifesto for people in corporations to write better. And I wrote Build a Better Business Book, the one you talked about, which is a complete manual for everything you need to do to conceive, write, get published, and promote a business book. Yeah, so they’re both available online where your audience can find them.
Shel Holtz
Wherever books are sold. So we’re here today, Josh, to talk about that survey of writers that you conducted, asking them about their use of AI. What motivated you to undertake this survey in the first place?
Josh Bernoff
Well, I’ll just go back a tiny little bit. About two years ago, Dan Gerstein, who is the CEO of Gotham Ghostwriters and a really fantastically interesting guy, reached out to me because he knew my background of doing statistics and said, let’s do a survey of the ROI of business books: get business authors to talk about what they went through to create their business books and whether they made a profit from all the things that followed on that.
At the conclusion of that project, which people can still access at authorroi.com, it was clear that we could do a really good job together. So when he came to me and said, let’s do a survey about authors and AI, a topic I’ve been researching a lot, talking to many authors about how they use it, I said, all right, yeah, let’s actually get a definitive result here. And we were really pleased that the survey basically went viral.
We got almost 1,500 responses, way more than we did for the business author survey, because there’s a lot more writers than authors in the world. And because we got such a large response, it was possible to slice that so I can answer questions like how do technical writers feel about AI or is this different between men and women or older or younger people. And so that enabled us to do a really robust survey which people can download if they want. It’s at gothamghostwriters.com/AI-writer, available free for anyone who wants to see it.
Shel Holtz
And we’ll have that link in the show notes as well.
Josh Bernoff
Okay, great.
Neville Hobson
It’s a massive piece of work you did, Josh. I kind of went through the PDF quite closely, because it’s a topic that interests me quite a bit, and I was really quite intrigued by many of the findings that it surfaced. But I have a fundamental question right at the very beginning, because I’m a writer myself. I encountered this phrase throughout: “professional writer.” I’m not a professional writer, but I’m a writer.
And I know a lot of communicators who would say, yeah, I’m a professional writer. I don’t think it fits the definition you’re working to. So can you actually succinctly say what is a professional writer as opposed to any other kind of writer that communicators might say they are? What’s the difference?
Josh Bernoff
Yeah, there’s less there than meets the eye, and I will describe why.
So, we fielded this survey, and we basically said: if you are a writer, you can answer this survey. And we got help from all sorts of people who were willing to share it within their communities, so over 2,000 people responded. But of course, you have to disqualify people if they’re not really a writer, and the way we defined that is, we asked: do you spend at least 10 hours a week on writing and editing? For somebody who didn’t, I’m like, okay, you’re not really a writer if you don’t spend at least 10 hours a week on it.
And we also looked at how people made their living. So let’s just say you’re a product manager. You’re probably doing a lot of writing, but you wouldn’t describe yourself as a professional writer. So part of what we did was to have people answer questions about what kind of writer are you?
And we had the main categories, and we captured almost everybody in them, you know, marketing writers, nonfiction authors, ghostwriters, PR writers, and so on. And although we had not intended to do so, we got almost 300 responses from fiction authors. And we were like, okay, what are we going to do here? Because these people are very different from the people who are writing in a business context or nonfiction authors, but I don’t want to invalidate their experience.
So we basically divided up the survey, and we said, most of the responses are from people who are writing things that are intended to be true. And a small group is from people who are intentionally lying, because they’re fiction writers. So then we had an ongoing discussion about what to call the people who write things that are intended to be true. And Dan Gerstein and I eventually agreed to call them professional writers, which is not a dig on the professional fiction authors; it’s just a catchall for people who are making their living as writers and writing nonfiction.
Shel Holtz
Josh, you described in the survey report a deep attitudinal divide where users see productivity and non-users see what you called a sociopathic plagiarism machine.
Josh Bernoff
Thanks. Now, now, wait a minute. I didn’t call it that. One of the people who took the survey called it that. Yes, that was a direct quote. I mean, I just want to comment here that in the survey business, we call responses to open-ended questions verbatims, right? So these are the actual text responses. And because we surveyed writers, these are the best verbatims I’ve ever seen. This is extremely literate.
Shel Holtz
OK, that was, that was a response. Got it. Well, yeah.
Josh Bernoff
A collection of people expressing their opinion, and the sociopathic plagiarism machine came from one of those folks. Yes.
Shel Holtz
I did like that a lot. But for somebody like me, a communications director managing a team, how do you bridge that gap when half the team might be ethically opposed to the tools that the other half is enthusiastically using every day?
Josh Bernoff
You just tell the other people to go to hell. No, I’m kidding! Although it is true. So one of the most notable findings of the survey was that people who do not use AI are likely to have negative attitudes about it. So it’s not just like, you know, well, I don’t happen to drink alcohol, but it’s fine with me. No, these people are like: this is bad for the environment, it’s an evil product. There were a lot of interesting verbatims in the survey from people like that. 61% of the professional writers said that they use AI. So this is a minority of people who are not using it, and an even smaller group who are opposed to it. But they are fervently opposed to it. The people who do use it are generally getting really useful things done. A majority say that it’s making them more productive. And the people who are most advanced are doing all sorts of things with it.
By the way, this is really important to note. The thing that everyone’s sort of morally up in arms about, which is people using AI to generate text that’s intended to be read, is actually quite rare. Only 7% did that, and only 1% did it daily. Most people are doing research, or using it as a thesaurus, or using it to analyse material that they find and are citing as their own background, something like that. But to come directly at your question: it is important to acknowledge this divide in any writing organisation.
And I think that the people who are using AI need to understand that there are some serious objections and they need to address that. The people who are not using it, I think, need to understand that perhaps they should be trying this out just so that they’re not operating from a position of ignorance about what the thing can do.
And I think most importantly, the big companies that are creating AI tools need to be a lot more serious about compensating the folks who create the writing that it’s trained on. Because, putting the sociopathic plagiarism machine aside, it’s pretty bothersome when you find out that the thing has absorbed your book and is giving people advice based on it, and you got no compensation for that.
Shel Holtz
I just want to follow up on this question real quickly. Were you able to quantify the reasons among the people who don’t use it and object to it? I mean, you listed a couple, but I’m wondering if there’s any data around the percentage that are concerned about the environment, or the one that I keep reading in LinkedIn posts: it has no human experience or empathy. Which I don’t understand why that’s a requirement for, say, earnings releases or “welcome to our new sales VP,” but nevertheless.
Josh Bernoff
Yeah, I was going to say that describes a bunch of human writers too. They don’t seem to have any empathy. So, one of the questions that we asked is: how concerned are you about the following? And then we had a list of concerns. And it’s interesting that they divide pretty neatly into things that everyone is concerned about and things that the non-users are far more concerned about. So for example, the top thing that people were concerned about was, and I quote, AI-generated text can include factual errors or hallucinations. So even the people who use it are like, okay, we’ve got to be careful with this thing, because sometimes it comes up with false information.
For example, if you ask it for my bio, it will tell you that I have a bachelor’s degree in classics from Harvard University and an MBA from the Harvard Business School, and I’ve never attended Harvard. So it’s like, no, no, no, no, no, no, that’s not right!
On the other hand, there are some other things where there’s a very strong difference of opinion. So for example, the statement: AI-generated text is eroding the perception of value and expertise that experienced writers bring to a project. 92% of the non-users of AI agreed with that, but only 53% of the heaviest users of AI agreed with that. So if you use AI a lot, it’s like, well, actually, this isn’t as big of a problem as people think.
On the environmental question, 85% of non-users were concerned about its use of resources, but only 52% of the heavy users were concerned about that. And I want to point out something which I think is probably the most interesting division here. If you ask writers, should AI-generated text be labelled as such, they mostly agree that it should. But if you ask them, should text generated with the aid of AI be labelled as such, the people who use AI often think, well, you don’t need to know that I used it to do research, because it’s not visible in the output. Whereas the non-users are like, no, you used AI, you have to label it. So that’s a good example of a place where the difference of opinion is going to have to somehow get settled over time.
Neville Hobson
That’s probably one of those things that I’d say will take a while, given what you see. We talked about this recently on verification. I know some people who are very, very heavy users of AI who don’t check stuff that is output with the aid of their AI companion. That’s crazy, frankly, because, as Shel noted in our conversation on the latest episode of the FIR podcast, your reputation is the one that’s going to suffer when you get found out that you’ve done this and haven’t disclosed it.
But it also manifests itself in something else, you know: the great em-dash debate that went on for most of this year. Right. I wrote a post a couple of weeks ago about this, and about ChatGPT saying you can tell it not to use em-dashes.
And my experience is, I’ve done that and it still goes ahead and does it. It apologizes each time, and still goes ahead and does it, you know. But you know what? That post produced an incredible reaction from people: 40,000 views in a couple of days. For me, that’s a lot, frankly. And I did an analysis, which I published just a few days ago, that showed the opinions people have about it are widely divided.
Some see it as: I’m not going to give up my whole heritage of writing just because of this stupid argument. Others say: you’ve got to stop using it, because even if AI got it from us in the first place, it signals that you’re using AI, and therefore your writing is no good. That kind of discussion was going on, so I see this continuing. It’s crazy. And looking at the data highlights, there’s some really fascinating stuff in there, Josh, that caught my eye.
The headline, to start with: writers see AI as both a tool and a threat. And yes, that’s quite clear from what you’ve been saying. But also this: hallucinations concern 91% of writers. And I think that’s true no matter how experienced you are. It concerns me, which is why I’m strongly motivated to check everything, even though sometimes you think, God, just do it, don’t question it.
I reviewed something recently that had 60-plus URLs mentioned in it. So I checked them all, and 15 of them just didn’t exist: 404s or server errors. And yet the client had issued it already, without checking that kind of thing. So you’ve got a job to educate them.
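[Editor's note: the link audit Neville describes can be scripted in a few lines. Here is a minimal, illustrative sketch in Python; the function names are our own, and a real check would also want retries and parallel requests.]

```python
import urllib.request
import urllib.error


def is_broken(status):
    """A link counts as broken if the request failed outright (status is None)
    or the server answered with a 4xx/5xx error code."""
    return status is None or status >= 400


def check_links(urls, timeout=10):
    """Fetch each URL with a HEAD request and return (url, status) pairs.

    status is the final HTTP status code, or None when the request failed
    entirely (DNS error, connection refused, timeout)."""
    results = []
    for url in urls:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-checker/0.1"}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results.append((url, resp.status))
        except urllib.error.HTTPError as e:
            results.append((url, e.code))  # 404s, server errors, etc.
        except OSError:
            results.append((url, None))    # DNS failure, timeout, refused
    return results
```

A usage pass over a document's URL list would then be `[(u, s) for u, s in check_links(urls) if is_broken(s)]`, which surfaces exactly the dead links Neville found by hand.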
So I guess this is all peripheral to the question I wanted to ask you, which is that correlation that comes across in the data highlights between AI usage and positive attitudes towards it and as opposed to the negative attitudes, but the users are very highly positive.
How should we interpret this divide? I guess that’s the question. You may have touched on this already, actually. Is it just a skills gap? Is it a cultural gap? Or what is it? Because the attitudes, like much these days, seem to me to be quite polarised: strong opinions, pro and con. How do we interpret this?
Josh Bernoff
All right, so I want to go back to a few of the things that you said here. I have some advice in my book, Build a Better Business Book, and it’s generally good advice, about checking facts that you find; finding false information on the internet has always been a problem for people who are citing sources.
There used to be a guy at the Wall Street Journal called “the numbers guy,” Carl Bialik, who would actually write a column every month about some made-up statistic that got into print. All that we’ve done is to make that much more efficient. But people do need to check. And it’s interesting: you learn when you use these tools that it’s subtle. If you click and see that, OK, that is a real source, that’s fine.
But often, it will tell you that that source says X or Y, and then you go and read it and you’re like, no, it doesn’t actually say that. So yes, you are now citing a source that, when you go look at it, says the opposite of what you thought it said. Real professional writers know that checking is an important part of their job; it just happens to be easy to behave incompetently and irresponsibly now.
But believe me, I deal with professional publishers all the time and there are all these clauses now in their contracts which basically say you have to disclose when you’re using AI and if there’s false information in here then you’re responsible for it and we might not publish it. I will say this, so let’s just put this in a different context. So think about Photoshop.
Okay, when Photoshop started to become popular, people were like, wait a minute, we can’t believe what we see in pictures. Maybe the person doesn’t have skin that’s all that smooth. Maybe that background is fake. But in contexts where you’re supposed to be doing factual stuff, like a photo that’s in a magazine, there are safeguards against this, and the users have learned what is legit and what isn’t. And I think also that the readers have learned that, okay, we have to be a little skeptical about what we see. AI has made it possible to do that with text way more easily, but it’s still the case that, as a reader, you need to be skeptical, and as a user, you need to be sophisticated about what you can and can’t do and what is and is not legit.
I do these writing workshops with corporations. I’m doing one next week with a very large media company. And I’m trying to help them to understand, start with clear writing principles and use AI to support them as opposed to use it to substitute for your judgment, generate crap, and then do a disservice to the poor people who are reading it.
Shel Holtz
I am always amused when I see people expressing such angst over AI-generated images taking money from artists. I didn’t hear the same level of anxiety when CGI became the means of making animated movies. What happened to the people who inked the cels? They’re out of a job. No, Pixar got nothing but praise.
Josh Bernoff
Yeah, I know. Right. Yes, right. And it’s like, no, no, they should have actually gotten 26,000 dinosaurs in that scene. And I’m like, you were entertained, admit it, and you know that they’re not real, and that’s it…
Shel Holtz
Yeah. Josh, your data shows that thought leadership writers and PR and comms professionals are the heaviest users of AI. Thought leadership writers, 84% of them and 73% of PR and comms professionals are using AI in their writing. Journalists are somewhere around half of that at 44%.
Did you glean any insights as to why the people who are pitching the media are using this more than the people being pitched?
Josh Bernoff
I have some theories about that. What I’m about to tell you is not supported by the data, although I could go in and start digging around. There’s infinite insight in here if I do that. So I think journalists are a little paranoid about it. And the fact that, yes, 44% of the journalists said that they used it, but only 18% said that they used it every day, which is at the very bottom of all the professional writers.
So I think they are not only concerned about their livelihood, but also that they don’t wanna make a mistake. They don’t wanna get anything into print that’s false. Whereas if you look at the thought leadership writers and the PR and comms professionals, it’s a simple question of volume. These people are under pressure to produce a very large amount of information.
And I can tell you as a professional writer that there are certain tasks that you really would rather not spend time on if an AI can do them. So if you’re gathering up a bunch of background information, and Perplexity does a better job on contextual searches than Google, which it absolutely does, then you’re probably going to use it.
Now, there is the risk that these people are basically generating large quantities of crap and then sharing it. But I think that that rapidly becomes unproductive. If you’re basically spamming people with AI slop, then they will immediately become sort of immune to that, and then you lose trust and at that point you’ve destroyed your own livelihood.
Neville Hobson
Yeah, absolutely. I want to ask you about one of the other findings you had in here: ChatGPT is the clear leader amongst all writers, with 76% using it weekly. I use ChatGPT more than any other tool. I’m very happy with it; it does what I want. But in light of how fast things move in this industry, how things change, how do you see that shifting? Or does it not actually matter at the end of the day which tool you use, as long as it delivers what you want from it?
Josh Bernoff
Well, what you have here is people spending hundreds of millions of dollars to become the default choice, the sort of dominant company here. And if you look at past battles of this kind to be like, who is the top browser or what’s the top mobile operating system, this is a land grab.
If you sit out and wait and see what happens, you could very easily end up on the sidelines, which is why there’s so much money flooding into this. ChatGPT definitely has an early lead, but there was an article in the Wall Street Journal yesterday, I believe, about the fact that they’re very concerned about Google. And the reason is: on a sort of features and capability basis, is Google better?
It depends on what day it is; they keep making advances. But it does integrate with people’s basic use of Google in other ways, for example, the use of Google in email. And wait a minute, haven’t we heard this story before? Where a company that has a dominant position in one area attempts to leverage it in another area? Gee, that’s like the whole story of the tech industry for the last 30 years!
The same is true of Microsoft: my daughter works at a company that uses Microsoft products, which is very common. And so everybody in that company is using Microsoft Copilot, because they got it for free. If you ask me who is going to have the top market share in 18 months, I have no clue, but I don’t think that ChatGPT is necessarily in a position to say, ours is clearly better than everybody else’s, and so everyone will use what we have.
I will point out, I’m trying to remember if I have the number on this, but the average person who is using these tools in a sophisticated way is typically using at least three or four different tools. So just like you might use Perplexity for one web search and Google for another, you might decide to use Microsoft Copilot in some situations and Google Gemini in others.
Neville Hobson
It’s interesting, that, because I started using Copilot recently through a change in how I’m doing something for one particular area of work I’m interested in. And it blew me away, because Copilot is using GPT-5. And I sense the output I get from the input I give it is in a similar style to what ChatGPT would write.
So I’m impressed with that, and I haven’t attached any further significance to it. Maybe it’s coincidental, but I quite like that. So that’s actually getting me more accustomed to Microsoft’s product. These little things: maybe this is how it’s all going to work in the end.
Josh Bernoff
Yeah, I will point out that professional writers I talk to are very enamoured of Claude as far as the creation of text. And definitely, if you’re doing a web search, Perplexity has got some pretty superior features for that. I find myself often telling ChatGPT, don’t show me anything unless you can provide a link, because I’m not going to trust you until you do that. And I’m going to check that link and see what it really says.
So, you know, the development of specialised tools for specialised purposes is absolutely going to continue here.
Shel Holtz
Yeah, I’ve been using Gemini almost exclusively since 3.0 dropped. I find it’s just exponentially better, but I’m sure that when ChatGPT releases their next model, I’ll be back to that. In the meantime, I did see Chris Penn commenting, I think it was just yesterday on that Wall Street Journal article pointing out that it’s baked into Google Docs and Google Sheets and all the Google products, whereas OpenAI doesn’t have any products to bake it into.
And that’s a clear advantage to Google. But Josh, you revealed in the research that 82% of non-users worry that AI is contributing to bland and boring writing. What I found interesting was that 63% of advanced users felt the same way, that it’s creating this AI slop.
So as a counsellor to writers, how would you counsel people? Our audience is organisational communicators, so I’ll say: how would you counsel organisational communicators, when cutting through the noise is vital and you need to get your audience’s attention? I deal mostly with employee communication, and we need employees to pay attention to this message, despite the fact that there are so many competing things out there clamouring for their attention. How do you avoid the trap of this bland and boring writing when you’re so desperate to cut through that clutter and capture that attention?
Josh Bernoff
Yes, well, large language models create bad writing far more efficiently than any tool we’ve ever had before. And of course, I’m talking to both corporate writers and professional authors all the time about this. So basically, the general advice is: the more you can use this for things behind the scenes, the better off you are, and the more you use it to actually generate text that people read, the worse off you are.
I’m gonna give you a very clear example. So I am currently collaborating with a co-writer on a book about startups for a brilliant, brilliant author who really knows everything about startups, has an enormous background on it. And he has insisted that I use AI for all sorts of tasks. In fact, he’s like, you know, why are you wasting your time when you could just send this thing off and tell it to do the research? And we’ve done some spectacular things like I had a list of startups and I told it to go out on the internet and get me a simple statement about who they are, what financing stage they’re in, what category they’re in.
And it goes off and it does that. That would have taken me days. But because this guy is intelligent, there’s a reason he’s hired me and not replaced me with AI: once it’s time to actually create something that’s going to be read by people, we have to rewrite it from beginning to end. As a professional writer, that is how I make a living. And what I write is the complete opposite of bland and boring. And he doesn’t want bland and boring. He wants punchy and surprising and insightful.
So, you know, you can say both: use AI for all of this other stuff, and don’t you dare publish anything that it creates. And I feel like that is generally the right advice, and that everybody is going to end up where I have ended up, which is: even in a corporate environment, it can support you, but you’re not using it to generate text that people are going to actually read.
Neville Hobson
It’s a really good point you’ve made there, I think, because of one of the findings in the survey report: AI-powered writers are sure they’re more productive. And I definitely sit in that category. I’m absolutely convinced I’m probably in that, what is it, 92% or whatever it is of the advanced users who think so. But how do I prove it?
Well, it’s not so much the output, it’s the quality. It kind of tunes your mind into some of the reports that you read or what others are saying elsewhere: use AI tools to support you in doing the stuff that AI is better at than humans. Structured or unstructured data, whatever it is, finding patterns, all that stuff that we can all read about. And you do the intellectual stuff, the stuff humans are really good at.
Josh Bernoff
Absolutely.
Neville Hobson
And crafting phrases and sentences that sound great. And I’ve said to lots of people, I don’t see too many people doing that, so they’re obviously not at the advanced stage, let’s say. I find it hard to believe, frankly, really I do, in conversations I’ve had during this year with those who diss this, who say, like some of your respondents have said, you know, it’s the, what is it, psychotic plagiarism machine or whatever it was, the stuff…
Josh Bernoff
Sociopathic, but yes.
Shel Holtz
Both things can be true.
Neville Hobson
…sorry, sociopathic. But it amazes me, it truly does. And I think we’ve got this situation where clearly there is evidence that if you use this in an effective way, it will help you be productive.
It will augment your own intelligence, to use a favourite phrase of mine. So AI is augmenting intelligence, not artificial. And yet that still encounters brick walls and pushbacks on a scale that’s ridiculous. Worse in an organization when that’s at a leadership level, I would say.
So how do we kind of make this less of a threat as it’s seen by others, or is this part of the issue that those naysayers just see all this as a massive threat?
Josh Bernoff
Well, boy, that’s a deep question. So first of all, I always start with the data here, because I want to distinguish between my opinions and the data. And the data says that the more you use AI, the more likely you are to say that it is making you more productive. And as you said, 92% of the advanced users said that it made them more productive. And interestingly, 59% of the advanced users said that it actually made the quality of their writing better.
So it’s not just producing more, but producing better stuff. And one more statistic here. We actually asked them how much more productive. The average across all the writers who use it is 37% more productive. But like any tool, you need to get adept at it and learn what it’s good at and what you can use it for. And this technology has advanced way, way ahead of the learning about how to use it.
So there has to be basically a movement in every company and all writing organizations to teach people the best way to take advantage of it and what not to do. And in fact, one of the things that I recommended, and that I tell some of the corporate clients I work with, is find the people who are really good at this and then have them train the other people.
Because there’s nothing better than somebody saying, okay, here, let me show you what I can do with this.
I’ll just give you an example. So this report itself, obviously people are saying, well, did you use AI to write the report? I started out trying to use AI to analyse the data and I found that it was not dependable. I’m like, okay, I’m gonna have to calculate these statistics the old-fashioned way with spreadsheets and data tools. Every single word of the report was written by a human, me, at least most people still think I’m a human.
But we had, you know, thousands of verbatims to go through. And the person to whom I delegated the task of finding the most interesting verbatims used AI to go in and find verbatims that were interesting: some positive ones, negative ones, you know, some diversity in terms of who they were from, so we weren’t quoting all technical writers. And that’s a perfect use, to go into a huge corpus of text and pull some of the interesting things out of there, because that would have taken days.
I can’t help mentioning here because in preparation for doing this report, I interviewed some of the most advanced writers that I knew, including Shel. And one of my favourite examples is a very intelligent woman who, Shel, I know you know, is completing her doctoral degree right now. And she told me that the review of existing research is an enormous element of this, and that using AI to help summarise and compare the existing research would save her three years in the completion of her doctoral degree.
You cannot walk away from that level of productivity. And she’s full of enormously creative ideas. So this is not a bad writer. This is an excellent writer. But what she’s doing is she’s saying, I had this brilliant idea. Hey, is there anything in the literature that’s similar to this? Oh, wait a minute, these people came up with the same thing, so I can’t claim the authorship. Or it went across all the research and nobody else is saying that. Great, this is an original thing I can include. That’s a smart way to use it.
Shel Holtz
Yeah, just this past week, I interviewed our new safety director, who just came on board. I used Otter AI to record the interview. I like that because I’m able to focus on the interview subject rather than scribble notes. And what I did was upload the transcript of the interview that I downloaded from Otter into Gemini, because the interview led to a lot of digressions and a lot of personal back and forth that interrupted the substance of what we were trying to get to.
So I just said, clean up this transcript, get rid of everything that doesn’t have to do with his coming on board at our company as the new safety director, his background and all of that, and then categorise it. But don’t change any of his words, right? I want the transcript to be exact. And it did exactly what I asked it to do.
For me to take that transcript… well, first of all, for me to take all those notes and then put it in some sort of usable form before I even start writing the article would have taken a considerable amount of time. And yet it didn’t mess at all with what he was telling me in response to my questions. And I was able to use that to produce an article that I wrote.
One of my favourite uses though, as a writer, is when there’s a turn of phrase that I want to use and I can’t quite draw it out. I know what it is. It’s right there. So I’ll share what I’m writing about. And this is what I’m trying to say. And there’s a turn of phrase I’m thinking of. What is it? And it’ll say, well, it might be one of these. And almost always from the list it gives me, that’s the one I was thinking of.
Josh Bernoff
This is a way better thesaurus than anything else I’ve ever used. And at the age that we’re at, sometimes you know there’s a word and you can’t bring it to mind. I’m like, yeah, that was the word I was looking for.
Shel Holtz
Yeah. Josh, you found that 40% of freelancers and agencies say that AI has eaten into their income. If you were advising, say, a boutique PR agency today on how to survive in 2026, what’s the one pivot that you would advise them that they need to make based on this data?
Josh Bernoff
I think you need to focus on talent that has two skills. One is, clear and interesting writing skills are even more valuable than they used to be. So, you know, if you say, well, who are the best writers in our organisation, do everything you can to hang on to those people, because you’re going to need that to continue to stand apart from the AI slop.
And then the other side of that is to become as efficient as possible with AI for the rote tasks. So you also want people who are really skilled at using these tools to conduct research tasks. I interviewed a woman at the Gathering of the Ghosts, which is the event where this research was first presented. She matches up ghostwriters to author clients, and she gets a background briefing on every single person that she goes and pitches. And it’s really good at that.
So when she gets on the phone with these people, they’re like, wow, she’s really smart. She did a whole lot of homework here, and this is the kind of person I want to work with. Okay, it has nothing to do with her writing ability. It has to do with her ability to take advantage of these tools. And yeah, I think that we’re going to be able to get more done with fewer people, which has been the tale all this time, really. That’s just the direction that things go with automation.
But I can’t resist pointing out the flip side here. I think a bunch of people, including publishers, are now delegating work to AI and laying people off, and it’s doing a bad job. I ghostwrote a book recently where the copy editing came back and I was like, this is inadequate. This is a terrible job. This was obviously done by a machine, and done badly by a machine.
And my client and I decided that in order to avoid errors, we would hire our own professional copy editor because the publisher had skimped in exactly the wrong place. And the professional copy editor did a fantastic job. It cost a bunch of money, but we were much happier with that.
Neville Hobson
To continue this theme slightly, I think I had a question, which I think Shel answered part of, about the page in the report with the headline, nearly half of writers have seen AI kill a friend’s job. I found that interesting because there’s constant talk in some of the mainstream media, and some of the professional journals too: is AI going to replace jobs? One report comes out and before you know it, the headline says yes, it is. Another report comes out saying no, it’s not.
But these are intriguing, I found, because they’re actual real-world examples you’ve got from people who answered the questions you asked them in the survey. It says only 10% of corporate workers have had AI-driven layoffs at their organization, but 43% of writing professionals know someone who has lost their job to AI. So is this a trend that’ll continue this way, do you think? How would you interpret the overall picture that you’ve shown on this particular page, page 20 in the report?
Josh Bernoff
Okay. Yes. So it was interesting. We expected to hear a lot more direct responses of, yes, they’ve done layoffs at my work as a result of this. And the fact that only 10% of the people who worked in corporations, which includes media companies, said that they had seen this was an indication to me that, at least at the time we did this survey in August and September, that was not a huge trend.
The fact that a lot of people know somebody who lost their job, you know, if one person loses their job and they have 12 friends, then we’re gonna get 12 positives on that. But that having been said, I’m not convinced that even if we did this survey now, which is what, like four months later, that we would get the same results.
It’s clear to me that there’s a lot of layoffs happening that a significant amount of it is AI stimulated. A certain amount of that is coders, for example. They need fewer coders to do the same programming now. My daughter got a computer science degree a few years ago because it was like everyone knew that that was how you got a job and you know, it’s not so easy right now.
I think that we’re going to see two things. First of all, we’re going to see this trend of people being laid off because AI increases productivity across the entire employment spectrum. It’s a huge trend that’s likely to happen. But I also think that you’re going to find companies backtracking and saying, oh my God, we thought we could have all this productivity, but it turns out that we need more humans here than I realised, and we need to go back and bring them back.
I feel that it is driven a certain amount by investment mania to cut back expenses, and that in the end, as in so many cases, when you replace people with automation, you end up with a poor quality result.
Shel Holtz
I want to talk about fiction authors for a minute. And I find it intriguing that they are so universally anti-AI. Neville and I are both friends with JD Lasica. I don’t know if you know JD. He’s got a product out there called Authors AI. It’s a model that he and his partners have trained. It’s not using ChatGPT or Gemini or any of the large frontier models.
But what you do is you feed your novel to it, presumably in a first draft, and it analyses the novel against all of the criteria it has been trained on about what makes a good novel, and gives you a report: you need to do a better job of character development here, the story arc is weak here, things like that. So, I mean, there are uses for fiction writers beyond actually writing for you, but you did note that they almost universally detest it. Only 42% use it and they are…
Josh Bernoff
No, no, no, no. Let’s be clear here. It was the non-users among the fiction authors who almost universally detested it.
Shel Holtz
Okay, I misread that. Emphatically angry was the language that jumped out at me. I’m wondering for those of us in business writing, is there a lesson we should take away from fiction writers about the preservation of the soul of a narrative?
Josh Bernoff
No, it’s interesting to me. So I’ve been conducting surveys now for probably 20 years. And one of the main things that you learn is that it’s never black and white. There’s never a hundred percent of the people that agree with anything. There’s never 0% of the people that agree with anything. Until this survey, when I found that fiction authors who do not use AI are as close as you can get to unanimous about it being a horrible, evil thing.
So yes, I was like, 100% of the people agreed with this? I’ve never seen that in my entire career of analysing surveys. But to give you a little bit more thoughtful answer than no: soulless fiction is boring and nobody wants to read it. And that happens to also be true of soulless nonfiction writing.
So let’s just take this report. If I used AI to generate the text in this report, you wouldn’t be talking to me because I found the most interesting things in the most interesting language to describe it. And the same applies if you’re writing about, you know, should we adopt a new project management methodology?
That’s a story, you know? We have this problem. This solution was suggested to us. We compared this to that. It looks like this is going to save money, but here are the things that I’m really worried about. This is an emotional story. And really, all nonfiction writing needs to have a story element to it. So until AI becomes a little bit less soulless, which may never happen, you still need humans to tell those stories.
Neville Hobson
Yeah, I agree with that. So before we get to that question of what question should we have asked you, I’m looking at page 28, what these findings mean for the writing profession. And it’s really well done, this, Josh; you succinctly condensed it all. But to avoid me trying to interpret what you said, can you give us a summary of what these findings do mean for the writing profession?
Josh Bernoff
Well, thank you.
You know, it’s interesting, Neville. There was always a section like that at the end of my reports at Forrester Research, because that’s what they were paid for. And in this case, I said, no, I’m just going to do the data. And my partner here, the people at Gotham Ghostwriters, Dan, was like, why don’t you write something about what this means for the industry? I’m like, I can do that. Good idea! Okay.
So I wrote this and I think that in corporate environments, it is important now to understand what this is good for and to take the people who’ve become advanced at it and use them to help train other folks. And it’s especially challenging, I think, in media organisations because on the one hand, they are under enormous pressure, profit pressure.
You know, think about a newspaper or magazine or publisher. It’s very difficult for them to be profitable, highly competitive environment. If they can cut costs, they’re gonna try and find a way to do it. On the other hand, it is exactly their content that’s getting hoovered up and ripped off.
So they need to have a balance here. I think on a political basis, they need to lobby and basically do everything possible to preserve the value of their content and not have it be used for training purposes without any compensation. But I also think they have to be very prudent in what kinds of things they give AI to do and what they don’t. Just like the people at that publisher who used the AI copy editing that did a terrible job: if they economise in the wrong places, it’s gonna be a very bad scene.
I can’t help but drop this in here. I learned recently about a romance bookstore, a bookstore that sells romances, a physical bookstore. And they’re using AI to analyse trends, figure out which books to stock and how to organise them and what to put into their marketing. And I just thought that was fascinating because the content is as human and emotional as you can be, and yet they figured out a way to use AI to be successful.
Shel Holtz
That’s really interesting. So let’s ask you that question now, Josh. I mean, we could spend another hour here, but what question didn’t we ask that you were hoping we would?
Josh Bernoff
I think that the most interesting finding here, and there were so many fascinating findings, so that’s saying something, was in the questions that we asked about what tasks you do with AI. And what really amazed me was the huge variety of tasks. So I wasn’t surprised that research was popular, but I’m looking over to the side here just to make sure I get the information exactly accurate.
I wasn’t surprised that a replacement for web search, and finding words or phrases like a thesaurus, was something that people wanted. But I was surprised by how many people use AI as a brainstorming companion: they’re actually asking questions like, can I write it this way or that way? What suggestions do you have? And getting great ideas back on that. Summarising articles is very popular, but, you know, generating outlines, finding flaws and inconsistencies, acting as a devil’s advocate, deep research reports. I mean, the people who get good at this keep coming up with new ways to use it.
So I think that if you look at what’s happening in the future, all this debate about AI-generated slop getting published is much less interesting to me than the capability that this has to make writers more powerful, smarter, more interesting, come up with more ideas, and to basically be an infinitely patient assistant that can get you to be the best writer you can possibly be.
Shel Holtz
Yeah, that devil’s advocate is one of the very first things I used it for when ChatGPT was first introduced. I would say I’m planning on communicating this this way. The goal, the objective is to get employees to think, believe, do X. What pushback am I going to get from this approach? And nine times out of 10, it would come up with a very valid list of reasons that this isn’t going to work. It would lead me to re-strategise.
Josh Bernoff
Well, Shel, as you know, you can contact me anytime if you need someone to tell you that you’re wrong! But I’m not available at three in the morning, and ChatGPT is, so from that perspective, it’s probably better. Plus my rates are much higher than theirs.
Shel Holtz
Josh, how can our listeners find you?
Josh Bernoff
Well, the most interesting thing is to subscribe to my blog at bernoff.com. I actually write a blog post about books, writing, publishing, and authoring every weekday. People say, why do you do that? The only good answer I have is it’s a mental illness, but you may as well take advantage of it. And we shared the URL for this research report. And certainly anyone who’s interested in writing a business book, just do a search on Build a Better Business Book and you can get access to that.
And certainly if someone is so desperate that they really want a human to help them, I am available for that.
Shel Holtz
Thanks so much, Josh. We really appreciate your time.
Josh Bernoff
Okay, it was really great to talk to you.
Neville Hobson
Yeah, a pleasure, likewise, thank you.
The post AI and the Writing Profession with Josh Bernoff appeared first on FIR Podcast Network.
Big Four consulting firm Deloitte submitted two costly reports to two governments on opposite sides of the globe, each containing fake sources generated by AI. Deloitte isn’t alone. A study published on the website of the U.S. Centers for Disease Control (CDC) not only included AI-hallucinated citations but also purported to reach the exact opposite conclusion from the real scientists’ research. In this short midweek episode, Neville and Shel reiterate the importance of a competent human in the loop to verify every fact produced in any output that leverages generative AI.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 491. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and I want to return to a theme we addressed some time ago: the need for organizations, and in particular communication functions, to add professional fact verification to their workflows—even if it means hiring somebody specifically to fill that role. We’ve spent the better part of three years extolling the transformative power of generative AI. We know it can streamline workflows, spark creativity, and summarize mountains of data.
But if recent events have taught us anything, it’s that this technology has a dangerous alter ego. For all that AI can do that we value, it is also a very confident liar. When communications professionals, consultants, and government officials hand over the reins to AI without checking its work, the result is embarrassing, sure, but it’s also a direct hit to credibility and, increasingly, the bottom line.
Nowhere is this clearer than in the recent stumbles by one of the world’s most prestigious consulting firms. The Big Four accounting firms are often held up as the gold standard for diligence. Yet just a few days ago, news broke that Deloitte Canada delivered a report to the government of Newfoundland and Labrador that was riddled with errors that are characteristic of generative AI. This report, a massive 526-page document advising on the province’s healthcare system, came with a price tag of nearly $1.6 million. It was meant to guide critical decisions on virtual care and nurse retention during a staffing crisis.
But when an investigation by The Independent, a progressive news outlet in the province, dug into the footnotes, the veneer of expertise crumbled. The report contained false citations pulled from made-up academic papers. It cited real researchers as authors of papers they hadn’t worked on. It even listed fictional papers co-authored by researchers who said they had never actually worked together. One adjunct professor, Gail Tomlin Murphy, found herself cited in a paper that doesn’t exist. Her assessment was blunt: “It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work.” Deloitte’s response was to claim that AI wasn’t used to write the report, but was—and this is a quote—”selectively used to support a small number of research citations.” In other words, they let AI do the fact-checking and the AI failed.
Amazingly, Deloitte was caught doing something just like this earlier in a government audit for the Australian government. Only months before the Canadian revelation, Deloitte Australia had to issue a humiliating correction to a report on welfare compliance. That report cited court cases that didn’t exist and contained quotes from a federal court judge that had never been spoken. In that instance, Deloitte admitted to using the Azure OpenAI tool to help draft the report. The firm agreed to refund the Australian government nearly $290,000 Australian dollars.
This isn’t an isolated incident of a junior copywriter using ChatGPT to phone in a blog post. This is a pattern involving a major consultancy submitting government audits in two different hemispheres. The lesson is pretty stark: The logo on your letterhead isn’t going to protect you if the content is fiction. In fact, this could have long-term repercussions for the Deloitte brand.
But it doesn’t stop at consulting firms. Here in the US, we’ve seen similar failures in the public sector. There’s one from the Make America Healthy Again (MAHA) commission. They added a report with non-existent study citations to a presentation on the CDC website—that’s the Centers for Disease Control—citing a fake autism study that contradicted the real scientists’ actual findings.
The common thread here is a fundamental misunderstanding of the tool. For years, the mantra in our industry was a parroting of the old Ronald Reagan line: “Trust but verify.” When it comes to AI though, we just need to drop that “trust” part. It’s just verify. We have to remember that large language models are designed to predict the next plausible word, not to retrieve facts. When Deloitte’s AI invented a research paper or a court case, it wasn’t malfunctioning. It was doing exactly what it was trained to do: tell a convincing story.
And that brings us to the concept of the human in the loop. This phrase gets thrown around a lot in policy documents as a safety net, but these cases prove that having a human involved isn’t enough. You need a competent human in the loop. Deloitte’s Canadian report undoubtedly went through internal reviews. The Australian report surely passed across several desks. The failure here wasn’t just technological, it was a failure of human diligence. If you’re using AI to write content that relies on facts, data, or citations, you can’t simply be an editor. You must be a fact-checker.
Deloitte didn’t just lose money on refunds or potential reputational hits; they lost the presumption of competence. For those of us in PR and corporate communications, we’re the guardians of our organization’s truth. If we allow AI-generated confabulations to slip into our press releases, earnings statements, annual reports, or white papers, we erode the very foundation of our profession. Communicators need to update their AI policies. Make it explicit that no AI-generated fact, quote, or citation can be published without primary source verification. And you need to make sure that you have the human resources to achieve that. The cost of skipping that step, trust me, is a lot higher than a subscription to ChatGPT.
Neville Hobson: It’s quite a story, isn’t it really? I think you kind of get exasperated when we talk about something like this, because we’ve talked about this quite a bit. Most recently, in our interview with Josh Bernoff—which will be coming in the next day or so—where this very topic came up in discussion: fact-checking versus not doing the verification.
I suppose you could cut through all the preamble about the technology and all this stuff, and the issue isn’t that; it’s the humans involved. Now, we don’t know more than the Fortune article, I’ve seen the one in Entrepreneur magazine, and the link that you shared. Nowhere does it disclose detail about exactly what it was other than the citation. So we don’t know, was it prompted badly or what? Either way, someone didn’t check something. I don’t know how much you need to really hammer home the point that if you don’t verify what the AI assistant has responded to or the output to your input, then you’re just asking for this kind of trouble.
I did something just this morning, funnily enough, when I was doing some research. The question I asked came back with three comments linking to the sources. A bit like Josh—because Josh mentioned this in our interview—every instruction to your AI goes: “Do not come back with anything unless you’ve got a source.” And so I checked the sources, one of which just did not exist. The document concerned on the website of a reputable media company wasn’t there. Now, it could be that someone had moved it, or it did exist but it was in another location. But the trouble is, when these things happen, you tend to fall on the side of, “Look, they didn’t do this properly.”
So I’m not sure what I can add to the story, Shel, frankly. Your remarks towards the end about your reputation is the one that’s going to get hit. You look stupid. You really do. And your credibility suffers.
I found in Entrepreneur they quoted a Deloitte spokesperson saying, “Deloitte Canada firmly stands behind the recommendations put forward in our report.” Excuse me? Where’s your little humility there? Because you’ve been caught out doing something here. And they’re saying, “We’re revising it to make a small number of citation corrections which do not impact the report finding.” What arrogance they are displaying there. Not anything about an apology—or fine, let’s say they don’t need an apology—but a more credible explainer that at least gives the sense that they empathize here, rather than this arrogant, “Well, we stand by it.” It’s just a little citation? It’s actually a big deal when you quote something that either doesn’t exist or is a fake document. So I don’t know what I can say to add anything more. But if they keep doing this, they’re going to lose business big time, I would say.
Shel Holtz: It didn’t exist. Yeah, I understand their desire to stand by the report. I have no doubt that they had valid information and made valid recommendations, but that’s hardly the point. The inaccuracies call all of the report into question, even if at the end of the day they can demonstrate that they used appropriate protocols and methodologies to develop their recommendations based on accurate information.
You still have this lingering question: “Well, you got this wrong, what else did you get wrong? What else did you turn over to AI that you’re not telling us about because you didn’t get caught?” Even if they didn’t do any of that, those questions are there from the people who are the ones who paid for this report. If I were representing a government that needed this kind of work, first of all, I would be hesitant to reach out to Deloitte. I would be looking at one of their competitors.
If I had a long-standing relationship with Deloitte, and even if I had a high degree of trust with Deloitte, I would still add a rider to a contract that says either you will not use AI in the creation of this report, or if you do, you will verify each citation and you will refund us X dollars—the cost of this report—for each inaccurate, invalid verification that you submit. I’d want to cover my ass if I were a client based on having done this not once, but twice.
Neville Hobson: Right. I wonder what would have happened if the spokesman at Deloitte Canada had said something like, “You’re absolutely right. We’re sorry. We screwed up big time there. We made a mistake. Here’s what happened. We’ve identified where the fault lay, it’s ours, and we’re sorry. And we’re going to make sure this doesn’t happen again.”
Shel Holtz: “Here’s how we’re going to make sure it doesn’t happen again.” Yeah, I mean, this is like any crisis. You want to tell people what you’re going to do to make sure it doesn’t happen again.
Neville Hobson: Yeah, exactly. So they say—and you mentioned—”AI was not used to write the report, it was selectively used to support a small number of research citations.” What does that mean, for God’s sake? That’s kind of corporate bullshit talk, frankly. So they use the AI to check the research citations? Well, they didn’t, did they? “Selectively used to support a small number of research citations…” I don’t know what that even means.
So I don’t think they’ve done themselves any favors with the way they’ve denied this and the way their reporting has spread out into a variety of other media, all basically saying the same thing: They did this work for this client and it was bad. Didn’t do a good job at all.
Shel Holtz: Yeah. So, I’m, as you know, finishing up work on a book on internal communications. It was originally 28 blog posts and I started this back in, I think, 2015. So a lot of the case studies have gotten old. So I did some research on new case studies and I used AI to find the case studies. And then I said, “Okay, now I need you to give me the links to sources that I can cite in the end notes of each chapter that verify this information.”
In a number of cases, it took me to 404s on legitimate websites—Inc, Fortune, Forbes, and the like. But the story wasn’t there and a search for it didn’t produce it. And I would have to go back and say, “Okay, that link didn’t work. Show me some that are verified.” And sometimes it took two, three, four shots before I got to one where I look and say, “It’s a credible source, it’s a national or global business publication or the Financial Times or what have you, the article is here and the article validates what was in the case study,” and that’s the one I would use. But it takes time, and I think any organization that doesn’t have somebody doing that runs the risk of the credibility hit that Deloitte’s facing.
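The verification loop Shel describes, rejecting any AI-supplied citation whose link is dead or whose page doesn’t actually support the claim, amounts to a small script. Here is a minimal sketch of that gate; it is illustrative only, not a tool either host uses, and the `fetch` function and field names are assumptions (the fetcher is injected so it can be stubbed for testing or backed by a real HTTP client):

```python
def verify_citations(citations, fetch):
    """Split citations into (verified, rejected) lists.

    Each citation is a dict like {"claim": ..., "url": ...}.
    fetch(url) returns the page text, or raises on a dead link
    (a 404, a moved page, or a fabricated URL).
    """
    verified, rejected = [], []
    for cite in citations:
        try:
            page = fetch(cite["url"])
        except Exception:
            rejected.append(cite)   # dead link: treat as unverified
            continue
        # Crude relevance check: every substantial word of the claim
        # must appear somewhere on the cited page.
        terms = [w.lower() for w in cite["claim"].split() if len(w) > 4]
        if all(t in page.lower() for t in terms):
            verified.append(cite)
        else:
            rejected.append(cite)   # page exists but doesn't support the claim
    return verified, rejected
```

In practice the relevance check would need to be far more careful than keyword matching, but even this crude gate catches the fabricated-URL failures at issue here: a made-up citation never survives the fetch step.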
Neville Hobson: Yeah, I mean, this story is probably not going to make front-page headlines everywhere. But it hasn’t died yet, either. Maybe there’ll be more in professional journals later on. But I wonder what they’re planning next, because the criticisms aren’t going away, it seems to me.
Shel Holtz: No, and as the report noted, it’s not just the Deloittes of the world. It’s Robert F. Kennedy’s Department of Health and Human Services justifying their advisory board’s decisions to rewrite the rules on vaccinations based on citations that not only don’t exist, but that contradict the actual research that the scientists produced.
Neville Hobson: Well, there is a difference there though. That’s run by crazy people. I mean, Deloitte’s not run by crazy people.
Shel Holtz: Not as far as I know. That’s true. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #491: Deloitte’s AI Verification Failures appeared first on FIR Podcast Network.
In this episode, Chip and Gini discuss the complexities of hiring in growing agencies. They highlight the challenges of finding skilled, reliable employees who align with agency values.
Sharing personal experiences, Gini explains the pitfalls of hasty hiring and the benefits of thorough vetting and cultural fit. They stress the importance of a structured hiring process, including clear job roles, career paths, and appropriate compensation. They also underscore the value of meaningful interviews, proper candidate evaluations, and treating the hiring process as the start of a long-term relationship.
Lastly, Chip and Gini emphasize learning from past mistakes to improve hiring effectiveness and employee retention. [read the transcript]
The post ALP 290: Balancing skills and personality when hiring a new team member appeared first on FIR Podcast Network.
The communication profession stands at a pivotal moment. Artificial intelligence is transforming how we create and distribute content. Trust in institutions continues to erode while employees demand authenticity and transparency. The hybrid workplace has permanently altered how we reach our audiences. And the pace of change shows no signs of slowing.
In this environment, what does it mean to be a communication professional? More specifically, what will it mean in 2026 and the years that follow?
The December Circle of Fellows panel will tackle these questions head-on, bringing together four IABC Fellows to share their perspectives on where our profession is headed and what opportunities await those prepared to seize them.
The conversation will explore several interconnected themes. We’ll examine the evolving role of the communication professional as a trusted advisor; the new capabilities and mindsets that will distinguish the communication leaders who thrive from those who struggle to keep pace; the practical challenge of coaching executives to communicate with empathy and impact; the skills the next generation of communicators should be developing now; and how we can maintain professional standards and ethical practice when the tools and channels keep shifting beneath our feet.
The session is scheduled for 5 p.m. ET on Thursday, December 18. You’ll be able to participate in the conversation with questions, comments, experiences, and observations. If you’re unable to join the live discussion, you can catch the video replay or listen to the audio podcast afterward.
About the panel:
Zora Artis, GAICD, SCMP, ACC, FAMI, CPM, is CEO of Artis Advisory and co-founder of The Alignment People. She helps leaders and teams tackle tough challenges, find clarity, and take action, particularly when the stakes are high and the path isn’t obvious. Her superpower is being comfortable with the uncomfortable: aligning people, solving problems, and navigating change so leaders can focus on what matters most and teams can do their best work.
With more than three decades of experience across consulting, executive leadership, and strategic communication, Zora has guided major brands, government, for-purpose and for-profit organisations in aligning purpose, culture, strategy, and performance. A leading thinker, researcher, and expert in strategic and team alignment, leadership, brand, and communication, she is co-authoring a global study on Strategic Alignment & Leadership. She is a Research Fellow with the Team Flow Institute.
Zora has served as Chair of the IABC Asia Pacific region, as a Director on the IABC International Executive Board, and on multiple committees and task forces. She holds multiple IABC Gold Quill Awards and Chairs the IABC SIG Change Management. Based in Melbourne, she works globally.
Bonnie Caver, SCMP, is the Founder and CEO of Reputation Lighthouse, a global change management and reputation consultancy with offices in Denver, Colorado, and Austin, Texas. The firm, which is 20 years old, focuses on leading companies to create, accelerate, and protect their corporate value. She has achieved the highest professional certification for a communication professional, the Strategic Communication Management Professional (SCMP), a distinction at the ANSI/ISO level. She is also a certified strategic change management professional (Kellogg School of Management) and a certified crisis manager (Institute of Crisis Management), and she holds an advanced certification for reputation through the Reputation Institute (now the RepTrak Company). She is a past chair of the global executive board for the International Association of Business Communicators (IABC). She currently serves on the board of directors for the Global Alliance for Public Relations and Communication Management, where she leads the North American Regional Council and is the New Technology Responsibility/AI Director. Caver is the Vice Chair for the Global Communication Certification Council (GCCC) and leads the IABC Change Management Special Interest Group, which has more than 1,300 members. In addition, she is heavily involved in the global conversation around ethical and responsible AI implementation and led the Global Alliance’s efforts in creating Ethical and Responsible AI Guidelines for the global profession.
Adrian Cropley is the founder and director of the Centre for Strategic Communication Excellence, a global training and development organization. For over thirty years, Adrian has worked with clients worldwide, including Fortune 500 companies, on major change communication initiatives, internal communication reviews and strategies, professional development programs, and executive leadership and coaching. He is a non-executive director on several boards and advises some of the top CEOs and executives globally.
Adrian is a past global chair of the International Association of Business Communicators (IABC), where he implemented the IABC Career Road Map, kick-started a global ISO certification for the profession, and developed the IABC Academy. Adrian pioneered the Melcrum Internal Communication Black Belt program in Asia Pacific and is a sought-after facilitator, speaker, and thought leader. He has been a keynote speaker and workshop leader on strategic and change communication at international conferences in Canada, the U.S., Europe, the Middle East, Malaysia, Singapore, China, India, Hong Kong, Thailand, New Zealand, and Australia. He has received numerous awards, including IABC Gold Quill Awards for communication excellence, and his agency was named Boutique Agency of the Year six years running.
Adrian is the Chair of the Industry Advisory Committee for the RMIT School of Media and Communication and a Fellow of the IABC and RSA. In 2017, he was awarded the Medal of Order of Australia for his contribution to the field of communication.
Mary Hills, ABC, IABC Fellow, Six Sigma, FCSCE, serves as MBA Faculty in Benedictine University’s Goodwin College of Business. Her work in marketing, finance, and organizational communication and management brings an interdisciplinary perspective to her students. Mary’s professional career includes serving large corporations such as First Wisconsin National Bank – Milwaukee, Federal Reserve Bank of Chicago, Whiteco Advertising, NiSource, Northern Trust, Unilever, and Zebra Technologies. She supported start-ups through Purdue Technology Center and Research Park of NWI. As a member of senior management, her work includes research, risk analysis, and strategic planning for product launches, market expansion, and change and crisis management. In 2009, she co-founded HeimannHills Marketing Group, Chicago and Phoenix, serving as business principal until 2021. Most recently, Mary’s work involves AI’s impact on the role of the communication professional. Her work has been recognized nationally and internationally.
The post Circle of Fellows to Explore the Future of Communication in 2026 and Beyond appeared first on FIR Podcast Network.
Studies purport to identify the sources of information that generative AI models like ChatGPT, Gemini, and Claude draw on to provide overviews in response to search prompts. The information seems compelling, but different studies produce different results. Complicating matters is the fact that the kinds of sources AI uses one month aren’t necessarily the same the next month. In this short midweek episode, Neville and Shel look at a couple of these reports and the challenges communicators face relying on them to help guide their content marketing placements.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz Hi everybody, and welcome to episode number 490 of For Immediate Release. I’m Shel Holtz.
Neville Hobson And I’m Neville Hobson. One of the big questions behind generative AI is also one of the simplest: What is it actually reading? What are these systems drawing on when they answer our questions, summarize a story, or tell us something about our own industry? A new report from Muck Rack in October offers one of the clearest snapshots we’ve seen so far. They analyzed more than a million links cited by leading AI tools and discovered something striking.
When you switch citations on, the model doesn’t just add footnotes, it changes the answer itself. The sources it chooses shape the narrative, the tone, and even the conclusion. We’ll dive into this next.
Those sources are overwhelmingly from earned media. Almost all the links AI cites come from non-paid content, and journalism plays a huge role, especially when the query suggests something recent. In fact, the most commonly cited day for an article is yesterday. It’s a very different ecosystem from SEO, where you can sometimes pay your way to the top. Here, visibility depends much more on what is credible, current, and genuinely covered. So that gives us one part of the picture.
AI relies heavily on what is most available and most visible in the public domain. But that leads to another question, a more unsettling one raised by a separate study published in JMIR Mental Health in November. Researchers examined how well GPT-4o performs when asked to generate proper academic citations. And the answer is: not well at all. Nearly two thirds of the citations were either wrong or entirely made up.
The less familiar the topic, the worse the accuracy became. In other words, when AI doesn’t have enough real sources to draw from, it fills the gaps confidently. When you put these two pieces of research side by side, a bigger story emerges. On the one hand, AI tools are clearly drawing on a recognizable media ecosystem: journalism, corporate blogs, and earned content. On the other hand, when those sources are thin, or when the task shifts from conversational answers to something more formal, like scientific referencing, the system becomes much less reliable. It starts inventing the citations it thinks should exist.
We end up with a very modern paradox. AI is reading more than any of us ever could, but not always reliably. It’s influenced by what is published, recent, and visible, yet still perfectly capable of fabricating material when the trail runs cold. There’s another angle to this that’s worth noting.
Nature reported last week that more than 20% of peer reviews for a major AI conference were entirely written by AI, many containing hallucinated citations and vague or irrelevant analysis. So if you think about that in the context of the Muck Rack findings in particular, it becomes part of a much bigger story. AI tools are reading the public record, but increasing parts of that public record are now being generated by AI itself.
The oversight layer we use to catch errors is starting to be automated as well. And that creates a feedback loop where flawed material can slip into the system and later be treated as legitimate source material. For communicators, that’s a reminder that the integrity of what AI reads is just as important as the visibility of what we publish. All this raises fundamental questions. How much does earned media now underpin what AI says about a brand?
If citations actively reshape AI outputs, what does that mean for accuracy and trust? How do we work in a world where AI can appear transparent, citing its sources, while still producing invented references in other contexts? And the Muck Rack and JMIR studies show that training data coverage, not truth, determines what AI cites. So the question “what is AI reading?” has two answers, I think. It reads what is most visible and recent in the public domain, and it invents what it thinks should exist when the knowledge isn’t there. That gap between the real and the fabricated is now a core communication risk for organizations. How do you see it, Shel? Thoughts on that?
Shel Holtz It is a very, very complex issue. I was looking at a study from Profound called AI Search Volatility. And what it found was that search engines within the AI context, the search that ChatGPT and Gemini and Claude conduct, are probabilistic rather than deterministic, which means that they’re designed to give different answers and to cite different resources, even for the same query over time.
Another thing this study found was that there is citation drift. That is, the domains cited in July aren’t necessarily the ones that were cited in June for the same prompts. Look at these results: the share of domains cited in July that weren’t present in June was nearly 60% for Google AI Overviews, just over 54% for ChatGPT, over 53% for Copilot, and over 40% for Perplexity. So 40 to 60% of the domains cited in AI responses are going to be different a month later for the same prompt. And this volatility increases over time, rising to 70 to 90 percent over a six-month period.
So you look at one of these studies, a snapshot in time, and it’s not necessarily telling you that you should use that information as a strategy to guide where you publish your content, if the sources are going to drift. And by the way, a Profound study by their AEO specialist, a guy named Josh Bliskolp, found that AI relies heavily on social media and user-generated content, which is different from what the Muck Rack study found. They were probably getting a snapshot in time where the citations had drifted. So while I think all these studies are interesting, what it tells us as communicators looking to show up in these answers is that we need to be everywhere.
Neville Hobson Yeah, I’ve been trying to get my head around this. I must admit, reading these reports, the Nature one kind of threw me sideways when I found it, because I thought: how relevant is that to the topic we’re discussing in this podcast? And my further research showed it is relevant, as that content is being fed back into the system and showing up in search results. You’re right. In another sense, I think you can take all these survey reports and dissect them every which way.
But they have credibility in my eyes, certainly, particularly Muck Rack’s. I find the JMIR one equally good, though it touches on areas I’m not wholly familiar with. The one in Nature is equally good, and what it shows is quite troubling, I think. Listening to how you described the Profound report on citation consistency over time, I kept thinking about the Nature one as an example. Measuring citation consistency over time sounds great, but what if the citations are fake, full of hallucinations, full of invalid information? Where does that sit? That’s my question, I suppose.
Shel Holtz Well, yeah, this shouldn’t surprise anybody who’s been paying attention. AI still confabulates. It still says right at the bottom of ChatGPT or Gemini that it’s prone to misinformation. These tools are configured more to satisfy your query than they are to be accurate. So when they can’t find or don’t know an accurate citation, they’ll make one up.
We still have attorneys who are filing briefs with cases that don’t actually exist. So this is the nature of the beast right now. If you’re not verifying the information that you get before you do something with it, that’s on you. That’s not on the AI. They’re telling you that these things still hallucinate. They’re working on it. They hope to have that fixed one of these days, but they’re not quite sure how that actually works. So it’s not like just going in and turning a dial or flipping a switch, the researchers are struggling to figure this out. And if it were that easy, they would have done it by now.
Neville Hobson Sure. Although what you just said doesn’t come across at all in any of the communication you see from the chatbot makers, except in four-point type at the bottom: it can hallucinate, you need to do your verification. I don’t hear that clear warning shot, if you like, from anyone when they’re talking about all this stuff, and that needs to change. The message doesn’t land anywhere near as strongly as what I got from what you were saying.
Although the point does rear its head quite clearly, and it’s got to be repeated again and again: you’ve got to double-check everything. Well, not everything you run through an AI, but the results you get when you do a search. So it’s all very well talking about citation consistency from one month to the next; you’ve got to check that yourself. The question will arise for many, I think: how do you do that? Would you use a chatbot to do it? Of course you would, because it’s a tool you’ve got in your armory, but then you’ve got to check that too.
Shel Holtz Well, I’ve got Google in my armory too. If I see it make an assertion with a citation, I’m going to go to Google and look it up. I’m not going to look up the URL that the chatbot presented. I’m going to type in the information about the report or the study or the white paper or whatever it was that’s cited and see if I can find it. And then if I can, and it’s the right one, I’m going to check and see if the link is the same one that the AI provided.
I did a white paper. I used Google Gemini’s Deep Research for the first pass, and it was loaded with citations. Where I spent my time wasn’t in doing the initial research; it was validating every citation it provided before I passed it along to people. So that’s got to be part of the workflow with these things for now. I hope they fix it one day, but for now, you can’t just crank one of these things out and submit it to a judge, or use it in your medical practice, or pass it along to your boss. You have to validate that it’s all accurate.
Neville Hobson Yeah. By the way, didn’t you say once, a long time ago now I expect, that you didn’t use Google anymore? That it was only ChatGPT or Gemini?
Shel Holtz I switch back and forth based on which one is performing better on the benchmarks. I also find that the three primary models, ChatGPT, Google Gemini, and Claude, are better at different things. So I tend to use different ones for different things. But Gemini 3.0 is spectacular. This most recent upgrade that just came out, I think it was last week, wasn’t it? It’s amazing. So I have sort of shifted most of my work using one of the large language models to Gemini right now. I still use ChatGPT for a few things right now. Of course, they’re going to come out with their own big upgrade, probably. Well, there’s some speculation before the end of the year. So we’ll see where they land. But right now, I find Google Gemini is best for a number of things. And by the way, Nano Banana Pro, the image generator. If I were the product manager for Photoshop or for Canva, I’d be worried because you can just upload an image and edit it in Nano Banana with plain text and just tell it what you want done and it does it and pretty awesome. I’ve been playing with it. I can tell you what I did with it, but it’s spectacular.
Neville Hobson Okay, so yeah.
Shel Holtz And fast. You compare that to OpenAI’s image generator, which takes minutes. You’re just sitting there watching this gray blob slowly resolve. Nano Banana’s, boom, there it is.
Neville Hobson Yeah, I see a lot of people posting examples of what it can do. It looks pretty good. Going back to this, though: let’s talk a bit about verification, because I think many people, I don’t know how many it might be, maybe a small number, need some guidance on what to do. It’s quite an additional step, you might argue, in what some people see as the speed and simplicity of using an AI tool to conduct your research, for instance, or to summarize a PDF file, or whatever it might be. So what would be your tips for a communicator on building this into the workflow so that it becomes a natural part of what they’re doing and not a pain in the ass, frankly? What would you say to them?
Shel Holtz Yeah, well, my tip is to build it into the workflow. First of all, it’s still going to save you time. For me to go through and validate the facts presented in a bit of AI research takes less time than it would to conduct the research and draft the white paper myself. And by the way, I want to be sure everyone understands: I do heavily edit the white paper for language and style. I rewrite entire sections based on how I would say it. But I think that’s the point: you have to look at these as a first draft. This is why we have interns, right? To crank out first drafts of things and save us the time. And I still think the metaphor of AI as a really smart intern who doesn’t go home at the end of the day, doesn’t need a paycheck, works 24 hours, and never gets sick is an apt one.
But to just ignore the need to review these things and think it’s going to give you a finished product, that’s a mistake. You need to come up with a workflow, define your own, but it has to include validation of the information that was provided. If it doesn’t, you’re setting yourself up for some real grief. If you share the results with somebody who is important in your career or your life, and they make decisions based on it that turn out to be bad decisions because of a confabulated citation, that’s going to roll right back on you. So you have to build it into the workflow, just like any other workflow. This is the step that comes after the first step.
Neville Hobson I wonder, tell me what you think, whether this is significantly more concerning if you’re in academia, say, or working on the science side for a scientific firm, where peer-reviewed, citation-led research for medical breakthroughs or scientific discoveries typically takes months, if not years, to go through the process. What would you do in that situation, where, as is now emerging, academic papers in particular are relying on this and becoming, what, untrustworthy? That to me is a pretty big deal if many people see it that way. I’m just curious how you’d discuss that with someone.
Shel Holtz I don’t think my guidance would change. There is an obligation to ensure that what you are sharing is accurate. And if you are using gen AI to produce some or all of a report, that obligation extends to fact-checking. I mean, hire an intern to do the fact-checking so that you have time to do other things. There’s a reason to have an intern. A question we hear is: if AI can do what an intern can do, what will an intern do? And the answer may be: validate what the AI cranks out.
But the risk is so severe that this just needs to become a matter of routine. And especially in science, where these things can be translated into medicines and treatment protocols and the like, you don’t want to be responsible for people getting sicker or dying because you had a confabulated resource or a citation that you didn’t check before you moved on to the next step. And if the peer review of the document you have created turns up those errors, if the peers reviewing it find the fictitious or wrong citations, it’s your reputation that’s on the line. No one’s going to blame the AI. They’re going to blame you. So your credibility is on the line.
One other point I want to make here in terms of what I would recommend: I would go back to Gini Dietrich’s PESO model, paid, earned, social, and owned, and recognize that that model hasn’t changed in the age of AI. If you want to be cited, don’t chase the shiny object of the latest report that says it’s reading this, it’s reading that. The fact that it shifts from month to month means you need to be in all those places. Before AI, we were paying a lot of attention to the PESO model, and I’d hate to see it fall by the wayside as lazy people think they can get away with just doing one thing because AI reads this. Well, that’s this month. Next month, you’re toast.
Neville Hobson Yeah, of course. Though I recall that many people I know still say, “I don’t need an intern anymore because I have an AI.”
Shel Holtz Yeah, well, then they’re either spending a lot of time validating what the AI produced or they’re putting garbage out into the world.
Neville Hobson I sense not a lot of time, actually. So this comes back to: you’ve got to put in the time. Some of the work I’ve been doing recently on research reminds me of something I did, I guess, two weeks ago, which was checking the links in a report that cited this, this, and this. I would say that of the 65 or so links I checked, 15 were 404s, or unknown, or even the browser errors you get when it can’t connect to something. So no one had checked those. But I’m okay with that, because that’s why I’m here. That’s what I will do. And you’ve got to do it. I agree, you’ve got to do it.
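[Editor’s note: the first pass of the link audit Neville describes, checking which of a report’s citation URLs resolve at all, is easy to script. The sketch below uses only the Python standard library; the function names and verdict labels are illustrative, and a link that resolves still needs the human check Shel describes, confirming the page actually supports the claim.]

```python
import urllib.request
import urllib.error

def classify_status(code: int) -> str:
    """Map an HTTP status code to a verdict for the citation log."""
    if code < 400:
        return "ok"
    if code == 404:
        return "missing"   # the dead links on otherwise-legitimate sites
    return "broken"        # other 4xx/5xx errors

def check_link(url: str, timeout: float = 10.0) -> str:
    """Fetch headers only (HEAD request) and classify the citation URL."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-checker/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except (urllib.error.URLError, TimeoutError):
        return "unreachable"  # DNS failure, refused connection, timeout

# Hypothetical list of citation links pulled from an AI-drafted report
links = [
    "https://example.com/real-article",
    "https://example.com/made-up-story",
]
for url in links:
    print(url, "->", check_link(url))
```

A script like this only finds the 404s and connection errors; a link can return 200 and still point at a page that never mentions the claim it supposedly supports, which is why the Google lookup step remains manual.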
Shel Holtz Well, exactly. Yeah, and the net is still a gain for you, the communicator. You’re still going to save time. You’re just not going to save as much as you think you will if you don’t have to do anything other than write a prompt. There’s more to it than that.
Neville Hobson Right. So to conclude on that: we rang the alarm bell in my narrative intro about this report in Nature in particular, which flagged all these fake citations. If you’ve got a report with lots of links and all sorts of things being said, you have to manually check each one. And that then comes
Shel Holtz Yes, yes you do.
Neville Hobson back to good old Google, probably. But it’s not just the tool, it’s the framework under which you do it. A minor thing, for instance: if I were doing that now, I would do it on a clean interface, a browser I’m not logged into, probably a different browser from the one I normally use, even a different computer if I really wanted to take it to the extreme. It gives you more confidence that your own persona, if you like, isn’t influencing anything, even unbeknownst to you. I’m not saying it is, but this gives you the best way of doing it, I would say. So this is best practice. Perhaps we should write a best practice guide on this. But it’s food for thought.
Shel Holtz It certainly is. And by the way, I think I said paid, earned, social, and owned when I was running down what the letters in PESO stand for. The S is actually shared, which includes social but has a few other things in it. Go look up Gini Dietrich’s PESO model, folks, and you’ll find it.
Neville Hobson I think she did an update to this for the AI age. I seem to recall a lot of talk about that. Yeah, as well as a ChatGPT tool that you could use, based on it, basically.
Shel Holtz She did. Yeah, she did. I believe she did. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #490: What Does AI Read? appeared first on FIR Podcast Network.
The forward-looking discussion was joined by five seasoned leaders: two professors shaping the next generation of communicators and three senior practitioners traversing today’s real-world pressures. Together, they bridge campus and workplace, theory and execution, to define what readiness really looks like in a world of constant change. Shel Holtz, SCMP, IABC Fellow, moderated the session.
This episode featured a candid, fast-paced discussion on the skills and mindsets that matter now — and the ones you’ll need next. From AI literacy and data comfort to ethical judgment, change agility, and human-centered storytelling, the panel shared practical frameworks you can apply immediately. You’ll hear how universities are evolving curricula, how employers can cultivate lifelong learning, and how individual pros can future-proof their careers without losing the craft that sets them apart.
You’ll get actionable guidance and plenty of examples from classrooms and boardrooms. Whether you lead a team, teach, hire, or are building your own career path, this conversation will help you set priorities for the year ahead.
You’ll leave with:
A clear, current skills map for modern communicators
Practical ways to integrate AI and analytics—without sacrificing trust and creativity
Playbooks for continuous upskilling across individuals, teams, and organizations
About the panel:
Diane Gayeski is recognized as a thought leader in the practice and teaching of business communications. She is Professor of Strategic Communications at the Roy H. Park School of Communications at Ithaca College and provides consulting in communications analysis and strategies through Gayeski Analytics. Diane was recently inducted as an IABC Fellow; she’s been active in IABC for more than 30 years as a featured speaker and think-tank leader at the international conference, the author of three editions of the IABC-published book Managing the Communications Function, and the advisor to Ithaca College’s student chapter. She has led more than 300 engagements for clients including the US Navy, Bank of Montreal, Fiat, Sony, Abbott Diagnostics, and Borg-Warner, focusing on assessing and building capacities and implementing new technologies for workplace communications and learning teams.
Sue Heuman, SCMP, ABC, MC, IABC Fellow, based in Edmonton, Canada, is an award-winning, accredited authority on organizational communications with more than 40 years of experience. Since co-founding Focus Communications in 2002, Sue has worked with clients to define, understand, and achieve their communications objectives. Sue is a highly sought-after executive advisor, specializing in leading communication audits and strategies for clients across all three sectors. Much of her practice involves a strategic review of the communications function within an organization, analyzing channels and audiences. She creates strategic communication plans and provides expertise to enable their execution. Sue has been a member of the International Association of Business Communicators (IABC) since 1984, which enables her to both stay current with and contribute to the field of communications practices. In 2016, Sue received the prestigious Rae Hamlin Award from IABC in recognition of her work in promoting global standards for communication. She was also named 2016 IABC Edmonton Chapter Communicator of the Year. In 2018, IABC named Sue a Master Communicator, the Association’s highest honor in Canada. Sue earned the IABC Fellow designation in 2022.
Dr. Theomary Karamanis is a multiple award-winning communication professor and consultant with 25 years of global experience. She is currently a full-time senior lecturer in Management Communication at the Cornell SC Johnson College of Business and regularly delivers executive education programs in leadership communication, crisis communication, and strategic communication. She has held several professional leadership positions, including Chair of the GCCC (Global Communication Certification Council), Chair of the IABC (International Association of Business Communicators) Academy, and Chair of the IABC Awards committee.
Her academic background includes a PhD in communication studies, a Master of Arts in mass communication, and a postgraduate certificate in telecommunications, all from Northwestern University, as well as a bachelor’s degree in economics from the Athens University of Economics and Business. She also holds professional certifications as a Strategic Communication Management Professional (SCMP), online facilitator, and executive program instructor. She has received 40 professional communication awards, including 12 Platinum MarCom awards, 7 Gold Quill awards, 4 Silver Quill awards, and a Comm Prix award. In 2020, she received the Award for Excellence in Communication Consulting by APCC (Association of Professional Communication Consultants) and ABC (Association for Business Communication). She is the author of several books and academic papers on communication, and she also regularly delivers presentations at international conferences and other business forums.
Leticia Narváez, ABC, is the CEO and Founding Partner of Narváez Group, a consulting firm specializing in Strategic Communication, Crisis Management, Employee Engagement, Communication Training, and Change Management. A professional with 30 years of experience, she has held top-level positions at Sanofi, Merck, American Express, and Ford Motor Co., among others. She builds communication bridges to the highest standards of excellence. She has developed communication strategies for several employers and clients, including those involved in mergers and acquisitions, diversity leadership, crisis management, and senior executive consulting. Many of these strategies have earned global awards for their proven results and successful impact. She has been a speaker at international forums and is a co-author of several books and manuals on business communication, public relations, and inclusion. She teaches Measurement and Evaluation in the Master of Institutional Communication at Panamericana University in Mexico City.
Jennifer Wah, MC, ABC, has worked with clients to deliver ideas, plans, words and results since she founded her storytelling and communications firm, Forwords Communication Inc., in 1997. With more than two dozen awards for strategic communications, writing, and consulting, Jennifer is recognized as a storyteller and strategist. She has worked in industries from healthcare and academia to financial services and the resource sector, and is passionate about the strategic use of storytelling to support business outcomes. Although she has delivered workshops and training throughout her career, Jennifer formally added teaching to her experience in 2013, first with Royal Roads University and more recently as an adjunct professor of business communications with the UBC Sauder School of Business, where she now works part-time to impart crucial communication skills to the next generation of business leaders. When she is not working, Jennifer spends her time cooking, walking her dog, Orion, or discussing food, hockey, or music with her husband and two young adult children in North Vancouver, Canada.
Raw Transcript
00:00:00 Speaker: Hi everybody, and welcome to episode one hundred and twenty two of Circle of Fellows. I’m Shel Holtz, your moderator today, and I am the senior director of Communications at Webcor. We’re a commercial general contractor in California, headquartered in San Francisco. And I’m coming to you live today from our offices across the bay in Alameda. Uh, and I am also a certified, uh, communication professional through the Global Communication Certification Council. And I am delighted to have a terrific panel joining me today to talk about preparing tomorrow’s communication professionals. Uh, that includes some people from the world of academia. Uh, you’ll learn who they are as they introduce themselves in just a couple of seconds. But first, I’m going to give you the, uh, the first of a few reminders that, um, you are welcome to participate in this discussion. You are watching this presumably through YouTube and there is a chat feature. And if you send us a question or a comment or an observation through that chat window, I’ll be able to share it on the screen and we can get feedback from the panelists who will now introduce themselves, starting with Letty. Um. Hi, everybody. Um, I’m Letty Narváez, I’m based in Mexico City, and I’ve been working in communication for more than thirty years. For the last ten years, I have had my own consulting firm specializing mainly in employee communications, change management, crisis and risk management, and a lot of training on measurement and presentation skills. And it’s great to be here. It’s great to have you here, Letty. Uh, Theomary, you’re next. Hello, everyone. Thanks for being here with us. I’m Theomary. I’m based in Ithaca, New York. This is upstate New York. I work for Cornell University, and I teach MBA and executive MBA students. And I’m also very much involved with executive education. So I get to see a lot of executives and leaders across industries and professions.
Um, I’ve been in communication for more than I don’t want to say, but I will, uh, twenty five years now. And I started after my PhD. I started in corporate communication. So I had a corporate life. Then I went into consulting. I had my own boutique firm, and for the past ten, maybe now close to fifteen years, I’ve been full time in academia. I always have contact with executives, uh, through my executive education courses and also through my, um, MBA courses, and I’m looking forward to sharing with you some insights about communication professionals and what communication will mean to us, uh, in the future. And thank you so much for inviting me. Uh, thanks for being here. Uh, Diane, you’re up. Hi, I’m Diane Gayeski, and I am on the other hill from Theomary, also in Ithaca, New York. I’m a professor of strategic communication at Ithaca College, which is also my alma mater. And, uh, I’ll embarrass myself and say I’ve been there for forty seven years. Um, in addition to teaching, I practice what I preach through my consulting firm, Gayeski Analytics. Um, I mostly focus on new technologies and, uh, trends in both corporate communication and corporate learning. And I teach almost exclusively undergraduates. And, uh, they are very worried about the topic that we’re talking about today, so I look forward to the conversation. It’ll be a good one. Uh, Sue? Hi, everybody. Sue from Edmonton, Canada. Um, I have been in communication for forty three years. And, um, through the course of my time, I have actually been, um, uh, an instructor with MacEwan University, and I have worked with other academic programs as a guest lecturer. Um, but, uh, I think I’m hoping to bring the perspective today of somebody who’s done a lot of hiring, um, over the years and has seen a lot of different folks come through the door for our small agency, which is Focus Communications in Edmonton.
So, yeah, happy to be here and happy to be here with this great group of very impressive women. So thank you. Thank you. Uh, Jen, you’re last but not least. Excellent. Uh, hello. From Vancouver, Canada. Just over the hill, as it were, from Sue in Edmonton. Um, and happy Thanksgiving to those who will be celebrating in the next, uh, in the next day or so. Um, I teach as an adjunct professor at the UBC Sauder School of Business. So I teach business communications to first and third year students, which is, um, something I’ve been doing for the last four or five years. Um, I absolutely am enchanted by it and by the students and, um, love being in the classroom with them. And I think it only makes me stronger as a professional, um, in my consulting business, as a storytelling consultant for organizations. Um, and, uh, yeah, really looking forward to the conversation today, and echo what I’ve heard so far in terms of what the next generation entering the workforce are worried about and how, um, we all just need to keep learning. I’ve been doing some rough math as we’ve gone around, uh, at my forty eight years in the profession. I was a newspaper reporter for a few years before that. Uh, and I think we have well over two hundred years of experience on this panel, so you should get some good wisdom. We don’t look a day over one ninety nine, so that’s true. Uh, it’s good genes is what it is. Um, so let’s jump into this and I want to share a quick story. Uh, several years ago, uh, I was invited to come speak to the faculty at San Jose State University by the dean of the journalism department. That was Bill Briggs, who is an IABC Fellow. Um, and one of the questions I was asked, uh, really struck me. They said, what aren’t we teaching that we should be? Um. And I thought about it a minute. And this was in the heyday of blogging. Um, you know, the big social networks didn’t exist yet.
Facebook and LinkedIn and the like were not around. Uh, but blogging was getting really big. Uh, and, uh, newspapers were starting to close, uh, in some frightening numbers. And I said, I think what you need to be teaching is entrepreneurialism. Uh, when I went to journalism school, they taught us to work for a newspaper or a news magazine, uh, a news broadcast outlet or a news radio station. And that was it. Um, and I think today, uh, journalists are going to have to reinvent themselves. I mean, look at the number of them that are doing podcasts, like, uh, Jim Acosta, who used to be with CNN. Uh, they’re doing Substack newsletters. For communicators, uh, let me ask, what aren’t universities, uh, with communication programs, teaching that they should be? Well, I can start if you want. Um, I’m in a business school, and of course, I don’t have communication students, but I have the future business leaders, and we teach them communication skills, and I’m really, um, very happy about it. But I think what we do not teach, and we need to start teaching, and I don’t know how, is agility, adaptability and the capacity to take things as they come and be able to, um, confront the challenges and have grace under fire, uh, being calm under stressful situations. So I think that a lot of the preparation that we do is based on the assumption that you know what’s coming and therefore you do the steps, A, B, C, and the stakeholder analysis and all of that. But I don’t think we teach them necessarily, um, how to be resilient and be adaptable when things change. And I could add also that I think it could be very important.
We speak a lot about being strategic, but I don’t think that universities really insist on this, on how to be really strategic, how to participate and to design strategic plans and to, um, get to know the business much better so that we really can advise on what kind of communication strategies we should use, how to, uh, emphasize the human connection and to understand and listen to the needs of our audience so we can, uh, respond with our communication according to our audience needs. And another thing is, uh, measurement and evaluation of the communication. I think that very few universities really, uh, teach the different techniques and the benefits of really measuring the communication and what we are doing in our communication plans. Good point. I was going to add to what you said, Theomary. Um, uh, sorry, Diane. Um, and just, uh, build on that in terms of the agility and the resilience. I would say, like I kind of joked in my intro about, you know, that all of us need to keep learning, but I think there is a critical need to bring that beginner brain along with us, uh, especially for the next generation entering the workforce. They will need to learn and relearn skills and strategies and approaches, uh, within different contexts, at a much faster speed than previous generations. And so I agree, I don’t quite know how to teach that either. That, um, ability to pivot, that ability to, uh, re-embrace and, um, turn a corner and reengage in a whole new way on a regular basis, not just a couple of times over a career. Um, so sorry, Diane, back over to you. Uh, yeah. Great points. You know, I’d like to build on what everybody has said, I think, and I do teach communication students. And what I think we could do a better job of is teaching them business acumen and, um, financial analysis and being able to, uh, understand how businesses and nonprofits really work.
Um, I find that my students often avoid that. As I talk to them, they don’t understand how the stock market works. They really have not much sophistication in terms of understanding, uh, how businesses really get along. And, uh, to build upon the point of, uh, assessment, which I think is a really important one, what I find is, because students don’t really understand how businesses make money and they don’t know how to read financial reports, uh, they will often come in, um, you know, very proud of some kind of ROI, thinking that they’ve done a great job because they’ve saved the company a thousand dollars or, you know, increased sales by, you know, a couple of dollars. And they don’t understand that in the large scheme of things, that is quite literally lunch money. So, um, I think we could do a better job collaborating across communication schools and business schools. Can I just second that? Because what you’re describing is exactly the opposite of what’s happening in business schools. So my students know everything about budgeting, everything about accounting, and they’re just focused on that, but they don’t necessarily understand the value of stakeholder analysis, audience analysis, how to communicate to different people. So I really think that your point, Diane, is very valuable. Uh, we need to just somehow find a way to merge these schools of thought and show students that there’s value to both. Like, of course they need to know the business side, but they also need to know the communication side. Um, I’m not sure how much I can add to this, except I would say, as someone who’s hired a lot of new graduates, um, we’ve always had entry level positions in our firm. Um, the thing that I would like to see schools teach them is how to better value every voice. Um, you don’t know everything when you graduate. You certainly have learned a lot and you’ve gained a lot of skills. But there’s wisdom everywhere.
And so how do we invite conversation? How do we invite people to accept constructive feedback and criticism? I feel sometimes, um, especially young employees, they get offended easily if someone corrects their work, and I feel like they just need to be better at understanding that they don’t know everything. They may have learned a lot, but still, it’s a big world. I learn something new every day. Um, so I feel like if we could get them to just be open to hearing other voices, um, that would go a long way to develop their soft skills around the boardroom table when they eventually get there. Yeah. Diane, uh, my favorite, uh, ROI assertion by young communicators is when they come in and tell me the ROI was something that has absolutely nothing to do with money. Uh, you know, oh, we grew our readership. You know, that’s not ROI. ROI is always money, always expressed as a percentage. Uh, it’s a formula. It’s, you know, not negotiable. Um, but, uh, Sue, you mentioned, uh, entry level positions. Uh, one of the things that I’m hearing a lot about, the threat, uh, to employment that AI, uh, is potentially going to bring us, uh, is that it takes away a lot of the drudge work. Uh, well, that’s what a lot of entry level positions are all about. Uh, so what do we do with entry level positions in order to, uh, get people started on their careers in the right direction, doing work that matters, um, but is still entry level? It’s not, um, you know, raising unrealistic expectations about what somebody brand new to the field might be able to do. Yeah. So what we’ve done is we’ve really brought our entry level staff in to things like client meetings and, um, you know, leadership meetings so that they can understand the business of the business. Um, not so much the financial statements necessarily, but we’re a small agency. So there’s, how do you get new clients? What do you do with them? What’s the process?
So we’re trying to teach them skills that they can use. Then, um, they will be the front end part of that, of course. But then eventually they will start to manage projects on their own. And having had the experience of, um, you know, being in the room when the big meetings are held, being in the room when decisions are made, they can see the debate, they can see different points of view, and it really helps to round them out. So I feel like that’s probably something that more organizations can do. Do not put your, you know, new employee in a room by themselves and say, okay, here you go. Um, they need interaction and they need role models. They need people to understand, how do I behave at work? Um, what does a board meeting look like? Um, because this is all new experience for them. You can’t just, you know, suddenly be thrown into a board meeting and be expected to, you know, swim. You’re probably going to sink if you haven’t had that exposure. So I really encourage people, if you’ve got an entry level position, make sure you’re teaming them up with mid-level or senior level people so that they can shadow you to important meetings and, uh, see how the assignments are managed and how strategy works and how to anticipate things. That’s critical thinking that they can learn by observing. Yeah, that’s a great point, Sue. Um, you know, I think we’re running into, uh, almost like a triple threat of problems. We still have students coming off a Covid experience. Um, the ones graduating now at least had a much more normal high school and college. But, you know, there was still a lot of disruption. Um, then many organizations are doing more hybrid and remote work, and, uh, as Sue pointed out, a lot of the entry level jobs are this kind of, you know, sort of routine grunt work, which people can say, oh, you know, you don’t need to come into the office. You can do that at home.
Um, you know, and then we have, you know, AI, um, stepping in and doing a lot of those kinds of tasks. So what are young employees going to be doing? So, um, I know that our students are very worried about all of that. One of the things that I’m telling them is that I’ve got to kind of get them to the point where I was in my career fifteen years in, and be much more adept at speaking to executives and understanding their language and holding their own and actually being able to push back. Um, what I’m doing a lot of is having consulting projects with my students. We, you know, always gave them projects, but now I am trying to take on projects where clients will, um, allow meetings to be on Zoom in front of my class, and I’ll have the students watch me interact with a client, and we’ll debrief afterwards. And I’m trying to model how to politely push back and understand that what a client or a project sponsor comes in asking for may not be at all what they need. And, um, you know, I think they need to learn how to do that in a convincing way, even though they are young. But the ability to be in the office and have mentors and have that sort of informal learning cannot be overemphasized. If I may interject here, I’m so happy to hear you say all these things. I think that the industry needs to understand, and we as educators need to understand, in essence, there are no entry level positions anymore, because that’s AI. It’s really, if you think about it, what we call an entry level position is sort of a mid-level position, because you require three years of experience, you require mastery of analytics tools, the ability to manage stakeholders. And that’s a lot to ask. I would agree with both Diane and Sue that organizations, and this is not on the candidate, but this is on businesses, need to maybe rewrite job descriptions and focus on trainable capabilities. So like maybe curiosity, writing fundamentals, critical thinking and a learning mindset.
So I love that Sue said, bring them into a meeting. Show them how it’s done. So shadowing, or, I hear some businesses have these, um, rotational programs where they go from one department to the next, uh, for training. So I think that this is where we are supposed to go as a society, not even as communication professionals. Yeah. Something else that I think would also contribute to the entry level, or the just graduated, is, uh, the experience of working in a company, in a big organization, and also the experience of working for a consultancy or an agency. Having worked myself for more than twenty years in different companies, you learn many things that you have not learned at the university, such as the importance of interacting cross-functionally, to learn about the other areas of the company, to interact with all of them. So you can advise them on communication strategies and support them to reach their objectives, and the importance of the human connection, not only the communication tools, but understanding, learning, listening, so you can respond to their own needs. Um, the adaptability, and then being in meetings within the organization so you can learn from other leaders. And also if you work in an agency, then how to work with the client, how to respond to a client, how to contribute and advise. So both experiences, I think, are very important for someone who’s just starting in their career. Theomary, you mentioned AI, which, it’s inevitable that any panel is going to talk about AI these days. Um, I am reading continuously on LinkedIn professors, uh, saying they don’t allow AI in the classroom. They don’t teach it. Uh, to me, that’s insane, because as soon as they get to the workplace, they’re going to be expected to know how to use it, uh, and apply it in their jobs.
Um, I’m wondering, for those of you who are teaching, uh, what’s your view, and how are you employing AI as part of your curriculum? I’ll start, since you mentioned my name. Um, you’re very right. Uh, there’s a lot of talk and a lot of backlash in academia about AI, and I want to recognize that it’s different for different courses. But I will tell you how I am managing this. I’m telling them to use it as much as they can. First of all, AI literacy is a skill that, uh, people need, uh, at the workplace. And I think my students need to be able to use it. I still tell them that they’re responsible for the accuracy of the final outcome, and they should have a unique perspective. So through the prompts, they need to give a perspective to the assignment or to the project that they’re doing. And the other thing is that AI cannot replace the critical thinking of the human. And because I’m in communication, just like all of you, they need to be able to defend it. So they get an outcome. Okay, fine. It might be perfect. Can you come in front of me in person and justify it? Defend it, explain it. Um, and give me all kinds of aspects on the specific project. And I think, at least with my level of students, because they’re a bit older, they’re not undergraduates, um, I think they get it. And a lot of them are working towards that. So how do I use it so that it’s very authentic? Because of course, authenticity is part of what leadership is nowadays. So if you come to me and you feel robotic or you feel like you are AI generated, you do not promote trust. And that defeats the purpose of any organization or any leader. And people are still grappling with that. Um, but I do feel that there’s a need for us to incorporate AI into what we’re doing and still find what is the differentiator, uh, between using AI. Everybody will use AI. But what makes you stand out as a leader?
What will make you stand out is if you own it, if you have a unique perspective, and if you can present it with credibility, empathy and confidence. Just to build on the innovation factor, um, I recently had, uh, students doing a group project, and each group in my class had a different idea, groups of four or five. And they were building that into a group presentation. And, um, I also embrace AI in the classroom, in all the ways that you described, with human input, humans at the beginning and end of the process. Um, but I had them ask, uh, AI to take their idea and expand on it in bold and innovative and new ways and help them consider some different, uh, elevated ways of thinking. And their jaws, and frankly mine, dropped. These were third year students. Um, because AI returned to them all, like, five or six or eight different ideas in the classroom. And each group got very, very similar responses. And so we had this very robust discussion about originality and, uh, unique thinking. And so, just to circle back to the last question about entry level positions, or whatever we’re calling them, um, I actually think that original thinking can be part of the learning process in workplaces, because although they’re coming into the workplace with perhaps more, um, mature skills than many of us did, they haven’t necessarily caught up, uh, in other ways, especially some of the Covid-impacted, uh, young people. Um, and so they may be, um, you know, looking for professional maturity and opportunities to grow in other ways, uh, that I think exploring critical and innovative thinking can really, uh, allow them to do. Um, yeah, we need the AI fluency. I’m not sending my students out into the world without that. Yeah, I’m doing the same thing.
I require my students to use it in different ways, and not just the large language models to, you know, help them write things. I have them, uh, do at least prototypes using AI generated images and video. So, for example, in an intro PR class, one of the assignments is to do a PSA, a video PSA, and uh, they don’t have time to go out and shoot it professionally. And a lot of them don’t have those video skills, but I will have them mock something up on AI. Um, I’ll have them mock up things like, uh, posters, uh, or bus wraps. Um, in some of my other classes, I will, uh, have them use AI as a thinking partner, as a brainstorming partner, uh, to help them come up with clever titles for a white paper, for example. Um, and then I also, uh, as they’re writing recommendations, I will have them, uh, ask AI what’s missing in this report, or if I were to deliver this final report to clients, what might their questions or pushback be? Or how might different stakeholders react to these suggestions? Um, I’m also building some simulations where, um, AI will act as various stakeholders, and they can interview those stakeholders and get different kinds of interviews and feedback on an issue. So, uh, but you know, what’s really interesting is students are all over the place. A lot of them really don’t like AI. Some of them, because they don’t know how to prompt it very well, get frustrated. They work with it, and it doesn’t give them anything that they find usable. And so we need to work on that. But a lot of them are, um, ethically, uh, very worried about AI. Uh, some of it in terms of harvesting their ideas or other kinds of data or profiles about them, and the environmental impacts. Um, Ithaca is a place where there’s a lot of people very interested in the environment from way back, and our college attracts a lot of students who are, um, very concerned about sustainability.
And so we have a number of professors and students who really don’t want to use AI because of what they feel are the significant, uh, climate impacts. As, uh, as an employer, I’ll just pop in and say that we have our, um, staff use AI not to draft things, uh, to final product or anything like that, but more for research purposes. Uh, I loved your examples, Diane, of some of the things that you’ve asked your students to do. We do the same, you know, read this. Tell me what’s missing. Um, just to prompt ideas. Um, you know, when you’re staring at the proverbial blank page, you know, give me five words that describe space. Um, you know, that kind of thing. Um, we use it carefully at this time. Um, we’re still waiting to see, and I know you can do a whole show on this, we’re still waiting to see where it all shakes out in terms of, um, you know, need for disclosure and all that stuff, uh, that’s happening in the world. So, yeah, I mean, we do expect students to come with some AI, uh, fluency when they show up to work. Yeah. We have a comment. And this is from, uh, Brian Kilgore. Uh, I think he means twenty twenty five here, not twenty fifteen. But why don’t PR students in twenty twenty five have video production skills? Uh, any thoughts on that? I mean, I’m doing video all the time, uh, largely based on the trend of people paying attention to short form video. Yeah. Our staff do it with their phones. I mean, they do video production, and we actually have on our staff a graphic designer who’s an animator. And so he does animations and that sort of thing. So you can absolutely do video without having to have the old school, you know, video camera situation happening. Yeah. Our students certainly can do video for social media.
But, you know, if you’re asking them to do something that might be like a public service announcement where, you know, they would need locations and actors and all that sort of thing, they’re certainly not up to that level of production or complexity of logistics. Yeah. There are things where I would still hire a video production company, but on the other hand, for example, for recruiting, uh, I saw a trend of injecting videos into, uh, job listings that have people currently doing that job talking about what it’s like to do that at this company. And I went out and recorded the interviews. I edited it together. I did all the titles and the transitions. I just used Camtasia. You know, if you can use Word, you can use Camtasia. And, uh, you know, they’re fine. Um, are they up to the level of production quality that I would get from a professional? No. Clearly not. But, you know, some of these things, uh, as Martin Waxman would say, just have to be good enough. Yeah. From what I’ve seen, they’re good with video production and tech and using even, like, their own phones to do stuff. I think that the script is a gap, like the key messaging, how they’re reaching their audience. So I see a lot of good videos, like from a video production standpoint, but I don’t see the background of it that I would have liked to see, you know, like, how do you reach your target audience? What are you trying to say? The key messaging. You know, we did that with, uh, a few of our staff. We do a lot of videos and, uh, you know, we’ll get new staff or young graduates out of school and we’ll say to them, hey, how’d you like to write a video script today? And we walked them through the process of how you do that, how you do the key messages, how you, you know, make it interesting and engaging, uh, concise, uh, because you’ve got about fifteen seconds these days. Um, you know, if you’re doing it for social media. So again, we need to expose young employees.
I say young, but I mean new to the field. Um, you know, right at the beginning, don’t let them, you know, hang out for two years before you let them do something like write a script. Model the behavior, show them how to do it. And they’ve been terrific. Yeah, I’ve been, uh, actually doing some, uh. Yeah. Go ahead. Mary, I want to ask a follow-up question to Sue, because you’re employing people and I hear a lot. So I fully agree with what you said. Uh, the backlash when I give that advice to employers, just, you know, get people engaged really early on, they’re like, oh, they don’t know nothing, so let them just learn first and then I’m going to engage them. So how do you feel about that? How would you, um, respond to someone who says they’re not ready, they’re very young, they don’t know what they’re doing? Um, what would you say to that? Well, you don’t learn anything sitting in a cubicle by yourself. I’m sorry. So to say that they’re not experienced, they’re not going to get it sitting in a cube by themselves. So we always bring our staff along, uh, on client meetings, both on Zoom or out of the office or at the client’s workspace. Um, we did a website for a client, uh, that was a retail site, so we took the staff there to go and look around and experience it like a customer. So you have to do these things, because it’s a real world out there. You cannot just sit at your desk and expect to, I don’t know, learn things by osmosis, and waiting two years is too long. You’re going to lose good people in the short term, because they will go and find a company that will give them those experiences. And arguably, those of us with two hundred years of experience don’t know what we’re doing. So there you go. There’s that. There’s a lot of learning to do both ways. That gets to what I was going to say. I’ve been creating animated videos, uh, which, you know, two years ago I would not have had the budget to do. Now they’re pretty damn close to free.
Uh, I watched a couple of YouTube videos on how to do it. Um, I paid the ridiculously low monthly fees for some of these services. And for example, we’re doing one that is, uh, their three-minute management tips. And I’ve got a sort of a grizzled, wise old construction manager out in the project trailer or out on the construction project, uh, talking about this particular management issue. And then we do cutaways to scenes that sort of convey the idea, tell the story, and he comes back and says, well, here’s the four things that you need to remember. Um, and so far, uh, they seem to be well received. Uh, they take me about a half a day to crank one out. Um, which leads to the question I have. Yeah. Uh, I had to learn how to do this. Um, you know, uh, how do we keep people engaged in learning new things? I find that a lot of the people who... speak to that. Yeah, go for it, Sue. Yeah. So we hired a really talented young guy fresh out of university, uh, a couple of years ago. And, uh, he expressed an interest in learning all about digital media. So we supported him with, uh, training courses, with, um, webinars, with research, anything he needed. And now he is, uh, making incredible, um, advancements for our clients in the areas where they want to either grow their client base or they want more, um, click-through to register for something or what have you. Um, for one of our fundraising clients, he himself came up with an idea that raised money through digital media. Organic? No, uh, no paid posts. Um, he did a little video, he did a little pitch, and he raised thirty thousand dollars in ten days for that client. Just an initiative he did on his own. I’m going back to your question, Shel. I think, if we want to future-proof our communication careers, precisely, it’s to adopt a mindset of continuous learning. Because technology will evolve, but curiosity, adaptability and ethics will always matter.
So, uh, my advice is to learn one new tool every few months, seek feedback often, never stop refining your ability to tell stories that inspire action, because the future will see professionals moving away from manual content creation and focusing more on leveraging creativity, contextual awareness and strategic input. And the winners will be business communicators and agencies that can integrate AI to drive insights, campaign evaluation, and stakeholder engagement, skills less susceptible to automation. If I may add to that, I’ll tell you what I tell my students in class and also even in executive education workshops, which is very risky. But I do tell them, I give the content and I always tell them, what I’m telling you now is probably good and valid for the next two to three years. I don’t know what’s going to happen next. Maybe in three years’ time you come back and I’m going to tell you something different. So I kind of position, uh, all the content, especially communication-based, leadership-based, on the fact that there are also so many societal perspectives to take into account. What I’m telling you now might not be the same five years from now, and people need to understand that there’s a dynamic, um, nature to what we teach, at least what I do with leadership and leadership communication skills. It’s almost as if, uh, Frank Diaz was listening in. I happen to have LinkedIn up on one of the screens here in the office, and he just posted: Are we hiring internal comms roles for yesterday? Uh, three years after ChatGPT launched, ninety-one percent of IC (internal comms) roles are designed as if generative AI didn’t exist. Uh, based on an analysis of one hundred job postings. Uh, he said, uh, ninety percent of roles still demand strong writing skills. Uh, while storytelling is vital, that reframes the role around a skill that is rapidly being commoditized by AI at scale. Eighty percent ask for strategic expertise.
Yet this is rarely defined. Employers want strategy, but they describe tactics: managing channels, drafting updates and supporting campaigns. And his killer stat: only nine percent of roles mention any AI capabilities or skills. Um, I don’t know if that’s worth a comment, but, uh, I thought it was interesting that that scrolled by just as we were talking about this. You know, I will say that my clients on the client side of things, from my perspective, um, if I think of my, say, three top clients right now, like in the last six months, um, none of them are. They’re very slow to AI. And so, um, I do think that the hiring that organizations are doing right now is not necessarily with that skill set in mind, because the organizations themselves aren’t. Um, for various reasons, including, uh, firewall-related and security-related ones, etc., um, the organizations themselves are hesitant or moving more slowly toward that, including, for Diane, some of the reasons you talked about from an environmental impact perspective; organizations are getting their own strategic heads around that part of it. Um, so I do think that’s worth considering. Like, it’s not like new employees, depending on the industry, are the ones running to catch up, kind of thing. Um, it might be the other way around. I think what’s also interesting is that AI is rapidly being baked into every kind of platform. So you can’t not use AI, in a way: if you use Canva, if you use PowerPoint, you know, web browsers, I mean, AI is baked into everything. So, um, I think at a certain point it’s not even going to be relevant to mention it. It would be like saying, um, that you have internet skills, you know. We’re all on the internet, right? It’s kind of assumed, uh, so, you know, I think you have that.
And then as others have pointed out, uh, most organizations don’t have their heads wrapped around AI yet either. And they still think of it, especially in terms of the kinds of things that communicators do, uh, as kind of cheating, you know. It’s like, well, you’re having AI write it for you. And do you really know how to do this? Do you really know anything? Or are you just having AI do your job for you? Yeah, I guess you could always make the TI calculator, uh, argument there. You know, there was great resistance to having a Texas Instruments calculator do your math for you, and now it’s a given that you’ll do that both in school and at work. Uh, I want to get Brian’s follow-up comment in here. Uh, he said there’s no significant difference writing a video than there is writing a brochure than inviting a TV news crew to come to a factory for a business interview. Sue, you have a thought on that? Yeah, I have to disagree. Um, I mean, the video scripts that we write are very strategic, and they’re definitely, um, set to meet the client’s needs, whatever that is, whether it’s, uh, like I say, it could be sales, it could be fundraising, it could be, um, hiring. Um, and so that is very different than, um, just asking a news crew to come. Uh, of course, videos belong in the owned category of media, so you have the right and the ability to control the message and the distribution of that message onto the channels where you want it to be seen. So yeah, I’m not sure I would agree with you, Brian, on that one. Um, the skill level is very different, because you have to think both, uh, with words and visuals together at the same time, um, when you’re writing a script. So, yeah. Okay. Uh, I don’t remember who it was who mentioned critical thinking earlier, but I did write it down. Uh, several years ago, when I was still an independent consultant.
Uh, I had a client who asked if I could, uh, create an online course for his staff on critical thinking, because they didn’t learn it in school. Uh, this was an organization that dealt with a lot of scientific papers, uh, generally papers that were, uh, not complimentary to the kind of product that this organization (an association, uh, a professional business association, not like ABC) represented. It was around a product type. Um, and they would come up with a response that was not based on critical thinking, and he felt it was important for them to learn it. Um, I did develop that training, but I’m wondering, uh, do you find that students have not been taught critical thinking at earlier grade levels? And is it important for students of communication to learn that before they enter the workforce? Absolutely. It is quite probably the most critical skill that we can encourage and teach and insist on and hold to a standard. Um, and, you know, uh, like Mary, I teach in a business school. And so I would say that those, uh, qualities are more baked into the business curriculum in general, um, because of the, um, quantitative analysis processes that come with business. Um, and so ensuring that we are also bringing that same level of rigor and thinking to communications teaching, uh, I think is, um, absolutely essential. And, you know, that idea, that slow process of questioning everything and coming to our own conclusions and, uh, applying our own original human and ethical brains, um, to processes. So for me, a huge element of critical thinking has to do with ethics and integrity as it relates to our work and our businesses. Let me add on to that. Critical thinking is a core course, uh, for our business school. It was not like that. Uh, shout out to my good friend and colleague Amish, who’s amazing at teaching it, but she was the one.
She’s a lawyer, uh, by profession, and she was the one to propose it and make it a core course for the curriculum. But what is interesting is that we do a lot of executive education together, like for senior professionals, and sure enough, they get a leadership development program that combines critical thinking, which Risa does, uh, with communication, which I do. So it’s a very important skill. Um, it’s often overlooked, but I do feel, um, I see a trend, uh, where the industry has really recognized that this is something that people really want. I’m not sure that it’s embedded in undergraduate education, though. It is. It is difficult. Yeah. Sorry. Go ahead. Uh, I just wanted to mention that I think that, uh, if there’s one thing that can differentiate us as communicators, uh, from just, uh, writing or speaking or delivering a message, it is the skill of critical thinking, uh, adaptability, data literacy, uh, knowing how to interpret analytics, understanding human behavior, collaborating across functions, and being able to translate complex ideas into human language. I think that will make us invaluable. So I think, uh, that really would be the difference that we can make, uh, in order to be strategic and to give strategic advice to the CEO and to the executive committee. I think it goes back to those job descriptions, uh, you know, and expectations. You know, I see that business leaders say they want critical thinking, but then I also see how they treat communication professionals, especially young ones, and they treat them like order-takers, and they want them to be responsive and helpful, right, and give them good customer service.
When really, I think what critical thinking implies is being able to look at a problem from a very different angle and probably reframe things and show, um, the client or project sponsor, the person requesting your services, to kind of push back or give them a different viewpoint or a different approach to something. And I think in our job descriptions and expectations, we’re supposed to be, um, you know, pleasant little order-takers who are creative and come up with cute ways, cute words or graphics, to kind of make our audience happy. And that’s not critical thinking. So I think there’s two aspects of it, especially for newer employees, early-career employees: they’re kind of not sure how to employ that critical thinking. They’re not sure whether they’re just supposed to do what they are asked to do and not raise questions, or whether they are supposed to use their brains. Yeah, I think that’s really on the employers, to be honest. They need to, you know, make sure that they’re being open to all of that kind of thinking. I mean, as I said at the beginning, there’s wisdom in many voices. And even if you are like five minutes out of university, you have life experience and cultural background and, you know, whatever, um, that can inform something maybe we’re working on. So I don’t know why I would discount that just because you’re young or you’re fresh out of school. I mean, that makes no sense. Um, so, you know, I think you’ve really hit on something, um, Diane, that, uh, I think is important, but I think the onus is on the employer, to be honest. Yeah, I’ve been paying a lot of attention very recently to quantum computing. Uh, there are people who are saying that it is going to be more consequential than AI, uh, and bigger than AI. Uh, I really can’t wrap my mind around how this works. The idea of superposition is something that I just can’t grasp.
But, uh, I am looking at the fact that these computers, which, uh, should be, uh, ready for businesses to buy in the next three to five years if what I’m reading is right, will be able to do in five minutes what it would take a supercomputer today, uh, a couple of trillion years to do. Uh, it’s going to be remarkable when it comes to things like drug discovery. Um, I wonder how many communicators at the mid level are paying attention to emerging technologies and, you know, massive trend shifts that will prepare them to thrive and be relevant as they move forward. So what do those mid-level or mid-career communicators need to do in order to be relevant, and how do they impart what they learn and what they do to those, you know, incoming communicators? They need to take training from folks like the wonderful people on this panel who are educators, uh, with their various universities. So you need to keep up training. You know, I think all of us probably started as I did, which is on a manual typewriter. Big day when we moved up to an electric with an automatic-correct button. Uh, everything I know about computing and technology is self-taught. Um, I think we’re beyond the ability for people to teach themselves. And so, continuous learning. It’s just that simple. I will second that and say that everybody, including us, needs to do continuous upskilling, and especially mid-career communication professionals. And also, the way that I conceptualize mid-career communication professionals right now is that they would have some, uh, AI fluency and data literacy, but the differentiator for them would be the ability to frame decisions, uh, which goes back to what Diane was saying, to guide leaders and how to navigate complexity. And I will confess that in classes, it’s very difficult to teach that. We have not been trained as professors to teach navigating ambiguity or how to, um, do strategic thinking.
And I think that we need to reinvent ourselves and how we teach, to help young people go into the work with more skills, um, in that direction. We have... go ahead. Um, I was just going to add that I think that part of the upheaval that we’re discussing and talking about and bringing into classrooms, um, is right across the spectrum of all careers. So, again, most of us, with our two hundred years of experience, although, you know, we are bringing some fresh thinking, have grown up in a framework that is linear, that talks about an early-level career, a mid-level career, and an advanced-level career. And I think that’s part of the upheaval that’s going on right now: rethinking what those entry points in the career, and entry and exit points and contribution points, look like. Um, you know, I’m sure for any of us in the classroom, it’s a cliché, but I learn as much from my students as I hope they learn from me. And, uh, I would jump at the chance, and I tell my clients I would jump at the chance, to employ, uh, many of them, um, as business school graduates with strong communication skills, and vice versa. Um, and the same. So I think it comes more to the packaging, and the curiosity, and again the ethics and integrity that we bring afresh in the face of new ambiguities at all points in our career, that is going to make us valuable, that future-proofs us. We... I’m sorry. Go ahead. Yes, definitely. I would also recommend, just to build on that, uh, evolving from tactical execution to strategic leadership. That is, shift your value from producing content to shaping decisions, to influence strategy, advise leaders and connect communication to business outcomes. Uh, to build tech fluency, but not tech obsession. So you don’t need to be like an AI engineer.
But you do need to understand how AI, analytics, automation, and digital ecosystems reshape audiences and reputation and workflows, but also to strengthen our unique human advantages like judgment, empathy, cultural sensitivity, um, facilitation, storytelling, ethical decision making. I think those characteristics could also help mid-career professionals. We have about three minutes left, and I do have a last question, and I’ll go around and ask you each to give me a nice, terse answer. Letty, you suggested this question, so I’m going to ask you first. And that’s: what’s one thing you wish you had known earlier about the future of communication? Um, I think, uh, mainly, uh, that I had to develop strategic judgment, uh, becoming comfortable with ambiguity, uh, strengthening my storytelling and influence skills, and understanding that trust, and not technology, is the real currency. Thea Mary, how about you? Well, this is, uh, relatable to Letty’s experience, but I’m a very structured person, and I like preparation. And I like the degrees and the studies and all of that. That’s why I have a PhD. Um, but I wish I knew sooner how much adaptability and agility would be a huge skill for the future. I can confess that even now, I know it cognitively, I know how much adaptability means in this world and how we need to change, uh, like in real time. Even like, I’m going into classes and I’m getting interactions, I almost have to change every day. And I find that very challenging, very difficult. But I also will tell you that it’s a skill for the future: agility, adaptability and resilience. Diane, what do you wish you had known? Boy, um, so I’m kind of a tech nerd. I was a radio and television major undergrad, and so I feel comfortable around technology. I’ve always been playing around with whatever was new.
Uh, but what I didn’t realize, and what a very kind client, uh, kind of advised me, was that, um, there are times, especially when I meet with more senior executives, that I need to get my hands off the technology. One very nice client said, Diane, you’re great at setting all this stuff up, but you shouldn’t ever do that. Bring somebody with you and have them set all this stuff up. So I guess I wish I knew earlier how to be convincing to upper-level management, and how to get out of the nerd, uh, technology creator, content creator mode and get more into the strategic conversations. Great. Sue. Um, similar to Thea Mary, um, I wish that I had known when I started my career that change was going to be as rapid as it is today. Um, in the mid nineteen eighties, the organization I was with had a major organizational change program, and my goodness, there was a lot of pearl-clutching and swooning and the whole thing around, oh, change, change, what’s that going to mean? And now today, it’s like, we’re going to change, and people go, yeah, okay, cool. So I just wish I’d known that I had to get comfortable with it a lot faster, and just know that that’s the way it is. And what does Jennifer wish she had known? How much fun I was going to have. And if you’re not having it, and you’re not around people who are inspiring you and, um, giving you juice, giving you fuel on a regular basis, then, um, look for it somewhere else, uh, because we all got to be having some fun. I couldn’t agree more. Uh, I want to thank the panel. It’s been a terrific discussion. We could probably go on for another hour, but I do need to let everyone know about next month’s Circle of Fellows, which will be episode one twenty-three, uh, scheduled for five p.m. Eastern time on Thursday, December eighteenth.
It will be, uh, morning on December nineteenth for two of our panelists. Uh, we have Zora Artis and Adrian Cropley coming to us from Australia, and Bonnie Carver and Mary Hills from here in the US, and we’re going to be talking about, uh, something very, uh, related to what we’ve been talking about today. It’s our crystal ball moment: the future of communication, opportunities for our profession in twenty twenty six and beyond. So mark your calendars. That’ll be a fun one. Uh, it’s always fun when Adrian’s, uh, part of the panel. Uh, again, thank you, everybody. It’s been great. Uh, and see everybody next time. Thank you for facilitating, Shel. Thank you for everything. Hey, everyone. Bye. Bye-bye.
The post Circle of Fellows #122: Preparing Communication Professionals for the Future appeared first on FIR Podcast Network.
In this episode, Chip and Gini tackle the difficult subject of firing an underperforming and problematic employee. They discuss a real-life scenario where an employee with a bad attitude refuses to do their work, causing frustration among team members.
They advise against prolonging the inevitable firing decision, suggesting that acting swiftly can alleviate overall team stress. Both hosts share insights on why Performance Improvement Plans (PIPs) are largely ineffective, stressing the need for proper documentation and the guidance of an HR advisor during termination processes.
Additionally, they highlight the importance of showing proactive steps to the remaining team to mitigate the workload burden and maintain morale. The episode emphasizes the critical role of leadership in making tough decisions for the greater good of the team and the business. [read the transcript]
The post ALP 289: Firing underperforming team members appeared first on FIR Podcast Network.
In the long-form episode for November 2025, Shel and Neville riff on a post by Robert Rose of the Content Marketing Institute, who identifies “idea inflation” as a growing problem on multiple levels. Idea inflation occurs when leaders prompt an AI model to generate 20 ideas for thought leadership posts, then send them to the communications team to convert them into ready-to-publish content. Also in this episode:
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 489 of For Immediate Release. This is our long-form monthly episode for November 2025. I’m Shel Holtz in Concord, California.
Neville Hobson: And I’m Neville Hobson in Somerset in England.
Shel Holtz: We have a jam-packed show for you today. Virtually every story we’re going to cover has an artificial intelligence angle. That shouldn’t be a surprise — AI seems to dominate communication conversations everywhere these days.
We do hope that you will engage with this show by leaving a comment. There are so many ways that you can leave a comment. You can leave one right there on the show notes at firpodcastnetwork.com. You can even leave an audio comment from there. Just click the “record voicemail” button that you’ll see on the side of the page, and you can leave up to a 90-second audio.
You can also send us an audio clip — just record it, attach it to an email, send it to [email protected]. You can comment on the posts we publish on LinkedIn and Facebook and elsewhere, announcing the availability of a new episode.
There are just so many ways that you can leave a comment and we hope you will — and also rate and review the show. That’s what brings new listeners aboard.
As I mentioned, we have a jam-packed show today, but Neville, I wanted to mention before we even get into our rundown of previous episodes: did you see the study that showed that podcasting is very male-dominated as a medium?
Neville Hobson: I did see something in one of my news feeds, but I haven’t read it.
Shel Holtz: I heard about it on a podcast — I don’t remember which one — but I found it really interesting because the conversation was all about equity. And I’m certainly not in favor of male-dominated anything, but podcasting is not an industry where there is a CEO who can mandate an initiative to bring women into a more equitable position in podcasting.
This is a medium — let’s face it, even though The New York Times and The Wall Street Journal and other major media organizations have jumped into the podcasting waters — where it’s essentially a hobbyist occupation. You and I started this because we wanted to, and the tools are available to anybody who wants them.
I remember when we started this, one of the analogies we used was trying to walk into a radio station and say, “Hey, I want to have an hour-long show every day on public relations.” You’d be laughed out of the radio station because there’s not an audience big enough to support that kind of content. But here, if you can find an audience, you can have a podcast.
So I don’t know how you go about making this more equitable, but I found that to be an interesting perspective.
Neville Hobson: Yeah, I agree. There are some podcasts I’ve listened to that are hosted by women — which, frankly, are few beyond the realms of kind of “feminine-oriented” content. But there are a couple in our area of interest in communication that are. So they’re out there, but the majority, very much, are men.
Shel Holtz: Yeah. I mean, just in internal communications, there’s Katie Macaulay, and there are a lot of women doing communication-focused podcasts. Maybe if you’re going to look for somebody to make this a more equitable media space, it has to start with the mainstream media organizations that are producing podcasts — The New York Times, The Wall Street Journal of the world.
Neville Hobson: Yeah, over here you’ve got The Times and a few others who have women doing this. They are there in the mainstream media orientation, but the kind of homebrew content that we started out with, I don’t see too many.
Shel Holtz: No.
Well, Neville, why don’t we move into our rundown of previous episodes?
Neville Hobson: Okay, let’s get into it.
So we’ve got a handful of shows. We’re actually recording this monthly episode about a week and a half earlier than we normally would. I think the reason for that, Shel, is something to do with U.S. holidays, your travel, and stuff like that.
Shel Holtz: Yeah, I’m going to be in San Diego next weekend, visiting my daughter and granddaughter because they’re not able to come up here for Thanksgiving. And then the next weekend is Thanksgiving weekend. So that’s why this is early this month.
Neville Hobson: Right. Okay, that explains it.
We are, we are. So, not too many episodes since the last one, but they’re good ones, though, I have to say.
Before we talk about those, let’s mention episode 485, which was prior to the last monthly. We had a comment.
Shel Holtz: We had two that we didn’t have when we ran down this episode in our last monthly episode. The first is from Katie Howell, who says, “Already reward return visits over one-off reach and the clever brands are catching up. If your brief still says ‘go viral,’ you’re chasing a metric that won’t help you keep your job. Repeat engagement with the right people is the proper goal. Less glamorous, miles more useful.”
And Andy Green says, “Good clarification over strategies, but you also need to recognize viral — also known as meme-friendly — is at the heart of effective communications. Also greater recognition of the impact of zeitgeist. Check out Steven Pinker’s latest book, When Everyone Knows That Everyone Knows.”
Neville Hobson: They were on LinkedIn, I think, weren’t they? That’s where most of them come in.
So, to the ones we did: we have the monthly of October that we did on the 27th of October, when it was published. The lead story we focused on in the headline was “Measuring sentiment won’t help you maintain trust.” Other topics — there were five others — including an interesting one: Lloyds Bank, the CEO and executive team learning AI to reimagine the future of banking with generative AI.
We talked about case studies in a piece that described, “Conduct, culture, and context collide: three crisis case studies,” reviewed in Provoke Media.
Shel Holtz: Yeah, they did 13 or 14 case studies. It was a very interesting article, so we highlighted a couple. And there was more content there too.
Neville Hobson: Episode 487, we published on the 5th of November. This was a really interesting discussion. You and I analyzed and discussed Martin Waxman’s LinkedIn post about slower publishing, deeper thinking, better outcomes — a pivot he’s made with his business and his newsletter.
He left a number of comments, but on the show notes post he left a long comment that was great. We don’t normally get comments on the show notes, so thank you, Martin.
Shel Holtz: Yeah, there were several comments from Martin. I’m going to run through these. He said, “Thank you for having me as a virtual guest once-removed on the episode, Neville. I just listened today and enjoyed your and Shel’s take on my post. You gave me a fresh perspective and I was honored and thrilled to be a conversation topic. And thanks to both of you for holding up the comms podcasting torch all these years and having a lot of fascinating and insightful ideas to share.”
You replied. You said, “Thanks so much, Martin. It was our pleasure. Your post struck a chord with many of us who feel the pace accelerating. It was a great springboard for our discussion, and I’m glad our take offered something new in return. Slowing down to think more deeply about how we use AI feels like the most human move we can make right now.”
But Martin also posted on his own LinkedIn account — and this isn’t short, so bear with us, everybody, as I read through this because I think it’s worth sharing:
“As the first and longest-running communications podcast — and one I’ve been listening to for a long time — this meant a lot. As I listened and heard Shel and Neville’s take on my observations, I gained a new perspective, one I didn’t see when I was writing and revising my post.
“Something I didn’t mention out loud is that it’s been getting more and more difficult to come up with fresh ideas on where AI fits in marketing and communications and the various implications around that, the kind that inspire a person to write. Like social media, it feels like we’ve tipped past the point of saturation.
“As Shel said, we’re now getting drenched by the all-too-familiar commentary and quasi-expert advice swirling around our feeds. That certainly doesn’t diminish the utility of AI or using it where it helps. And I appreciate Shel’s view on how AI helps speed up doing the good-enough tasks that are inherent in all work, to concentrate on the things you want to spend more time on.
“I could also relate to Neville’s comments about saying no to projects that don’t excite you so you can focus on the ones that do. And yes, the three of us are all fortunate to have reached that stage in our careers when we have a little more freedom to pick and choose. I also realize that many people aren’t in that situation.
“As someone who has spent my entire career writing, it’s exciting and a bit frightening to wonder what I’m going to write about next. Yet there’s energy in uncertainty. So thank you to Shel and Neville for having me back as a guest, albeit one who didn’t have to press record.”
Neville Hobson: Really, really super comments that Martin left. Thank you, Martin.
And then our final one before this episode, 488, we published on the 10th of November. I enjoyed this discussion a lot — about Coca-Cola’s generative AI Christmas video that they have done before, but this one got rid of all the people; it was full of bunny rabbits and sloths and all sorts of stuff and those red trucks.
There were plenty of opinions out there, ranging from “What a creative and technical masterpiece this is” to “Utter AI slop.” So we were quite impressed with it and stood back to look at what they were doing rather than being judgmental in any shape or form. But there were plenty of comments, and we had at least one we should mention, right?
Shel Holtz: Yes, from Barbara Nixon, who said, “Thanks for sharing this. I’ll use it as a basis of discussion in my PR writing class next week.”
Neville Hobson: That’s cool. So that’s the content leading up to this one. And of course, now we’re in the November episode that kicks off the next cycle of reporting for the next edition, when I can talk about what we did since this edition.
Shel Holtz: That’s right. And I also want to let everyone know that there is a Circle of Fellows coming up. I would be reporting on this if we were recording at the normal time of the month toward the end of the month, but it hasn’t happened yet.
It is coming up on November 25th, Tuesday instead of Thursday, because Thursday that week is Thanksgiving. So it’s happening at 6 p.m. Eastern Standard Time on Tuesday, November 25th. This is episode 122, and the topic is “Preparing Communication Professionals for the Future.”
It’s a larger-than-usual panel — there are five Fellows instead of four. It’s going to be a good discussion. I think the future — obviously AI factors in here, I think quantum computing does too, as we’re going to talk about shortly in this episode — but also changes in business trends. The zeitgeist is changing, and politics is going to have more of an influence on business. All of these are things that I’m sure we will be discussing.
We look forward to having you join us for that. Of course, if you can’t be there to watch it in real time, it is available both as a video replay on YouTube and as an audio podcast that you can subscribe to right here on the FIR Podcast Network.
And we will now jump into our content for the month — but not until we run this ad for you.
Neville Hobson: So, one of the most interesting shifts happening inside large organizations right now is the move to combine communication and brand under a single leader. We’re seeing this across companies as varied as IBM, GM, Anthropic, and Dropbox, and the trend is accelerating.
According to research cited by Axios, CCO-plus roles — where communication leaders take on brand or marketing responsibilities — have risen nearly 90% in recent years.
What’s driving this? The short answer is volatility, says Axios. AI is changing how people discover what a company stands for, and reputational storms seem to ignite faster and with far greater consequences. A marketing decision that once would have sparked a debate in a meeting room can now become a political flashpoint within hours. That forces the question of who should really own the brand narrative.
Communication leaders are increasingly being seen as the natural fit. They understand stakeholders. They have a risk mindset. And they are often the ones who know how to navigate the cultural and political sensitivities that shape reputation today.
In other words, this is not just about messaging. It’s about trust, judgment, and the ability to connect what a company says with how it behaves. There is still a need for specialist marketing functions, but for many companies, brand stewardship is shifting toward the people who are closest to reputation.
And in a world where AI can bend or reinterpret a narrative in seconds, bringing communication and brand together under one trusted voice feels less like a structural tweak and more like a survival strategy.
So the bigger question for us is what this means for the future of the communication profession. Are we seeing the emergence of a new kind of leadership role — or simply a correction to reflect the reality that brand and reputation have always belonged together?
Shel Holtz: That’s a very interesting trend, and I don’t disagree with it in general. If you look at the big picture, it does make sense. Public relations is all about reputation; it’s all about maintaining relationships with the various stakeholder audiences.
So, as a communicator, you tend to have a big picture. You understand what the reputation is among investors, among the local communities in which your organization operates, among the media, for example, among your customers.
Marketing is all about driving leads for sales in most industries, and they don’t necessarily have that big picture. So it makes sense. And to bring marketing into the communication fold means that you get the benefits of the things that marketing is exceptional at — and branding is one of those things.
Most communicators aren’t involved in developing the trademarks for the organization and the logos and the like — that tends to be marketing, and for good reason. But to have that within the purview of communications enables that chief communication officer-plus to ensure that what’s coming out of that operation aligns with and is consistent with the things that we know drive the reputation of the organization.
You can find some gotchas maybe in the outputs that they’re developing that they wouldn’t have thought of.
That said, I know in my industry, which is commercial construction, the marketing department is not doing traditional marketing. There’s not a lot of effort to drive leads. The relationships with prospective clients are driven through other means. It’s getting to know people through industry contacts and the like. It’s building those personal relationships with developers and owners and the like.
I’ve just celebrated my eighth anniversary where I work, so I’ve seen this in play for long enough to understand that it’s right and it works very, very well.
In my company, the marketing department is also the steward of the brand, and I am fine with that because I’m mostly doing internal communications. I’m also responsible for PR, as far as it goes — media relations and the like — but I don’t have that relationship with the client base. Not at all. It’s rare that I meet a client. Usually I’ll shake hands at a groundbreaking or something like that if I’m out covering it, but by and large, this is something that the marketing department does.
So I’m inclined to say I agree with this, but it depends. And I think there are probably exceptions, and my industry is probably one of them. I’m part of a group called the Construction Communicators Roundtable — 18 or 20 commercial construction companies represented there — and I get the impression that it is the same with all of them. So this may be an industry-by-industry thing.
I don’t disagree with it, but I do think it depends.
Neville Hobson: “I think it depends” is definitely the start point to the discussion on this, I would say. My thought when I read the article — and the reason I included it in the topics for this episode — was precisely that: it does depend.
I’m not sure it is strictly industry-by-industry, meaning that this industry is entirely this way and this one isn’t. It’s probably a mixture. But there are some compelling reasons, I think, why it makes sense to do this even with the argument you’ve made for not doing it, let’s say.
For instance, one interpretation I have from Axios’s research is that brand is no longer just a marketing asset. It’s a reputational construct shaped by every stakeholder interaction. That squarely leans toward understanding the impact on reputation — and communicators, not marketers, are the ones equipped for that.
It also speaks to the need for a trusted, politically aware leader. This combined role, according to Axios, is shaped by the reality that brand crises are increasingly political. Organizations want leaders who bring judgment, sensitivity, and crisis literacy. And that, in my view, leans much more into the communication person than the marketing/brand person.
And the one I think that is most interesting is the broader reinvention of the communication function. Sorry, marketing folks — this is about communication. The trend echoes the ongoing elevation of communicators as strategic partners rather than support functions, reinforcing the argument that communication is increasingly a governance role, not just an executional one.
Now, that argument would apply to marketing too, but not in quite the same way. Taking into account all of that — particularly the connection with reputation, the political awareness, and I like this term “crisis literacy,” fair enough, it’s a good way of describing it — this is more likely to fit in the bucket where the communicator sits than the marketing one.
And by the way, I’ve seen a number of people’s job titles — communication and brand. And I saw someone recently on LinkedIn who is a Chief Communication Officer and Director of Brand and Reputation, playing exactly to what Axios’s point is.
So yes, “it depends,” but I think there’s a compelling reason why, if you’ve got to pick one person, it should be the communicator.
Shel Holtz: Yeah, and again, I don’t disagree. And still I am untroubled by the fact that marketing owns the brand where I work. And I should clarify: they’re not engaged in traditional marketing. This is not a marketing department like at, say, Procter & Gamble or Coca-Cola. They’re engaged primarily in business development.
So they’re putting together the proposals, they’re responding to the RFPs, they’re preparing the members of the team to go out and be interviewed by the owner or the developer who’s selecting the general contractor. So it is B2B. And, I mean, if they’re not concerned about the organization’s reputation, nobody is.
So this is why I say it depends.
The other point I will make is that even though we are not part of the same reporting structure, we’re pretty well joined at the hip. The VP of Marketing and I talk all the time. He’ll call me into his office to run stuff by me, I’ll run stuff by him. We meet regularly. We have a marketing director right now we are working with incredibly closely to develop a year-long recruiting campaign. We’ve won a ton of work and we need to staff up to support that work.
We’re going to take advantage of her expertise in branding and in marketing to recruits, and we’re going to take advantage of our expertise and the things that we do well. And that collaboration is probably going to produce a much better result than if it had just been one of us or the other of us.
So at the end of the day, I don’t think it matters who has the highest title, as long as everybody’s working together, they’re aligned, and they’re working toward the same goals. So again, I don’t disagree with the sentiment and the underlying foundation of the point that was made in this piece, but I think there are organizations where that is being done without having the communicator necessarily at the top of the food chain.
Neville Hobson: That’s the place where I think the communicator should be — which, of course, plays to the decades-old desire expressed by many in our profession that the communicator needs a seat at the top table.
I guess the concluding point I would say is: anyone listening to this discussion who occupies that joint function and would care to share his or her thinking about all of that — we’d love to hear a comment.
Shel Holtz: Yeah, a seat at the table, yeah.
We would always love to hear comments.
If you feel like AI is sucking all the oxygen out of the room, you’re not wrong. It seems like it was just last week we were talking about blockchain and the metaverse and a slew of other technologies. But while we’ve been fine-tuning prompts and governance, another technology has been quietly moving onto the comms agenda — and that is quantum computing.
The BBC recently framed it as potentially as big, if not bigger, than AI. It’s time to start paying attention to quantum computing and how it matters to communicators.
A quick primer: classical computers process bits, zeros and ones. Quantum computers use quantum bits, known as qubits, which can be zero and one at the same time. That’s called superposition.
If you read the book or watched the Apple TV series Dark Matter — I did, it was really good — you know about superposition, and it has been the foundation of a lot of other science fiction: this idea of being able to be in two places at the same time, quantum superposition.
Qubits can also influence each other through something called entanglement — a phenomenon where two or more qubits become linked, sharing a single quantum state, so they cannot be described independently even when separated by vast distances.
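For readers who want to see the primer concretely: superposition and entanglement can be sketched as a toy simulation in a few lines of plain Python. This is purely an illustration of the math (state vectors, a Hadamard gate, a CNOT gate — standard textbook constructions), not how real quantum hardware is programmed.

```python
import math

# A qubit is a pair of amplitudes for the states |0> and |1>.
zero = [1.0, 0.0]  # a qubit definitely in state 0

h = 1 / math.sqrt(2)

def hadamard(q):
    # The Hadamard gate turns a definite 0 into an equal superposition of 0 and 1.
    a, b = q
    return [h * (a + b), h * (a - b)]

superposed = hadamard(zero)
# Measurement probabilities are the squared amplitudes: a 50/50 coin flip.
probs = [a * a for a in superposed]

def kron(q1, q2):
    # Tensor product: two qubits give 4 amplitudes, for states 00, 01, 10, 11.
    return [a * b for a in q1 for b in q2]

def cnot(state):
    # CNOT flips the second qubit when the first is 1
    # (i.e., it swaps the amplitudes of states 10 and 11).
    s = list(state)
    s[2], s[3] = s[3], s[2]
    return s

# Hadamard then CNOT produces the Bell state (|00> + |11>)/sqrt(2):
# the qubits are entangled, so their measured values always agree.
bell = cnot(kron(hadamard(zero), zero))
bell_probs = [a * a for a in bell]

print([round(p, 3) for p in probs])       # [0.5, 0.5]
print([round(p, 3) for p in bell_probs])  # [0.5, 0.0, 0.0, 0.5]
```

The second printout is the point: the entangled pair can only ever be measured as 00 or 11, never 01 or 10 — neither qubit has a state of its own.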
In some problem classes — chemistry, simulation, optimization, factoring — this enables speed-ups that make the impossible suddenly possible. The machines we have today are still noisy, error-prone. But the security world is acting as if a capable quantum machine will arrive within the planning horizon, which is why standards bodies and platforms are shifting now.
You’ve already seen early signals in consumer tech: post-quantum cryptography, warnings from cybersecurity experts, and quantum-resistant messaging from big platforms. Quantum-resistant messaging uses new encryption algorithms to protect communication from both current and future quantum computers. It’s also called post-quantum cryptography and aims to safeguard data by using mathematical problems that are believed to be difficult for both classical and quantum computers to solve — unlike current algorithms, which can be broken by a powerful enough quantum computer.
In fact, I’m reading a really interesting book right now. It takes place about 150 years in the future, and everything that we today thought was encrypted and nobody would ever see — they’re seeing it all because they have access to quantum computing.
These aren’t just niche issues. They tie directly into how you tell stories, how you prepare for crises, and how you work.
So what does this mean for communicators beyond asking IT if we’re on top of it? I’m going to run through three buckets, and then we’ll tie in how quantum and AI overlap, because that’s where things get especially interesting.
First, storytelling and public understanding. Quantum is famously hard to explain, which makes it vulnerable to hype and confusion. Your job is to translate it without overselling it. “Quantum-safe” doesn’t mean “quantum-proof,” for example, and timelines remain uncertain — we don’t know when you’re going to be able to go to your local Best Buy and get a quantum computer.
You’ll want to build narratives now that show your organization is looking ahead, not getting caught flat-footed. Use everyday language. Say, “We’re updating encryption today to protect the data of tomorrow.” That works better than “We’re quantum resilient.” You’ll gain credibility when you help people understand what’s changing and why they should care.
Second, this is all about crisis preparedness and trust. If your organization holds long-lived sensitive data — health records, intellectual property, government contracts — then you need a communications plan for cryptographic agility. That means plain-language FAQs explaining why you are updating encryption, updates to stakeholders as you migrate to approved standards, and scenario planning for legacy data exposure.
Quantum computing introduces a new dimension of risk: the idea that what you publish or promise today could be decrypted or exposed years later. In a crisis, you’ll need to be ready to say, “We anticipated this risk, and here’s what we did.” That anticipatory positioning goes a long way toward preserving trust.
Third, it’s about how communicators can use quantum — and quantum plus artificial intelligence — in our work. Eventually, you’ll have new tools. For example, quantum computing may be able to provide far more advanced modeling of message flows, audience networks, and sentiment behavior, letting you identify optimal outreach paths or refine campaigns under dynamic conditions.
You could simulate scenarios in complex environments more quickly, refining your messages in a what-if matrix classical tools can’t easily handle. These scenarios might include things like stakeholder cascade effects, social media virality, and supply chain disruption.
And as quantum key distribution and quantum-resistant encryption mature, you’ll be in a position to tell audiences, “Our channels use the latest quantum-secure messaging,” which becomes a differentiator from your competitors.
Then there’s the overlap with AI. Quantum computing will amplify AI’s capabilities, helping it crunch deeper patterns faster and handle volumes of data plus complexity that classical systems struggle with. For communicators, that means the analytics layer you rely on — for sentiment, for influence mapping, for risk modeling — will evolve.
AI plus quantum means faster insights, more complex scenario modeling, and new ways to anticipate issues before they explode. So when you describe your comms strategy, you might say, “We use advanced modeling powered by AI today, and we’re tracking quantum-enabled tools so we’re positioned for the next wave.”
The fact is, quantum isn’t just a side story to AI — it’ll reshape AI. Research indicates that quantum computing and AI together massively increase computational speed and breadth of analysis. For example, quantum can remove some of the bottlenecks in data size, complexity, and simulation time that limit today’s AI systems.
For you as a communicator, that means three practical things.
First, what you pitch as “AI-enabled” today will evolve into “AI-plus-quantum-enabled,” and part of the story you tell stakeholders is, “We’re future-proofing so we don’t fall behind.”
Second, monitoring of reputational risk must extend to both AI misuse and quantum misuse — encryption break, advanced surveillance, things like that. The combination raises the bar for your “what could go wrong” list.
And third, your metrics and narrative signals will shift. When AI and quantum intersect, you’ll need to help people understand not just faster insights, but insights from a new class of computing. That means simplified metaphors and careful framing. The message no longer just flows faster — the infrastructure itself is changing. If AI rewrote the message, quantum will test the envelope it travels in.
You don’t need to wait until quantum has fully arrived. You need to start telling that story now. You need to show that your organization is looking ahead, educating stakeholders, and building trust today so that when the change arrives, you’re not scrambling.
Neville Hobson: Well, that’s heavy stuff, Shel.
It’s interesting how Zoe Kleinman, the BBC journalist who wrote this piece, started her article. She says, “You can either explain quantum accurately or in a way that people understand, but you can’t do both.” So I think this is very much in the “accurately” bucket, this discussion.
Shel Holtz: Isn’t it, though? I strive for accuracy.
Neville Hobson: Yeah, and she notes as well, it’s a fiendishly difficult concept to get your head around. I couldn’t agree more. I’ve tried to thoroughly understand this — and maybe I should get rid of the word “thoroughly” because I can’t thoroughly understand it. I need to understand the bits that matter.
So to me, on the one hand I’m thinking, “Fine, this has not arrived yet,” but your point about “get prepared” is a valid one. Although I wonder how many people are going to say, “Well, it hasn’t arrived yet, so what are we going to do? How am I going to do this?” That’s where communicators come in, by the way.
But I think she gives a great example that you really can grasp. Talking about how quantum computers could one day effortlessly churn through endless combinations of molecules to come up with new drugs and medications — a process that currently takes years and years using classical computers.
She says to give you an idea of the scale, in December 2024 Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest supercomputers 10 septillion years to complete — that’s 10 with 24 zeros after it.
I mean, just thinking about the number, you cannot imagine how long that would be. The sun would probably have died before it gets to it. This would do it in five minutes.
So the article then talks about what this paves the way for — personalized medication, all that kind of stuff.
I don’t think we’re at the stage yet where you could equate this to, “Okay, in your average business, all the business processes they do will be materially impacted by this in a very powerful way.” We’re not there yet, because you can’t explain it like that, I don’t think — hence these very big-picture examples.
Everything I read about quantum talks about this: personalized medication, chemical processing, quantum sensors to measure things incredibly precisely. That’s all coming. It’s not here yet.
So it’s interesting. The examples they give are all wonderful, I have to say, but the mind boggles. My mind certainly does, when you look at so much information on this that you wonder: what on earth are you going to pay attention to in order to get a handle on how it’s going to affect my industry, my company, my job, how we live, my family — all these things? No one’s got that yet, and that’s probably what people want to know — but you can’t yet.
Shel Holtz: No, but it’s close enough that we need to start preparing for it and we need to start communicating about it, especially if you’re in an industry that is computing-intensive in its work. And I’m not talking about customer relationship databases and things like that; I mean in your R&D, for example. And certainly the cryptographic implications are severe on the risk side.
So being ready for that now, rather than scrambling to get ready once it’s actually here, is, I think, an imperative.
You do need to be a physicist or physics-adjacent to really understand this. But I’ll be honest: science has never been my thing. Science and math were my worst subjects in school. The humanities were where I rocked. And I struggle understanding the zeros and ones in fundamental computing — the opening of the gates and all that.
But you know what? I don’t need to know how my carburetor works in order to drive my car. The fact is that these tools are coming, and understanding how they work or not, people are going to be able to use them.
And as I say, it’s close enough. It’s probably within the next 10 years that companies are going to be able to buy quantum compute time, if not buy a quantum computer, that we really need to start thinking about it. We really need to start preparing for it, especially from a security standpoint.
Neville Hobson: Yeah, I get that. I think, though, that people — communicators, this is our area of interest and focus — would need to know: how are we going to do all this when so much of it is theory?
They’re talking about — I’m just looking at the piece here that goes into detail about how to break current forms of public key encryption. Hot topic: security of information. It says here it’s awaiting a truly operational quantum computer. That’s years away. But as the article notes, quoting a cybersecurity expert, “The threat is so high that it’s assumed everyone needs to introduce quantum-resistant encryption right now.” That’s not the case. So there’s probably a lot of hype.
Although it mentions earlier — and I think you might have mentioned — that there’s even more hype about AI. So this was the king of hype before AI emerged.
The prediction I’m reading is that an operational quantum computer could be around the year 2030. So that’s five years away. Okay, in that case, now is the time to get prepared for this, then.
Shel Holtz: That’s pretty fast. And there are operational quantum computers in research labs right now. They’re not commercially viable yet, but as you say, the projections run anywhere from five to 15 years. That’s fast; that’s soon.
When we were talking a lot about the metaverse, we were saying the fully operational metaverse was 10 years away — you need to start thinking about that now. Same thing here.
Neville Hobson: Did you notice the concluding paragraph? This is actually where it kind of fits in with the current status of alarm and concern from a political point of view about what certain countries are up to — China, which it calls out as an example.
It says GCHQ — that’s the UK’s intelligence and cyber agency — believes it’s credible that almost all UK citizens will have had data compromised in state-sponsored cyber attacks carried out by China, with that data stockpiled for a time when it can be decrypted and studied, and you need a quantum computer for that.
For instance, the economic headline in the UK right now — the cause of the unexpected dip in GDP — is the cyberattack on Jaguar Land Rover, the automaker, which cost nearly £2 billion in losses and compromised the company and its supply chain.
So this brings it home to you: what are they doing with the data? They can’t do much with it until they’ve got the computing power to be able to. So these things add to… I’m not sure it really adds to understanding — it adds to confusion, adds to worry, probably.
So it’s helping people organizationally, in this context, understand why we need to be prepared for this. And it needs to be, I think, presented in terms they can more readily grasp and understand than is currently the case for what I’ve seen people talk about in quantum computing.
This is a good article, by the way, and I think Zoe Kleinman did a really good job. I’ve read another article — I think it was from Microsoft — where you truly need to have a degree in advanced physics just to understand the article. These are not designed for your average Joe to grasp. There’s a gap.
Shel Holtz: Absolutely. But I think the role of the communicator here isn’t to help people understand how quantum computing works any more than it is with classical computing. Our job is: what are the benefits and what are the risks? What do we need to prepare for? Where do we need to start building that foundation so that when it arrives, we’re ready and not suffering consequences or falling behind our competitors?
So I think that’s the role of the communicator: to say, “Look, you don’t need to understand how it works. These are the things that it’s going to be able to do, and these are the implications for us and our business and our reputation and our competitiveness.”
Neville Hobson: So I see an opportunity here for someone like Lee LeFever to come up with one of his really cool videos that explains in simple terms what quantum computing is.
Shel Holtz: I’ve got to go find myself a good explainer video — see if there is one out there that does a really great job of it. There probably is. Maybe Lee has, for all I know.
Neville Hobson: So, let’s continue with another computing topic, not directly connected to quantum but in a similar vein. We’re going to talk about vibe coding and what it means for communication leaders.
Every so often a piece of technology comes along that seems small on the surface but signals a much bigger shift underneath. Vibe coding is one of those moments.
On paper, it sounds like a technical trend: using AI to build software by simply describing what you want in natural language. No coding, no syntax, no engineering background needed. You just talk, and an AI generates a working prototype. Sounds wonderful.
In early November, it was named Word of the Year by Collins Dictionary. Of course it’s two words, but who’s counting? Anyway, it was chosen to reflect the evolving relationship between language and technology and how AI is making coding more accessible to a wider range of people.
This is not a coding story; it’s a future-of-work, future-of-skills, and future-of-organization story.
What makes this interesting for us is not the code; it’s what happens when anyone in an organization can create digital tools on the fly. A business analyst can build a workflow. Someone in HR can automate a process. A communicator can sketch out an app for an event or a campaign — all without waiting for IT.
Suddenly the boundary between people who solve business problems and people who write software starts to blur.
This has real implications for culture and communication. It empowers people in new ways, but it also introduces new risks. AI-generated code is fast, but it’s not always secure, compliant, or ready for production — or even necessarily working properly.
And as we know, when technology becomes more accessible, organizations need a stronger narrative on how to experiment safely, what the guardrails are, and when creativity gives way to rigor.
There is also a shift in skills. According to Cognizant, in the age of AI the most important capability is moving from problem-solving to problem-finding — being able to frame the right questions, articulate needs clearly, and work collaboratively with both humans and machines. That is a communication skill at its core.
So the story here isn’t about developers being replaced or apps being magically created. It’s about how work changes when AI becomes a conversational partner.
And it raises a bigger question: if every team can now build its own tools, what role do communicators play in shaping culture, governance, and the shared understanding of how organizations innovate? Big questions there, Shel.
Shel Holtz: It is a big question. There are big questions there.
I’ve been doing a lot of reading about vibe coding and listening to a lot of podcasts that talk about it. I have been so excited about it, I’ve been working on a proposal — completely unsolicited, no one at my company knows it’s coming — but it is for a vibe-coding training program for project engineers: the entry-level people on the building side of our industry.
Because right now, if they need something — say a dashboard, an app that creates a dashboard that pulls data in from various sources, or that allows you to plug data in and produce charts and graphs and the like — they have to open a ticket, and IT has to create it if they have the time. They’ll prioritize based on the urgency of the things that they’re working on, and you may not get what you want, and it may take a long time.
Now you can just do it yourself.
So I’m very excited about this, especially given the threat that entry-level jobs around all of the business world are facing from AI. They need to be redefined, because entry-level people have to be part of the mix — how do you develop those who are going to move into higher roles in the organization if they don’t start somewhere?
So it’s a rethinking of what those roles are, and enabling these people to create their own apps is one of them.
But now they would still have to submit that app for approval, because if you don’t have expertise in coding you may have done things that you’re unaware of that can create certain risks or problems, or it may stop working at some point. All types of things could go wrong.
I think vibe coding without any foundation in coding is fine for some very, very simple things. I think the more complex it gets, the more of that foundation you need.
While you were talking, I went and looked at what Christopher S. Penn had to say about it, because I’ve heard him talk about it a number of times both in his writing and in the video podcasting that he does.
He thinks that you do — if you’re going to be doing this in a serious way — need to have an understanding of the software development life cycle.
At a minimum, this is what he says: you have to be able to provide detailed instructions and guardrails to the machine. You have to know what you’re doing to prevent poor results, like a vague code request — it’s like asking an AI to write a fiction novel with little information. That would just result in slop, right? Same with code. You have to give it a precise enough series of prompts to get the output that you want.
You need to know not only when the solutions are right or wrong, but also whether they’re right or wrong in the context of the work.
He says best practices for vibe coding require a structured approach that relies heavily on planning, which maps to the Trust Insights Five P Framework — which is really good, go look it up at trustinsights.ai.
This structured method is essential to vibe-code well and includes steps like: spending three times as much time in planning as in writing the code, creating a detailed product requirements document and a file-by-file work plan, and integrating security requirements and quality checks.
And then, of course, if it doesn’t work right the first time, you can keep iterating — but you should have some understanding of debugging and know somebody who does in order to get it to do exactly what you want.
So I think for very simple stuff, yeah, you can just tell it, “Please create me an app that does X, Y, or Z.” But the more complex it gets, the more of a grounding in coding you’re going to need.
Neville Hobson: So that’s where guardrails and guidance and policies and procedures come into play.
But you know what’s going to happen — we saw it with ChatGPT — is that people are going to get hold of the tools to do this and just go ahead and do stuff themselves. That’s what’s going to happen, with the risks inherent in doing that for everything you’ve outlined.
I look at what I’ve done that I suppose you could call vibe coding. What I did on my websites, which run on Ghost — it’s not like WordPress in that you want a theme that you customize, dead easy, build a child copy and all that. Not with Ghost; you’re into the code.
So I used a combination of tools, including the excellent VS Code tools from Microsoft, but also a tool called Docker — astounding, running on Windows. But my “coding partner” was ChatGPT-5. I prompted it with what I wanted to do, and it wrote the code.
We tested it and nothing fell over, except for a few things like the CSS for styling, where some dependencies didn’t work for some reason; when it fell over, we went back and fixed it.
I was amazed, constantly, by being able to talk to the chatbot in plain English about what I wanted to achieve, and it then proposed how we find the solution to doing that, and then it wrote the code.
I couldn’t do that because I don’t know the code. I would have had to hire a developer or, if I were doing this properly in an organization, file a ticket to get support. I did this myself over a weekend, and I’m still truly amazed. It was an offline copy — all working, everything worked. We packaged it, uploaded it to the Ghost server, enabled it, and the live site just worked. All the changes were perfect; nothing was wrong by that point.
Now, that’s not necessarily the same as building a website for an event, or an app for an event. That would be interesting to see how that would work. So there are levels to all of this.
I think it’s finding the balance between: you have to follow these rigid guidelines if you want to build X for your role in the company, or you want to do something like a website or an app, where the guidelines are not so rigid — they’re still guidelines.
This is designed for a world where content creation and data analysis are becoming everyday skills, as is software creation. Yet I don’t disagree with your take on training at all, nor with what you quoted Chris Penn saying — they make complete sense to me.
But the reality, particularly in enterprise organizations and even more so in small- to medium-sized businesses, is: you’re just going to give it a go and see what happens. Risks and all.
Shel Holtz: Sure. It depends on whether you’re trying to do something very simple or something more complex. If it’s simple, you don’t need an understanding of the software development life cycle; you can just tell the AI, “Write me this app,” and if it’s simple enough you’re probably going to get something serviceable.
For the more complex stuff, you need to have a deeper understanding of the output you want, and you have to spend a lot of time planning so that you can give it the right information. It’s not that you sit back in your chair and say, “I think A, B, and C.”
You can work with the AI to develop that stuff, of course, but one thing I do at the end of virtually every prompt — not the really simple stuff like “How many Oscars did somebody win,” but the more complex prompts — is add, “Ask me questions one at a time that you need answered before you give me your answer.” Because it’s going to think of things that I haven’t thought of.
So yeah, I think my point is that whether you’re doing something simple that doesn’t require a lot of upfront work or you’re doing something more complex that does require a lot of upfront work, this is going to speed up the development of software immeasurably and have a big impact on how this gets done and by whom. That represents significant change in the current structures in organizations, I would say.
Neville Hobson: You’ve got it. So this is worth paying attention to as well. Get used to the phrase “vibe coding,” I would say.
Dan York: Greetings, Shel and Neville, and our listeners all around the world. This is Dan York coming at you today from Los Angeles, California, where I have been attending an event by an organization called the Marconi Society that has been looking at internet resilience: basically, how do we keep the internet functioning in the light of disruptions of various different forms?
Great conversations, great thinking. I look forward to talking a bit more about it in the future, when there are things that are useful to share with our listeners. But in the meantime, I want to talk about chat – specifically, two different platforms. First, if you’ve been paying attention for a while, you know that chat systems like WhatsApp, iMessage, Telegram, and Signal are all their own thing: you can only chat with people inside each one.
Well, the European Union passed something called the Digital Markets Act, or DMA, which requires chat systems that operate in the EU to roll out some form of third-party integration, so that other chat systems can interoperate with them.
So we’re starting to see a bit of what this could look like with the announcement from Meta that it will very soon be launching third-party integration between WhatsApp in Europe and two other messaging systems, called BirdyChat and Haiket. Now, if you haven’t heard of them, neither have I – and neither had the writer at The Verge who was putting the story together.
But the point is that they’re trying to make it so that chat systems can interoperate. We’ll have to see what this means. The important part to me was that WhatsApp is doing this in a way that ensures end-to-end encryption continues to work – that your data can’t be seen by other people when you’re using the messaging system – which for me is critical for the privacy and security of any kind of conversation I’m having.
So that is preserved in the system, and we’ll have to see where that goes. But speaking of end-to-end encryption: X, formerly known as Twitter – which I don’t even use anymore – is rolling out a new system to replace what we’ve always called DMs, or direct messages. I was pleased to see this.
The new system is called, brilliantly, Chat. It will have video calling, it will have end-to-end encryption, and it will have other features. It’s rolling out first on iOS and the web, and then it will be coming to Android and so on. So it looks interesting; there’ll be a new messaging component there.
So for those who are still using X, stay tuned: your messaging system may be changing to something completely different.
Switching gears: Mozilla announced an AI window for Firefox. There’s been a slate of AI browsers – I think I talked about some last month, and there have been other announcements – but this one isn’t available yet. It appears to be an interesting thing: a window separate from your main browsing experience that would let you engage with an AI assistant while you are browsing. Stay tuned; if you’re a Firefox user, this will be something you’ll be able to work with as you go along.
Switching gears again: WordPress is coming up on its final release for 2025. It’ll be WordPress 6.9, with a target delivery date of December 2nd, and it’s got a couple of interesting things. A lot of the release is focused on new APIs for developers, on performance improvements, and things like that, but there are two parts that will probably interest people listening to this podcast.
One is that it will introduce something called Notes, or block notes, which you can add in a similar way to comments in Google Docs: if you’re editing a doc, you can leave a note, and others can reply to it, close it out, whatever else. This capability is coming to WordPress so that if you’re doing collaborative editing with other people on your team, you can leave notes saying, “I don’t like this text,” or, “We should really include an image here,” and then other people can reply and work with them.
These notes are visible only within the editor interface. One of the big pushes for WordPress right now is collaboration, and this is part of that: enabling you to work with your team and leave notes for each other. So stay tuned – this is coming out December 2nd with WordPress 6.9.
The other interesting piece in this release is the ability to very quickly, and without a plugin or anything, change the visibility of blocks, mostly so that they’re visible in the back end, in the editor, but not out on the front end. The important part is that if you’re testing things – if you’re working on a new interface or new pages and you want to try something out – you can get it all ready to go in the editor. Then you can flip the visibility on, look at it, check it, and flip it off again; or if you’re preparing for a new announcement, you can have all of it ready to go on a page and just toggle the visibility of the blocks. Now, yes, there are other ways you could do this inside WordPress, but this is just a new way of working with it.
So those are two features of the upcoming release that I personally find interesting: the ability to toggle the visibility of blocks, and the ability to leave comments when you’re collaborating with other people. Switching to something completely different again: if you’ve been listening to me over the years, you know I’ve been following what are called low Earth orbit, or LEO, satellite systems, such as Starlink, and what they can do out there.
For about the last seven years, one of the other competitors has been Amazon’s Project Kuiper. One of the challenges, first of all, is that they’ve had issues launching their rockets, but they’re getting there: they’re getting their satellites up, and they’re getting ready to offer service.
Another challenge has been that people haven’t known how to pronounce it. Is it Kiper? Keeper? Cooper? Well, it was just an internal project name; it was never really meant to be the public brand. So Amazon is solving this by calling it simply Amazon Leo.
Now, what I find fascinating – first, kudos to them – is that we’ll now be hearing about SpaceX’s Starlink and about Amazon Leo. I find it rather clever, because of course people have been talking about satellite systems in LEO, meaning low Earth orbit, a specific range from zero to 2,000 kilometers, or about 1,200 miles, above Earth. So now when you talk about LEO systems, people will wonder: are you talking about LEO systems in general, or about Amazon Leo? Kudos to them for being clever enough to take that name and run with it. So stay tuned – more tech, more things coming out.
With that, they’re gearing up to really launch this service and provide a competitor for Starlink. So especially if you’re in an area with poor internet access, this may be an option at some point soon.
Finally, ChatGPT just announced something for all those people who work with typography and punctuation and who have liked their em dashes: you can now make it stop generating them. One thing about ChatGPT was that it was putting in a lot of em dashes – the longer dashes – and that was becoming a signal that something was created by AI, or by ChatGPT. Now you can turn that off, so the signal becomes a little bit harder to spot; but perhaps the people who like using em dashes will be able to start using them again.
We’ll see. That’s all I’ve got for you this month. This is Dan York – you can find more of my audio at [email protected] – and back to you, Shel and Neville. I look forward to listening to this episode. Bye for now.
Neville Hobson: Thanks a lot, Dan. Great report as always.
There was some stuff in there that was really quite interesting. I think the one I would comment on was your last topic — what ChatGPT has done with the em dash.
Shel and I talked about this in a recent episode, and it is quite extraordinary how people get so exercised and excited about how it “indicates without any shadow of doubt” that an AI wrote something, no matter whether you did or not. There are lots of opinions flying about that.
But the thing you mentioned I found quite interesting, that OpenAI has done this so that you can tell the chatbot not to use an em dash and this time it’ll work.
Well, I started doing that about eight months ago in the custom personalization box. One of the things I’ve told it to do is to avoid using em dashes and instead use en dashes, with a space either side. That certainly goes against all the rules of grammar people talk about in terms of how you should use these things, but I like that. I prefer that.
I don’t like em dashes at all — particularly where they touch the preceding and following words in the sentence. It doesn’t look right to me. Yeah, I know, it’s been like that for centuries, I know all that. But they did that.
So I thought, okay, does that mean my personalization command will actually work properly now? Because sometimes it does, sometimes it doesn’t. I have to keep reminding the chatbot to do this.
What struck me as well is that when OpenAI announced this, both OpenAI and Sam Altman himself, there was no statement about, “Okay, this is what will happen now: if you put an em dash in, it’ll change it to a normal hyphen or an en dash,” or what. No one said anything. And I can’t find anyone with an answer to that question. So that’s still the question: what’s going to happen?
Shel Holtz: Yeah, I’m of mixed mind on the whole dash controversy. I’ve been using dashes as a writer for more than 50 years. I started using them extensively when I was setting type as a part-time gig in college, and a lot of the technical manuals that I was typesetting had dashes in them.
The reason — and I’ve said this before on the show — the reason AI is using dashes is because in all of the stuff on the web that it hoovered up as part of its training model, there were lots and lots of dashes that humans created.
I have no issue with an em dash. I think this whole attack on the dash is absurd, and it’s from people who don’t know punctuation.
On the other hand, I look at a lot of outputs I see from AI — and I’m not talking about stuff I plan to use in publications, just answers to questions or research that I’m going to factor into, say, a proposal — and I see the dashes misused. They’re put in places where commas belong.
So from that perspective, yeah, I’d rather do the placement of the dashes or en dashes myself. I mean, I don’t remember what the rules were, but there used to be rules around when to use an em dash and when to use an en dash, right? I think those have largely fallen by the wayside.
Neville Hobson: Yeah, no, there still are rules – largely ignored, and my own usage goes completely against them.
But I find it’s a very good point you just made, because when I write — and this I find quite interesting — I’ll write a piece of text, say a first draft of an article, and I’ll run it through the chatbot to give me its opinion. And it will often “correct” commas and put en dashes in instead.
And I think: is this an American thing, or is it the kind of bastardization of the English language generally, that things are changing with variants of how people use the language, so it’s hard to know what correct syntax is now?
It doesn’t matter, in my view, as long as people understand what you’re trying to convey. Yet I recognize equally that to many people it is of significant importance. So this is not an argument that’s going to stop anytime soon, I don’t think.
Shel Holtz: No. And the other thing is, along with everything else contained in the training sets the models used, they also ingested the rules of grammar and punctuation. So I suspect at some level it’s actually using them correctly – just not the way we use them in current modern English.
And that’s why I will change a lot of them to commas if I’m going to extract something from AI output and use it in a proposal or in a research document.
Neville Hobson: So I suppose people like authors — and others, not just authors, but anyone who feels strongly about using dashes and who uses ChatGPT — I would say to you: put in a custom personalization line that tells the chatbot to use dashes, not take them out.
Shel Holtz: Yes, that’s absolutely right. And especially in technical documents now, because that’s where I saw most of them.
I want to give a shout-out to Robert Rose over at the Content Marketing Institute, among other ventures. This Old Marketing is a great podcast — Robert Rose and Joe Pulizzi. If you’ve never listened to that, I highly recommend it.
Robert has written an article called “Why AI Idea Inflation Is Ruining Thought Leadership and Team Dynamics.” And if you lead a content team, it probably feels less like a think piece and more like a documentary.
His core point is pretty simple: gen AI has made it incredibly easy for senior leaders and subject matter experts to generate ideas for content. Not thoughtful, worked-through concepts — just lots and lots of “We should do something on this”-type ideas. It’s like we turned content strategy into Netflix. There’s always something new in the queue, but more often than not, you don’t feel great about picking any of it.
This isn’t hypothetical. The Content Marketing Institute’s latest B2B content marketing trends report found that 95% of B2B marketers now say their organizations use AI-powered marketing applications. Ninety-five percent — that’s pretty much everyone.
And going back a bit, a previous CMI study found 72% were already using gen AI, but 61% of their organizations had no guidelines for it.
So we have this perfect storm: nearly universal use, very little governance, and leaders with what Robert calls “idea superpowers” that they didn’t earn the hard way.
You’ve probably seen this movie inside your own organization: an executive spends a weekend playing with ChatGPT and asks for “20 provocative points of view we should publish this quarter.” And Monday morning, your content Slack channel lights up with screenshots. None of these ideas are attached to actual budget, resources, or strategy — but because they came from the corner office and because they looked polished, they land on the team like assignments.
Robert’s argument is that this idea inflation doesn’t just create more work; it erodes trust between leaders and content teams. The strategists and writers become order-takers, constantly reacting to an AI-fueled idea fire hose instead of shaping a coherent editorial agenda.
Over time, resentment builds. Leaders feel like, “I keep bringing you ideas and nothing gets done,” while teams feel like, “You keep throwing spaghetti at the wall and calling it thought leadership.”
The data backs up that this isn’t just a workflow annoyance — it’s starting to show up in audience behavior. One study from last year, from Bynder, found that about half of customers say they can spot AI-written content, and more than half say they disengage when they suspect something was generated by AI. We referenced this earlier, Neville.
Another study published this year looked at brands using gen AI for social content and found that overt AI adoption actually led to negative follower reactions unless it was blended carefully with human input.
So the idea treadmill doesn’t just burn out your team; it risks flooding your channels with content that audiences increasingly mistrust.
At the same time, we’re seeing a massive shift on the supply side. Axios, working with Graphite, reported that the share of articles online created by AI jumped from about 5% in 2020 to nearly half — 48% — by mid-2025.
In other words, the content universe is experiencing its own inflation problem: a lot more stuff, not a lot more meaning.
So where does that leave content marketing leaders? Robert’s prescription — and I think this is where communicators really earn their pay — is not “turn the AI off.” It’s to reassert our role as editors of the idea layer, not just the content layer.
That starts with reframing the relationship with your thought leaders. Instead of treating every AI-generated list as a backlog to be cleared, treat it as the raw ore. You sit down and say, “Great, let’s pick one of these and go deep. Which of these ideas would you still fight for if AI hadn’t made it so easy to generate 20 others?”
This is where the leadership part comes in.
The CMI 2026 Trends Report — yes, we’re at the point where we’re looking at 2026 trends — makes the point that the teams who are winning aren’t the ones shouting “AI” the loudest; they’re the ones doubling down on fundamentals like relevance, quality, and team capability, and letting AI breathe more life into those efforts.
In practical terms, what does this mean?
It means putting a simple idea filter in place. If an idea doesn’t align with your documented content mission, target audience, and a defined business goal, it doesn’t make the calendar — no matter how clever the AI prompt was.
It means creating a shared point-of-view backlog where leaders can park AI-assisted concepts, but agreeing that only a small number graduate into actual content every quarter.
And it means being transparent with your team about volume: “We’re going to say no to more ideas faster, so we can say yes to a few that matter.”
There’s also a morale decision here. Other research shows a weird tension: a majority of marketers say AI makes them more productive and even more confident, but a lot of them also fear it could replace parts of their role.
If you’re leading a content team, how you handle idea inflation becomes a signal about your priorities. Are you using AI to respect people’s time and focus on better work — or are you using it to flood them with tasks they can never realistically complete?
And while Robert’s article is squarely aimed at content marketing, I don’t think it stops there. The same dynamics are starting to show up in internal communications, executive comms, even investor relations. Anywhere a leader can spin up “10 talking points for our next town hall” with a prompt, you’re going to see this idea inflation in practice.
If communicators don’t step in to slow that down and curate it, we risk overwhelming every stakeholder group with more, faster, shallower content — and training them to tune us out.
So I read Robert’s piece less as a complaint about AI and more as a leadership challenge. In a world where ideas are cheap and infinite, can you be the person who protects your team, your audience, and your brand from inflation?
Neville Hobson: Yeah, it’s a very good piece. I agree. I’m not sure I like the phrase “idea inflation” — it sounds pretty gimmicky to me, I must admit — but it does capture it quite well.
I found it really interesting reading Robert’s article where he talks about “why AI feeds the engagement crisis.” Now that’s a phrase I can get my head around: engagement crisis.
There are some to-the-point views here which make you think, “Absolutely right.”
For instance, he says when people hear the phrase “employee engagement” they tend to picture enthusiasm — people who are motivated, satisfied, and inspired by their work. But he says employee engagement means more than how people feel about their jobs; it’s also about how much meaning they find in the relationships that shape their work, and AI is causing those relationships to fracture.
“The dynamic between marketing leader and content practitioner, once a creative dialogue, has become transactional. The leader produces ideas, the practitioner packages them.” That’s in line with what you were saying. “Each side feels overextended, underappreciated, and increasingly indifferent.”
And I like how he progresses the thinking here, because you can picture this. “Nobody challenges ideas anymore because nobody loves or hates them enough to care about getting them right.” That’s a hell of an indictment, but I think it’s quite spot-on about some of the behaviors that are happening.
“In that scenario,” he says, “there is no right. When the origin of the idea and the expression of it both come from a machine, neither side can recognize originality or craft when they see it.”
These are alarm bells, to me, that are ringing — that leaders need to really pay attention to.
It’s difficult, though, because it reminds me of two or three cartoons I’ve seen recently on LinkedIn — different, but broadly the same — which show a kind of flowchart of an idea for a press release or an announcement you need to make, and it needs to be this.
So it goes to the next step; then someone says, “Actually, we need to make sure we include that.” Okay, fine, you do that. Then the CEO chimes in with six things he reckons need to be in there as well. And so it goes around all these various steps until you’ve got this bloated thing that gets to the final point in the cycle, with the person at the top of the loop saying, “This is terrible. This is rubbish. This isn’t telling a story. We need to be a lot simpler than this.”
The communicator — by the way, the smart person in this story — had kept the original draft: short, concise, three bullet points. So she submitted that, and everyone said, “Oh, that’s what we need; we’ll approve that.”
To me, that’s a great analogy for this. But someone’s got to recognize the bloat — the inflation of ideas, let’s say — that arises in situations where you’ve got so many people who’ve got their own stakes in the ground and their own agendas they follow.
This isn’t a criticism; it’s a recognition of reality in organizations.
So the communicator in that little story I just told was the smart person here. You’ve got to navigate this sort of thing when it happens. The marketing folks who get dumped on from the corner office — someone at that stage has got to recognize the likely trajectory of all of this and plan accordingly, so that when it gets around to the top of the circuit, they go back to the smarter idea.
It sounds easy saying that, doesn’t it? In reality, it’s not quite like that. Nevertheless, this is a leadership issue. This is not a marketing or content issue — this is a leadership and management issue, it seems to me.
Shel Holtz: It is, and I think it’s an opportunity for communicators to demonstrate some leadership. Because, as Robert says, it’s really easy for an executive over a weekend to get a model to produce 20 ideas for “thought leadership.”
That’s not thought leadership. We’ve talked about thought leadership fairly recently on FIR. This is subject matter expertise that brings new thinking, new angles, sheds new light on the situation. You have a unique perspective to share — that’s thought leadership. It’s generating content that makes people go, “Wow, I hadn’t thought about it that way before; now I’m thinking of it.” You’re leading thinking — that’s what thought leadership is.
I'm not sure that "Here's a list of 20 things Gemini came up with for me" is anywhere near thought leadership, unless you see one you actually have a unique perspective and expertise on and say, "That's a great one to talk about."
If that’s what you’re using AI for, great. But if you’re just copying and pasting that list and sending it to your communications team and saying, “Write these, these are thought leadership pieces,” that’s just going to erode trust in that leader and the organization they represent.
I’ve got no issue with generating lists in AI models — I do it all the time.
On our intranet we have a “Construction Term of the Week,” and I exhausted the list that our engineers sent us, and they’re not inclined to add more. So now I’ll say, “Give me 20 terms related to MEP” — that’s mechanical, electrical, plumbing — and I’ll pick one and that’ll be the term of the week. That saves me a lot of research. It’s a great use of AI, I think.
But if I were to say, "Give me 20 ideas for thought leadership that I can propose to my CEO so we can get a thought leadership article up on LinkedIn," man, I would never do that. That's a terrible idea, but evidently, lots of people are doing exactly that.
Neville Hobson: Plenty to think about here. So let’s move on to another topic with plenty to think about.
Question: is it okay to use AI-generated images for LinkedIn profiles?
Over the past few months, we've seen AI-generated headshots spreading across LinkedIn. I certainly have: ultra-polished portraits with perfect lighting, perfect posture, and, in many cases, a slightly uncanny resemblance to the person they represent. That description fits most of what I see.
Your first thought when you see it is, “They’re clearly AI-generated,” and you don’t necessarily have a critical take on it, but you note it.
I have to admit, I tried this myself recently — a few months ago, in fact. For a while, my own LinkedIn profile featured an AI-generated photo. It looked professional enough with uncanny realism, but the longer it sat there, the more uncomfortable I felt about it.
It wasn’t quite me. What if people thought it was me and later realized it wasn’t? I hadn’t said it was an AI-generated image created from a selection of actual photos of me. What would the effects be?
Eventually, I removed it and used a real photo.
That personal hesitation is exactly what Anna Lawler explores in a thoughtful LinkedIn article about the ethics of AI headshots. Lawler is Director and Head of Digital and Social Media at Greentarget, a corporate comms agency based in London.
She describes the pressure to have a sharp, executive-style image ready the moment a new role is announced — something many of us will recognize. AI offered her a quick, clean solution. But then came the real question: should she use it?
Her piece gets to the heart of what communicators are wrestling with right now — well, many communicators, I would say.
What does authenticity look like when technology can generate a version of you that is polished and accurate, but still artificial? Does using that image strengthen your professional brand, or does it introduce a small crack in trust?
What if you don’t disclose how the image was created? And does it matter if no one can tell the difference?
Anna’s LinkedIn post attracted many comments about whether to do this. One comment was blunt: “Just no. Not at all. Never.”
Another explored the idea a bit: “You’ve used an AI image of yourself which looks dead like you — so much that your dad couldn’t tell the difference, other than to say you look well. How different is this to putting a filter on a real photo of you? So no major harm done for a personal LinkedIn photo. But what happens when PRs and marketers start doing this on behalf of others?”
Another analyzed the situation: “Most images — portrait or otherwise — are subject to some form of post-production. It is similar to editing a paragraph of text. You take the original content and adapt it to fit the requirements of the medium, ensuring the tone and voice are appropriate. In the case of a photograph, a human may use Photoshop. In the case of text, they can do it in Word or use Grammarly. If the final decision of whether or not to accept the edits lies with the human, does it matter what method was applied to make them?”
If the purpose of a profile photo is to represent who you are, does an AI-enhanced or AI-created version cross a line? Does “close enough” count?
Anna makes a thoughtful distinction between personal use and corporate use — on websites or official materials, where misrepresentation risks are far greater. She also highlights the reputational and ethical factors that communicators must now weigh, because our profile photos are no longer just photos. They signal identity, credibility, and intent.
It raises a bigger question for all of us: as AI becomes more deeply woven into our professional lives, where do we draw the line between convenience and authenticity? And how do we guide our organizations through those decisions when the norms are still being formed?
Now I know you’ve got some views about this, Shel, so what do you think?
Shel Holtz: Hell yeah, I have some views on this.
I’ve stated before on the show and elsewhere that I think the line is around deceit. Are you trying to deceive somebody? And if the use of AI could lead somebody to be deceived, then I think you need to disclose. If not, I don’t think there is any compulsion to disclose.
What if I have a photo of me — and that’s what I use — but I use a service like Canva or Photoshop to remove the background and put in an AI-generated background? Is that okay?
AI is a tool. It’s just a tool.
We use tools for… I mean, we use photos — there was a time when there were no photos available. You had to hire an artist to paint your portrait if you wanted somebody to know what you looked like.
I think the utter prohibition that some people are suggesting on AI images on LinkedIn is, frankly, stupid. I disagree with it wholeheartedly.
My profile picture on LinkedIn is AI-generated. Now, why did I do that?
When I started at Webcor in 2017, there was a professional photographer who was taking everybody’s photo, so your profile photo on the intranet directory was consistent and professional. I used that everywhere for about six and a half years. Then I lost 70 pounds, and frankly, I didn’t look like that anymore.
I didn’t have access to a professional photographer through work, and I didn’t have the time to go sit for a portrait. So I did that thing where — and I didn’t use one of the paid services; I think it was Gemini — I gave it 20 headshots of me looking the way I do now, post-70-pound loss, and I said, “Aggregate these into a professional headshot.”
I had to do it eight or 10 times before I got one that actually looks like me, where you can’t tell the difference. And that’s the one I’m using.
Is it misrepresenting me? No, it’s not. It looks like me, and I am fine with that. I don’t think I’m deceiving anybody. I don’t think I’m pulling the wool over anybody’s eyes. It’s me.
I don’t have any issue with that at all, and I can’t imagine an argument that would convince me otherwise.
Neville Hobson: No, I get you 100% on that, Shel.
In my case, I mentioned I had an AI-generated image as my LinkedIn profile picture, which I removed. There’s now a normal shot; it’s not as good, in my view, as the one I took down.
But that same picture, large size, I’ve got on my About page on my website. And there’s nothing there saying it’s AI-generated. People I’ve shown it to — only about four or five — couldn’t tell the difference that it wasn’t real when I told them it was AI-generated.
So your point about deceit is a very valid one.
If I put a picture of me up there looking slightly thinner maybe, with fewer age-driven gray hairs appearing, and I made myself blond maybe, changed my eye color or something — that’s not me at all. That, to me, would cross the line.
But on the other hand, I entirely get the illogic, if I can use that word, of people who are critical about this. That's part of the platform you're on, and people will judge you.

Now, I strongly believe I don't much care what people think about me in that sense, but this can have impacts.
I don’t want to do something that stimulates that kind of discussion or opinion-forming or commenting. And people are doing that a lot.
So to me, it’s simple: this is not a huge deal, to have an AI-generated image up there, when I can just have a normal pic that I take with my webcam and touch it up in Photoshop — which I do. I had one previously where I changed the background because I didn’t like the background.
That happens all the time. That’s not deceit. Nevertheless, there are some things you might want to take a stand on. This isn’t one of them for me — “I’m not going to use it” or “I am going to use it.”
So why have I kept it on my blog, you might ask? That’s part of a simple experiment. No one’s noticed or commented, and it actually fits the way I want to portray myself in the context of what I’ve written about myself on that page.
Shel Holtz: Thematic consistency.
Neville Hobson: Yeah, that’s different from using it on LinkedIn, because that’s a wholly different description on LinkedIn. So I’ll keep it up there until someone screams loud enough, saying, “You’re a fake, you’re deceitful,” which I don’t believe is going to happen.
Shel Holtz: The camera is a tool. A photo of you is not you; it is a representation of you that was captured by the camera. What if the white balance was off? What if the depth of field was off? There are so many things that a camera captures that are inaccurate or inconsistent.
AI is a tool. In five years, no one’s going to be having this discussion. It’s going to be so common, and the outputs are going to be so spot on that this isn’t even going to be an issue.
I just think if people are talking about this, they need to find more fruitful things to spend their time talking about.
Neville Hobson: This is always going to be here, and it depends on how you want to judge it.
But to me, there's another thought to throw into the mix here, which we've touched on previously: this is not just about a photo. There's more to it than that. This is about your identity. This is about your credibility. This is about how others perceive you. That does matter — to varying degrees, depending on the industry you're in, how you portray yourself, and the people you're connected with.
So it’s a preview, I suppose you could argue, of wider ethical decisions that we must make as AI is embedded everywhere — until it gets to the point, as you say, where no one’s talking about this anymore. We’re not at that point yet.
Shel Holtz: Maybe I’ll take my LinkedIn portrait and have the AI generate it in the style of a Pixar 3D animated movie and see what people say.
Neville Hobson: Well, you used to have a cartoon up there back in the early days.
Shel Holtz: I did. That was a service that would take your photo and turn it into a cartoon, an illustration. It was a service that used freelance artists. They would parcel it out to one of them. It was pretty cheap; you got it back in multiple file formats. It was great.
Neville Hobson: There you go.
I think I can answer my own question about why I’ve kept it on my blog, because the blog serves multiple purposes. It’s no longer a business site — I’ve changed what I do. It’s much more a personal site that’s intermingled with business. That’s different to LinkedIn, which is a social network with a business focus — that’s different. So that’s why I keep it up there, I guess.
Shel Holtz: All right. So if an executive has their photo taken and they have a makeup artist work with them, is that an accurate representation of them? Do they need to disclose that they were wearing makeup for this photo?
Come on. Let’s talk about more serious things, folks.
Neville Hobson: Like I said, logic is not part of this discussion; it’s emotion-driven. This is again a reflection, I think, of accessibility to ways to voice your opinion if you have one — and everyone has one, and they are voicing it.
Shel Holtz: Clearly. Well, let ’em.
Neville Hobson: I say thank you to Anna Lawler because that prompted this. She wrote the piece at the beginning of the year, but it did prompt all of this in my mind. I think it’s worth reading, so there’ll be a link to it in the show notes.
Shel Holtz: Well, I read an article recently with a pretty brutal headline: “Your Staff Thinks Management Is Inefficient. They May Have a Point.” This was in Inc. magazine.
It’s just the latest in a long string of big changes that employees feel are being done to them rather than with them.
The article by Bruce Crumley leans on new data from Eagle Hill’s 2025 Change Management Survey. In the past year, 63% of U.S. workers say that they’ve been through significant change: tech like AI, new products, return-to-office shifts, headcount changes, cost-cutting, cultural changes, acquisitions. But only a third of them think those changes were worth the effort.
A lot of them say their efficiency actually went down, their workload and stress went up, and the supposed innovation never really materialized.
Now, when Eagle Hill digs into the “why” around this, the picture gets even more familiar. Employees say management is picking the wrong priorities, not managing the rollout well, not supporting people as they adapt, and not monitoring how the change actually lands.
Only about a third feel leaders really listen to their input on what needs to change. Forty percent say they’re basically ignored.
The line that jumps out for communicators is Eagle Hill’s conclusion that the key to successful change is not what you change, but how you change — and that change is experienced at the team level, not somewhere on the org chart.
Now, layer AI on top of that. From the employee perspective, there’s a pretty consistent story emerging: they’re interested in AI, but they don’t feel included or supported.
Eagle Hill’s tech and AI research found that 67% of employees aren’t using AI at work yet, but more than half of those non-users actually want to learn about it. At the same time, 41% say their organization isn’t prepared for the rise of AI.
Workday’s global survey paints a similar picture. Only about half of employees say they welcome AI in the workplace, and nearly a quarter aren’t confident their organization will put employee interests ahead of its own when implementing it.
Leaders are more positive about AI than employees are, but they share that same lack of confidence about whether the rollout will be done in a people-first way.
And there’s a trust gap on top of that. Gallup finds only 31% of Americans say they trust businesses to use AI responsibly. Over two-thirds say “not much” or “not at all.”
Let’s make it even spicier. A recent global study from Dayforce found that 87% of executives are using AI at work compared with just 27% of employees. Execs are out ahead, using AI heavily, while a big chunk of the workforce is still on the sidelines — worried, undertrained, or just not invited in.
So if you’re an employee sitting in the middle of all this, what does it look like?
You see leadership trumpeting AI as the future. You get more tools, more dashboards, more "transformations," as they call them. Your workload goes up during rollout. Your voice doesn't seem to shape the priorities. And you're told it's all about efficiency and innovation while your own day-to-day experience feels more chaotic.
“Management is inefficient” starts to sound like a very reasonable conclusion.
That’s where communicators can earn their keep, especially around AI.
First, we can make the “why” legible. A lot of AI change stories stop at “This is cutting-edge” or “This will make us more efficient.” The Eagle Hill findings are basically a giant flashing sign that says that’s not enough.
We need to tell a story that starts with the team: What pain point is AI solving for you? What are you going to stop doing because this is now available? What does success look like in your specific function, not just on an earnings slide? Helping leaders anchor AI messaging in outcomes people actually care about is step one.
Second, we can bring employees into the design of the change rather than just leaving them on the receiving end. That means building in genuine listening — pulse surveys that ask, “What’s getting harder as we roll this out?” Small-group sessions where teams can talk about how the AI actually fits into their workflow.
Storytelling that highlights not just the shiny pilot, but the tweak that came from frontline feedback. And then — and this is the part we skip so often — closing the loop and saying, “Here’s what you told us, and here’s what changed.”
Same as surveys, right? We issue surveys, we get the feedback, and maybe changes are made — but we don’t tell anyone. If 40% of people feel unheard during change, that loop is our job.
Third, we can equip managers to be translators instead of amplifiers of confusion. Most people don’t experience “the organization”; they experience their manager. So when Eagle Hill says the team should be the core unit of change, that’s a giant invitation to communicators to build manager toolkits around AI.
Simple talk tracks: “Here’s how to explain this change in two minutes.” “Here’s what to say if people are worried about their jobs.” “Here’s how to be honest about the short-term workload bump.”
FAQs, slides, even suggested phrases that sound human instead of legalistic — that’s all in the comms wheelhouse.
Fourth, we can push for pacing that matches reality and help leaders talk about trade-offs. A lot of the resentment in these surveys comes from people feeling like change is something piled on in addition to their regular day jobs.
Eagle Hill’s advice to slow down, phase changes, and temporarily ease workloads isn’t just an HR tactic; it’s a narrative opportunity.
Imagine the difference between: “Here’s another AI tool, please adopt it,” and: “For the next eight weeks, we’re pausing X reports and Y meetings so you have time to learn this new workflow. Here’s the schedule. Here’s where to get help.”
We communicators can frame that pacing as a deliberate, respectful choice.
And finally, we can insist that AI change stories include trust as a first-class citizen, not a footnote. That means naming the concerns, not dancing around them.
Employees are reading headlines about bias, surveillance, job loss. They’re seeing that most people don’t fully trust businesses on AI. We can help leaders say out loud, “Here are the guardrails; here’s what we will use AI for, and here’s what we will not. Here’s how we’ll measure the impact on workload. Here’s how you can challenge a decision if you think AI got it wrong.”
That transparency is the only way to close the trust gap.
If we don’t do any of this, AI just becomes the latest exhibit in the “management is inefficient” file — another transformation employees experience as stress without payoff.
If we do our jobs well, AI can actually become a proof point that this time, the organization learned from the last wave of change — that it listened, it paced itself, it treated teams as the unit of change, and it used communication as a way to share power, not just spin the story.
Neville Hobson: I have to admit, I’m quite shocked to hear the picture you’ve painted there — that it’s so bad. Is it truly that bad?
Because this is actually, to me, like what you just said, particularly your concluding part — this is Leadership 101, for Christ’s sake, and yet so many people aren’t doing this.
Shel Holtz: Well, if the research is accurate, then it really is that bad.
Neville Hobson: What the hell is going on?
This actually touches on everything we’ve said so far in this episode — what leaders need to do in certain situations. Don’t allow it to be like this.
The whole idea of “management” being all up to speed with AI while employees are completely in the dark and don’t have a clue how to use the tools — I find that truly hard to believe as a significant factor across the board.
That doesn't gel with some other research I've seen here in the UK — and mostly in the US — where the issue is getting leaders to embrace it, while employees are out there experimenting, which is why proper guardrails and guidance aren't in place.
So this is a pretty shocking state of affairs, it seems to me, Shel.
Some of the things here are so obvious that I just wonder why people enable this situation to be the norm, if it is as portrayed in this article.
There are a lot of tips though — I have to say everything you need to know about what to do is here. So pick this up and read it, for God’s sake, please.
Shel Holtz: I remember early in my career, I was at a Ragan Communications conference and a CEO was speaking. He said he believes that every CEO, as soon as they sit in the CEO chair, gets hit by a “stupid ray” aimed right at their head — because they stop listening.
They think, “I’m the CEO. I’m here because I know everything, and I can make these decisions in a vacuum. I am at the top of the food chain.”
I think that’s happening right now. If you look at the number of layoffs that are happening, and AI is a factor in these — they’re coming right out and saying it. They’re not hiding it; they’re saying, “AI can make us more efficient.”
They’re not talking to the teams that do the work, to find out, “If we end up with three people instead of 10 because you think AI can do the work, we happen to know that’s not the case, and this is going to make us less efficient.”
There’s not a lot of listening going on in these decisions being made. There’s not a lot of querying of the teams to find out exactly how they can use AI to be more efficient and what that means for the staffing of the team.
I think there are executives who say, “I have this tool, I’m in charge, I’m slashing the workforce.” I think that’s what’s happening. And I think that’s why so many employees think that the leaders are now inefficient.
Neville Hobson: Well, what's missing completely — as we've discussed in previous episodes, and indeed thinking back to our interview with Paul Tighe from the Vatican — is the humanity.
It’s missing, “How does AI augment and improve how people do their jobs, not replace them? Not ‘become more efficient, therefore we don’t need so many people here.’” That voice is missing.
To me, that’s the essential part. You could extend that thought: that voice is not just about AI, although that’s a huge element because it is permeating organizations, and in many cases not in a good way, because the conversations are all about becoming more efficient and not needing people. It’s part of that bigger picture.
This article does talk about, in its concluding parts, “Change is experienced collectively, not individually.” That means a team, not the org chart, must be the core unit of change.
They talk prior to that paragraph about how the majority of modern workplaces are shaped by the teams that drive most activity and success. Initiatives come from the top, but success relies on the base embracing them. These are the fundamentals of leadership, surely.
I’ve noticed here — as an aside — that in some of these things you hear about that are going wrong in organizations, the people leading are just damn incompetent. Some of the speeches and things they say in public exhibit nothing but utter incompetence, and they should be fired.
That’s a bigger story, frankly, but it’s part of it. The most useless people leading these organizations are dragging them down, and the employees are the ones who are suffering. I’m straying into big-picture politics and opinions, but nevertheless, that’s what you see.
Shel Holtz: I'll join you in straying there.
It seems clear to me that a lot of leaders are abdicating the principles of leadership to the exuberance they feel about the potential for AI, and they’re just running with it. I don’t think that’s going to bode well for the performance of their organizations, especially when they’ve lost the trust and confidence of the people who are expected to execute on all of this.
Neville Hobson: So on that note, let's hope the next episode will have a lot of good news.
Shel Holtz: I sure hope so.
But that'll be a 30 for this episode of For Immediate Release. We'll return with our next long-form monthly episode toward the end of the month; we're planning on recording it on Saturday, December 27th, and releasing it on Monday, December 29th.
Until then, go back to the beginning of the episode and learn about all the ways that you can comment. And we will have our midweek short episodes beginning in a week or so.
And until then, that will, in fact, be a 30 for this episode of For Immediate Release.
The post FIR #489: An Explosion of Thought Leadership Slop appeared first on FIR Podcast Network.
For the second year in a row, Coca-Cola turned to artificial intelligence to produce its global holiday campaign. The new ad replaces people with snow scenes, animals, and those iconic red trucks, aiming for warmth through technology. The response? A mix of admiration for the technical feat and criticism for what some called a “soulless,” “nostalgia-free” production.
Shel and Neville break down the ad’s reception and what it tells us about audience expectations, creative integrity, and the communication challenges that come with AI-driven content. Despite Coke’s efforts to industrialize creativity — working with two AI studios, 100 contributors, and more than 70,000 generated clips — the final product sparked as much skepticism as wonder.
The discussion explores:
Why The Verge called the ad “a sloppy eyesore” — and why Coke went ahead anyway
The sheer scale and cost of AI production (and why it’s not necessarily cheaper)
Whether Coke’s campaign is marketing, corporate signaling, or both
How critics’ reactions reflect discomfort with AI aesthetics in emotional brand spaces
Lessons for communicators about context, authenticity, and being transparent about “why”
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, November 17.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson:
Hi everyone, and welcome to For Immediate Release, episode 488. I’m Neville Hobson.
Shel Holtz:
And I’m Shel Holtz. Coca-Cola is back with a holiday spot created using AI for the second year running, and the blowback is about as big as the media buy.
If last year’s criticism centered on uncanny humans, this year they tried to sidestep that by leaning into animals, snow, and those iconic red trucks. The problem is that a lot of viewers still found the whole thing visually inconsistent and emotionally hollow — more of a tech demo than Christmas magic.
The Verge didn’t mince words, calling it a “sloppy eyesore.”
This wasn’t a lone creative prompting a model in a dark room. According to The Verge, Coke worked with two AI studios — SilverSide and Secret Level — involving roughly 100 contributors. So when people say AI is taking work away from humans, this example complicates that argument. The project generated and refined over 70,000 clips to assemble the final film, with five AI specialists dedicated to wrangling and iterating those shots.
If you think of AI work as cheap and easy, that scale tells a different story. This was massive, industrialized production. Despite all that, audience reaction has been harsh. Delish collected consumer responses labeling the ad “soulless,” “nostalgia-free,” and — my favorite phrase — “intentional rage bait.” In other words, people felt provoked, not moved.
The general sentiment is familiar: “Just bring back the classic trucks or polar bears and let real filmmakers work their craft.” The level of blowback reflects a mainstream discomfort with AI aesthetics invading a beloved ritual.
So why is Coke doing this again? Partly for speed and efficiency, sure — but the more interesting rationale is signaling. As Forbes argues, this isn’t just marketing, it’s corporate communication: a message to investors and partners that Coke is a modern operator experimenting across its value chain. In that sense, the ad is a press release in moving pictures — “We’re innovating.”
Whether consumers cheer or jeer, the signal still gets sent.
For communicators, I see three takeaways.
First, scale doesn’t guarantee soul. You can throw 100 people and 70,000 clips at a film and still end up with something that feels off. Craft and continuity remain stubbornly human problems, and current video models still struggle with temporal consistency and art direction.
Second, context beats novelty. Holiday ads are about rituals and memories. When the urge to adopt AI clashes with audience expectations for warmth and authenticity, “innovative” can come across as “indifferent.” If you’re going to bring AI into sacred brand moments, you need strong creative guardrails — and maybe keep flagship storytelling human-first until the tools catch up.
Third, be explicit about your “why.” If your real audience is Wall Street or prospective partners, say so — ideally without sacrificing the consumer experience. Coke’s narrative of blending human creativity with new tools can work, but only if the end result still feels like Coca-Cola. Otherwise, you’re asking consumers to bankroll your R&D with their attention during the most sentimental time of the year.
These trucks will keep rolling — and so will the debate — until the models solve for continuity and feel. Brands risk trading wonder for workflow, and audiences know the difference.
That said, I watched this ad last night during Monday Night Football. Looking at it through that lens, I didn’t see what the critics were talking about. I suspect most of the audience didn’t either. The vast majority probably aren’t aware it was generated with AI and didn’t see any problem with it. I think the hypercritical responses are mostly from people who are following the AI conversation closely — and maybe looking for an excuse to slam something that wasn’t made by human creators.
Neville, what do you think?
Neville Hobson:
I watched the video on YouTube — both the global version and the one Coca-Cola uploaded for European audiences. Honestly, I couldn’t tell the difference. They’re exactly the same length. Like you, I thought it was well done.
It was pretty clear to me within a few seconds that it was AI-generated — not because it looked AI-generated, but because of the scale and scope. You just know they’d use AI for something like this.
Coke has used this theme for years — the trucks, the snow, the feel-good singing. This time, there aren’t any humans front and center; it’s all animals. But as storytelling, I thought it worked.
That said, I did see some severe critiques, particularly from design industry voices. Creative Bloq, for example, called it an example of “how a company risks decades of hard-won brand equity through the use of nascent tech that’s still not up to the job.” I think that’s a bit unfair and shows a lack of understanding of what Coke was really trying to do.
There’s also a fascinating behind-the-scenes video Coke posted. It’s narrated by AI voices — the same ones from NotebookLM, actually — so it’s an AI explaining an AI. And the prompts they show are incredible: dozens of paragraphs for a single shot. This wasn’t a one-line “make a Christmas ad” job.
That explainer reinforces your Forbes point — this could be as much about corporate signaling as marketing. Personally, I see it more as a brand experiment than a corporate ad, but I can see both perspectives.
And yes, some critics are inevitably Coke detractors. One UK designer, Dino Berberich, posted screenshots showing technical errors — missing truck wheels, misaligned shots, and so on. Maybe Coke fixed those later, maybe not. But if they take that kind of feedback seriously, it’ll be invaluable.
Overall, I think it’s what you’d expect from Coke. Set aside the fact that it’s AI — it’s actually quite good. It continues the “Real Magic” theme they’ve been running for years. I remember one a couple of years ago with paintings in an art gallery coming to life when they got a Coke — also beautifully done.
So this feels like the next step in their evolution. Most viewers won’t realize it’s AI unless they’re already thinking that way. Awareness is growing, but the average person just sees a nice Christmas ad.
Of course, we’re now in a world where people start by asking, “Is this AI?” before saying, “Wow, what a great image.” That mindset can distract from the story — but it’s part of the landscape now. This kind of work will only get better, and Coke is helping to move it forward.
Shel Holtz:
Yeah, I agree. And if you look at Berberich’s LinkedIn post, you can see the issues he points out, but that’s not how most people watch ads. They’re not stopping every frame to analyze wheel placement. They’re watching during a football game or between shows. Most people just see a Coke commercial with some fuzzy bunnies.
One critique I read said the ad couldn’t decide whether it wanted cartoony or semi-realistic animals. I didn’t notice that. If you go in looking to criticize AI, sure, you’ll find something. But again, that’s not most people.
The YouTube comments are full of people saying things like “I’ve never wanted a Pepsi more in my life.” But honestly, nobody’s switching brands because of an ad like this. People drink Coke or Pepsi based on taste, not commercials.
As for Forbes’s point about corporate signaling — I don’t think this was meant as a corporate ad, but rather a way to say, “We’re embracing AI.” And the fact that they released a behind-the-scenes explainer reinforces that. They’re telling the world they’re leaning into this technology, iterating, and getting better at it.
You know, I don’t remember this kind of backlash when animation shifted from hand-drawn to CGI. That shift also displaced artists — the inkers and colorists who worked on traditional cels. This feels like a similar transition.
You still have people giving thought to story, design, and imagery — but the tools have changed. Does it have the same soul as a Pixar film? No. But then, Pixar doesn’t have the same soul as early Disney animation either. Time marches on. Deal with it.
Neville Hobson:
Exactly. And a lot of the negativity is just the nature of online discourse these days. Anything posted publicly attracts critics, trolls, and nitpickers. There are some valid points in the mix, but they’re hard to find amid the noise.
The explainer video also includes a section showing how Coke evolved its Christmas ads — from sketches to animation to AI-rendered realism. It’s fascinating to see how deliberate that process was. Again, Coke released this publicly, which supports your point: this is about transparency and experimentation.
So yes, critics have a right to their opinions, and some make constructive points. But for most people — what we’d call “Joe Bloggs” here in the UK — it’s just a nice Christmas ad. They’re not thinking about AI strategy.
From real trucks to AI ones, Coke is pushing creative boundaries. Some say they should’ve shot it live and hired more people, but there’s no crime in trying something new. They’re pushing the envelope, and I think they’ve done a pretty good job.
Shel Holtz:
And to reiterate: they did employ people. Two studios, probably dozens of professionals. I doubt they saved much money doing it this way. They’re just moving forward with the technology — and that’s the point.
And that will be a 30 for this episode of For Immediate Release.
The post FIR #488: Did a Soda Pop Make AI Slop? appeared first on FIR Podcast Network.
In this episode, Chip and Gini discuss the growing concerns surrounding AI in the agency world. They highlight the irrational fears and cyclical nature of technological disruptions, drawing comparisons to social media and content marketing trends of the past.
The hosts argue against the notion that agencies should discount services due to AI efficiencies, emphasizing that AI should be seen as a tool to enhance productivity and strategic value rather than a cost-cutting measure. They stress that agencies should focus on delivering more value and maintaining regular client communication instead of simply protecting existing revenue.
The discussion also touches on the importance of transparency in AI use without oversharing minute details. Finally, they underscore the benefit of quarterly planning to align agency efforts with client business goals, thus fostering stronger client relationships and ensuring mutual success.
The post ALP 288: AI myths agencies must avoid appeared first on FIR Podcast Network.