FIR Podcast Network

The FIR Podcast Network is the premier podcast network for PR, organizational communications, marketing, and internal communications content. Each of the FIR Podcast Network's shows can be accessed individually. This is the EVERYTHING Feed, which gets you the latest episodes of every show in the network.

  • 25 minutes 54 seconds
    FIR #510: Should Companies Embrace Shadow AI?

    Employees have long found ways to use software tools to get the job done, even when those tools are not approved. It's called Shadow IT, but ever since generative artificial intelligence hit the scene in 2022, employees have adopted a new version: Shadow AI. The company approves Microsoft Copilot, but employees opt to use their smartphones or personal laptops, along with their personal accounts with ChatGPT, Gemini, Claude, Midjourney, or whatever best suits their needs.

    For most companies, this is a problem to be addressed through repeated policy announcements and vigorous crackdowns. One company, though, took a different approach. In this short, midweek FIR episode, Neville and Shel outline what the company did and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Shel Holtz: Hi everybody, and welcome to episode number 510 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson. There’s a quiet tension playing out inside many organizations right now. On one side you have leadership teams, IT, legal, and compliance, all trying to put structure, governance, and control around how artificial intelligence is used at work. On the other side you have employees who’ve already moved on. They’re not waiting for official tools. They’re not sitting through pilot programs. They’re not asking permission. They’re opening ChatGPT on their phones. They’re using Claude in a browser tab. They’re experimenting quietly, often invisibly, finding ways to make their work faster, easier, and sometimes better. And in many organizations, this shadow AI behavior is still being treated as a problem — something to restrict, monitor, or shut down. It’s a topic Shel and I discussed on this very podcast in episode 419 nearly two years ago, and it hasn’t gone away.

    Neville Hobson: In fact, recent data suggests it's accelerating. A study last November by BlackFog and Sapio Research found that nearly half of employees surveyed in the UK and US are using unsanctioned AI tools. Even more striking, 60% said they would take security risks with those tools if it meant meeting a deadline. So this isn't fringe behavior; it's become normal. An article in the Harvard Business Review this month argues that instead of treating unauthorized AI use as a compliance issue, organizations should see it as a signal: a sign that people are already finding value in these tools, even if the organization hasn't caught up. We'll explore that idea in just a moment.

    Neville Hobson: The article calls this the hidden demand for AI inside your company. And when you look at it through that lens, the picture changes quite dramatically. Because instead of asking, “How do we stop this?” you start asking, “What are we missing?” The piece goes further than theory. It looks at what one organization actually did when it recognized this dynamic: BBVA, a Spanish multinational financial services company with more than 125,000 employees. Rather than clamping down on shadow AI use, they moved quickly to provide a secure enterprise environment. But more importantly, they didn’t try to control everything from the center. They took a different approach. They identified and empowered what they call “champions” and “wizards” — the people already experimenting, already curious, already building things. They created a network, a community of practice, a way for ideas, use cases, and practical solutions to spread peer to peer across the organization.

    Neville Hobson: And the results, at least as reported, are striking: thousands of employees actively using AI tools, thousands of internally created applications, and measurable time savings of hours per person every week. But perhaps the most interesting part isn’t the numbers — it’s the philosophy behind it. The idea that successful AI adoption doesn’t start with a perfectly designed top-down strategy. It starts by recognizing that innovation is already happening, just not where leadership expects it. So the question becomes: do you try to control that energy, or do you find a way to harness it? And that opens up a much broader conversation, one that goes well beyond technology. It touches on leadership, trust, and culture — on how change actually happens inside organizations. And, importantly for communicators, on how you surface, legitimize, and guide behavior that may already be happening under the radar.

    Neville Hobson: Because if employees are already using these tools — and most evidence suggests they are — then silence or restriction alone isn’t really a strategy; it’s a gap. So in this conversation, we want to explore that gap. What shadow AI really tells us about organizations today, whether the BBVA approach is something others can realistically replicate, and where the risks still sit, because they have not disappeared. And we should be clear: BBVA may be an outlier. It’s a highly data-mature organization with strong leadership alignment. Many organizations don’t have that foundation. So the question isn’t just whether this works — it’s whether it can work anywhere else. And what that means for the future of work, and for the role communicators play in shaping that future. Shel?

    Shel Holtz: Well, a few thoughts, starting with the fact that BBVA has the financial resources to provide a secure environment for those tools that employees are using. There are many organizations whose IT budgets are razor thin and don't have those resources, so they would need to figure something else out. But I think there's a caution here worth raising. The numbers from BlackFog are real, even if the framing from the Harvard Business Review is optimistic: 34% of employees using free versions of tools when paid, approved versions exist; 58% of unsanctioned users on free tiers with no enterprise protections. The reframing from threat to signal doesn't eliminate the exfiltration risk; it changes how we need to respond to it.

    Shel Holtz: Communicators should be careful not to let the BBVA-style narrative become an excuse to ignore governance. The right frame is: harness the demand, don’t suppress it, and build the governance at the same time. Employees using unsanctioned tools and putting secure data and company information into them — that’s a governance risk, and I don’t think we can ignore it. I mean, I think what BBVA did is great, and I think they baked it into some governance while looking at a new approach they could afford to take. But for many organizations, governance is still a requirement.

    Neville Hobson: Well, I agree. It's important, and it's not something to ignore by any means. I think, Shel, you fleshed out the survey I mentioned a little, and it's useful to have that level of detail. But the big question for me is: if this is the picture in many organizations, then according to that survey, compared with earlier data, this is getting worse, or rather, it's happening more frequently. People are just going ahead and using what works for them as opposed to what's official. What is that a symptom of? Maybe a lack of trust? It's probably a mix of things. And to me, the communicator's role here seems to be, on the one hand, to help people understand what the tools can do for them, and on the other, to help the organization understand that this issue needs addressing. People aren't using the approved tools. They're doing stuff on their own, and that isn't good.

    Neville Hobson: You mentioned security risks. The Harvard article goes into some detail about that, as indeed do the people who conducted that survey. You can just picture the severe risk. We've seen examples in recent months of organizations that have suffered from unauthorized use of unapproved software tools; not necessarily generative AI tools, but software certainly. And it's a big deal. So, the question of whether you try to control all this and look for ways to stop it: we asked that very question two years ago in our conversation, and we could probably just insert the recording from then and replay the answer. But let's talk about it. Personally, I don't think they should try to stop it. That's a fail. There's no win in that at all, certainly not for the organization. So how would communicators go about that, do you think?

    Shel Holtz: Well, I’m not suggesting that organizations crack down on this and become Big Brother, looking at the tools that people are using, especially when they’re using them on their personal phones or personal laptops. But there are definitely things communicators can do. The first is to surface and amplify the internal use cases — not just the fact that people are using these tools, but what they’re using them for. When the security people and the legal people find out that this is actually driving effective work product from these employees, I think there might be more appetite for figuring out a way to bake this into the governance documents and policies the organization has established.

    Shel Holtz: And I think giving employees permission narratives — telling them it’s safe to experiment, letting them know how to do it, and suggesting where the guardrails are — matters. So if you are using shadow AI, here are the things to be careful about. Let them know what the risks are and how to avoid falling into those traps. Communicators can also translate the IT and legal guardrails into plain language that doesn’t read as prohibition, because prohibition just leads to negative thoughts from employees about the organization, and then they’ll just continue using what they’re using. And then there’s collecting and routing the demand signal back to leadership. Why are employees using these when there are approved tools around? What are the advantages? So that leadership can make investment decisions that match the patterns of usage employees are actually engaged in. There’s a lot of work here for communicators that goes beyond simply saying, “Don’t do this.”

    Neville Hobson: Agreed. And in fact, you can learn from much of what BBVA did, even if you’re not an organization with that established foundation and 125,000 employees. They did things most companies aren’t going to be able to do. For instance, they reached an agreement with OpenAI and deployed a customized version of ChatGPT Enterprise in a secured, exclusive cloud just for the company. The reasoning is interesting. What the Harvard Business Review report says is that the strategic decision was clear: it was more dangerous to have unmanaged, hidden AI usage than to rapidly deploy a managed, secure solution that aligns with existing needs. Most companies aren’t going to be able to do that. So it comes back to perhaps what you’ve just proposed — explaining it to people, the pros and cons, the risks, and so forth.

    Neville Hobson: But I think you need more than that, too. Otherwise, you’re going to have significant numbers of people who will ignore it and just go ahead anyway with what they’ve been doing. So maybe elements of what BBVA did — for instance, the network of internal champions and expert wizards to spread knowledge, rather than the formal top-down communication you might expect. You’d have people within the organization who are knowledgeable, who have a history of responsible use themselves, who can help explain to others and help them replicate that. You end up, I think, with steps toward broad compliance that everyone can buy into. That would be helpful, because I can see that the idea of anyone in an organization of whatever size just doing their own thing with whatever tool they like is not a good idea at all.

    Neville Hobson: And that isn't unique to this. We've had that kind of conversation in decades past about software. I remember when Hotmail first came out, and when Microsoft Network first came out, the argument in organizations, including the one I worked for at the time, was, "You're not allowed to use this on your company laptop, so use it on your own," stuff like that. That's definitely not a good thing. So you need to act to address issues like that, so that people trust you and respect you and are willing to follow a restriction, or a behavior change if you like, that would help. It's interesting, the learning you can take from BBVA's example, even if you're not an organization that size with a budget to match. It's a lot about education. It's about trusting employees, absolutely, as you pointed out, Shel. But I think that's a two-way street. There needs to be a quid pro quo: if you have these freedoms to use whatever you want, you need to do it responsibly. Share your learnings with others in the organization. Things like that. To me, that seems like a really good place for communicators to start.

    Shel Holtz: Yeah, there’s communication happening at BBVA. They have 11,000 active users and 4,800 custom tools being used by those folks. That didn’t happen because the communications department posted an article about them. This was peers talking to their peers about what was working. It validates something you and I have been talking about for years, which is that authentic, lateral, employee-to-employee storytelling beats top-down cascades every time.

    Neville Hobson: Precisely.

    Shel Holtz: But it is communication. And why wouldn't that be something the internal communications department jumped on and helped to facilitate, providing the channels for that rather than the sneakernet that's probably happening now? And also, because they're engaged and trying to keep this from happening below the surface, they're in a position to identify the use cases worth taking to leadership. The BlackFog survey you referenced found that almost 70% of C-suite executives believe speed is more important than privacy or security. So if people are getting things done faster, if you can demonstrate that there actually is productivity improvement happening and that it's because of the unapproved tools employees are using, I think that's motivation for leadership to look at either approving those tools or finding ways to allow people to use their own accounts while protecting the integrity of their data.

    Neville Hobson: Yeah. The results the Harvard Business Review reports from BBVA are worth noting, even though the scale isn’t what many companies would experience. They talk about 80% of usage of the system they set up coming through direct chat prompting, and the remaining 20% through employee-created GPTs. Now, this is not shadow AI — it was part of the rollout of what they did. But these numbers are quite impressive. Over 83% of employees now use the system every week, averaging 50 prompts per week. That’s above comparable enterprise deployments, says the review, quoting OpenAI. Users report average time savings of two to five hours per week — a number worth noting. More than 4,800 custom GPTs have been created internally, and they’re used three times more frequently than the enterprise average. So they’re ahead of the game in that regard. The article goes into more detail about which departments are more active than others, and so forth.

    Neville Hobson: It also prompted a thought in my mind about the other surveys I've seen and other reporting on the resistance from leadership in organizations. That resistance isn't minor. It's not a little thing. It happens, unfortunately, too frequently. I'm thinking of keystroke logging of employee usage, covertly auditing computers without telling anyone, watching which apps people have installed, and, probably more common, your company laptop refusing to install things that aren't on an approved list, or reporting to IT that you tried to install something. This is a dreadful situation in organizations. It's common, and we're going to see more of it, I think, because that seems to be the way of the world these days; this is a diminished-trust environment we're talking about. So in all of that, where do we sit in terms of enabling stuff like this? We can see the advantages of allowing employees to use tools like this. I think the better way is to try to do something within the framework of the organization, not "Oh sure, go ahead and use ChatGPT whenever you want on any device, no big deal." I wouldn't be keen on that. I wouldn't stop it, but I would look at ways of weaning people off that approach. We have to help them and encourage them to do this. And that, I suspect, is a hard task for communicators: persuading leadership to do it if the climate in the organization is resistant anyway.

    Shel Holtz: Well, I think it is a hard sell to leadership, but we have data. We're supposed to be engaging in two-way communication and facilitating two-way communication. One of the roles of internal comms is listening. And it doesn't have to be through direct information you get from people through focus groups or surveys; it could be this BlackFog survey. When 49% of employees are using unsanctioned tools, and 63% think that's fine as long as there's no approved option for what they want to do, you may look at that as rogue behavior, but you can also look at it as market research. And communicators are the people in the best position to translate that data into something actionable for leadership. You take that to leadership and say, "Look, this is what's happening. We're the ones who can interpret what the behavior means and pass it along."

    Shel Holtz: I think part of our role is that listening through the data that’s already out there — and maybe what we can determine is going on in our own organizations — and taking that to leadership and saying, “Look, this isn’t going to go away if you crack down on it. It’s not going to go away if you block installation on company laptops. People have their own phones. People have their own laptops and tablets. This is going to continue.” And this isn’t new. I mean, this goes back to the earliest days of computers. I think I’ve mentioned this once or twice on the show, but I needed to produce charts and graphs in the mid-‘80s, and I wanted to use Harvard Graphics because somebody had shown it to me and it was what worked, and the company had a different program that was terrible. So I just used Harvard Graphics. I bought my own copy and installed it. There were no blocks back then — you put the floppy disks in the drives and it installed. People are going to do what they need to do to get the job done. Maybe some will pay attention to what the official rules are, but I think the governance needs to be flexible enough to adapt to this. I applaud BBVA for what they did. Again, I don’t think every organization is in a position to replicate it, but I think you can take lessons from what they did.

    Neville Hobson: You can. Not everyone can roll out what they rolled out, enterprise licenses and so forth, but you can definitely adopt some of the things they did and how they went about them. One thing the review article points out quite strongly, toward the end of its conclusions, and it's a very good point, is that whatever you do, there must be a hard human-in-the-loop rule. Human employees should always own the work. There should be no direct writes to core systems. Internal GPTs need quality scores and guardrails: they should specify scope and context, include samples, and so on. This is simple, scalable, and non-bureaucratic.

    Neville Hobson: So that ties back into this emerging phrase, if it's even emerging, of human-centered AI. Let's look carefully at this. It's about people first, technology second, and the human needs to be in the loop. The hard human-in-the-loop rule, as the review calls it, I interpret as meaning someone who's actually cognizant, aware, and able to act upon things that matter, to keep humans in the loop, to own the work, not the technology. You've got to think about things like that. And I think for communicators, that's an important aspect of what they do: keeping in mind that element that puts the people first. So when you're trying to persuade leaders to take a course of action you're recommending, this needs to be in your mind too: the humans need to be in control.

    Neville Hobson: I have to say, this is great. I love stories and examples like this. I love them more than the ones that talk about disasters, although those are useful to know about as well. Yet I feel, as communicators, we have a constant, constant task on our hands to explain this to people in organizations, to help others understand. I think this is a good example — the shadow AI element. For me, if I were actively involved in an organization as the communications person, I’d be looking at: how do I persuade people not to do that? How do I persuade people to use the approved stuff? But at the same time, how do I persuade the leaders to make sure they offer employees stuff that actually works, that’s in line with their expectations, all that kind of stuff? There’s a bit of a job on their hands. And if budgets get in the way, then you’ve got an even harder job. But hey, that’s what we’re here for. That’s part of what we have to do.

    Neville Hobson: These are good examples you can learn from. There are elements you could start on. And I think, like most things, Shel, you need to say, “OK, fine — this idea has a dozen constituent elements, and let’s just start with two.” So you don’t try to think, “Oh my god, this is a massive project. How on earth can we do this?” You look at just a couple of things. I like another point the Harvard Business Review makes: ensure that managers know what they’re doing. You can’t expect managers to be persuasive in encouraging others to use AI if they’re not good at it themselves. So there’s another element — you need to train them well, says the Harvard Review. At a minimum, they should learn how to write staffing notes, sensitive communications, and KPI reviews with AI help. So there are some things you could do straight away as a communicator in an organization. I’d say: good luck and godspeed, and it’ll all work out in the end.

    Shel Holtz: Yeah, a manager's role in all of this is probably an episode in its own right. I would just reiterate the point you made about the human in the loop. This is a governance element that should be overarching, applying not just to shadow AI but to all use of AI in the organization. Not turning things over to AI should be a primary consideration in governance. Otherwise, you end up with fake citations going out to clients that paid a million dollars for your work; another little slap on Deloitte's wrist. And that will be a 30 for this episode of For Immediate Release.

    21 April 2026, 12:54 am
  • 20 minutes 49 seconds
    FIR #509: Does Corporate Content Need Copyright Protection?

    When bad actors use AI tools to clone a musician's voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist's content, and win, at least initially. That's because the systems platforms use to validate copyright claims are automated and configured to treat whoever files first as the rightful holder. The result: musicians like Murphy Campbell, a folk artist from North Carolina, lose both revenue and control of their own creative identity.

    The same mechanism works just as well against any organization that publishes audio or video content online. In this midweek episode, Shel Holtz and Neville Hobson break down how the scam works, why it matters to communicators, and what you should be doing right now — before an incident forces your hand.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Neville Hobson: Hi everyone, and welcome to For Immediate Release. This is episode 509. I'm Neville Hobson.

    Shel Holtz: And I’m Shel Holtz. And today we’re going to talk about something else that communicators need to worry about. I think we need to develop a worry list for communicators. This one starts with a tale about a folk singer from the mountains of Western North Carolina. She’s named Murphy Campbell. She plays banjo and dulcimer and records old Appalachian ballads, some of them written by her own distant relatives. And she posts videos of herself performing in the woods. She has about 7,800 monthly listeners on Spotify. And she is, as Shelly Palmer put it in a recent column, exactly the kind of artist the copyright system was designed to protect.

    In January, some of her fans started messaging her about songs on her Spotify profile that she had never uploaded. Someone had taken her YouTube performances, run them through AI voice-cloning tools, and posted synthetic versions of her songs under her name on streaming platforms. These fake tracks, not to put too fine a point on it, were really bad. Her dulcimer sounded like, in her words, a warbled metallic mess. Her voice had been deepened and auto-tuned into what she called a bro-country singer. But here's where it gets interesting for those of us in communications, because that's not the end of the story. It didn't stop at impersonation.

    Whoever uploaded the fakes through a legitimate music distributor called Vydia (V-Y-D-I-A) then filed copyright claims against Campbell’s original YouTube videos — the very videos the AI had been trained on. Because YouTube doesn’t use humans to review initial copyright claims, Campbell stopped earning revenue on her own content. That revenue started going to the person who had filed the copyright claims.

    She described herself as being in a weird limbo where "I'm telling robots to take down music that robots made." Shelly Palmer called this a reverse copyright scam, and after speaking with other content creators off the record, he confirmed that it's more common than he would have believed.

    Now, I know what you’re thinking — music streaming platforms, artists, what does this have to do with me? And the answer is everything. Because the mechanism that elbowed Murphy Campbell out of earning royalties for her own music will work just as well against any organization that publishes content on platforms with automated enforcement systems. That is virtually every organization that has a YouTube channel, a podcast feed, or any kind of public video or audio presence.

    So here’s the structural problem as Palmer frames it. The copyright system we have was built on a foundational assumption that the first entity to register a claim is the rightful owner. That assumption held when human creativity was the bottleneck. It breaks completely when AI can generate a synthetic version of any content in seconds using any voice. Think about what your organization puts out there publicly — executive speeches, earnings calls, thought leadership videos, branded audio, training content, podcasts, content marketing pieces. Every one of these is a potential training data set for someone who wants to clone your voice, your leaders’ voices, and then upload a synthetic version through a low-cost distributor. We’re talking about something that costs $25 to $90 a year. Then they file a claim against your legitimate content before a human ever reviews it.

    Shel Holtz: That means the system is going to see them as the first one to file that claim and assume they are the legitimate copyright holder. Now, Rolling Stone confirmed that this isn’t an isolated case. Paul Bender, Veronica Swift, Grace Mitchell — these are just a few of the artists who have faced the same attack. One musician even ran an experiment he called Operation Clown Dump, uploading fake content under his colleagues’ names across platforms. His success rate was 100%.

    So what do communicators need to do? First, audit your public content footprint. Do it now, before an incident forces you to. Know what you’ve published, where it lives, and what revenue or visibility is attached to it. Second — and here’s something that’s new for a lot of communicators — register your copyrights. Formal registration is the prerequisite for meaningful legal recourse in the United States. Third, build a rapid response protocol for platform disputes. The organizations that survived these attacks quickest were the ones who knew who to call and knew what to say. And fourth, have this conversation with your legal team today, not after something goes wrong.

    Murphy Campbell eventually got Vydia to withdraw its claims, but only after her story went viral. Most organizations won’t have that option. Your story won’t go viral. The bad actor doesn’t need to win permanently — they just need the automated system to act before you do. And that is the lesson, and it’s one we’d better learn from musicians before we have to learn it the hard way.

    Neville Hobson: Extraordinary, isn't it, Shel? I guess you could call it a new phenomenon, only in the sense of the speed with which this can now be done. I must admit, I'm astonished that the system is such that the first person to file the copyright claim is assigned ownership. Maybe it's similar here in the UK; every jurisdiction is different, of course, but it's rather unsettling. It obviously goes back to a time when people weren't exploiting the system the way they are now. There are similar examples here in the UK of this kind of activity, where people unwittingly find their content being misused and misrepresented. And although I'm not aware of major artists being hit this way (I may be wrong about that), I did see an article noting that YouTube allows some users to clone the voices of stars like Charli XCX and Sia, with their permission. But unauthorized AI covers of artists like Harry Styles, running to hundreds of thousands of copies, are a widespread phenomenon, and one that barely registers in mainstream news.

    A number of artists have faced something like your Murphy Campbell example. One I've heard about is Greg Rutkowski, a Polish-born artist known for his work on Dungeons & Dragons, who found his style being used in over 400,000 AI prompts, raising serious concerns about the obsolescence of human artists. And to your point about what communicators should watch out for: your corporate communication messaging in audio, your CEO on an earnings call that's been recorded and distributed. Never mind video; audio alone, at a scale like those 400,000 AI prompts, is not a good situation. If you project the thinking out, this is utterly relevant to anyone publishing audio or audiovisual content online.

    I find it astonishing that some platforms, notably Spotify — which features prominently in a lot of reporting on this — are being used to literally steal someone’s intellectual property by replicating it. And I think it reinforces the point that registering copyright isn’t an idle exercise. It’s something that should be front of mind, and it does other things for you as well as the owner of the property.

    Something as simple as displaying a current copyright notice on your website — it’s remarkable how many sites I come across that still show “Copyright 2016,” never updated. Displaying a current notice signals that the business is active and its information is up to date. There are also tools to protect against AI scraping, though how effective they are is still unclear. Creative Commons licensing is another option, setting out the terms under which people can use your content — though that requires everyone to play by the rules, which frankly isn’t always the case these days.

    Nevertheless, you’ve got some protection — or at least the peace of mind that you’ve taken steps. But it really is quite extraordinary, isn’t it, Shel? When I looked into what’s happening in the UK, I came across a recent movement — over a thousand UK musicians, including Paul McCartney, Annie Lennox, and Damon Albarn — who released a silent album to protest proposed legislation that would allow AI companies to train on copyrighted material without consent. It struck me as a real head-scratcher: why would a government enable that to happen?

    Shel Holtz: Probably very effective lobbying from the AI companies, I’m sure, is behind that.

    Neville Hobson: No doubt, no doubt. But there are other things going on — organizations like the Musicians’ Union and Equity campaigning for better copyright protection, consent, and fair compensation for creators. It’s not getting much mainstream coverage, but activity is happening behind the scenes. Nevertheless, the example of Murphy Campbell and others represents a genuine threat that you need to be aware of if you’ve got content online that matters to you. Never mind the “they shouldn’t be doing this” argument — the point is, if it’s important to you, have you thought about this?

    Shel Holtz: If you think about the days before the web, copyright wasn’t something most people had to worry about that much. Professional artists with record deals had people to handle it. Same with authors — someone like Stephen King never had to worry that somebody would be the first to file a copyright claim under his name and siphon off his revenue. But now you have artists who don’t get record deals — like Murphy Campbell — publishing on YouTube and Spotify, building small followings, and making a reasonable living. This is the working class musician concept we talked about, oh, it’s got to be 15 years ago now.

    The fact is, you can use Spotify and YouTube to build a following, play some small clubs a few times a year, and make enough to pay the mortgage and put your kids through school. You’re not going to get the penthouse suite from playing to 100,000 people, but you can make a living. But this has also opened up the ability for bad actors to take advantage of that. And now with AI able to reproduce your voice and create new music at scale, all the pieces are in place for this kind of theft. Unless you’re able to get your story to go viral — as Murphy Campbell did — it’s not clear what you can do, because YouTube and Spotify have set up systems that automate this process with no human review. When you used to register with the copyright office yourself, a human was checking. So it’s not likely most organizations have revenue-generating content online — though I’m sure some do, and I’ve actually argued there are ways to use content to generate revenue.

    For example, I’ve always loved the idea of a Webcor YouTube video series called “Building for Girls,” where our employee resource group, Women of Webcor, does a five-minute lesson every two weeks on construction to get young girls interested in STEM and engineering careers. Get enough views and YouTube starts paying you. If you don’t copyright-protect that content, someone can come along, produce similar videos, claim the rights, and suddenly your revenue is going to someone else. But even if you’re not producing revenue-generating content, there are other reasons to ensure nobody else can claim ownership of what you create — especially as content marketing demands more and more output. So yes, register that copyright.

    Neville Hobson: Yeah, it made me think about watermarking for written content — though I’m not sure there’s something truly effective offering the same protection for audio and video yet. And even if there were, you’ve got situations like Murphy Campbell’s, where it’s her style and tone — the whole persona that defines her music — that’s being copied. And you don’t know about it until strange things start happening: your revenue drops, someone says “I love that new song you just published,” and you discover it wasn’t you. Or you read a review and think, wait — I didn’t write that.

    Shel Holtz: Or “I hate that new song you published” — in Murphy Campbell’s case.

    Neville Hobson: Exactly. I’m sure people are working on the technology. You’ve got digital rights management, which isn’t new, but I’m not sure it helps here because the issue isn’t copying your content outright — it’s imitating or repurposing it at scale. Hundreds of thousands, or millions of instances. I think the platforms need to do far more than they currently are. It’s a similar argument to what we’re hearing here in the UK about Meta and X doing nothing effective to protect children. This is in the same territory, and it needs a lot more from those platforms — who are making serious money throughout all of this. As to what exactly “more” looks like, I’m not entirely sure, but they need to do more.

    Shel Holtz: Yeah, and they probably won’t until there are some high-profile, visible court cases that create real reputation issues for them — then they’ll take action. The easy thing to do right now is simply register the copyright. That’s your protection. When someone imitates you, or claims the content you produced is theirs, you have legal standing to act. That’s why you need to have this conversation with your legal team.

    But I wouldn't wait for either the platforms or the government to do anything. They're both reluctant to act. You have the ability to do something about this right now, and it's just a matter of working with your legal team and registering those copyrights.

    Neville Hobson: Yeah, exactly. And even using Creative Commons licensing — if you’re an individual without all the formal resources, but you have a niche following, even that’s a start. Keep a record of every iteration of everything you’ve created — “I did this in 2017, here’s proof, backed up here.” That gives you something to stand on, a way to demonstrate that you can act if someone uses your content. And if you don’t do this, there’s another consequence worth considering: your original content gets buried in search results because the AI-generated imitations have somehow accrued better signals to rank higher. That kind of pollution from AI slop is its own problem.

    Shel Holtz: Yeah — and then people stop paying attention to your content altogether because they’re so fatigued by the AI slop that they tune everything out. But at least this one has a solution communicators can follow: something new to add to the copyright to-do list. And that will be a 30 for this episode of For Immediate Release.

    14 April 2026, 7:26 pm
  • Get Oriented with The New Communication Compass (Part 1)

    The next two "Circle of Fellows" episodes will offer something different from our panels of the last several years. We welcome Dianne Chase, a veteran communicator and former IABC chair, to the discussion. While Dianne is not a Fellow, she did recruit six Fellows to write all but one of the chapters for her new book, The 7 Cs of the New Communication Compass. (Dianne wrote the seventh chapter.)

    The book, which has five stars on Amazon, “offers both a guiding framework and a practical roadmap for mastering strategic communication in complex environments,” according to its description. “If you are a leader, manager, educator, public official, influencer, or anyone striving to make an impact, this book is an essential and thought-provoking read. It distills communication excellence to foster collaborative results and organizational effectiveness.”

    The book's seven Cs are Collaboration, Connection, Compassion, Cohesion, Community, Congruency, and Calibration.

    For the first of these conversations, Dianne will join Shel Holtz, Ginger Homan, Jane Mitchell, and Brad Whitworth to discuss Connection (Brad’s chapter), Compassion (Dianne’s chapter), Congruency (Jane’s chapter), and Calibration (Ginger’s chapter).

    Join us for this very different “Circle” at noon EDT on Thursday, April 23. Participants in the live stream can ask questions and share comments, observations, and experiences, and become part of the discussion. If you’re not able to join us, you can listen to the audio podcast later or watch the YouTube replay.

    About the panel:

    Dianne Chase helps organizations and leaders harness the power of strategic communication to navigate crises, build trust, and drive positive change. With over two decades of experience in journalism and corporate communications, Dianne has developed a unique approach for training and consulting clients that combines crisis management expertise with the art and science of business storytelling. Dianne is an award-winning media, journalism, and strategic communication professional with deep expertise in communication disciplines, most notably crisis communication, issues and reputation management, media training, and executive communication. She is one of two people in the world accredited in the GENIUS Business Storytelling methodology created by international communications thought leader Gabrielle Dolan. She is a former chair of the International Association of Business Communicators and the author/editor of The 7 Cs of the New Communication Compass.

    Ginger Homan, ABC, SCMP, IABC Fellow, counsels senior leaders seeking to bring out the best in their people and brands. Her award-winning communication model for driving transformation has been used to change behaviors, align cultures, and build thriving communities worldwide. Her work with senior communication professionals has enabled them to align their department goals with business goals, achieve measurable results, and expand their influence. Founder of Zia Communication, she is a seasoned speaker, coach, and workshop facilitator. Her clients include Walmart, the Walmart Foundation, the Walton Family Foundation, T.D. Williamson, CITGO Petroleum, Phillips Seminary, and MOSAIC. IABC, PRSA, and SMPS have honored Ginger's work at the local, regional, and international levels. A past chair of IABC, she has received three IABC International Chair's Awards for her volunteer leadership, and she is a recipient of the Leadership Tulsa Paragon Award for work in her local community.

    Jane Mitchell's career began at the BBC in London on live TV programs. She moved on to producing award-winning films and videos for public- and private-sector organizations and to developing groundbreaking employee engagement programs. Since 2006, when she formed her own consultancy, she has guided organizations (some of which have experienced cultural trauma) in embedding values and ethics by understanding culture and leadership, and their link to high-performing, sustainable organizations. She has worked with Top 100 companies worldwide and is a regular conference speaker. Jane has been a member of IABC since 2008 and has served on local, regional, and international IABC boards. In 2021, she was chair of the (virtual) World Conference, and she became an IABC Fellow in 2022. She is based in the UK and now spends the majority of her professional time as a non-executive director on company boards and employee-ownership trusts.

    Brad Whitworth, ABC, SCMP, IABC Fellow, is a pre-eminent thought leader, lecturer, and author in organizational communication. He has led global internal and executive communication programs at HP, Cisco, Hitachi, PeopleSoft, AAA, and Micro Focus. He holds an MBA from Santa Clara University and undergraduate degrees in journalism and speech from the University of Missouri. Brad lives in California wine country, where he grows Pinot Noir on his property. A former broadcaster, Brad has made more than 300 presentations to executives, communicators, and university classes worldwide. Brad is a past board chairman of the International Association of Business Communicators and a Fellow of the association. He is one of the authors of The IABC Handbook of Organizational Communication and the new IABC Guide for Practical Business Communication: A Global Standard Primer. He chaired the Global Communication Certification Council in 2021.

    13 April 2026, 5:53 pm
  • 21 minutes 4 seconds
    ALP 301: Five words every agency owner needs to understand

    Most agency owners spend a lot of time thinking about growth, clients, and revenue. Far fewer think carefully about the words that define how they actually operate their businesses. In this episode, Chip and Gini dig into five of those words: leadership, management, accountability, responsibility, and authority.

    Leadership and management aren’t the same thing. Leadership is about vision and getting people to follow you. Management is about making the work happen. Knowing which one you’re stronger at is the first step toward building a team that covers your gaps.

    Accountability is the wrong place to start when a team member isn’t delivering. You can’t hold someone accountable for something you never clearly assigned, and you can’t hold them accountable if you didn’t give them the authority to get it done.

    Gini offers a useful comparison: when a client hires you for your expertise and then second-guesses every decision, it’s demoralizing. That’s exactly how your team feels when you delegate the work but not the authority to do it.

    The episode closes with a simple reminder. If you want more freedom as an owner, you have to be willing to actually let go. And if your team isn’t capable of handling more responsibility, you should be asking yourself why you hired them. [read the transcript]

    13 April 2026, 1:00 pm
  • 20 minutes 39 seconds
    FIR #508: Inside AI’s Human Raw Material Supply Chain

    When workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, the companies that hire gig workers are shifting them from delivering food or sharing rides to creating content that trains AI systems. This raises a variety of communication and ethical issues. Neville and Shel explain what's happening and discuss the implications in this short midweek episode.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Shel Holtz
    Hi everybody and welcome to episode number 508 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson
    And I’m Neville Hobson. Over the past few weeks, I’ve come across a set of stories that all point to something quite striking — not just how AI is evolving, but how it’s being built. Increasingly, the raw material behind AI isn’t just data scraped from the web. It’s us: our voices, our movements, our everyday lives, and increasingly, our identities. There’s a new layer of the gig economy emerging. We’ll explore this in just a minute.

    People are being paid, typically in small amounts, to record themselves walking down the street, having conversations, folding laundry, even just going about their day. That data is then used to train AI systems because those systems need examples of how people actually speak, move, and interact in the real world. In one case, delivery drivers in the US are being redirected to film tasks for robotics training. Platforms are turning existing gig workers like delivery drivers into distributed data collectors for AI. In another example, people are selling access to their phone conversations through apps that pay contributors to upload voice and text data. And in yet another, workers are strapping phones to their heads to record household chores so humanoid robots can learn how to move. The work is global, fragmented, and often invisible, with workers spanning Nigeria, India, South Africa, the US, and far beyond. Humans are no longer just users of AI — they are raw material suppliers. In China, there are even state-run centers where workers wear virtual reality headsets and exoskeletons to teach robots how to carry out everyday physical tasks. What we’re seeing is the rise of what you might call data labor, where identity itself becomes part of the work.

    There’s a clear driver behind it. AI companies are running out of high-quality training data. The open web isn’t enough anymore, and synthetic data has its limits. So the industry is turning to something else: real human lived experience. Because if you want a robot to understand how to load a dishwasher, navigate a room, or interact with objects, you need to see humans doing it at scale.

    But there’s an interesting contrast here. One of the stories highlights a 23-year-old in the US, a guy called Cale Mouser, who earns well into six figures repairing diesel engines. It’s something he’s developed great skill in doing. His work depends on judgment, experience, and problem solving in the real world — things that don’t easily translate into data. So while some people are being paid small amounts to generate data for AI systems, others like Cale Mouser are building highly valuable careers precisely because their skills can’t be reduced to it. And that contrast feels important.

    Because on one level, this new kind of work does create opportunity. For some people, especially in lower-income regions in the Global South, this is real income — paid in dollars, flexible and accessible. But there’s another side to it. Because what people are actually selling isn’t just time, it’s identity: their voice, their behavior, their presence in the world. And often once that data is handed over, it’s gone — permanently licensed, reused, repurposed, potentially in ways the individual never sees or understands.

    So you have this asymmetry: individuals earning small immediate payments while companies build long-term, highly valuable AI systems. Perhaps it’s a new version of the Mechanical Turk for the AI era. And that raises a deeper question. What does it mean when the inputs to AI are no longer abstract data, but pieces of human identity? When the training set is not just content, but behavior, voice, and presence? And when those pieces can be reused, replicated, and scaled, often without the individual’s ongoing knowledge or control? Many platforms grant royalty-free perpetual licenses, where workers get paid once and lose control forever. There’s potential for deepfakes, identity theft, and misuse without consent. And perhaps more uncomfortably, what does it mean when people are contributing to systems that could automate their future jobs?

    For communicators, this feels important because this isn’t just a technology story. It’s a story about trust, consent, transparency, and how organizations explain what they’re doing with AI. If AI ethics lives anywhere, it’s here — in how these systems are built and how that’s communicated. So the question to explore — one of the questions to explore, perhaps — is this one: Are we comfortable with an economy where identity itself is becoming labor? And if not, what responsibility do organizations and communicators have in shaping it?

    Shel Holtz
    It’s a big story with a lot to consider. On one level, it seems like the high-tech version of the sweatshops where high-end fashions were made — Nike shoes, for example — with people paying premium prices to get those products while the people making them are earning a pittance in factories with long hours and terrible working conditions. And then you add onto it the identity issue. So it’s something that I think — something at least I hope — we’re going to be talking about for a while.

    In terms of the AI element, what this suggests is that the gig economy didn’t go anywhere when AI came along; it just became the training ground for AI. And it’s interesting that the workers who are being squeezed out of knowledge jobs are selling their voices and their movements to build the systems that squeezed them out. Because where do a lot of these people who are being laid off because of AI go? Well, they go drive for Uber, they go drive for DoorDash. And you do that long enough and you get really accustomed to the idea that they send you a task, you go do that task, and you get paid for it. So if that task shifts from picking up a meal at a restaurant and delivering it to somebody’s house to going to your own house and washing your dishes because that’s what they want to capture on video — it’s the same thing. You’re getting a task on the app. You’re doing the task and you’re getting paid for it. So I think for a lot of people, this is going to be a fairly easy shift, and they’re not going to think a lot about what’s happening to the information and the content that’s being created with their movements and their voices, which is now being shared and used to make a lot of money for the people who are paying a pittance to these folks.

    So I see three issues here that connect directly to organizational communication. The first is consent and transparency — and I’m talking about inside organizations — because companies are already deploying AI tools trained on data that their own workers have supplied, and sometimes they’ve supplied this data unknowingly. The ethical and reputational questions that employees are going to ask are questions like: Was my voice used to train a bot that you activated in order to replace my friend who sat next to me and I had lunch with? And regulators are going to end up asking these questions too. So communicators really need to be out front with clear internal messaging about what data employees generate and how the company is using it. Let’s talk about that before I hit the other things that popped into my mind.

    Neville Hobson
    Yeah. I mean, the transparency element is key. That’s not new — that’s always been the case. But how organizations should communicate this may not be as simple as it might seem. I mean, the example you mentioned is an interesting one: a company uses data from its employees without them knowing. Well, let’s say — don’t do it like that. Don’t do that. You need to disclose if you’re doing this. Surely that is an ethical issue: if you don’t tell them and you go ahead and do that, that’s not what you should be doing. So there’s an easy one to address.

    The other element, which is also ethics-related, is: is this whole thing ethical if participation is driven by economic necessity? Whatever reason you might give — we need to get an edge on the competition, whatever — you’re still up against that element.

    That’s the big-picture ethics question. But common sense tells you how you should do this. Should individuals be compensated long-term for use of their data? On the one hand, you might say, fine, let’s tell everyone: your data may be used — your day-to-day interactions with colleagues, the recordings of your conversations on our internal Teams tool — that’s kept. So the employee might say, I’m okay with that, but I want to be compensated for it. And now there’s an interesting position.

    Shel Holtz
    You mean like as if they’re licensing it?

    Neville Hobson
    Exactly. And the organization might retort —

    Shel Holtz
    Well, the organization might retort: you are being paid for it. You’re being paid a salary. You come in here every day, you do your work. Read your employment agreement. I mean, this is kind of like — what was it? Velcro or Post-it notes? Maybe both — where the person who invented it never made a penny off the royalties because they were an employee of the company and the work product belonged to the company. I think organizations might be able to make the same argument here.

    Neville Hobson
    They could. But they’re not sure whether they should, just because they could — because the climate is very different today from those examples back in the 1960s. So you’ve got to think about things like: if we don’t do this right, are we going to get an exodus of employees who are going to go work for a company that treats them better in this same context?

    Shel Holtz
    Well, now you have the economic environment and the hiring situation where a lot of companies are trying to avoid hiring. They’re also trying to avoid layoffs, but they’re trying to avoid hiring. It’s pretty flat out there right now — it’s definitely a buyer’s market. So I don’t know that I would leave an organization because they’re using my data unless I already had another job lined up, because they’re hard to find right now.

    Neville Hobson
I agree. It’s a slightly hypothetical scenario, but I think it is worth recognizing that it could well come to that. From the research in those articles, and some other things I saw, there’s already a strong imbalance of value and control between the individuals who provide the data and the companies who take that data and make economic use of it. AI companies rely on real-world human data because of data scarcity. So there’s a challenge on both sides of the argument: they need the data, but there’s probably a finite amount that employees can provide, so they have to look elsewhere too.

And the thing is, a new economy is emerging where people monetize their identity and behavior voluntarily. Take the examples we heard about: the guy in Uganda filming himself walking down the street, and then the flip side of that, as I mentioned, the young people in America (the Guardian has a really good analysis of this) who have skills that cannot easily be translated into something AI can do. The key element in that part of the discussion was the skill this young guy has at 23 years old. It’s not unique, but it’s a skill that isn’t just “I know how to repair a diesel engine.” It’s that he can, at a glance, literally see what’s wrong and already formulate the six things he needs to do to fix it. And that is valuable. He’s already earning $150,000 a year in salary doing this, and he’s 23.

    So there are other examples mentioned in that Guardian piece too that are interesting. On the one hand, you’ve got gig economy workers like DoorDash drivers doing what they’re doing. On the other hand, you’ve got people like this guy developing a career not related to AI at all — a skill that cannot easily be replicated by AI. So that’s part of the landscape. I’m not sure where all of that fits within this, Shel, to be honest, but it’s part of the picture.

    Shel Holtz
    Yeah. I think it was MIT that came out with a report not too long ago saying something like 93% of jobs are AI-safe — and there were a lot of people saying this really paints a different picture from what we’ve been anticipating. I don’t know how accurate it is. But in the meantime, there are AI companies working very hard to elevate these systems to the point where they can do some of the work that currently might be considered AI-safe. I think for many jobs, it’s probably just a temporary designation.

I raised the issue of employees inside the organization. Gig workers are another issue for organizational communicators, because these workers, the ones very accustomed to having the app tell them to do a task, doing the task, and getting paid for it, aren’t covered by traditional internal communications. Organizations relying on gig workers and contracted labor, and increasingly on AI tools trained by them, have a stakeholder relationship they may not have a communication strategy for. I’d argue they don’t have one.

    I’ve often made the distinction between internal communications and employee communications. Employees are the people who come in and get paid by you directly, whether salaried or hourly. But you have other internal stakeholders, and we develop strategies for them — the contractors embedded in our organization. I work in construction; we have subcontractors; there are ways the organization communicates with them. There are all kinds of internal stakeholders, and these gig and contract workers are now among them. We should figure out a way to communicate with them, talk about our ethical use of their data, and engage with them in ways that are meaningful, useful, and produce positive results.

    Neville Hobson
    Yeah, makes sense. You had a couple of other points you were going to mention. What’s the next one?

    Shel Holtz
    Just one other, actually, and that’s about keeping the human in the loop. A lot of companies, in order to feel good and look good as they move into the AI world, are positioning human oversight as really important. But what the stories we’ve been talking about reveal is that humans are raw material — physically, biometrically, behaviorally. Workers aged 22 to 25 in the most AI-exposed occupations — things like paralegal work, for example — have experienced a 13% decline in employment since 2022, which is the year OpenAI released ChatGPT to the public. On the other hand, employment for less exposed or more experienced workers — think about your 23-year-old diesel mechanic — has been steady or in some cases even increasing.

    So organizational communicators talking about AI as just augmenting human workers need to be careful, because I think increasingly we’re going to hear stories about how that isn’t actually true, particularly for this younger demographic. We have to be honest about that asymmetry. I mean, whose labor is augmenting whom?

    Neville Hobson
    Yeah, I get that. It does make sense. It’s an issue that embraces communications, ethics, and trust more than anything. But at the heart of it, there is the technology aspect. I’m thinking about other things that you and I have discussed in previous episodes that are kind of adjacent to this issue — where if you analyze what the real issues are, they tend to be a mixture of communication, ethics, and trust. So that’s a good starting point for communicators who might be wondering how the hell to address this: communication, ethics, and trust. Work out how you can develop the procedures that embrace and recognize the importance of those things and execute them inside the organization.

    I agree with the premise in all the articles we’ve linked in the show notes that a new data labor economy is emerging where people monetize their identity and behavior and, in the case of the Global South in particular, don’t think twice about it. Employers have a duty of care to recognize what they need to do to bring that group into their structure — one where communication, ethics, and trust play the bigger role.

    Shel Holtz
    Yeah, absolutely. And I think there are a number of places to look. You don’t want to be the next organization to have it disclosed that you have exploited labor producing the data that you need, because those scandals were pretty difficult for the fashion companies that went through them. Also, one of the things that generative AI models are really good for is scenario planning.

    Neville Hobson
    Ha!

    Shel Holtz
    And for your organization, in your industry, with your markets, it wouldn’t hurt to do some scenario planning about who the stakeholders are that you should be communicating with, and what the challenges are going to be both internally and externally, and start developing some communication strategies. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #508: Inside AI’s Human Raw Material Supply Chain appeared first on FIR Podcast Network.

    8 April 2026, 8:16 pm
  • 24 minutes 14 seconds
    ALP 300: 300 episodes in: what’s changed, what hasn’t, and what we got wrong

    Eight years and 300 episodes later, Chip and Gini take stock of what the Agency Leadership Podcast has actually been about and where their thinking has shifted since they sat down for lunch outside Wrigley Field and decided to start a show.

    Chip shares an AI-generated analysis of the 10 most common themes across 300 episodes. Gini distills them into four she considers non-negotiable: communication fixes most problems, know your numbers, focus on particular wins, and the owner sets the temperature. Chip adds that communication doesn’t just solve problems, it prevents them. Ironic, given that probably everyone listening is in the communications business.

    On what’s changed, Gini has moved from annual retainer-focused planning to quarterly reviews that constantly show results and surface what’s working. She also notes that her advice for navigating a tough business environment now mirrors what worked during the pandemic: find the project work, start with an assessment, and build trust before building a retainer.

    The biggest evolution for Chip is his position on AI. While he was skeptical a few years ago about the timeline, now he thinks agencies are under-emphasizing it. He and Gini disagree on AI’s limits. Gini believes critical thinking, emotional intelligence, and crisis work still require human judgment. Chip is less certain those guardrails will hold. What they do agree on: AI is turning everyone into a manager, and that puts a premium on skills that were already in short supply.

    The episode closes with a lightning round covering worst advice agencies still believe, best scary decisions, and prospect red flags including unreasonable expectations and unwillingness to discuss budget. [read the transcript]

    The post ALP 300: 300 episodes in: what’s changed, what hasn’t, and what we got wrong appeared first on FIR Podcast Network.

    6 April 2026, 1:00 pm
  • 20 minutes 40 seconds
    ALP 299: Hire people who understand how to solve problems

    Most hiring processes obsess over the wrong things. Do they know our project management software? Are they proficient in this specific tool? Meanwhile, the one capability that actually determines whether someone will make your life easier or harder—their ability to solve problems independently—gets a cursory “are you a good problem solver?” question that everyone answers with “yes.”

    In this episode, Chip and Gini break down why problem-solving ability should be the primary hiring criterion, especially as AI makes technical skills easier to acquire and offload. The conversation explores why this matters more now than ever: as AI handles tactical execution, the ability to define problems clearly, break them into components, and figure out solutions becomes the differentiator between humans who add value and humans who get replaced.

    Chip and Gini discuss how problem-solving cuts across every role, even ones you don’t typically think of as problem-solving positions. Designers facing impossible deadlines, account people navigating last-minute client demands, anyone dealing with the reality that things rarely go according to plan. They all need to be able to figure out how to move forward rather than escalating every obstacle upward.

    The episode tackles the mechanics of actually interviewing for this capability. You can’t just ask “are you a good problem solver?”—you need scenario-based questions that reveal how candidates think through challenges. But not hypothetical scenarios you make up; real situations that have happened in your agency. Ask them to walk through how they’ve handled compressed timelines, missing information, conflicting priorities, or last-minute changes in past roles.

    Gini shares how her daughter’s school explicitly focuses on humanities and emotional intelligence rather than technical skills, anticipating that AI will reshape what jobs exist. She connects this to Anthropic’s hiring practice of seeking people with humanities degrees who can absorb information, think critically, and demonstrate emotional intelligence rather than just technical proficiency.

    The episode concludes with an important reminder: if you hire problem solvers but then micromanage how they solve problems, you’ve wasted the hire. You need to let them solve things their way, even if it’s different from how you’d do it, or you’ll end up with everything back on your plate anyway. [read the transcript]

    The post ALP 299: Hire people who understand how to solve problems appeared first on FIR Podcast Network.

    30 March 2026, 1:00 pm
  • 25 minutes 37 seconds
    FIR #507: Should Nobody Really Ever Write with AI?

    Take a stroll through LinkedIn. You’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. While that case isn’t hard to make for professional writers, there are countless professionals in other fields who struggle with writing, never trained to be writers, yet now have to write everything from emails to reports as part of their jobs. Should they really sweat for hours over wording, time they could be devoting to the core areas of subject expertise, when AI can produce content that is cogent, clear, and direct? In this short mid-week episode, Neville and Shel look at the trends in using AI for writing, despite the plethora of opinions from the pundits.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Neville: Hi everyone and welcome to For Immediate Release episode 507. I’m Neville Hobson.

    Shel: And I’m Shel Holtz. And if you spend any time at all on LinkedIn, you’ll see the degree to which anti-AI sentiment is ramping up. A lot of it’s aimed at using AI for writing and how absolutely wrong that is. Yet just last week, on the same day, Wired Magazine and The Wall Street Journal both published articles on reporters using AI to help write and edit their stories. So today, let’s talk about using AI to write.

Specifically, is it okay for employees to use AI to help them write for work? And my answer is not only is it okay for many employees, it might be one of the most genuinely useful things AI can do. Here’s the framing I would push back on. When we talk about AI writing assistants, we tend to picture a journalist or a marketer or a communications professional, someone whose craft is writing and who is paid for it, handing their keyboard over to a robot. And for those of us who are professional writers, that raises legitimate professional and ethical questions. But that’s not the population we’re talking about when we’re communicating about AI adoption in most organizations. Think about who actually has to write at work. Engineers document processes. Product managers write status updates. Safety officers draft incident reports.

Shel: Finance analysts compose budget justifications. Scientists write up findings for non-technical stakeholders. These are not people who chose their careers because they love writing. Writing is a tax they pay to do the work they actually care about. And many of them pay that tax really, really badly. The idea that a structural engineer should produce elegant prose unaided is the same logic as saying a communications director should coordinate the concrete mix for a construction project. We don’t expect that. So why do we expect every knowledge worker to be a competent writer? Muck Rack’s 2026 State of Journalism report found that 82% of journalists, professional writers, people whose job this is, are now using at least one AI tool. That’s up from 77% the year before.

If the people whose professional identity is tied to their writing are using AI tools, it shouldn’t surprise us that everyone else is too, or that they should be. Now the research does tell us something important about how to use these tools. A University of Florida study of 1,100 professionals found that AI tools can make workplace writing more professional.

    But regular heavy use can undermine trust between managers and employees, particularly for relationship-oriented messages like praise, motivation, or personal feedback. The study found that employees are more skeptical when they perceive a supervisor is leaning heavily on AI for those kinds of communications. Now that’s a meaningful finding and it’s exactly the kind of nuance internal communicators need to help their organizations understand.

    It’s not an argument against AI writing assistance. It’s an argument for knowing when it’s appropriate. Purdue Business School Professor Casey Roberson, who literally wrote one of the first business writing textbooks to address AI, puts it this way: AI is a great tool for brainstorming when you’re stuck, for outlining and structuring documents, for revising drafts to improve clarity and tone, but it should not be used for confidential information, and using it to write first drafts can stifle creativity and critical thinking. The Wharton communication program makes a similar distinction. Their guidance frames AI tools as powerful and skilled hands for the right task, valuable for brainstorming, editing, improving conciseness, and anticipating challenging questions, but a liability when used as a substitute for your own thinking, your own knowledge of your audience, and your own credibility.

    So what’s the practical guidance for internal communicators trying to help their colleagues use AI responsibly in their writing? First, make the distinction between communication types explicit. Routine informational writing — process documentation, project updates, meeting recaps, technical reports — that’s where AI assistance is most defensible and most valuable. That’s exactly where the trust risk is lowest and the productivity gain is highest. Conversely, messages that carry relationship weight, like a manager recognizing someone’s contribution or a leader addressing a team through a difficult moment, that deserves a human voice. Help your employees understand that difference.

    Second, reframe the conversation around who’s actually writing. A systematic review published in the International Journal of Business Communication found that AI can significantly help with idea generation, structure, literature synthesis, editing, and refinement. Essentially all the phases of writing that non-writers find most daunting. AI isn’t replacing a writer’s voice. In many cases, it’s giving non-writers a voice they otherwise wouldn’t even have.

    Third, be honest about the nuance inside the journalism conversation. The Columbia Journalism Review published a fascinating piece where journalists across major newsrooms shared their practices. Nicholas Thompson, the CEO of The Atlantic, described using AI the way he’d use a fast, well-read research assistant who’s also a terrible writer — helpful for checking consistency, flagging chronological issues, examining logical claims, but not for the writing itself. Amelia Daly, a senior reporter at VentureBeat, put it this way: AI helps her productivity, but she refuses to use it to write because writing is how she maintains trust with her readers. That distinction — AI as research and process support versus AI as voice — maps directly to the guidance you should be giving your colleagues.

I read about one other reporter in one of these articles who said he actually does use it to write, because he didn’t become a journalist in order to write. He didn’t like writing; he liked reporting. So he does all the other work and then lets the AI produce the writing.

    And here’s the thing I’d leave your employees with because I think it gets lost in this debate. Wharton’s communication faculty make the argument that writing is thinking, that when you rely on AI for drafting, you don’t know your content as deeply as you should, and you lose the nimbleness to adapt when the moment requires it. And that’s true. But for an engineer who agonizes over every sentence of a procedure document, who spends four times as long on the writing as on the analysis,

    Shel: AI doesn’t replace their thinking. It clears away the friction so their thinking can actually reach the page. For internal communicators, this is a genuinely useful message to take to your AI adoption rollouts. AI writing assistance isn’t about cutting corners. It’s about removing a barrier that prevents good ideas from being communicated clearly while still insisting on the judgment, authenticity, and relational awareness that only human beings can bring.

Neville: Yeah, it’s a big topic, I have to admit. And I think of it not so much from the employee communication point of view, though that’s a major part of it, a major usage. It’s anyone writing, in fact: whether you’re in public relations, whether you’re a journalist, et cetera. People who need to write as part of their roles is what’s in my mind mostly.

I’m also drawn to a very good analysis by Josh Bernoff. You and I interviewed Josh, what, two, three months ago. He wrote an assessment of Charlene Li’s new book, Winning with AI, a book she used AI extensively to create. Worth pointing out, though, that the AI didn’t write any of the content.

She and her co-author, Katia Walsh, talked about the way they divvied up the work. And the AIs, plural, did research amongst other tasks, too. Josh did a lengthy post setting out all the areas where they found AI useful and not so useful. And it struck me, reading Josh’s post and then Charlene’s postscripts, as it were, in the book itself (which I am reading, by the way), that this would apply to anyone writing, not just would-be book authors, in my view. Whether you’re writing fiction or nonfiction doesn’t make any difference. Whether you’re writing a report, an article, a blog post, or a newspaper piece doesn’t matter: these principles, I think, apply. And it’s not so much about those whose role has little to do with writing and who aren’t very good at it. It’s more focused on those whose job is writing, or for whom writing is part of the job in some form.

    So there are a number of things that I took from it. But to go to the main point about Charlene’s book Winning with AI, AI wasn’t doing the writing, as I mentioned. It was supporting the thinking. It handled things like the research, summaries, the structure, which speeds everything up. But the ideas, the voice, and the judgment — that all stayed firmly human. And to quote from Josh’s post, he says that the two authors describe how they used Claude to structure the content, ChatGPT to create a custom GPT with four years of their work, which it used in a sense as a training aid, Perplexity to do the research, and Gemini to search a vast collection of interview transcripts. It’s much more detailed than that. It’s well set out in the book. And I thought, that’s interesting. That’s a very intelligent way to go about using different AI chatbots for different purposes on your projects.

    So three things I took from this, and this applies to all the points you made, Shel, and it will repeat some of those, but it just shows you that this is how you need to think of this. First, AI works best as a thinking partner, not a writer. Like I said, the two authors used AI as a note taker, researcher, brainstorming partner — essentially a third collaborator. It helped them structure the ideas, surface insights, and challenge assumptions, and they did not rely on it to produce the final prose.

The second point: it saved time on the drudge work, as Josh called it, but it required human judgment. It was highly effective for research and summarization, structuring outlines, and surfacing missed ideas from earlier drafts. That resonated with me, because in my own experience, when I’m doing research for blog posts or articles or reports, or just research about something I’m interested in, the AI usually surfaces something I wouldn’t have thought of, or something I might have thought of only later, after I’d written the piece, requiring a rewrite. Structuring the outlines is another thing. And this is definitely worth noting, and we’ve discussed it before: everything still required the humans to fact-check and validate everything the AI produced, because in Charlene’s words, AI has no built-in truth function. And I think that’s a worthwhile way of looking at it.

And the final point that I took from this: you can’t outsource originality, voice, or quality, i.e., the writing. They tried it, and AI failed at core creative tasks; Josh points out three of them in his article. Generating genuinely new ideas: AI is not very good at this, because it’s trained on existing writing that humans have produced over the years and even the centuries, and it can’t create something new from that other than guesswork. It’s about the same as what we do, I think, except we’re likely to take the more informed approach. Writing in a compelling human voice: it can’t do that. And editing to a high standard: it can’t do that either. Charlene and Katia, and Josh for that matter, all described AI writing as bland, repetitive, and jargon-heavy. In fact, Charlene talks about how they could not stop jargon creep in anything the AI produced. She tells of one draft they used AI to review, where it changed every use of the word “use” to “utilize,” that kind of jargon.

    Shel: One of my biggest pet peeves, by the way, is “utilize.”

Neville: Right, totally. And the final quality, meaning the nuance, personality, and insight, remained entirely human, because the humans did the writing. So I take all of that, add it to what you’ve been talking about, and I guess I’d conclude: it doesn’t matter what your role is, these are the principles you need to pay attention to as you approach your use of AI as an aid. And we’re not, you know, suddenly coming out with a revelation here; I see people saying this all over the place. AI is an aid to help you create extremely good content, whether as a writer or in something else you might be doing where it contributes to that end. And it doesn’t matter whether you’re no good at this or that; that reporter you talked about likes to report but not to write. I’m wondering how the hell he gets away with doing that. Reporters have to write, don’t they?

    Shel: Well, I’m sure he just poured a lot of effort and energy into it when he would have rather been out in the field gathering information.

Neville: Got it, got it. So yeah, this is not too difficult a thing to grasp, in my view, yet I’m constantly bemused by what I see. Maybe LinkedIn’s not the best place to look for this stuff, but I see it all the time: people posting that you should never use AI, or “here’s a list of words, and if I see them in LinkedIn posts, I’m going to unfollow that person and call them out.” And there’s the example you mentioned to me before we started recording, the person who wrote a LinkedIn post saying you should never, ever use AI for a whole list of things. That’s insane. That’s insane.

Shel: Yeah, she said nobody wants to read emails written by AI. Nobody wants to read reports written by AI. And she just went down every form of writing you can think of. And I was thinking, really? Nobody? Nobody wants to read this? I’ve got data that says people actually prefer the AI-assisted emails of people who are terrible writers, the ones who have a hard time expressing the main point they’re trying to get to. The AI has made those people’s emails better than their own writing, and people would rather read those.

    Neville: So did you use AI to research this?

    Shel: To research, to find that data? Yeah, of course I did. It’s easier than using Google, but I also verified the source of that research.

Neville: Right, okay. No, no, no, hang on a second. The point of that, though, is that it’s illustrative of something. I’m astonished when I hear people who have never heard of doing this before say, “That’s a good idea.” Which is: for anything you’re working on, literally anything, whether you have a list of things you need to research or something occurs to you during your work (I wonder who said X, or I wonder how you do this), ask your AI to go research it. It then becomes a natural part of your workflow. And that’s one of the things it’s very good at.

But we’ve got the example we talked about last October with Deloitte in Australia and Canada. You’ve got to check everything it creates, particularly if it’s a topic you really don’t know about yet. But even if you do know, you’ve still got to check it. That means when you tell it to go out and look for stuff, and you’ve already given it your preferences (like anything it finds, it comes back with a link to the source), you’ve got to then go and check all those sources too. So there are no easy shortcuts here. But it still saves you a huge amount of time, because you’re then spending the time, in a sense, understanding the output that you’re going to use to create your final version.

I often see people criticizing that too: “If you use AI, your brain gets kind of frozen and doesn’t learn stuff.” In my experience, that’s not the case, because you’re doing it differently, is how I would see it. You are asking your assistant to go and find this and this and this; it comes back with this and this and this; and you then go and research it yourself to check that it is this, this, and this, and not that.

    So it’s, I think, an interesting aspect to the broader debate on those who are anti and those who aren’t, where most of us are sort of somewhere in the middle there. But you need to totally understand the pros and the cons of this and indeed the limitations of AI, as well as the human limitations, and work out what works best for you.

    The reality, though — I guess the bottom line in terms of how I see this — is that you cannot take the human being out of the picture. This tool is purely that: something to assist you that gives you what you need to create the final product, if you like. And that doesn’t matter your job role. That’s what it’s about.

    Shel: Well, I would argue that if you are in a job where writing was not taught in school beyond what you learned in your basic English class or whatever language you were raised with, and you need to produce writing, and this tool is now there to help you do that — if you’re an engineer, for example, engineers are brilliant. Many of them are

    Neville: Not good writers.

    Shel: Terrible writers. And they have to produce something that’s going to be useful to the people that they’re distributing it to. And if AI is going to write a better draft than they could do on their own and produce better output that people can make better use of, then they should let AI write that stuff. In an engineer’s report, there is no need for lived human experience that we keep hearing about. Empathy does not have to come into these reports. They’re technical in nature. Let the AI write it for them. Absolutely edit it, review all the facts to make sure it’s right. Presumably it’s writing based on what you gave it in terms of the information that you have learned that you need to produce in this report. So less opportunity for hallucination when you’re telling it: only use this data that I have put into this ChatGPT project for the output. But you still have to review it very, very carefully. That’ll still save you time and grief if you’re not a writer and you need to produce this stuff. I feel really strongly: we have this great tool here that’s going to make the outputs better and make business better.

Neville: Yeah, I don’t disagree with you at all, but I’m not as optimistic as you are that this will work seamlessly if people do all the things you just said, because typically they’re not going to do that. I can see scenarios exactly as you’ve outlined: someone in a valuable job who does it well but lacks the skills to write. Then I would say that’s fine, get the AI to write it. But you need to be educated on how to get the AI to do what you want. You then need to, without fail, verify and check every single thing the AI has created. And I’m not sure that many of the folks you might be thinking of are truly geared up to do that kind of thing, so you might need colleagues to assist you. I mean, I guess the point is that…

    Shel: Well, it’s…

    Neville: This is going to be a debating point forever, I would imagine, until people stop talking about it. But you’re going to encounter — I can see it now — “But yeah, you’ve got to disclose the fact that you used AI.” No, you don’t. You get down to that rabbit hole argument about, do you do that when you use Grammarly? Do you do that with your spell checker? No, you don’t. So why would you say you’d have to do this? Because it’s such an emotive topic where logic is missing in many of the arguments. It’s all emotional.

    That’s the minefield you have to walk. For much of the work that many people might do, they won’t use the AI to write it. They’ll use AI to assist them in creating it. And that could mean they do an outline, or it suggests the construct of a draft, or you draft it and it reviews it and makes suggestions on how to improve it.

I do that quite a bit with my AI assistants. And I don’t have a rigid format; much depends on the topic and how I feel about it, basically. Often I’ll bring up a topic I’ve been thinking about and ask, is this worth writing about? If so, give me some suggestions on the angle I should approach it from. And that always sparks much more discussion and thought on what the content might be, including, sometimes, “this is not worth writing about for me.”

So it’s a big topic. You had loads of links in your prep for this, to articles all over the place. And I think it’s good to do that. But this is emotive, and it’s not going to be simple to avoid criticism.

    Shel: Yeah, and I think it’s a governance issue inside organizations. I hear about the lack of AI training going on in many organizations or how superficial it is. I think for those people who have to write in their jobs, you want to do targeted training about how to use this to write. From the idea generation to the brainstorming to the back-and-forth discussions that you might have about approaches to take, or

Shel: using it to structure the document, right down to writing that first draft, if it can do better with that than you can on your own and you’re not a professional writer. All of that needs to be trained, it needs to be articulated in the organization’s governance policies around AI, and there need to be resources. And yeah, we need to have subject matter experts that people can call. This is on us right now, as internal communicators who deal with writing in general, to lead this conversation in the organization and make sure these kinds of governance activities are implemented.

    Neville: Work to do.

    Shel: And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #507: Should Nobody Really Ever Write with AI? appeared first on FIR Podcast Network.

    30 March 2026, 7:01 am
  • 1 hour 2 minutes
    Circle of Fellows #126: Communicating in the Era of the Polycrisis

    The days when a crisis communicator could simply reach for a dusty binder and follow a pre-scripted, linear checklist are gone — and they aren’t coming back. In the “good old days,” a crisis was often a contained event with a predictable lifecycle; crisis teams could address them by checking off items on a checklist. Today, we face the era of the polycrisis, where economic instability, geopolitical friction, and a 24/7 social media cycle collide, creating a torrent of simultaneous challenges. This new reality has effectively obliterated the traditional news cycle, replacing it with an always-on environment where a single viral post can tarnish a brand before leadership even knows there is a problem.

    Thriving in this volatile landscape requires a move away from rigid manuals toward a more fluid, strategic approach. Rather than a step-by-step rulebook, modern practitioners need logical scaffolding — a flexible framework of principles and values that provides a foundation for action while allowing for real-time adaptability. It is about preparation, not just prescription. As the boundaries between internal and external perception continue to erode, the ability to maintain transparency and connection through these multifaceted disruptions is no longer a luxury; it is table stakes for organizational survival.

    Four Fellows of the International Association of Business Communicators (IABC) shared their perspectives in this episode of IABC’s Circle of Fellows.

    About the Panel:

Edward “Ned” Lundquist is a retired U.S. Navy captain with 43 years of professional public affairs and strategic communications experience. His company, Echo Bridge LLC, provides outreach and advocacy support to government and commercial clients. He served on active duty for 24 years in the U.S. Navy as a surface warfare officer and public affairs specialist. Captain Lundquist was a Pentagon spokesman with the Office of the Assistant Secretary of Defense for Public Affairs, Director of the Fleet Home Town News Center, and director of public affairs and corporate communications for the Navy Exchange Service Command. His last tour of duty was commanding the 450 men and women of the Naval Media Center. He is an accredited business communicator and award-winning communicator who served as president of IABC/Hampton Roads and IABC/Washington, director of U.S. District 3, and chair of the International Accreditation Council. He was named an IABC Fellow in 2016. Captain Lundquist received the Surface Navy Association’s Special Recognition Award in January of this year for his service on SNA’s executive committee and as chair of the SNA communications committee. He writes for numerous naval, maritime, and defense publications and chairs and presents at communications, naval, and maritime security conferences around the world.

Robin McCasland, IABC Fellow, SCMP, is Senior Director of Corporate Communications for Health Care Service Corporation (HCSC). She leads the company’s communications team and the employee listening program, demonstrating to senior leaders how employee and executive communication add value to the business’s bottom line. Previously, Robin excelled in leadership roles in communication for Texas Instruments, Dell, Tenet Healthcare, and Burlington Northern Santa Fe. She has also worked for large and boutique HR consulting firms, leading major communication initiatives for various well-known companies. Robin is a past IABC chairman and has served in numerous association leadership roles for over 30 years. She was honored in 2023 and 2021 by Ragan/PR Daily as one of the Top Women Leaders in Communication. She’s also received IABC Southern Region and IABC Dallas Communicator of the Year honors. Robin is a graduate of The University of Texas at Austin and a Leadership Texas alumna. Her own podcast, Torpid Liver (and Other Symptoms of Poor Communication), features guest speakers addressing timely topics to help communication professionals become more influential, strategic advisors and leaders. She resides in Dallas, Texas, with her husband, Mitch, and their canine kids, Tank and Petunia.

    George McGrath is founder and managing principal of McGrath Business Communications, which helps clients build winning corporate reputations, promote their products and services, and advance their views on key issues. George brings more than 25 years in PR and public affairs to his firm. Over the course of his career, he has held senior management positions at leading strategic communications and integrated marketing agencies including Hill and Knowlton, Carl Byoir & Associates, and Brouillard Communications.

    Caroline Sapriel, founder and Managing Partner of CS&A, brings over 30 years of specialized expertise in risk, crisis, and business continuity management to the table. A Fellow of the International Association of Business Communicators (IABC) and a recipient of the Gold Quill Award for her “10 Commandments of Crisis Management,” Sapriel is a recognized authority in providing high-level, results-driven counsel to senior leaders across the energy, pharmaceutical, and aviation sectors. Her deep academic roots as a lecturer at Antwerp, Leuven, and Leiden Universities, combined with her authorship of Crisis Management – Tales from the Front Line, underscore a career dedicated to transforming systemic vulnerabilities into robust reputation management strategies. Fluent in five languages and possessing a multi-disciplinary background in International Relations and Chinese Studies, she offers a uniquely global perspective on the evolution of stakeholder engagement during high-stakes disruptions.

    The post Circle of Fellows #126: Communicating in the Era of the Polycrisis appeared first on FIR Podcast Network.

    29 March 2026, 7:33 pm
  • 19 minutes 35 seconds
    ALP 298: Build the business you want to own, not the one you hope to sell

    Most agency owners have read Built to Sell. But many have internalized the wrong lesson from it—fixating on that final chapter where the protagonist drives off into the sunset with a pile of cash, rather than the actual business-building advice throughout the book. The result is owners spending years building businesses optimized for a sale that may never happen, or that won’t deliver the outcome they’re imagining.

    In this episode, Chip and Gini discuss Chip’s “Build to Own” philosophy as a counterpoint to the built-to-sell mindset. The core principle: focus on creating a business that serves you today, not some hypothetical buyer tomorrow. This doesn’t mean you can’t or won’t sell—it means you stop treating the sale as the primary objective and start treating ownership as the thing you’re optimizing for right now.

    Chip breaks down the TMRW framework for thinking about what you want from your business: Time (how much you spend and what flexibility you have), Meaning (what gives you satisfaction—clients, team, impact), Rewards (financial outcomes that fund your life today and tomorrow), and Work (the actual role you’re crafting for yourself). Gini shares her decision to retire from speaking despite conventional wisdom saying agency owners should be out there raising their profile—because the anxiety wasn’t worth the marginal business benefit.

    The conversation tackles the uncomfortable reality that most agency owners counting on a sale to fund their retirement are likely building businesses that won’t command the multiple they’re hoping for. Meanwhile, owners who build businesses that throw off enough cash to fund retirement directly—while also being enjoyable to run—end up with something far more attractive to buyers when and if they do decide to sell.

    Gini tells the story of a friend who prepared five years in advance for a sale: removing himself from day-to-day operations, hiring a president to build culture, ensuring the business wasn’t founder-dependent. The result? An 18x multiple. But the episode’s point isn’t “here’s how to get a great sale”—it’s that you should make every decision through the lens of “would I still be happy with this if I never sold?” [read the transcript]

    The post ALP 298: Build the business you want to own, not the one you hope to sell appeared first on FIR Podcast Network.

    23 March 2026, 1:00 pm