- 30 minutes 10 seconds - The AI-Native Services Playbook - with Jake Saper, General Partner - Emergence Capital
Our host, Ray Rike, sits down with Jake Saper, General Partner at Emergence Capital, to unpack the firm's AI-Native Services Playbook. Jake brings a unique lens: 12 years at Emergence, early-stage bets on companies like Zoom, and seven AI-native services businesses already in the portfolio. The conversation covers what separates AI-native services from SaaS, why the business model is harder to execute than it looks, and the five metrics and structural choices that determine who wins.
WHAT WE COVER IN THIS EPISODE
Domain Expertise: Critical - But Not Required from the Founders
AI-native services companies are selling outcomes, not products. That means trust and credibility are the first sales. Domain expertise is non-negotiable, but it does not have to live in the founding team if two conditions are met: the founders go as deep as humanly possible on the service before launch, and they hire senior domain experts early. Emergence portfolio company Hanover Park, an AI-native fund administrator, is the case study. The founder interviewed 150 CFOs before writing a line of code and hired respected fund accounting veterans to sit alongside the AI. That combination unlocked enterprise trust from day one.
Hire a Product Leader Before You Think You Need One
The biggest structural trap in AI-native services is over-relying on human delivery while the product falls behind. Market pull is strong by design — if you promise faster, better, cheaper outcomes in an existing market, customers will buy. But if delivery is primarily human, you have a services company with venture capital financing and no AI leverage. The fix is a dedicated product leader whose sole KPI is productizing the service. The best AI-native services companies run a tight feedback loop between the doers (service delivery) and the builders (engineering), and the PM owns that loop.
The Mirage of Product Market Fit
In SaaS, fast growth plus strong net dollar retention meant you had product market fit. In AI-native services, those are necessary but not sufficient. Revenue growth powered by human labor is a false signal. True product market fit requires that AI is delivering the majority of the service value. Jake's framework: track both leading indicators (a North Star product metric showing AI leverage improvement, such as human review time per contract or time to migrate a line of code) and lagging indicators (revenue per FTE trending up quarter over quarter, and gross margin). The leading indicators tell you if you're building leverage. The lagging indicators confirm it.
Outcome-Based Pricing: The Direction of Travel
AI-native services companies that started with labor-based pricing will need to migrate toward outcome-based pricing over time, and the transition requires patience. Emergence portfolio company Prosper AI, an AI-native healthcare services provider handling prior authorization and benefits verification, navigated this by moving a portion of contracts to resolution-based pricing while keeping the remainder on a per-minute basis. That hybrid approach gave both sides the data and comfort to expand the outcomes-based portion at renewal. Jake's view: as AI does more of the work, downward pricing pressure is inevitable, but upward margin pressure offsets it.
Revenue Per FTE and Gross Margin: The Two Metrics That Matter Most
Revenue per FTE is the primary signal of AI leverage, but it needs to be benchmarked two ways: against the legacy service provider in the same vertical, and against itself quarter over quarter. The latter is more important. If revenue per delivery FTE is not improving each quarter, the AI is not compounding. On gross margin, the industry is still in the Wild West. Two common errors: allocating service delivery headcount to R&D instead of COGS because the team "helped train the model," and excluding inference spend from COGS. Both understate the true cost of delivery. Customer-specific model training belongs in COGS. Base model training belongs in R&D.
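As a purely illustrative sketch of the two lagging indicators described above (all figures are hypothetical, not from the episode), the arithmetic is simple once delivery costs are classified correctly:

```python
# Hypothetical quarterly figures for an AI-native services company.
# All numbers are illustrative, not from the episode.
quarters = {
    "Q1": {"revenue": 2_000_000, "delivery_fte": 40,
           "delivery_labor": 900_000, "inference": 150_000,
           "customer_model_training": 50_000},
    "Q2": {"revenue": 2_600_000, "delivery_fte": 42,
           "delivery_labor": 950_000, "inference": 180_000,
           "customer_model_training": 60_000},
}

for q, f in quarters.items():
    # Per the episode's guidance: delivery labor, inference spend, and
    # customer-specific model training all belong in COGS.
    # (Base model training would sit in R&D and is excluded here.)
    cogs = f["delivery_labor"] + f["inference"] + f["customer_model_training"]
    rev_per_fte = f["revenue"] / f["delivery_fte"]
    gross_margin = (f["revenue"] - cogs) / f["revenue"]
    print(f"{q}: revenue/FTE = ${rev_per_fte:,.0f}, gross margin = {gross_margin:.0%}")
```

In this hypothetical, revenue per delivery FTE rises from $50,000 to roughly $62,000 quarter over quarter; that trend, not the absolute level, is the signal of compounding AI leverage.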
The Moat Question
Brand trust and proprietary data are the two sources of durable advantage. Brand matters because enterprises buying AI-delivered outcomes need a trusted guarantor. Data matters because high-volume AI-native operations accumulate transaction data that legacy providers, running at lower volume with more human overhead, simply cannot match. Emergence portfolio company Harper, an AI-native insurance broker, is outperforming brokerages ten times its size on placement speed and carrier-risk matching because its data volume is superior.
LINKS: Emergence Capital AI-Native Services Playbook: em.cap.com
ABOUT AI TO ROI Ray Rike is the Founder and CEO of Benchmarkit, the leading B2B SaaS and AI-native software benchmarking company. The AI to ROI podcast brings a metrics-first lens to enterprise AI adoption, ROI measurement, and the business models being built on top of AI. Subscribe on your favorite podcasting app and connect with Ray on LinkedIn.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
7 May 2026, 4:00 am - 28 minutes 41 seconds - The Role of the CAIO in a Managed Service Provider - with Jim Piazza, CAIO, Ensono
Ray Rike sits down with Jim Piazza, Chief AI Officer at Ensono, a managed services provider scaling AI across both its internal operations and customer environments. Jim brings a rare combination of deep infrastructure experience, nearly a decade at Meta scaling data center operations with machine learning, and a rigorous framework for connecting AI investments to business outcomes that executive operators can actually measure.
Key Topics:
Defining the Chief AI Officer Role in an MSP: Jim describes the CAIO role as a blend of CDO, CIO, and CTO with an AI lens, but with a critical distinction: the job is not to ask what AI can do. It is to identify where AI improves service delivery, customer outcomes, and financial performance. At Ensono, that meant starting small as VP of Predictive Systems, demonstrating results, and earning the mandate to expand. Prioritization, not ideation, is the core skill.
Building AI Tools That Drive Internal Operational ROI: Ensono developed three production AI systems for internal use. Envision Predictive Engine analyzes telemetry data across systems to predict failures before they cause business impact, including one case where a problem was detected 144 minutes before it would have affected a major logistics customer outside Ensono's own scope of responsibility. Diagnose Now puts the right diagnostic data in front of engineers at the right moment and has delivered up to a 66% reduction in mean time to repair in A/B testing. ChangeGuardian assesses risk scores for the 8,000-plus changes Ensono executes monthly, auto-generating methods and procedures from a decade of historical change data to reduce both risk and manual effort.
Structuring AI Governance: The Three Musketeers Model: Jim, the CTO, and the CIO operate as a deliberate leadership triad. The CTO owns the platforms. The CIO owns data quality and structure. The CAIO owns the build-versus-buy decision and solution development. Shared accountability, not siloed ownership, drives alignment. Each business unit also contributes one to two subject matter experts through a formal value stream mapping process to identify where AI should focus first.
Measuring AI ROI Before Writing a Line of Code: Jim's most consistent lesson: define your value metrics before touching the technology. AI use cases must tie back to core business metrics such as mean time to repair, customer satisfaction, SLA risk reduction, and gross margin improvement. Business unit leaders own the outcome measurement. The CAIO owns the budget and the technology. That separation of responsibility keeps AI programs anchored to results rather than activity.
The CAIO and CIO Relationship: Where the Lines Get Drawn: For companies bringing in a Chief AI Officer alongside an existing CIO, Jim offers a practical delineation. The CIO owns data infrastructure and quality. The CAIO is a consumer and a builder who depends on that foundation. Without clean, accessible data, AI programs stall regardless of the use case. The CAIO's job is to surface missing or insufficient data and partner with the CIO to close the gap.
Lessons Learned and Career Advice for the AI Era: Jim's framework for AI program success: start with one or two high-probability use cases where data is already in good shape, build credibility through results, then expand. Avoid the ten-pilot trap. Kill weak use cases early. For early-career professionals, his advice is equally direct: learn to work with AI, not compete with it. Build problem framing, critical thinking, and business judgment. Technical fluency matters, but business judgment is what separates the people AI replaces from the ones AI makes more valuable.
This episode is essential listening for technology and operations executives navigating the practical reality of AI deployment inside complex enterprise environments. If you are a CIO, CTO, COO, or Chief AI Officer trying to figure out how to structure governance, measure impact, and build internal credibility for AI programs, Jim Piazza gives you a real-world operating model, not theory. For managed services leaders and enterprise buyers evaluating MSP capabilities, the Ensono case studies show what it looks like when an MSP moves from reactive service delivery to predictive, AI-driven outcomes. And for executives still debating whether to hire a Chief AI Officer, this conversation makes a direct case for what the role should own, how it should partner, and what success looks like when it is done right.
28 April 2026, 4:00 am - 33 minutes 51 seconds - On Paper, the SpaceX IPO is Not So Heavenly
SpaceX filed for what could be the largest IPO in history, targeting a $1.75 trillion valuation and $75 billion raise on NASDAQ in June. Ray Rike and Peter Buchanan cut through the narrative and go straight to the numbers, business unit by business unit.
Key Topics:
The Launch Services Monopoly: Falcon 9 launches cost roughly $67 million, compared to $110-160 million for competitors. With over 100 launches per year, $4 billion in NASA contracts, and a freshly awarded Space Force contract, SpaceX has no meaningful competitor at scale. The catch: the next-generation Starship rocket, critical to everything else in the bull case, is already five years behind its original commercial timeline.
Starlink: The $10 Billion Business You Never Think About. Starlink generates nearly $10 billion in annual revenue from 10 million global subscribers, representing 54% of SpaceX's total revenue. The real margin engine is not residential subscribers but aviation and maritime, where per-customer annual revenue runs $300K and $34K respectively. Amazon's Project Kuiper remains far behind with under 700 satellites versus Starlink's 10,000-plus.
XAI and X: The Problem Child. SpaceX acquired XAI in February 2026 in an all-stock deal valued at $250 billion. The financial reality is stark. XAI burned $9.5 billion in cash during the first nine months of 2025 on only $210 million in revenue, roughly $35 million per day. A combined 2025 P&L would have shown a $5 billion net loss on $18.5 billion in revenue, reversing SpaceX's standalone $8.5 billion profit in 2024. Grok, its large language model, is described in internal SpaceX memos as clearly behind Anthropic, OpenAI, and Gemini, and Elon Musk himself has said publicly it needs to be rebuilt.
The IPO Mechanics: Structure, Retail Allocation, and a Controversial NASDAQ Rule Change. Five banks are co-leading the offering with no single lead book-runner, and each was reportedly required to purchase Grok subscriptions as a condition of participation. Retail investors receive a 30% share allocation, three times the typical size. Most controversially, NASDAQ shortened its index inclusion waiting period from 90 days to 15, which could trigger mandatory passive fund buying from vehicles like Invesco's QQQ shortly after listing. Market veterans are calling it structural manipulation.
The Bull and Bear Case: The bull case requires Starship reaching commercial operations within 18 months, Grok building a real enterprise sales engine beyond Elon's existing relationships, and the vertical integration thesis playing out as planned: Starlink as a global AI distribution layer, Grok trained on real-time X data, and orbital data centers as a structural competitive moat. The bear case is simple: every element depends on Starship staying on schedule, and if it slips again, the entire investment thesis slips with it.
Executive Takeaways for Technology Leaders: The valuation is not priced on current fundamentals. It is priced on a version of this business that does not exist yet and may not until the early 2030s. For technology executives evaluating SpaceX or XAI as vendors or partners, multi-year contract stability is a real consideration. The NASDAQ rule change also has downstream implications for OpenAI, Anthropic, and other AI companies in the IPO pipeline.
This episode is designed for B2B SaaS and enterprise AI executives who need to understand where capital is flowing and why it matters in their own strategic context. If you are making decisions about AI vendor relationships, enterprise infrastructure partnerships, or simply need a clear-eyed read on how AI-era IPO valuations are being constructed, Ray and Peter give you the data behind the headlines, not just the hype. No investment advice. Just the numbers, the business model mechanics, and the questions every executive should be asking before the June listing.
24 April 2026, 4:00 am - 33 minutes 40 seconds - AI's Organizational Impact: McKinsey's State of Organizations 2026 Report
Ray Rike and Peter Buchanan dig into McKinsey's 2026 State of Organizations Report, a landmark study drawing on more than 10,000 senior executives across 15 countries and 16 industries. The central finding is both simple and uncomfortable: the vast majority of organizations are actively experimenting with AI, and that same majority reports no meaningful impact on their bottom line. This episode is about closing that gap.
Topics Covered
- Three Tectonic Forces Reshaping Every Organization. McKinsey identifies AI and agentic systems, economic and geopolitical fragmentation, and workforce transformation as structural shifts rather than temporary headwinds. Ray and Peter unpack why these forces are interdependent and why three in four leaders say their organizations are not ready to face what is coming, including leaders who describe themselves as optimistic.
- Why AI Initiatives Keep Falling Short. The diagnosis is clear: most organizations are running scattered pilots and point solutions that augment individuals but never transform the enterprise. McKinsey's data shows that organizations redesigning entire domains (marketing, finance, operations) see dramatically greater financial impact than those pursuing isolated use cases. Ray calls this systems thinking and walks through five specific variables required to move from pilot to production at scale.
- Humans and AI Agents: A New Collaboration Model. Only one in four executives expect AI to take on truly agentic, autonomous roles in the next 12 to 24 months. Ray and Peter discuss why senior leaders are more conservative than younger high-potential talent, what the Hitachi and Allianz case studies reveal about workforce redesign versus workforce replacement, and why demand for AI fluency has increased 7x faster than any other skill tracked in job postings.
- Geopolitical Disruption and the Cost of Organizational Rigidity. Three in four leaders report a material impact from geopolitical uncertainty on their organizations. Ray and Peter discuss the Tonies case study, a German toy company that launched a production facility in Vietnam on the same day US tariffs were announced, as a model of what organizational preparedness looks like in practice. Two thirds of surveyed executives also said their organizations are overly complex and inefficient, and McKinsey's diagnosis of why traditional structural fixes are no longer working is worth hearing.
- People and Performance: The Four-Times Multiplier. McKinsey's data shows that organizations investing equally in people development and operational performance are four times more likely to sustain top-tier financial results, grow revenue twice as fast, and carry half the earnings volatility of peers. Ray and Peter connect this to why 80% of leaders leave non-financial motivation levers completely untouched, and to what GE's model of purpose, autonomy, recognition, and growth still gets right.
- Business as Change: The New Operating Condition. McKinsey's closing argument is that transformation is no longer a periodic program with a defined start and end. It is a permanent operating condition. Ray frames four implications for leaders, and Peter adds the critical point that the gap between AI activity and AI impact is an organizational problem, not a technology problem. The tools exist. The redesign is the work.
Why Listen
This episode is for senior executives who are experiencing growing discomfort between how much their organization is investing in AI and how little of it is showing up in the numbers. Ray and Peter move well beyond summarizing the McKinsey findings. They connect the research to hands-on operating experience, call out where most organizations get stuck, and give listeners a practical framework for thinking about workforce redesign, change management, and leadership accountability. If you are responsible for AI strategy, organizational performance, or the people agenda at a B2B software or enterprise company, this is one of the most data-rich and actionable conversations you will find on the topic.
22 April 2026, 4:00 am - 31 minutes 24 seconds - Beyond OpenClaw - The Rise of Personal AI Agents
In this week's AI to ROI: Big Story episode, Ray Rike and Peter Buchanan unpack the OpenClaw phenomenon and what it reveals about the future of personal AI agents for both individuals and enterprises.
From a solo developer's side project to 1.5 million active agents in two months, OpenClaw has ignited a new category and forced every major AI company to respond.
Ray and Peter break down what is working, what is still broken, and which vendors have the best shot at winning the enterprise.
Top Insights from This Episode
- OpenClaw Proved the Market, But Not the Product. Peter Steinberger built OpenClaw in days and attracted 1.5 million users before OpenAI acquired him and opened the codebase. The product validated massive pent-up demand for always-on personal AI agents, but security researchers at Cisco and Northeastern University quickly surfaced serious vulnerabilities, including data exfiltration risks and prompt injection without user awareness. Even the Chinese government restricted its use in state agencies. The pioneer made the promise real; the product is not yet enterprise-safe.
- NVIDIA Jumped In Fast with NemoClaw, But Gaps Remain. NVIDIA wrapped OpenClaw with a three-layer security architecture (OpenShell runtime, privacy router, and governance layer) and launched NemoClaw at GTC with nearly 20 partners, including Box and Cisco. Box demonstrated human-matching permission controls for enterprise file workflows, and Cisco showed a zero-day vulnerability response with a full audit trail. But governance experts noted NemoClaw still lacks basic IT safety features, particularly around rollback, audit trails, and policy enforcement. Fast to market; not yet enterprise-ready.
- Perplexity Made a Quiet Pivot to Enterprise AI Agent Infrastructure. Six months ago Perplexity was an AI search company. Today they are building a three-product personal agent suite: Perplexity Computer for multi-model orchestration across 18-plus AI models, Personal Computer for local 24/7 file and compute access on Mac, and Comet Enterprise as an AI-native browser tying the stack together. Their Samsung Galaxy S26 integration via Bixby gives them significant distribution, and their CEO framed the shift simply: traditional operating systems take instructions; AI operating systems take objectives. The model-agnostic architecture may be their biggest differentiator.
- Anthropic Is Playing a Different and Potentially Smarter Game. Rather than shipping a standalone personal agent, Anthropic is embedding agentic capability into existing products. Claude Code scaled to an estimated $2.5 billion in ARR in nine months. Claude Cowork gives Claude direct control of Mac-level tasks with a permission layer built in. And the Microsoft partnership puts Claude Cowork as the multi-step reasoning engine inside Microsoft 365 Copilot Wave 3, branded as Copilot Coworks. A recent survey showed 66 percent of enterprise technical buyers said they purchased Claude first, with ChatGPT in the thirties. Anthropic's enterprise trust advantage may matter more than feature parity.
- Enterprise Adoption Will Be IT-Led and Slow by Design. Unlike SaaS, which grew through decentralized, shadow-IT purchasing that bypassed central IT, personal AI agents require direct access to local files, compute, and company systems. That puts CISOs and IT leaders in the approval seat from day one. Ray and Peter agree the enterprise version of personal AI agents is likely 12 to 24 months away from broad deployment, with adoption following a managed, permission-controlled model rather than the freewheeling consumer version that drove OpenClaw's early growth.
If you are an executive evaluating whether to allow, enable, or even build personal AI agents at your company, this episode is a great listen. It might even inspire you to create a personal AI agent of your own!
16 April 2026, 4:00 am - 33 minutes 30 seconds - The Power of Eye Tracking for the Enterprise - with Adam Gross, Co-Founder & CEO of HarmonEyes
Eye tracking has moved far beyond the clinic and the sports performance lab. In this episode, Ray Rike sits down with Adam Gross, co-founder and CEO of HarmonEyes, to explore how AI-powered eye-tracking is being deployed in enterprise environments to measure cognitive load, predict performance degradation, and reduce costly employee burnout and attrition before problems occur.
What You Will Learn:
- What eye tracking actually measures and why objective, passive, quantifiable eye movement data is more reliable than self-reported assessments for measuring cognitive and attention states
- How AI transforms raw eye data into actionable intelligence, including real-time model inference, individual adaptation across a population normative database of 15 million+ records, and predictive time-to-transition modeling
- Why personalization at scale matters and how HarmonEyes uses advanced machine learning to adapt its models to individual differences in age, sex, and experience level, making population-level models actually work for every individual
- Enterprise use cases with measurable ROI, including pilot training in flight simulators (shorter time to proficiency), remote operator and call center environments (fatigue and overload intervention before safety incidents), and employee burnout detection over extended time horizons
- The device-agnostic deployment advantage, covering webcams, phone cameras, smart glasses, and vehicle cabin cameras as signal sources that eliminate the need to purchase dedicated hardware
- How team leaders use real-time cognitive state data to shift from reactive management to proactive intervention, reducing performance risk across shifts and high-stress operating environments
- Privacy as a design principle, not an afterthought: HarmonEyes does not collect, store, or record eye tracking data or PII; the prior second of data is destroyed with each new output delivery
- Where to start as an enterprise buyer: the highest-value entry points are high-stress, high-stakes roles where burnout and performance degradation already show up as operational problems with measurable costs
- Career advice for early professionals: the best defense against AI-driven job displacement is not avoidance but mastery; become the human in the loop who knows the technology best
14 April 2026, 4:00 am - 35 minutes 53 seconds - The Power and Promise of Vertical AI
While the AI headlines obsess over foundation model fundraises and hyperscaler spending, a quieter revolution is generating real, measurable returns. In this episode of AI to ROI: The Big Story, Ray Rike and Peter Buchanan break down why vertical AI companies may be building the most durable and valuable businesses in the history of enterprise software, and why most people aren't paying attention yet.
What's covered in this episode:
- Defining Vertical AI: What separates vertical AI from horizontal tools like Microsoft Copilot or Google Workspace AI, and why the distinction matters for buyers and investors alike
- A fundamentally different business model: Why vertical AI companies target labor budgets (10x the size of enterprise software budgets) rather than IT spend, and how outcome- and consumption-based pricing is replacing the traditional per-seat model
- The funding explosion: Vertical AI investment grew from $8B in 2023 to $22B in 2024 to $42B in 2025, with unicorn counts in the sector jumping nearly 6x in just two years
- Harvey (Legal AI): How this $8B+ valuation company grew ARR from $100M to $190M in just four months by orchestrating multiple AI models across legal workflows and embedding deeply into law firm operations
- Abridge (Healthcare AI): How a cardiologist-founded company reached a $5.3B valuation by turning physician-patient conversations into structured clinical documentation in real time, with deep Epic EHR integration across 150+ health systems
- Sierra (Customer Experience AI): How Bret Taylor's enterprise AI platform hit $100M ARR in just 21 months and crossed the $10B decacorn threshold, raising the question of whether the agent era could produce the first trillion-dollar enterprise software companies
- MaintainX (Industrial/Manufacturing AI): How this maintenance management platform is tackling $1.4 trillion in annual equipment failure costs across 11,000 customers and 11 million assets, with a 34% reduction in unplanned downtime for customers
- Why vertical AI moats are so durable: Proprietary data that compounds with every transaction, embedded institutional knowledge that makes switching costs higher than any legacy ERP migration, and a model architecture that gets stronger as foundational models improve
- Advice for enterprise buyers: Why 2026 is the year to evaluate vertical AI vendors, insist on outcome-based pricing, and start with one workflow before expanding
Interested in reading the details on the Vertical AI industry and trends? Check out the AI to ROI Newsletter, which provides even more detail.
31 March 2026, 4:00 am - 33 minutes 25 seconds - Pricing Strategy for AI Software and SaaS: When to Change, Who Should Own It, and the CFO's Role with Dan Balcauski
Pricing is one of the most underleveraged strategic levers in B2B SaaS and AI Software. Most companies are getting it wrong. In this episode, Ray Rike sits down with Dan Balcauski, founder of Product Tranquility and a 20-year software industry veteran, to cut through the noise around consumption, usage, outcome, and hybrid pricing models. Dan brings a practitioner's perspective on when to review pricing, who should own it, and how the CFO fits into the equation.
Signs Your Pricing Needs a Review
- Best-in-class companies review pricing at least quarterly -- but review does not always mean change
- Key warning signals include declining net revenue retention and unexpected shifts in win/loss conversion rates
- AI-native companies are iterating on pricing monthly due to rapid competitive dynamics
- Sales cycle length is a practical constraint: a 12-month enterprise cycle limits how frequently you can test and observe pricing changes
The Role of Customers in Pricing Strategy
- Never anchor your pricing strategy entirely to your existing customer base -- they carry inherent bias
- A practical research mix: roughly one-third existing customers, two-thirds prospects
- Existing customers know your real value; prospects only know what you show them -- both perspectives matter
- When introducing a second product, maintain structural similarity in pricing tiers even if the pricing metric differs
Pricing Ownership and Governance
- Below $5M ARR, the founder/CEO owns pricing; above $20M it shifts to Product or Marketing -- the gap in between is where ownership gets dangerously vague
- Product Marketing is best positioned to own pricing because it sits at the intersection of positioning and value communication
- Sales owning pricing is a misalignment of incentives -- "like putting Dracula in charge of the blood bank"
- Best practice is a pricing council with a designated decision-maker, not design by committee
Discounting and the CFO's Role
- Discounting policy is often the easiest and fastest win -- and one of the first places Dan looks with any client
- Enforcement matters as much as policy: without monitoring, no new pricing strategy will ever reach the market as intended
- The CFO plays a dual role -- operational (contracts, billing, deal desk guardrails) and strategic (modeling cash flow and KPI impact when shifting pricing models)
- Caution: A finance-led focus on consistent margin profiles across products can misread how different market segments actually behave
Outcome-Based Pricing: Hype vs. Reality
- Outcome-based pricing is "the future and always will be" -- it is not new, and it is genuinely difficult to execute
- True outcome pricing only works when you are directly in the revenue or savings transaction, as Stripe is
- A more practical frame is output-based pricing -- Intercom's 99 cents per resolved support ticket is a strong example of measuring a clear, attributable unit of value
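As a purely illustrative sketch of the output-based model in the last bullet (the $0.99-per-resolved-ticket rate mirrors the Intercom example cited in the episode; every other figure is hypothetical):

```python
# Illustrative comparison of per-seat vs output-based (per-resolution) pricing.
# The $0.99 per resolved ticket mirrors the Intercom example from the episode;
# the seat price and volumes are hypothetical.
SEAT_PRICE = 99.0          # per agent per month (hypothetical)
PER_RESOLUTION = 0.99      # charged only for tickets the AI actually resolves

def per_seat_bill(agents: int) -> float:
    # Traditional SaaS model: price scales with headcount, not value delivered.
    return agents * SEAT_PRICE

def output_based_bill(resolved_tickets: int) -> float:
    # Vendor is paid per clear, attributable unit of value delivered.
    return resolved_tickets * PER_RESOLUTION

# A 20-agent support team whose AI resolves 3,000 tickets per month:
print(per_seat_bill(20))                    # 1980.0
print(round(output_based_bill(3000), 2))    # 2970.0
```

The contrast illustrates Dan's point: the output-based bill tracks the unit of value (resolutions), so revenue grows with outcomes delivered rather than with seats purchased.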
If you are involved in deciding how best to monetize and price a B2B AI or SaaS product, this is a very valuable listen!
31 March 2026, 4:00 am - 32 minutes 9 seconds - The Superhuman AI Agent - with Amanda Kahlow, CEO & Founder, 1Mind
In this episode of the AI to ROI Podcast, host Ray Rike sits down with Amanda Kahlow, founder and CEO of 1Mind. Prior to 1Mind, Amanda was the founder and former CEO of 6sense, an early pioneer in intent data.
The Vision Behind 1Mind: Amanda founded 6sense to help companies find buyers; she founded 1Mind to close them. 1Mind builds what she calls "go-to-market superhumans", AI agents that take on multiple roles across the full customer lifecycle, from inbound qualification and live demo delivery to deal closing for SMB/commercial accounts, and even post-sale onboarding, upsell, and cross-sell motions.
Why the Buyer Journey Has Fundamentally Changed: Amanda argues that traditional intent data and one-way marketing are becoming obsolete. Buyers no longer follow a linear path of Google searches and form fills; they expect real-time, two-way, solution-oriented conversations, much like they get from interacting with large language models today. The old model of blasting outbound emails or routing inbound leads through a sequential SDR → AE → SE handoff chain is increasingly misaligned with how modern buyers want to engage.
Top Use Cases: How Customers Deploy 1Mind: The most common starting point is the inbound website use case: customers begin by placing a superhuman on the website that can qualify a visitor, deliver a personalized live demo, answer deep technical questions, and in some cases take the deal all the way to close, all on first touch. From there, customers frequently expand to the "ride-along" use case, where the superhuman joins every sales call as an always-available AI sales engineer. Human sellers retain control but can call on the superhuman in real time to answer hard questions, surface the right case study or slide, run an integration demo, or ask the qualifying questions (MEDDIC and similar) that sellers often avoid.
Measurable Business Impact: Amanda shares compelling early results from enterprise customers, including a ~40% reduction in sales cycle length (from ~90 days to ~60 days) and a doubling of ACV for deals that passed through the superhuman pipeline versus the traditional pipeline. She attributes the ACV lift to getting buyers to vendor-of-choice status earlier in the cycle, eliminating the need to compete on price. 1Mind also has use cases for existing customer bases — proactively engaging customers about new features to drive upsell and cross-sell, a task that human CS teams increasingly can't keep pace with, given the speed of product development.
How Customers Measure ROI: Amanda is direct: the right measurement framework is revenue impact, not top-of-funnel pipeline metrics. She encourages customers to tie superhuman performance to shortened deal cycles, higher ACV, and bottom-of-funnel revenue influence. She acknowledges there is a maturity curve — some customers start by measuring meetings booked — but the companies seeing the most value are those willing to shift away from MQL-based thinking toward board-level outcomes: revenue growth, lower CAC, and expansion revenue.
Onboarding & Time to Value: 1Mind has invested heavily in its self-serve platform to reduce deployment time from a four-month process to an average of about four weeks today, with some customers going live in as little as four days. All deployments are full enterprise contracts, as 1Mind does not run pilots.
Advice for Leaders on AI ROI: Amanda emphasizes that realizing meaningful AI ROI requires a top-down mandate from the CEO. Incremental point solutions can improve efficiency at the margins, but the big needle-movers require new playbooks and organizational willingness to change how work gets done, not just layer AI on top of existing processes.
24 March 2026, 4:00 am - 29 minutes 52 secondsDeloitte 2026 State of AI Report - The Untapped Edge
On this AI to ROI Big Story episode, our hosts Ray Rike and Peter Buchanan dig into Deloitte's 2026 State of AI Report, a 41-page annual study surveying over 3,300 business leaders on the state of enterprise AI adoption. Deloitte calls it "The Untapped Edge," and Ray and Peter unpack exactly why.
They walk through the report's seven key inflection points, from scaling pilots into production and reimagining business processes to agentic AI, sovereign AI, and physical AI, with a focus on what the data actually means for companies trying to drive real ROI in 2026.
Key topics covered in this episode include:
- Pilot to Production: Why 54% of respondents expect a major leap in production deployment in the next 3–6 months, and why 37% of companies are still making little or no change to existing processes
- Productivity & Revenue: How 66% of organizations report efficiency gains today, but only 20% are seeing actual revenue impact from AI - and what it will take to close that gap
- Business Transformation: Why 84% of companies have yet to redesign jobs around AI, and what that means for long-term competitiveness
- Agentic AI: What the jump from 26% to 74% expected adoption of agentic AI over two years signals, and the top enterprise use cases including customer support, supply chain, R&D, and cybersecurity
- Governance: Why only 21% of companies have a mature governance model for autonomous agents, and what leading companies are doing to build responsible frameworks from the ground up
- Sovereign AI: How 83% of multinational board members view sovereign AI as at least moderately important, and why the US, Europe, and the Middle East are approaching it very differently
Ray and Peter close with a clear-eyed summary of what enterprises need to do now: close the gap between strategy and operational readiness, redesign work with an AI-first mindset, and shift focus from incremental efficiency to genuine strategic reinvention.
📰 This episode is based on the February 19th edition of the AI to ROI newsletter. Subscribe at ai2roi.substack.com
23 March 2026, 4:00 am - 31 minutes 17 secondsAI to ROI: Big Story - Will the Angst, Agony, and Adversity of AI be Worth It?
Is the trillion-dollar AI bet actually going to pay off? In this episode of AI to ROI, hosts Ray Rike and Peter Buchanan tackle the big question head-on: with hyperscalers pouring over $600 billion into AI infrastructure this year alone, enterprises struggling to move pilots into production, and white-collar job postings already falling 16% year-over-year, the anxiety is real and justified. But so is the optimism.
Ray and Peter break down why the same supply constraints slowing AI buildout may actually give companies and workers more time to adapt, why foundation model costs have plummeted 97% since 2023, and how IBM's internally deployed AI has already generated $4.5 billion in productivity savings. From healthcare transcription to AI-native go-to-market tools, the ROI is emerging, but not evenly or quickly enough for most.
What We Cover in This Episode:
- The staggering scale of AI infrastructure spending: The five largest hyperscalers (Amazon, Microsoft, Alphabet, Meta, and Oracle) are on track to spend over $600 billion in CapEx this year, with Oracle committing 57% of its annual revenue and Microsoft 45%, ratios more typical of heavy industrial companies than software firms
- Why the build-out is slower than everyone thinks: Grid upgrade timelines in the US run 8+ years, data center construction is broadly behind schedule, and critical shortages in chips, transformers, skilled labor, and construction materials aren't expected to ease until at least 2028
- The pilot-to-production gap is real: Only 6% of enterprise AI projects are delivering returns within a year, and most organizations lack the frameworks and experience to move from experimentation to operational deployment at scale
- Trust, hallucinations, and governance are still major blockers: Regulated industries like financial services and healthcare face compounding uncertainty, caught between pre-AI regulations still on the books and a patchwork of conflicting state, federal, and international AI policy
- The workforce impact is already being felt: Salesforce cut 4,000 customer support roles, Klarna reduced headcount by 40%, white-collar job postings are down 16% year-over-year, and college graduate placement rates have dropped from 83-88% to roughly 23%, hitting data science, software development, and graphic design hardest
- But the technology itself is accelerating fast: Foundation model costs have dropped 97% since early 2023, the number of available models has grown from 60 to 650, and enterprises are getting smarter about orchestrating multiple models for different tasks
- Real ROI stories are emerging: IBM has generated $4.5 billion in productivity savings from internally deployed AI since January 2023, automating nearly 4 million hours of work annually at $3.50 returned for every dollar invested
- Vertical AI is gaining serious traction: Healthcare AI is the fastest-growing vertical, with one transcription tool alone saving 50,000 clinician hours. Legal, cybersecurity, customer support, and IT operations are all seeing meaningful gains
- The competitive pressure is intensifying: 54% of business leaders in a Mercer study believe they won't remain competitive in five years without AI at scale, and 92% of firms plan to increase AI budgets over the next three years
Why You Should Listen:
If you're a business leader, investor, or professional trying to cut through AI hype and understand what's actually happening on the ground, this episode delivers the balanced, data-driven perspective that's hard to find. Ray and Peter don't just cheerlead or catastrophize; they give you the real picture: where the bottlenecks are, where the returns are genuinely showing up, and why the next two to three years of slower-than-expected adoption might actually be the window your organization needs to get AI right.