The 4 am Report


Do marketing problems keep you up at night? Tune in to this micro podcast each Thursday as hosts Susan and Will welcome weekly guests, from corporate marketing leads and veteran PR mavens to small business owners, creatives, and more, in a rousing and informative problem-solving audio arena. With a strategically short 10-minute (ish) run time, listeners can expect the c+p digital duo to cut through the clutter, get to the heart of the matter, and hopefully offer up some sleep-inducing solutions.

c+p digital


    • Latest episode: Dec 20, 2025
    • New episodes: weekdays
    • Average duration: 20m
    • Episodes: 265



    Latest episodes from The 4 am Report

    EP 266 - Literacy, Leadership, and the 'AI for the Sake of AI' Trap with Shona Boyd

    Dec 20, 2025 · 40:19


    Host Susan Diaz is joined by Shona Boyd, a product manager at SaaS company Mitratech and a proudly AI-curious early adopter, for a grounded conversation about what AI literacy actually means now. They talk about representation, critical thinking, everyday meet-you-where-you-are workflows, shadow AI, enterprise guardrails, and why leaders must stop chasing AI features that don't solve real user problems.

    Episode summary

    Susan introduces Shona Boyd, an AI-curious early adopter and SaaS product manager whose mission is to make AI feel less scary and more accessible. Shona shares how her approachable AI philosophy started in product work: she used AI to build audience insights and feedback loops when job seekers weren't willing to do interviews, and quickly realized two things: (1) AI wasn't going away, and (2) there weren't many visible women, or people who looked like her, leading the conversation. So she raised her hand as an approachable reference point others could learn from.

    From there, the conversation expands into what AI literacy has evolved into. It's no longer just "which tool should I use?" or "how do I write prompts?" Shona argues that literacy today is about critical thinking, learning to talk to an LLM like a conversation, and choosing workflows that benefit from AI rather than chasing hype. They also get practical: Shona gives everyday examples (Medicare PDFs, credit card points, life admin) to show how AI can meet you where you are, without requiring you to build agents or become super technical.

    Finally, Susan and Shona go deep on organizational adoption: why handing out logins without policies is risky, how shadow AI shows up (hello, rogue meeting note-takers), why leadership sponsorship matters, and what companies should stop doing immediately: AI for the sake of AI.

    Key takeaways

    • Representation changes adoption. When people don't see anyone who looks like them using AI confidently, they're less likely to lean in. Shona chose to be a visible, approachable point of reference for others.
    • AI literacy has shifted. It's no longer mainly about which model or which prompt frameworks. It's about learning the language (LLM, GPT, etc.), staying curious, and building the critical media muscles to evaluate what's true, what's AI, and what needs sources.
    • Workflows aren't just corporate. A workflow is simply tasks plus the path to get them done. Shona's examples show AI can help with day-to-day life admin (PDFs, policies, benefits, points programs), which makes AI feel approachable fast.
    • The first output is not the final. "I can spot AI content" usually means people are publishing raw first drafts. High-quality AI use looks like: draft → critique → refine → human judgement.
    • What good organizational training is NOT: handing out tool logins with no policy, no guidance on acceptable use, and no understanding of enterprise vs personal security.
    • Shadow AI is already here. People are adding unapproved AI note-takers to meetings and uploading sensitive info into personal accounts. Blanket bans don't work; they push experimentation underground.
    • Adoption needs product thinking. Shona suggests leaders treat internal AI like a product launch: run simple feedback loops (NPS-style checks), analyse usage patterns to find sticking points, and apply AI where it solves real pain, not where competitors are hyping features.
    • Leadership ownership matters for equity. When AI is run department by department, you create "haves and have-nots" in tools, training, and access. Top-down support plus safe guardrails reduces inequity and increases psychological safety.
    • Spicy take: stop doing AI for the sake of AI. If you can't explain how an AI feature improves real user life in a non-marketing way, it probably shouldn't ship.

    Episode highlights

    [00:01] The 30-day podcast-to-book sprint and why leaders are still showing up in December.
    [01:14] Shona's origin story: using AI to build audiences and feedback loops in a job board context.
    [02:17] The visibility gap: not many women, or people who looked like her, in early AI spaces.
    [05:55] What AI literacy means now: critical thinking + conversation with an LLM + workflow selection.
    [07:16] "Workflows" made real: Medicare PDFs and credit card points examples.
    [10:13] Three essentials: foundational language, curiosity, and critical media literacy.
    [12:23] What training is NOT: handing out logins with no policy or guardrails.
    [15:49] Handling fear and resistance with empathy and a human-in-the-loop mindset.
    [23:27] Product lens on adoption: NPS feedback loops + usage analytics to find real needs.
    [28:14] Shadow AI: rogue note-takers, personal accounts, and why bans backfire.
    [31:17] Policies at multiple levels, including interviewing and candidate use of AI.
    [36:49] "Stop AI for the sake of AI" and the race to ship meaningless features.
    [39:13] Where to find Shona: LinkedIn (under Lashona).

    Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    Connect with Lashona Boyd on LinkedIn.
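The draft → critique → refine → human judgement loop from the takeaways can be sketched as a tiny script. This is an illustrative sketch only, not something from the episode: `ask_llm` is a hypothetical stand-in for whatever model API you actually use, stubbed here so the control flow is self-contained and runnable. The point is structural: the loop never publishes directly, it always ends at a human review step.

```python
# Sketch of the "draft -> critique -> refine -> human judgement" loop.
# ask_llm is a hypothetical placeholder for a real model call; it is
# stubbed so the control flow is runnable without any API.

def ask_llm(prompt: str) -> str:
    """Placeholder model call: echoes a tagged response."""
    return f"[model response to: {prompt[:40]}]"

def drafted_then_refined(brief: str, rounds: int = 2) -> dict:
    """Run draft -> (critique -> refine) x rounds, then hand off to a human."""
    draft = ask_llm(f"Draft: {brief}")
    for _ in range(rounds):
        critique = ask_llm(f"Critique this draft: {draft}")
        draft = ask_llm(f"Refine the draft using this critique: {critique}")
    # Never ship the raw output: the final state is always a human review gate.
    return {"draft": draft, "status": "awaiting human review"}

result = drafted_then_refined("announcement post about our new feature")
print(result["status"])  # awaiting human review
```

In a real workflow you would swap the stub for your model of choice and make the critique prompt specific to your brand and quality bar; the shape of the loop stays the same.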

    EP 265 Buying AI vs Building AI - A Leader's Decision Guide

    Dec 19, 2025 · 27:50


    Most teams are stuck in tool obsession: "Should we build agents?" "Should we buy this AI platform?" In this solo, workshop-style episode, host Susan Diaz pulls you back to reality with a simple decision guide: buy vs bolt-on vs build, four leadership filters, and a practical workflow exercise to help you choose the right approach, without falling for agentic fantasies.

    Episode summary

    Susan opens with a pattern she's seeing everywhere: 75% of AI conversations revolve around tools (agents, platforms, add-ons), and they're often framed as all-or-nothing decisions. She reframes it: AI is best understood as robotic process automation for the human mind, not a single agent replacing a person or a department.

    This episode is structured like a mini workshop. Susan asks you to grab paper and map a real workflow step by step, because the decision isn't "which AI tool is hot", it's "what job are we automating". Then she defines the three choices leaders actually have:

    • Buy: purchase an off-the-shelf solution that works as-is.
    • Build: create something custom (apps, integrated experiences, models).
    • Bolt-on: the underrated middle path. Use tools you already have (enterprise LLMs, suites), then add custom GPTs/projects, prompt templates, and lightweight automations.

    She introduces a six-level "ladder" from better prompts → templates → custom GPTs/projects → workflow automation → integrated systems → custom builds, and offers a gut check on whether your "agentic dreams" match your organizational capacity.

    Key takeaways

    • Start with the job-to-be-done, not the tool. The most common mistake is choosing tech before defining the workflow. A workflow is simply a chain of small tasks with clear verbs and steps.
    • AI is RPA for your brain. Think "Jarvis" more than "replacement". It's about removing repetitive noise while keeping human judgement, discernment, and creativity in the lead.
    • Buy vs build vs bolt-on. Buy when you need reliability, guardrails, and enterprise support, and the use case is common (summaries, note-taking, analytics). Build when the workflow is your differentiation, the data is proprietary, the outcomes are strategic, and you can support ongoing maintenance and governance. Bolt on for most teams: it's fast, cheaper, and easier to change. Start by layering custom GPTs/projects and lightweight automation on top of existing tools and licences.
    • Six levels of maturity (a ladder, not a leap): (1) better prompts (one-off help), (2) templates / prompt libraries (repeatable help), (3) custom GPTs / projects (consistent behaviour + knowledge), (4) workflow automation (handoffs between steps), (5) integrated systems (data + permissions + governance), (6) custom builds (strategic + resourced).
    • Four decision filters for leaders: (A) Is this a repeatable workflow or a one-off? (B) Is the value in the tech itself, or in how you apply it? (C) What's the data sensitivity and risk level? (Enterprise controls matter.) (D) Do you have the operating maturity to run it (monitoring, owners, governance, feedback loops)?
    • Automation ≠ autopilot. Automation is great; autopilot is abdication. If you ship first-draft AI output without review, you'll get "garbage in, garbage out" reputational risk.
    • A simple friction-mapping exercise: map a 10-step workflow (open, check, find, copy, rewrite, compare, ask someone, format, send, follow up). Circle the friction steps. Label each friction point R (repeatable), J (judgement-heavy), or D (data-sensitive). Then choose buy / bolt-on / build based on what dominates.
    • Reality check for "agentic dreams". Before building, ask: Do you have a documented workflow? Do you have a human owner reviewing weekly? Do you have a feedback loop? If not, you're building a liability, not a system.
    • The real bet isn't build vs buy. It's this: "What repeatable work needs a personalised tool right now?"

    Episode highlights

    [00:02] Why most AI conversations are tool-obsessed (agents, platforms, add-ons).
    [01:50] "RPA for the human mind" + the Jarvis analogy.
    [04:14] Workshop setup: buy vs bolt-on vs build + decision filters.
    [05:15] Step 1: define the job-to-be-done (not the department).
    [08:13] The 10-step workflow template (open → follow up).
    [10:49] Definitions: buying AI vs building AI vs bolt-on AI.
    [14:13] The ladder: prompts → templates → custom GPTs → automation → integrated systems → builds.
    [16:42] Filter A: repeatable vs one-off (and why repeatable is bolt-on territory).
    [18:27] Filter C: data sensitivity and enterprise-grade controls.
    [19:45] Filter D: operating maturity, where agentic dreams go to die.
    [20:08] Automation vs autopilot (autopilot = abdication).
    [21:24] Circle friction points + label R/J/D to decide.
    [25:42] Reality check: documented workflow, owner, feedback loop.
    [26:33] The takeaway: personalised tools for repeatable work beat agent fantasies.

    Try the exercise from this episode with your team this week: pick one recurring, annoying-but-important job. Map it in 10 simple steps. Circle friction points and label them R / J / D. Decide buy, bolt-on, or build, and write: "For this workflow, we will ___ because the biggest constraint is ___."

    Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
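The R/J/D friction-mapping exercise above boils down to a tally. Here is a minimal sketch of that tally: the workflow step names and labels are invented for illustration, and the mapping from dominant label to approach is an assumption loosely drawn from the episode's filters (repeatable → bolt-on territory; data-sensitive → enterprise-grade controls favour buying; judgement-heavy → keep a human in the loop), not a rule the episode states verbatim.

```python
from collections import Counter

# Hypothetical friction steps circled in a 10-step workflow, labelled
# R = repeatable, J = judgement-heavy, D = data-sensitive.
friction_labels = {
    "copy figures into the deck": "R",
    "rewrite the summary": "R",
    "compare vendor quotes": "J",
    "send the client report": "D",
}

def suggest_approach(labels: dict) -> str:
    """Pick the dominant label and map it to a starting approach.

    The mapping is an assumption for illustration, loosely based on the
    episode's filters; it is not a rule from the episode itself.
    """
    dominant = Counter(labels.values()).most_common(1)[0][0]
    return {
        "R": "bolt-on",     # repeatable work: layer templates/custom GPTs on existing tools
        "J": "keep human",  # judgement-heavy: AI assists, a person decides
        "D": "buy",         # data-sensitive: favour governed, enterprise-grade tools
    }[dominant]

print(suggest_approach(friction_labels))  # R dominates -> bolt-on
```

The real exercise is paper and pen; the sketch just makes the "choose based on what dominates" rule concrete.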

    EP 264 How AI Changed Marketing with Suzanne Huber

    Dec 18, 2025 · 28:01


    Host Susan Diaz sits down with her business buddy and go-to-market consultant Suzanne Huber to talk about what AI has actually changed in marketing. Together they explore AI as "robot arms" (an extension of expertise), why first-draft AI content gets a bad rap, how modern marketers use AI for research, planning, editing, and proposals, and why thought leadership and personal brand matter more than ever.

    Episode summary

    Susan and Suzanne have been talking about AI since 2022. In this episode, they make it official. Suzanne introduces a metaphor that sticks: AI as "robot arms". You're still the driver. AI can extend your reach, speed up the grunt work, and help you close expertise gaps, but it still needs human judgment, critical thinking, and craft.

    They compare marketing before vs after AI: headlines, research, applying feedback, simplifying complex plans into executive-friendly formats, cross-checking sources (especially Canadian vs US), and building repeatable workflows with custom GPTs. They also tackle the bigger questions: Does expertise still matter? Is personal brand becoming more important in the age of AI? What should writers do if they feel threatened? Spoiler: AI can speed up output, but insight, values, differentiation, and taste are still the human edge.

    Key takeaways

    • AI is "robot arms", not a replacement brain. It's an extension of expertise. You still need to steer, evaluate quality, and avoid publishing raw first drafts that can damage trust.
    • First-draft AI is the content factory problem. AI-assisted content gets a bad reputation when junior-level or high-volume systems publish credible-sounding fluff with no real subject matter expertise behind it. Craftsmanship still matters.
    • Marketing got faster because the grunt work collapsed. Headlines, rewrites, reformatting, applying feedback, outlining, and turning long documents into charts and tables can happen in minutes, not hours. You still refine, but you're starting from a better baseline.
    • Research and fact-checking changed dramatically. Instead of trawling search results for hours (and getting US-default sources), AI tools can surface targeted sources fast; then humans choose what's credible and relevant.
    • Custom GPTs shine for repeatable processes. Susan shares how she uses custom GPTs (including MyShowrunner.com) for guest research, interview questions, emails, and packaged deep research briefs, turning recurring work into reusable systems.
    • Expertise always matters, especially for positioning and thought leadership. Differentiation, values, hot takes, and human intuition are what attract the right people (and repel the wrong ones). AI can assist, but it can't replace lived POV.
    • Personal brand matters more in the age of AI. As audiences get more suspicious of generic content and AI avatars, trust increasingly attaches to real humans with visible ideas, proof, and consistency.
    • For writers who feel threatened: use it or get outpaced. AI can accelerate production for factual formats (press releases, timely content). Writers who combine craft + AI + fast learning become the force multipliers. But journaling and introspective writing still belong to the human-only zone.

    Episode highlights

    [01:29] Suzanne's "robot arms" metaphor: AI as an extension of expertise.
    [02:47] Why first-draft AI should never leave your desk.
    [03:56] The telltale signs of lazy AI writing (and why it gets a bad rap).
    [05:00] Before vs after AI: the research + writing process changes.
    [07:24] Simplifying complex work: plans → tables → charts for execs.
    [09:10] Deep research for Canadian sources without wasting hours.
    [10:25] Custom GPT workflows (MyShowrunner + research briefs).
    [12:29] Where expertise still matters in an AI-saturated world.
    [16:56] Personal brand: attracting the right people + repelling the wrong ones.
    [20:00] AI for proposals and even pricing guidance.
    [22:00] Advice for writers who feel threatened by AI.

    If you've been resisting AI because you're worried it will erase your craft, try this reframing: use AI for the grunt work, and keep the human touch for the parts that build trust: taste, judgement, voice, and values. And if you want a simple starting point, ask yourself: what could use "robot arms" in your marketing workflow this week - headlines, research, rewrites, proposals, or planning?

    Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    Connect with Suzanne Huber on LinkedIn.

    EP 263 AI is the App. Culture is the OS.

    Dec 17, 2025 · 26:31


    AI doesn't fail in organizations because the tools are bad. It fails because culture is glitchy. In this solo episode, host Susan Diaz explains why AI is just the "app" while your organizational culture is the real operating system, and she shares six culture pillars (plus practical steps) that determine whether AI adoption becomes momentum… or messy risk.

    Episode summary

    Susan reframes AI adoption with a simple metaphor: AI tools, pilots, and platforms are "apps", but apps only run well if the operating system, your culture, is healthy. That's because AI is used by humans, and humans have behaviour norms and respond to incentives, safety, and trust. She connects this to the "experiment era", in which organizations see unsupervised experimentation, shadow AI, and uneven skill levels, creating an AI literacy divide if leaders don't intentionally design expectations and values.

    From there, Susan defines culture plainly ("how we think, talk, and behave day-to-day") and shows how it shows up in AI: what people feel safe admitting, whether experiments are shared or hidden, how mistakes are handled, and who gets invited into the conversation. She then walks through six pillars of purposeful AI culture and closes with tactical steps for leaders: naming principles, building visible rituals, supporting different AI archetypes, aligning incentives, and communicating clearly.

    Key takeaways

    • Stop treating AI like a one-time "project". AI adoption doesn't have a clean start/end date like an ERP rollout of yore. Culture is ongoing, and it shapes what happens in every meeting, workflow, and decision.
    • The "experiment era" creates shadow AI and uneven literacy. If unsupervised experimentation continues without an intentional culture, you get risk and a widening gap between power users and everyone else.
    • Six pillars of an AI-ready culture:
    1. Experimentation + guardrails: pro-learning and pro-safety. Define sandboxes and simple rules of the road, not 50-page legal docs.
    2. Psychological safety: people won't admit confusion, ask for help, or disclose risky behaviour without safety. Leaders modelling "I'm learning too" matters.
    3. Transparency: a trust recession plus AI makes honesty essential. Encourage show-and-tell, logging where AI helped, and "we're not here to punish you" language.
    4. Quality, voice, and ethics: AI can draft; humans are accountable. Define what must be human-reviewed and what "good" looks like in your brand and deliverables.
    5. Access + inclusion: who gets to play? Who gets training? Avoid new "haves/have-nots" dynamics across departments and demographics. AI literacy is a survival skill.
    6. Mentorship: champions programs and pilot teams only work if mentorship is real and resourced (and doesn't become unpaid side-of-desk work).
    • Four culture traps to avoid: compliance-only culture (all "don't", no "here's how to do it safely"); innovation theatre (demos and buzzwords, no workflow change); hero culture (1-2 AI geniuses and nothing scales); silence culture (confusion and shadow AI stay hidden while leadership thinks "we're fine").
    • Culture is the outer ring around your AI flywheel. Your flywheel (audit → training → personalized tools → ROI) compounds over time, but culture is what makes the wheel safe and sustainable.

    Episode highlights

    [00:01] AI is a tool. Culture is the system it runs on.
    [01:30] The experiment era: shadow AI and unsupervised adoption.
    [02:01] The AI literacy divide: some people "run apps," others can't "install them."
    [03:00] Culture defined: how we think, talk, and behave, now applied to AI.
    [04:56] Pillar 1: experimentation + guardrails (sandboxes + simple rules).
    [07:23] Pillar 2: psychological safety and the shame factor.
    [11:37] Pillar 3: transparency in a trust recession.
    [13:57] Pillar 4: quality, voice, ethics: AI drafts, humans are accountable.
    [16:33] Pillar 5: access + inclusion, AI literacy as survival skill.
    [19:00] Pillar 6: mentorship and avoiding unpaid "champion" labour.
    [23:31] Four bad patterns: compliance-only, innovation theatre, hero culture, silence culture.
    [25:47] The closer: AI is the latest app. Culture is the operating system.

    If your organization is buying tools and running pilots but still feels stuck, ask:
    • What "AI culture" is forming by default right now: compliance-only, hero culture, silence?
    • Which one pillar would make the biggest difference in the next 30 days: guardrails, safety, transparency, quality, inclusion, or mentorship?
    • What ritual can we introduce this month (show-and-tell, office hours, workflow demos) to make AI learning visible and normal?

    Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 262 Women, AI, and the Invisible Load (with Heather Cannings)

    Dec 16, 2025 · 32:09


    Why are women adopting AI at lower rates than men, and what's really going on underneath the stats? In this episode, host Susan Diaz and Heather Cannings, Women Entrepreneurship Program Lead at InVenture and producer of StrikeUP Canada, dig into time poverty, "cheating" fears, late-night upskilling, and what real support for women entrepreneurs needs to look like in an AI-forward world.

    Episode summary

    Susan is joined by Heather Cannings, who leads women's entrepreneurship programs at InVenture and runs StrikeUP, a national digital conference that reaches thousands of women entrepreneurs across Canada and beyond. Heather shares what she's seeing on the ground: huge curiosity about AI, mixed with pressure, fatigue, and a sense that it's "one more thing" women are expected to learn on their own time. Many of the women she serves are juggling multiple roles (business owner, employee, caregiver) and then experimenting with AI at 10-11 pm, after the workday and bedtime routines are done.

    They unpack the emotional layer too: why AI still feels like "cheating" or being an imposter for many women, whether you have to disclose using AI, and how to reconcile charging premium prices while using AI behind the scenes. Susan and Heather link lower AI adoption rates among women to a wider pattern: another version of the wage gap and of systemic inequities in who has the time, safety, and support to skill up.

    The conversation then turns to:
    • Practical, real-world use cases women are already using AI for (grant writing, content, summarizing long docs).
    • What good support systems actually look like for time-strapped entrepreneurs (designing for constraints, not fantasy calendars).
    • How small, scrappy businesses and large organizations can learn from each other on speed, governance, and risk.
    • The uncomfortable reality that many roles most at risk of AI automation (admin, entry-level comms, research) are heavily female.

    They close with a hopeful lens: how women can use AI to increase their value and their control over time and income, why this moment is a genuine opportunity for democratization, and how Heather's StrikeUP event is trying to meet women exactly where they are.

    Key takeaways

    • AI doesn't feel neutral for women; it feels like another test. Many women entrepreneurs are curious about AI but also feel judged, worried about "doing it wrong", or like they're cheating if they use it. Imposter syndrome shows up as: "Is this really my work if AI helped?"
    • Time poverty is the real barrier, not lack of interest. Heather sees women using AI at 10-11 pm, after full workdays and caregiving, trying to finish newsletters, social posts, or grant drafts. They are upskilling, just in stolen moments, not spacious strategy sessions.
    • Support systems must be designed for real constraints. Don't assume people have unlimited time, teams, strong internet, or quiet workspaces. Many women join digital events from cars, back rooms, or storage areas between tasks. Training and support must be consumable, flexible, and realistic.
    • One-off AI webinars aren't enough. A single 60-minute "intro to AI" often just generates an overwhelming to-do list. What works better: smaller, workshop-style sessions; hands-on guidance on a specific task or tool; and practical, "do it in the room" support so women leave with something done, not just inspired.
    • Women are already using AI for practical, high-impact tasks. Common use cases include writing and improving copy, content planning and social media, summarizing long documents, and drafting grants and pitches. The focus is on time savings, staying within tight budgets, and safely getting more done, not chasing cutting-edge AI for its own sake.
    • Enterprise and small business can, and should, learn from each other. Big firms bring resources, governance, and policy thinking; small businesses bring speed, scrappiness, and the ability to implement immediately. Ecosystem players (non-profits, funders, educators) can translate between the two and help find a healthy middle ground.
    • There's a gendered risk in AI-driven job change. Roles often flagged as "at risk" (admin, entry-level comms, research) are heavily staffed by women. Without intentional upskilling and redeployment, AI could quietly deepen existing inequities.
    • There's also real opportunity. AI can be a "quiet force in the background" that removes 5-10 hours of repetitive work a week, enough to change a woman's lifestyle, income, and capacity. It can help women move up the ladder, redesign roles, or reshape their businesses around higher-value work.
    • Designing AI with women's realities in mind matters. Women shouldn't just be users; they should help shape how tools are designed, so AI reflects real constraints like caregiving, part-time work, and patchy access, rather than assuming a mythical founder with unlimited time and support.

    Episode highlights

    [00:01] Susan sets the scene: 30 episodes in 30 days and how Heather fits into the series.
    [00:57] Heather introduces InVenture and her role as Women Entrepreneurship Program Lead, plus the StrikeUP conference.
    [01:55] Why AI remains a hot topic for StrikeUP's audience of women entrepreneurs.
    [02:57] AI as a catch-22: it can save time, but learning it feels like "one more thing."
    [03:56] "Is this cheating?": women's fears about using AI and being judged.
    [05:09] AI, transparency, pricing, and the complexity of "should I tell clients I used AI?"
    [05:39] How this ties to stats showing women adopting AI 25% less than men, and why Susan sees it as another version of the wage gap.
    [07:07] Draft vs final: why treating AI output as a first draft, not finished work, is crucial.
    [08:33] The problem with generic, AI-generated content about "women in AI" that sounds impressive but says very little.
    [09:20] Real-world use cases Heather sees among small business owners.
    [10:22] The 11 pm pattern: women learning AI in stolen, exhausted moments.
    [12:06] Why women are resilient and experimenting, but lack daytime access to deep learning and setup time.
    [13:27] Designing support systems that don't assume unlimited time, teams, or bandwidth.
    [14:24] Making training consumable, recorded, and accessible from phones, cars, and storage rooms.
    [15:34] Why one-off webinars don't work, and the case for small, workshop-style sessions.
    [18:09] What big firms can learn from scrappy entrepreneurs (and vice versa).
    [20:10] The myth that corporates "have it all figured out" on AI.
    [22:19] AI and job loss: the gendered impact on admin, entry-level comms, and research roles.
    [23:20] Reframing: how women can use AI to increase their value and move up.
    [25:16] Adaptation over doom: calculators, the internet, and why we'll adjust again.
    [27:04] Heather's vision: AI as a quiet force helping women gain more control over time and income.
    [28:41] StrikeUP 2025 details: date, format, giveaways, and on-demand access.

    If you support or are a woman entrepreneur, use this episode as a prompt to ask:
    • Where are women in your world already using AI, in stolen moments, and how could you meet them there with better support?
    • How can you design AI training and tools that assume real constraints, not fantasy calendars?
    • What's one concrete way you can help a woman in your ecosystem use AI to increase her value and control, instead of feeling like she's at risk of being automated away?

    Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    You can learn more about StrikeUP and register for the free digital conference at strikeup.ca, and connect with Heather Cannings on LinkedIn.

    EP 261 The Four AI Cliff Archetypes

    Dec 15, 2025 · 25:16


    Some AI projects in your organization feel weirdly easy. Others feel impossible. In this solo workshop-style episode, host Susan Diaz introduces the Four AI Cliff Archetypes - Divers, Pathfinders, Operators, and Bridge Builders - and shows how understanding your mix of people (not just your tools) explains most of your AI momentum or lack thereof. Episode summary Susan opens with a familiar problem: in the same organization, some AI projects glide and others grind to a halt. The difference, she argues, isn't the tech - it's how different types of people respond when they hit a "cliff" moment, where the familiar path disappears and AI represents a big, unknown drop. Drawing on personality and operating-style frameworks like Kolbe for inspiration, she introduces the Four AI Cliff Archetypes: Divers - jump first, learn in motion, create raw experiments and speed. Pathfinders - map risk and opportunity, research, and ask the hard questions. Operators - take a plan and run it, turning ideas into executed workflows. Bridge Builders - turn chaotic experiments into systems, documentation, and "this is how we do it here" Listeners are invited to score themselves 0-10 on each type as Susan walks through how each archetype behaves at the cliff, what sentences give them away, and how they help or hurt AI adoption if unmanaged. She then moves from personal reflection to organizational design: how to sequence work so each type shines in the right place - especially across the AI flywheel of audit, training, personalised tools, and ROI. She closes with a "cliff to bridge" sequence - Divers jump, Pathfinders map, Operators ship, Bridge Builders scale - and a practical homework exercise for mapping real people on your leadership team to each archetype so you can stop fighting human behaviour and start designing with it. Key takeaways The friction isn't just tools, it's temperament. AI feels like a cliff: the path ends, the map is unclear, the bottom is invisible. 
People respond to that uncertainty in patterned ways - and those patterns shape your AI projects. The Four AI Cliff Archetypes: Divers - "Let's just try it." Early experimenters who move fast, download tools before memos, and learn by motion. They create velocity and risk (shadow AI, lack of documentation, burnout). Pathfinders - "Hold on, what does this do?" Risk scanners who research, ask for evidence, and think about policy and edge cases. They prevent disasters but can get stuck in analysis. Operators - "Tell me the plan and I'll run it." Execution machines who thrive on clear outcomes, ownership, timelines, and metrics. They build powerful machines… which can be pointed at the wrong target if leadership is vague. Bridge Builders - "No one should have to jump this every time." System designers who create repeatable workflows, playbooks, and training so experiments become infrastructure. They can over-engineer too early if they don't have real-world data. No one type is "best" - you need a mix. A team full of Divers = chaos. Pathfinders-only = analysis paralysis. Operators-only = beautifully executed wrong things. Bridge Builder-only = process with no proof. Balance beats dominance. Sequence the humans, not just the tasks. Susan offers a simple sequence for AI initiatives: Divers jump - generate raw experiments and discover real use cases. Pathfinders map - assess risk, compliance, and opportunity. Operators ship - turn what works into pilots and deployed workflows. Bridge Builders scale - standardize, document, and build bridges so others can cross safely. Map archetypes onto your AI flywheel. In audit, Pathfinders and Bridge Builders lead with Divers exposing shadow systems. In training, Bridge Builders and Operators lead while Divers provide examples. For personalized tools and ROI tracking, all four types play different roles - from prototyping to governance to metrics. Design for behaviour, don't fight it. 
    You can't force Divers to become Pathfinders or Operators to become Bridge Builders. You can design projects, governance, and sequencing so each type does the work they're naturally wired for - reducing friction and accelerating adoption.

    Episode highlights

    [00:02] Why some AI projects feel easy in your org - and others feel impossible.
    [00:26] "It's not the tools. It's the people." Setting up the archetype model.
    [01:16] The cliff metaphor: the path ends, the map is unclear, and AI = the drop.
    [01:57] Inspiration from Kolbe and operating modes for creating these archetypes.
    [03:11] Introducing the four types: Divers, Pathfinders, Operators, Bridge Builders.
    [04:14] How to play along: scoring yourself 0-10 on each archetype.
    [04:53] Deep dive on Divers: language, strengths, and how they accidentally create shadow AI.
    [06:41] The "sandbox plus guardrails" playbook for managing Divers (including burnout protection).
    [08:02] Pathfinders: risk scanning, research, and how to avoid permanent evaluation mode.
    [09:37] Two-week sprints and one-page memos as tools to keep Pathfinders moving.
    [11:02] Operators: "tell me the plan and I'll run it," and why goals matter more than tools.
    [13:04] Translating AI into workflows and metrics Operators can own.
    [14:22] Bridge Builders: turning chaos into infrastructure and culture ("this is how we do it here").
    [15:40] Pairing Divers + Bridge Builders, and Pathfinders + Bridge Builders, to avoid over-engineering.
    [17:27] Why a team full of any single archetype breaks your AI efforts in predictable ways.
    [18:35] Mapping each archetype onto the AI flywheel: audit, training, tools, ROI.
    [21:28] Applying the model to your leadership team: spotting overloads and missing roles.
    [22:37] The "cliff to bridge" sequence: Divers jump, Pathfinders map, Operators ship, Bridge Builders scale.
    [23:38] Homework: map one current AI initiative against the four archetypes and adjust who does what.
    Use this episode as a mini workshop for your next AI initiative:

    - Score yourself across Diver, Pathfinder, Operator, and Bridge Builder.
    - Pick one real AI project and write actual names next to each type on your team.
    - Ask: "Where are we overloaded, where are we missing a type, and how can we re-sequence the work so each archetype shines at the right moment?"

    That's how you stop treating AI like a terrifying cliff - and start treating it like a crossing your whole team actually knows how to make.

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 260 Human-First in an AI-Forward World with Helen Patterson

    Play Episode Listen Later Dec 15, 2025 40:02


    What does it really mean to build an AI-forward company that is still deeply human-first? In this episode, host Susan Diaz and senior HR leader and mentor-culture advocate Helen Patterson talk about jobs, guardrails, copyright, environmental impact, and why mentorship and connection matter more than ever in the age of AI.

    Episode summary

    Susan is joined by Helen Patterson, founder of Life Works Well, senior HR leader, and author of the upcoming book Create a Mentor Culture. They start with a Y2K flashback and draw a straight line from past tech panics to today's AI headlines. Helen shares why she sees AI as the latest evolution of technology as an enabler in HR - another way to clear the admin and grunt work so humans can focus on growth, development, and real conversations. From there, they dig into:

    - The tension between "AI will kill jobs" and the tens of thousands of new AI policy and governance roles already posted.
    - How shadow AI shows up when organizations put in blanket "no AI" rules and people just reach for their phones anyway.
    - The very real issues around privacy, copyright, and intellectual property when staff feed proprietary material into public models.
    - The less-talked-about environmental impact of AI and why leaders should demand better facts and more intentional choices from tech providers.

    In the second half, Helen brings the conversation back to humanity: mentorship as a counterweight to disconnection, her One Million Mentor Moments initiative, and how everyday "micro-mentoring" at work can help people adapt to rapid change instead of being left behind. They close with practical examples of using AI for good in real life - from travel planning and research to late-night dog-health triage - without letting it replace judgement.

    Key takeaways

    This isn't our first tech panic. From Y2K to applicant tracking systems, HR has always framed tech as an enabler. GenAI is the newest layer, not an alien invasion.
    Looking back at history helps calm "sky is falling" narratives.

    Jobs are changing, not simply disappearing. Even as people worry about AI-driven job loss, platforms like Indeed list tens of thousands of AI policy and governance roles. The work is shifting toward AI-forward skills in every function.

    Blanket "no AI" rules don't work. When organizations ban external tools or insist on only one locked-down platform, people quietly use their own devices and personal stacks anyway - creating shadow AI with real privacy and IP risk. Guardrails and education beat prohibition.

    Copyright and confidentiality need more than vibes. Without clear guidance, staff will copy proprietary frameworks or documents into public models and re-badge them. Leaders need simple, well-communicated philosophies about what must not go into AI tools.

    Environmental impact is part of human-first. Training and running large models consumes energy. The real solution will be systemic (how tech is built and powered), but individuals and organizations can still use AI more efficiently, just like learning not to leave all the lights on.

    Mentorship is the ultimate human technology. Helen's work on Create a Mentor Culture and One Million Mentor Moments reframes mentoring as everyday, one-conversation acts that share wisdom, reduce fear, and help people reskill for an AI-forward world. Tech should support that, not replace it.

    Upskilling beats layoffs. When roles change because of AI, the most human-first response isn't to cut people loose, it's to invest in learning, mentoring, and redeployment so existing talent can grow into new, AI-augmented roles.

    Use AI to simplify life, not complicate it. From planning multi-country trips to triaging whether the dog really needs an emergency vet visit, smart everyday use of AI can save time, money, and anxiety - freeing up more space for the work and relationships that actually matter.
    Episode highlights

    [00:01] Susan sets the scene: 30 episodes in 30 days to build Swan Dive Backwards in public.
    [00:39] Helen's intro: Life Works Well, heart-centred high-performance cultures, and her focus on mentorship.
    [03:43] What an AI-forward and human-centred organisation looks like in practice.
    [04:00] Y2K memories and why today's AI panic feels familiar.
    [06:11] 25–35K AI policy jobs on Indeed and what that says about the future of work.
    [07:49] Jobs lost vs jobs created - and why continuous learning is non-negotiable.
    [15:19] The danger of "everyone is using AI" with no strategy or safeguards.
    [19:25] Shadow AI, personal stacks, and why hard bans don't stop experimentation.
    [21:13] A real-world IP scare: proprietary material pasted into GPT and re-labelled.
    [23:06] GPT refusing to summarise a book for copyright reasons - and why that's a good sign.
    [24:03] The case for a simple AI philosophy doc: purpose, principles, and communication.
    [25:24] Environmental concerns, fact-checking, and the server-room-to-laptop analogy.
    [30:17] New social media laws for kids and what they signal about tech accountability.
    [30:41] One Million Mentor Moments: why one conversation can change a career.
    [31:22] From elite programmes to everyday mentor cultures inside organisations.
    [35:01] AI for mentoring and coaching: bots, big-name gurus, and internal use cases.
    [36:30] Using AI for travel planning, research, and everyday life admin.
    [37:35] Susan's story: using AI to triage a dog-health scare instead of doom-scrolling vet sites.
    [38:37] Life Works Well's roots in work–life harmony and simplifying with tech.
    [39:35] Where to find Helen online and what's next for her book.

    If you're leading a team (or a whole organization), use this episode as a prompt to ask:

    Where are we treating AI as a tool in service of humanity - and where are we forgetting the human first?
    Do our people actually know what's OK and not OK to put into AI tools?
    How could we use mentorship - formal or informal - to help our people navigate this shift instead of fearing it?

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    You can connect with Helen Patterson on LinkedIn and follow her work on Create a Mentor Culture and One Million Mentor Moments via lifeworkswell.ca

    EP 259 What Is an AI Flywheel? (And Why Your Pilots Stall)

    Play Episode Listen Later Dec 13, 2025 26:17


    Most leaders talk about AI in terms of pilots, projects, and one-off tools. In this solo episode, host Susan Diaz explains why that mindset stalls adoption - and introduces the idea of an AI flywheel: a simple, compounding loop of audit → training → personalized tools → ROI that quietly turns experiments into momentum across your whole organisation.

    Episode summary

    Susan opens by contrasting how most organizations approach AI - pilots, isolated chatbots, a few licences to "see what happens" - with how enduring companies build flywheels that compound over time. Borrowing from Jim Collins' Good to Great and examples like Amazon's recommendation engine, she reframes AI from "one big launch" to a heavy wheel that's hard to move at first but almost impossible to stop once it's spinning. She then introduces her AI flywheel for organizations, built on four moving pillars:

    - Audit - reality-check where AI already lives in tools, workflows, risks, and guardrails.
    - Training - raise the floor of AI literacy so more people can safely experiment.
    - Personalised tools and workflows - move beyond generic prompts into department- and workflow-specific systems.
    - ROI tracking - measure time saved, errors reduced, risk reduced, and adoption so the story keeps getting funded.

    Instead of a linear checklist, these components form a loop - each turn of the wheel making the next easier, and creating an unfair advantage for organizations that start early. Finally, Susan adds the outer ring: human-first culture and governance as the operating system around the flywheel - psychological safety, champions and mentors, and values like equity that ensure AI momentum doesn't quietly recreate hustle culture or leave people behind. She closes with practical questions any leadership team can use this week to start their own AI flywheel.

    Key takeaways

    Projects start and end. Flywheels don't. Treating AI as a string of pilots and vendor launches creates start–stop energy.
    Designing a flywheel turns every experiment into input for the next win.

    A flywheel is heavy at first - but gains unstoppable momentum. Like a giant metal train wheel, it needs a lot of initial force, but each full turn adds speed. AI works the same way: early experiments feel slow; the compounding learning later feels unfairly fast.

    The AI flywheel has four core pillars:

    - Audit - map current tools, workflows, risks, and guardrails; discover hidden wins and power users.
    - Training - treat AI like financial literacy: a minimum viable level for everyone so they can ask better questions and prompt more effectively.
    - Personalised tools & workflows - stop asking "Which LLM?" and start asking "Which steps in this 37-step process should AI do?" Workflow first, tool second.
    - ROI tracking - measure time saved, errors reduced, faster time to market, risk reduction, and the percentage of AI-augmented workflows so leaders keep investing.

    Culture is the operating system around the flywheel. Without psychological safety, people hide experiments. Without support, power users burn out. Values like equity matter: who's getting trained, who has access, and who you're helping reskill. Governance should feel like guidance, not punishment.

    You don't build an AI flywheel in a day. You start with one audit, one workflow, one dashboard that makes things more transparent - and commit to one small centimetre of momentum at a time.

    Episode highlights

    [00:02] Why "we're piloting a chatbot" is not a strategy.
    [01:34] Flywheel 101: the train-wheel analogy and why momentum beats one-off effort.
    [03:19] Amazon's recommendation engine as a classic business flywheel.
    [05:02] Applying Jim Collins' Good to Great flywheel lens to AI initiatives.
    [05:30] From big-bang ERP-style AI projects to small, compounding loops.
    [08:00] Introducing the four pillars: audit, training, personalised tools, ROI.
    [08:53] Audit as reality check: surfacing hidden wins and DIY power users.
    [11:14] Training as "raising the floor" of AI literacy.
    [14:08] Workflow-first thinking and the myth of the single all-powerful agent.
    [17:33] ROI stories: error reduction, faster time to market, and risk reduction.
    [20:19] Culture as the outer ring: psychological safety, champions, values in action.
    [23:06] Starting your flywheel: three questions for your leadership team.

    Use this episode as a design tool, not just a definition. Grab a whiteboard with your leadership team and map:

    - Where are we already auditing, training, personalising tools, and measuring ROI - however informally?
    - Where is the wheel broken, or missing entirely?
    - What's one centimetre of movement we can create this quarter - one audit, one workflow, one dashboard - to start our AI flywheel turning?

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 258 AI, Scale, and Outcome-Over-Output Leadership (with Kirsten Schmidtke)

    Play Episode Listen Later Dec 12, 2025 42:09


    What does modern sales leadership look like when AI is in the mix? In this episode, host Susan Diaz and sales leadership coach Kirsten Schmidtke unpack how AI and humanity can peacefully coexist in sales, why scale starts with clarity and process (not tools), and how leaders can shift from output-obsessed hustle to outcome-focused, identity-level leadership in an AI-forward world.

    Episode summary

    Susan sits down with sales leadership consultant Kirsten Schmidtke to talk about AI, scale, and the "identity-level shifts" leaders need to make in modern sales. They start at the intersection of mindset and skillset - why AI is now part of the sales skill stack, but can't replace the human mindset, judgment, and presence required to sell well. Kirsten shares how sales organizations have moved from using AI as a basic copy and research tool to embedding LLMs in meetings, CRMs, and internal platforms - and even building their own AI features once they deeply understand their market and product.

    From there, they zoom out to the trust recession, spammy AI outreach, and the difference between being AI-first and AI-forward. They discuss AI as a way to free people into their zone of genius (hello, The Big Leap), the historical pattern of tech disruption and new job creation, and why AI should be seen as a massive upgrade to human potential - not a replacement.

    In the second half, they dig into scale and operations: why AI will only scale chaos if you don't have clear goals, processes, and SOPs; why many sales orgs still lack documented sales and go-to-market processes; and how documenting before automating is the hidden unlock for using AI well. Kirsten closes with her identity-based leadership model (be → do → have), her outcome-over-output philosophy, and practical invitations for leaders who want to use AI to reduce burnout instead of fuelling hustle culture.

    Key takeaways

    Modern sales lives at the intersection of AI and humanity.
    AI is becoming part of the sales skillset, but the mindset - who you are being as a leader or seller - still drives how effectively those tools get used.

    Sales orgs have evolved past AI as a copy tool. Early use was mostly email drafting and light research. Now teams are:

    - choosing an LLM of choice (ChatGPT, Copilot, Perplexity, etc.) and tailoring it to their sales strategy
    - embedding AI in meeting tools to surface questions and summaries in real time
    - building AI into internal platforms based on deep knowledge of market, product, and GTM.

    We're in a trust recession - and lazy AI is making it worse. Spray-and-pray LinkedIn DMs and generic AI pitches erode trust and make buyers more sceptical and confused. Being AI-forward means intentional, human-centred use of AI, not pushing AI for its own sake.

    AI should move you toward your zone of genius, not further into busywork. Borrowing from Gay Hendricks' The Big Leap, Kirsten and Susan talk about AI as a way to strip away tasks in your zones of incompetence and competence so you can spend more time in your zone of genius - and potentially unlock higher human experiences and contribution.

    Scale requires clarity and process before tools. AI isn't a magic scale button. Without a clear what and why, it can't help with the how. Leaders must:

    - define the outcome and purpose of what they're scaling
    - decide what not to do
    - document the current process (SOPs) before asking AI to automate or optimise it.

    Otherwise AI just scales the chaos.

    Most salespeople are executors, not system builders. They're brilliant at doing the thing - calls, meetings, negotiation - but often not trained to design processes and ops. Pairing them with ops-minded people (and AI) to document and structure their best practices is where real scale lives.

    Identity-level leadership: be → do → have. Instead of "when I have the title, I'll be a leader," Kirsten coaches leaders to start with identity: "I am the leader of an AI-forward sales organization."
    That identity shapes thinking, then actions, then results.

    Shift from output to outcomes to avoid AI-fuelled burnout. If you treat AI as a way to cram more tasks into the same day, you just recreate hustle culture. Focusing on outcomes (what actually changes for customers, teams, and the business) lets you use AI to create space - for thinking, rest, and higher-value work - instead of filling every spare minute.

    Episode highlights

    [00:01] Meeting Kirsten and why you can't talk about modern sales without talking about AI.
    [01:07] Mindset + skillset at the intersection of AI and humanity in sales.
    [02:35] How sales orgs first used AI as a copy and research tool - and what's changed.
    [04:45] Embedding AI in meetings and tools vs building AI features in-house.
    [06:11] The "spray and pray" LinkedIn problem and AI's role in the trust recession.
    [08:53] Being "AI forward" instead of "AI first."
    [10:39] Why humans remain safe: discernment, judgment, spidey senses, and taste.
    [11:39] Arianna Huffington, Thrive, and using AI to free time for human development.
    [13:19] The Big Leap and using AI to move into your zone of genius.
    [17:01] Tech history, job loss, and why we're in the messy middle of another big shift.
    [19:34] What scale really means: more impact with less time and effort.
    [20:33] Why AI can't fix a lack of clarity - and how it can accidentally add work.
    [23:32] "AI will scale the chaos" if you skip documentation and SOPs.
    [25:08] Salespeople as executors, not ops designers, and the power of pairing them with systems people.
    [27:47] Branding, buyer clarity, and why AI can't replace the hard work of positioning.
    [31:00] Identity-level shifts for leaders: adopting "I am…" statements.
    [35:21] AI and burnout: from productivity for productivity's sake to outcome-focused leadership.
    [37:25] Newtonian vs Einstein time and rethinking how we use the time AI frees.
    [39:59] "Outcome over output" as a leadership mantra in the age of AI.
    [40:38] Kirsten's invitation: a Sales Leader Power Hour to work on your mindset and identity.

    If you're leading a sales team - or are the sales team - and you're feeling the tension between AI, scale, and leadership, start here:

    - Pick one sales process and document it end-to-end.
    - Identify one step where AI could genuinely reduce effort or time.
    - Ask, "Who do I need to be as a leader of an AI-forward sales org?" and let that identity shape your next move.

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    To go deeper on mindset and identity shifts, connect with Kirsten Schmidtke on LinkedIn and book a Sales Leader Power Hour here: https://www.kirstenschmidtke.com/sales-leader-power-hour

    EP 257 What My 30-Episode Sprint Is Teaching Me About AI, Energy, and Experimenting in Public

    Play Episode Listen Later Dec 10, 2025 16:49


    Midway through a 30-episodes-in-30-days podcast-to-book sprint, host Susan Diaz gets honest about what's working, what's hard, and how she's actually using AI as a thinking partner, draft machine, pattern spotter, and quiet project manager - plus what leaders can learn from this for their own AI experiments.

    Episode summary

    This solo episode is a behind-the-scenes check-in from Susan's "completely unhinged" (her words) experiment to record 30 episodes in 30 days as the raw material for her next book. Nine episodes and twelve days in, she talks candidly about fatigue, capacity, and why she refused to skip this recording even though she could have. She pulls back the curtain on the very practical ways she's using AI to structure ideas, draft assets, spot patterns across episodes, and manage the subtle project and energy load of a sprint like this. Then she zooms out to translate those lessons for founders and teams: why consistency beats intensity, why experiments are allowed to be small and honest, and why capacity has to be part of your AI strategy instead of an afterthought.

    Key takeaways

    This sprint is a live experiment in sustainability, not heroics. The goal isn't to "win" 30 episodes perfectly; it's to see what pace, support, and structure actually make ambitious AI-powered work sustainable for a real human.

    AI is a thinking partner first. Susan uses voice input in her LLM to dump messy thoughts, then asks it to shape them into outlines, angles, and talking points so she's never facing a blank page. (Pro tip: the built-in mic usually cuts off around five minutes - annoying but survivable.)

    Drafting support is where AI shines next. From show notes to extra research points to contextualising guest insights, custom GPTs help expand and refine ideas so she can focus on judgement and voice instead of first drafts.

    Pattern spotting turns episodes into chapters.
    By feeding multiple conversations into AI and asking for common threads, or how ideas map to her core pillars, she can see where book chapters naturally want to live - and build something far more cohesive than her first, fully manual book.

    AI also helps with energy management. It quietly supports the admin around the sprint: drafting guest emails, summarizing notes, organizing ideas, and helping her see where there's too much on the go so she can re-plan.

    For organizations, three big lessons emerge:

    - Consistency beats intensity - small, steady steps with AI are better than unsustainable bursts.
    - Experiments can be small and honest - you don't need a centre of excellence to start. A one-hour training or a tiny workflow tweak counts.
    - Capacity is strategy - pretending people have unlimited time and energy guarantees failure. Designing AI work around real capacity gives it a chance to stick.

    Good AI literacy lowers the cost of entry and raises the quality of thinking. Used well, AI doesn't replace your brain; it gives your best ideas a better chance of making it out of your head and into the world.

    Episode highlights

    [00:02] Setting the scene: a 30-episode sprint at the end of 2025 to get the book out of her head.
    [01:43] Nine episodes in twelve days, fatigue, and choosing to show up anyway.
    [03:21] Why the sprint mirrors how leaders feel about AI: "We know it matters… but keeping the pace is hard."
    [05:02] Using AI as a structure-building thinking partner via voice dumps and outlines.
    [05:30] The five-minute mic limit, word-vomit sessions, and how AI turns fuzz into flows.
    [07:02] Drafting support: research, context around guests, and custom GPTs for show assets.
    [07:44] Pattern spotting across episodes to find the book's real chapters and through-lines.
    [09:18] Why this AI-supported book will be "twice, thrice, ten times" better than the first one.
    [10:24] Energy and project management: emails, reflections, and organising all the moving pieces.
    [11:46] Lesson 1 - consistency over intensity for teams experimenting with AI.
    [13:29] Lesson 2 - small, honest experiments beat grand, delayed programs.
    [13:59] Lesson 3 - capacity as a core part of AI strategy, not a footnote.
    [15:01] Gentle prompts for listeners: where you're already experimenting, where AI can remove friction, and who your inside champions are.

    Use this episode as a mirror, not a mandate. Ask yourself and your team:

    - Where are we already experimenting with AI, even in tiny ways?
    - How could AI remove friction from that work instead of adding pressure?
    - Who are our quiet inside champions - and what support or validation could we offer them this week?

    Answer even one of those honestly, and you're already moving from vague AI interest to real AI literacy.

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 256 From Secret Power Users to Visible AI Champions

    Play Episode Listen Later Dec 9, 2025 23:23


    In every organization there's at least one person quietly doing wild things with AI - working faster, thinking bigger, and building their own personal AI stack. In this episode, host Susan Diaz explains how to find those secret power users, support them properly, and turn their experiments into an organizational advantage (without burning them out or making them your unpaid AI help desk).

    Episode summary

    This solo episode is a field guide to the people already living in your organization's AI future. Susan starts by painting the familiar picture of the "suspiciously fast" teammate whose work is cleaner, more strategic, and clearly powered by AI - even if no one has formally asked how. She names them for what they are: AI power users who have built quiet personal stacks and are effectively acting as your R&D lab for AI. From there, she walks through:

    - How to spot these people through audits, language, manager input, and usage data.
    - Why most organizations ignore or accidentally exploit them.
    - A practical three-part framework - Recognition, Resourcing, Routing - to turn power users into supported AI champions instead of secret heroes headed for burnout.
    - The equity implications of who gets seen as a "champion" and how to ensure your AI leaders reflect the diversity of your workforce.

    She closes with a simple champion blueprint and a piece of homework that any founder, leader, or manager can act on this week.

    Key takeaways

    You already have AI power users. The question isn't "Do they exist?" It's "Where are they, and what are we doing with them?"

    Power users are your unofficial R&D lab. They're not theorising about AI. They're testing it inside real workflows, finding what breaks, and figuring out how to prompt effectively in your specific context.

    They are rarely the most technical people. Your best champions are often people closest to the work - sales, customer-facing roles, operations, not just IT - who are simply determined to figure it out.
    If you ignore them, three things happen:

    - They get tired of doing extra work with no support.
    - Their workflows stay trapped in their heads and personal accounts.
    - Your organization misses the chance to scale what's working.

    Use the 3 Rs to turn power users into champions:

    - Recognition - name the role (AI Champions/Guides), make their contribution visible, and invite them into strategy and training conversations.
    - Resourcing - give them real time (10-20% of their week), adjust workload and goals, and reward them properly - ideally with money, training, and access.
    - Routing - turn personal hacks into shared assets: playbooks, Looms, internal training, and workflows embedded in L&D or ops.

    Connect - don't overload - your champions. Give them a direct line to IT, security, legal, and leadership so they can sanity-check ideas and inform strategy, without becoming the AI police.

    Equity matters here. If you only see loud voices and people closest to power, you'll miss quiet experimenters, women, and people of colour who may be building brilliant systems under the radar. Use multiple ways (surveys, nominations, self-identification) to surface a diverse group of champions.

    Champions must be guides, not gatekeepers. Their role is to make it easier and safer for others to experiment - not to punish or shut people down.

    A simple champion blueprint: identify → invite → define → resource → amplify. Done well, your champions become the bridge between today's experimentation and tomorrow's AI strategy.

    Episode highlights

    [00:02] The "suspiciously fast" colleague and what their behaviour is telling you.
    [02:00] Personal AI stacks and why Divers "swan dive backwards" into AI without waiting for permission.
    [03:37] The risk of ignoring power users: burnout, trapped knowledge, and missed scaling opportunities.
    [05:03] Why power users are effectively your AI research and development lab.
    [06:33] How to surface power users through better audit questions, open-ended prompts, and usage data.
    [07:25] Listening for phrases like "I built a system for that" and "I just play with this stuff because I'm a geek."
    [08:25] Using managers and platform data to spot a small cluster of heavy AI users.
    [09:37] The danger of quietly turning champions into unpaid AI help desks.
    [10:33] The 3 Rs: Recognition, Resourcing, and Routing.
    [11:18] What real recognition looks like - naming, invitations to strategy, and public acknowledgement.
    [12:05] Resourcing: giving champions time, adjusting workloads, and updating job descriptions.
    [13:14] Routing: creating playbooks, Looms, and embedding workflows into L&D and ops.
    [14:29] Connecting champions with IT, security, legal, and leadership.
    [15:45] The equity lens: who gets seen as a champion and who's missing.
    [17:16] The risk that women and marginalised groups get left behind and automated first.
    [18:30] Using surveys, nominations, and explicit invitations to diversify your champion group.
    [19:07] Why champions should be guides, not AI police or gatekeepers.
    [19:47] The 5-step "champion blueprint": identify, invite, define, resource, amplify.
    [22:15] Your homework: talk to one secret power user this week and ask how you can make space for their experimentation.

    Think of one person in your organization who's already that secret AI power user. This week, have a conversation that goes beyond "Cool, can you do that for everyone?" and into "This is important. How can we make space for you to keep experimenting like this and help others learn from you?" That's the first step in building your AI champion program - whether or not you call it that yet.

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 255 How to Run an AI Audit in Your Organization (Without Boiling the Ocean)

    Play Episode Listen Later Dec 8, 2025 21:52


    Most leaders have no clear, single picture of how AI is actually being used inside their organization. In this solo episode, host Susan Diaz walks through a practical, human-first AI audit you can run in weeks (not years) to map tools, workflows, adoption patterns, and risks - so your AI strategy isn't built on vibes and vendor decks. Episode summary This episode tackles a simple but uncomfortable question: "Do you actually know what's happening with AI inside your organisation right now?" Not what vendors say. Not what a slide in a strategy deck says. What people are really doing - with official tools, embedded features, and personal accounts. Susan breaks down a four-part AI literacy audit that gives leaders a coherent baseline: Tools - Which AI-powered tools are already in play, where AI is embedded in existing platforms, and where spend and capabilities overlap. Workflows - Where AI is already changing how work is done, which tasks are automated or accelerated, and which manual processes are obvious candidates for support. Adoption patterns - Who's confident, who's dabbling, who's avoiding AI entirely, and how evenly (or unevenly) AI usage is distributed across teams and levels. Risks and blind spots - Shadow AI, unsanctioned tools, data exposure, governance gaps, and the places where "nothing's gone wrong… yet" is not a strategy. She then walks through a step-by-step approach to running an audit without turning it into a year-long consulting project, and shows how to turn your findings into training, workflow redesign, and a credible AI ROI story. Key takeaways If you skip the audit, you're flying blind. Without a baseline, every AI decision - platforms, pilots, hiring, training - is a shot in the dark based on guesswork and anecdotes. A good AI audit is four-dimensional, not just a tools list. You need to understand tools, workflows, adoption patterns, and risk/gaps together if you want a true picture of AI activity. 
The hidden costs of "no audit": Duplicate spend on overlapping tools in different departments. Shadow AI and data risk from personal accounts and unsanctioned apps. Wasted efficiency gains because great use cases stay trapped in individual heads and folders. No convincing story of AI ROI for your CFO, board, or leadership. Think of the audit like an MRI, not a court case. The goal is visibility, not blame. If people feel they'll be punished for experimenting, they'll simply stop telling you the truth. You can run a meaningful audit in five practical steps: Listen - Short surveys + focused interviews with department heads, AI champions, and sceptics. Map tools and spend - Inventory official tools, quiet add-ons, free/low-cost apps, and personal subscriptions used for work. Document workflows - Pick priority functions (often marketing, HR, sales, ops) and map how work gets done today, then mark where AI shows up or could. Assess risk and governance - Where does confidential data touch AI? What's policy on paper vs in practice? Where are the biggest gaps? Build an opportunity backlog - Quick wins, experiments, and longer-term projects that emerge from the audit. Your audit output should be short and usable, not a 90-slide graveyard: An executive summary with top risks, opportunities, and 3/6/12-month priorities. A tool + workflow map that shows overlaps, gaps, and shadow usage. A risk and governance section with clear start / stop / continue recommendations. An opportunity backlog that can plug into project management and resourcing. Don't make it an IT-only exercise. AI touches how people think and work across functions. The audit should be leadership-backed and cross-functional, not dropped on a single department. The audit is the bridge, not the endpoint. Once you can see what's happening, you can design training, governance, workflow changes, and ROI tracking that match reality instead of hopes. 
Episode highlights [00:02] "Do you know exactly what's happening with AI inside your organisation right now?" [01:10] Why an AI audit should come before platforms, hires, or big training programmes. [03:18] Reframing audits: from "innovation killers" to foundations for better decisions. [04:00] The four dimensions of an AI literacy audit: tools, workflows, adoption, risk/blind spots. [05:24] Questions to ask about tools: what's in play, where AI is embedded, where teams overlap. [04:52–05:24] Questions to ask about workflows: where AI is changing work, what's automated, what's still painfully manual. [05:24–06:56] Mapping adoption patterns: power users, dabblers, avoiders, and distribution across departments and levels. [06:56] Shadow AI, unsanctioned tools, and governance gaps as audit essentials. [07:23] Why a single coherent picture of AI activity becomes your baseline for everything that follows. [07:54–10:28] Four costs of skipping the audit: duplicate spend, risk, wasted gains, and weak ROI stories. [10:51–13:03] Step 1 + 2: listening through surveys and interviews, then mapping tools and spend without turning it into a witch hunt. [13:03–14:22] Step 3: documenting workflows in priority functions and spotting patterns. [14:22–15:40] Step 4 + 5: assessing risk/governance and surfacing quick wins + deeper opportunities. [16:15–17:47] What a practical audit output looks like (and why it shouldn't die in a folder). [18:16–18:57] Common traps: making it IT-only, punitive, or overcomplicated. [19:56–21:10] Turning audit insights into training, governance, workflow redesign, and credible ROI tracking. If your current AI strategy rests on vendor promises, scattered pilots, and vibes, this episode is your sign to step back. Share it with: The exec who keeps getting asked for AI ROI. The IT or ops lead worried about shadow AI but unsure where to start. The internal AI champion who's been documenting everything in a lonely Notion doc. 
Then ask as a leadership team: "What would it take for us to have a clear, one-page picture of AI activity across this organisation in the next 60 days?" Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP254 Should You Build Custom GPTs or Just Prompt Better

    Play Episode Listen Later Dec 5, 2025 48:06


    Should you build custom GPTs, agents, digital interns, Gems, and artefacts… or just learn to prompt better? In this roundtable, Susan, social media + AI power user Andrew Jenkins, and GTM + custom GPT builder Dr. Jim Kanichirayil unpack when you actually need a custom build, when a strong prompt is enough, and how to stop treating AI output like a finished product. In this episode, Susan brings back two favourite guests who sit on different ends of the AI usage spectrum: Andrew Jenkins - multi-tool explorer, author, and agency owner who "puts the chat in ChatGPT" and loves talking with his data. Dr. Jim Kanichirayil - founder of Cascading Leadership, builder of thought leadership custom GPTs for go-to-market, content, and analysis. Together they break down: How Andrew uses conversation, prompt optimizers, projects, and tools like NotebookLM and Dojo AI to "talk to" his book, podcast, and data. How Dr. Jim uses a simple Role-Task-Output framework to design custom GPTs, train them on his voice (and the voices of his clients), and keep them on track with root-cause analysis when they drift. The messy reality of limits, context windows, and why AI is still terrible at telling you what it can't do. Why using AI on autopilot (especially for outreach and content) is a brand risk, and how to use it as a drafting and analysis system instead. Key takeaways You don't have to choose only prompts or only custom GPTs. Strong prompting is the starting point. Custom GPTs make sense when you see the same task, drift, or "bleed out" happening over and over again. Start every workflow with three things: Role, Task, Output. Who is the AI supposed to be? What exact job is it doing? What should the output include and exclude? Then ask the model: "What else do you need to execute this well and in my voice?" Knowledge bases are just your best examples and instructions in one place. 
Transcripts, scripts, PDFs, posts, style packs, platform-specific examples - they're all training material. AI does best when you feed it gold standard samples, not vibes. Projects and talking to your data are the future of reading and research. Andrew uses his entire book in Markdown as a project, then has conversations like "find me five governance examples" instead of scrolling a PDF. NotebookLM turns bullet points into decks, mind maps, and videos, then lets you interrogate them. AI is a 60-70% draft, not a finished product. If you post straight from the model, it will sound generic, over-written, and slightly robotic. The job is to take that draft and ask: "Does this sound like me? Would I actually say this?" Automation is good. Autopilot is dangerous. Using AI to analyze content performance, structure research, or standardise parts of a workflow = smart. Letting AI write content and outreach you never review = reputation risk and audience fatigue. More content is not the goal. Better feedback loops are. Dr. Jim chains GPTs: one for drafting with his voice, one for performance analysis, one for insights. That loop makes the next round of content sharper instead of just… louder. Episode highlights [00:13] The core question: build digital interns (agents/custom GPTs) or just prompt better? [01:09] Andrew's origin story and why he "puts the chat in ChatGPT." [03:39] How Andrew uses prompt optimizers, multiple models, and Dojo AI as an agentic interface. [07:24] Dr. Jim's world: sticking to GPT, building tightly scoped custom GPTs for repetitive work. [08:37] When "bleed out" in prompts tells you it's time to build a custom GPT. [09:26] Using root-cause analysis inside the GPT configuration when outputs go off the rails. [10:25] Projects, books in Markdown, and "talking to your own material" via AI. [13:05] Case study: using AI to surface case examples from a 3.5-year-old book instead of scrolling PDFs. 
[14:27] NotebookLM for founders and students: one email of bullet points → infographic, map, slide deck, video. [19:03] The Role–Task–Output framework and the importance of explicitly designing for your voice. [22:02] Platform-specific style packs and use cases (spicy vs informational vs editorial). [26:29] The frustrating reality of token limits and why models rarely warn you before they hit a wall. [36:54] What's happening "in the wild": early-stage founders treating AI output as final product. [39:01] Why "more" isn't better, "better" is better: drafts, polish, and content analysis GPTs. [42:03] Automation vs autopilot in B2B social, and why Andrew refuses to buy from a bot. [43:29] Emerging tools: Google's Pommely, Nano Banana for image creation, and AI browsers like Atlas, Comet, and Neo. If you've been stuck wondering whether to spend time on custom GPTs or just prompt better, this episode gives you the mental models to decide. Share it with: The teammate who keeps saying "we should build a GPT" but hasn't defined the workflow. The founder treating AI drafts as finished copy. The ops brain in your org who secretly wants to be a bridge builder. Then ask as a team: "Where do we actually need great prompts, and where do we need a repeatable GPT or project with a real knowledge base?" Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
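The Role-Task-Output framework discussed in this episode can be sketched as a small, reusable prompt builder. This is a hypothetical illustration, not code from the episode; the function name, field wording, and example values are all assumptions.

```python
# Hypothetical sketch of the Role-Task-Output idea: assemble a prompt
# from three explicit parts, then close with the "what else do you
# need?" question the episode recommends asking the model.

def build_rto_prompt(role: str, task: str, output_spec: str) -> str:
    """Assemble a Role-Task-Output prompt. All field names and phrasing
    here are illustrative, not tied to any specific tool."""
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Output: {output_spec}\n"
        "Before you start: what else do you need from me to execute "
        "this well and in my voice?"
    )

prompt = build_rto_prompt(
    role="a B2B content strategist who writes in my voice",
    task="Draft a LinkedIn post from the attached podcast transcript.",
    output_spec="150-200 words, one hook line, no hashtags, end with a question.",
)
print(prompt)
```

The same three fields work whether you paste the result into a chat window or drop it into a custom GPT's configuration as standing instructions.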

    EP 253 Swan Dive Backwards (The Story and Framework Behind the Book)

    Play Episode Listen Later Dec 4, 2025 24:53


    You may have heard host Susan Diaz say she "swan dove backwards off the cliff into AI". In this episode, she unpacks what that actually means, how it became the working title of her book, and the concrete frameworks leaders can use to move boldly into AI without being reckless. This is a personal, behind-the-scenes episode. Susan shares how she went from not being in the famous first 6 million users of ChatGPT… to becoming the person who showed up a week later and refused to leave. She explains why generative AI felt different from every underwhelming AI-ish tool she'd used before. Then she introduces two big ideas that will run through the book and the podcast series: The four cliff archetypes of AI in organizations. The five moves of a swan dive that turn bold experimentation into lasting infrastructure. It's part origin story, part field guide, and part invitation to join the Early Divers instead of waiting for the bridge to magically appear. Key takeaways Generative AI was a pattern-breaker. What hooked Susan wasn't hype. It was the combo of: credible output, ability to handle large volumes of messy information, and being free to use. That trifecta changed the game for everyday operators. "Swan dive backwards" is not recklessness. It's a personality pattern. Quick starts jump with a scan for rocks and a plan to tuck their elbows. The instinct is to move, not freeze, when the path ends. Every organization has four cliff archetypes of AI: Divers - the early experimenters pressing all the buttons. Pathfinders - the risk-mappers and governance folks asking "how do we do this safely?" Operators - the people who turn experiments into actual workflows and pilots. Bridge builders - the systems people who turn one-time wins into playbooks, platforms, and training. You are rarely just one archetype. You're more like a sound mix across all four. That mix determines how you respond when AI shows up as a cliff, not a gentle slope. 
The five moves of a swan dive give you a pattern: Spot the cliff - recognize this is a step-change, not another incremental tool. Check the water - test, set guardrails, understand risks and boundaries. The dive - move out of analysis into real use on real work. Surface with a map - name patterns, document what's working, share stories. Build the bridge - turn what you learned into infrastructure so others don't have to jump cold. AI is too big to leave to one personality type. Divers alone will splatter. Pathfinders alone will stall. Operators without bridge builders will create one-off wins that never stick. You need all four. This book and series are a public swan dive. Backwards! The 30-episode challenge, the naming of Swan Dive Backwards, and the frameworks are all being built where others can see and eventually walk the bridge. Episode highlights [00:00] "I swan dive backwards off the cliff into AI" - why that line sticks and what it actually means. [01:19] Naming the book Swan Dive Backwards and the meta moment for future readers. [01:47] Why Susan was not in the first 6 million ChatGPT users, and why early AI tools had underwhelmed her. [03:03] The three markers that made generative AI different: credible output, large-volume handling, and being free. [05:27] "Late to the party, then refused to leave" – how personality type shaped her AI journey. [06:28] The cliff analogy: divers, plotters, doers, bridge builders. [09:33] Why Susan is a classic "diver" and how that shows up in entrepreneurship. [12:08] The LinkedIn comment from Alison Garwood-Jones that locked in the book title. [14:53] The four cliff archetypes of AI inside companies, in explicit AI terms. [18:38] Move 1: spotting the cliff – realising AI is a calculator/PC-level shift, not a passing tool. [19:44] Move 2: checking the water – personal tests, failures, and organisational governance. [20:45] Move 3: the swan dive – moving from theory to workflow-level experiments. 
[21:50] Move 4: surfacing with a map – turning experiences into language, frameworks, audits. [23:03] Move 5: building the bridge – connecting experiments into ongoing systems and training. [23:31] Why the real courage is building so others never have to jump cold again. This episode is both an origin story and a mirror. Ask yourself and your team: Which cliff archetype do you lead with: Diver, Pathfinder, Operator, or Bridge Builder? Where are you on the five moves of the swan dive: staring at the cliff… or quietly building the bridge? Share this episode with the biggest "diver" you know and the most trusted "pathfinder" in your organization. They're going to need each other. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP252 Turning Prompts into Real AI Workflows with Jason Dea

    Play Episode Listen Later Dec 2, 2025 39:29


    Many teams have a Notion page full of prompts. Very few have real, repeatable AI workflows. In this episode, host Susan Diaz and product/go-to-market leader Jason Dea dig into how to move from playing with prompts to designing workflows, building tiny specialist agents, and avoiding a new wave of shadow AI inside organizations. Susan is joined by venture studio and SaaS veteran Jason Dea from Coru Ventures in Toronto. They unpack why AI is not a magic wand or a single feature, but an enabling technology that only delivers value when it's wired into actual workflows. Jason shares his "swarm of bumblebees" metaphor for AI, how he builds small specialist agents to clone his own work style, and why enterprises are about to repeat the mistakes of shadow IT if they don't get serious about orchestration and governance. They close by talking about leaders using AI in their own day-to-day work, and Jason's personal experiments with family apps, coding, and even a butterfly-catching game for his daughter. Key takeaways Prompts ≠ workflows. Collecting prompts in a shared doc feels productive. But until you map the 8–10 steps of a job and decide where AI fits, you're just doing experiments, not transformation. AI is not a magic one-shot. It's an enabling technology. The real gains come when you see your work as a chain of small tasks and let AI take over the repetitive, boring, or "toil" links in that chain. Think "swarm of bumblebees." You are the queen bee. AI is a swarm of tiny worker bees, each doing one specific task very well (emails, slides, requirements, research), not one mega-agent doing everything. Documenting workflows doesn't have to be fancy. A workflow is just "tell me the 10 steps." Start with the human sequence. Tools come second. Once it's visible, the friction points where AI can help become obvious. Shadow IT is turning into shadow AI. Cheap, bolt-on AI features and swipe-a-card tools make it easy for every team to spin up their own stack. 
Without orchestration, you recreate silos, risk, and tool sprawl at AI speed. IT should govern, not own everything. Governance, security, and guardrails matter. But AI also democratises small bits of "coding" and automation, letting non-technical teams build more, faster—if they have guidance. Leaders need hands-on literacy. The fastest way out of the hype is to use AI yourself for your own toil. Drafting emails. Planning. Decomposing big tasks. You get more realistic about what it can and cannot do. AI is an "unstuck" tool in work and life. From relearning to code, to building tiny family apps, to cataloguing knick-knacks and designing games for kids, AI opens up projects that were unrealistic even five years ago. Episode highlights [00:01] Jason's background in startups, SaaS, product, and go-to-market, and his role at Coru Ventures. [02:00] Where we are on the Gartner hype cycle and why the trough of disillusionment is inevitable and useful. [04:40] Why some people can't imagine life before ChatGPT—and why that's not true for everyone inside organisations. [05:50] Mapping work as a sequence of steps instead of hunting for a single "magic" AI prompt. [08:01] The "swarm of bumblebees" metaphor: you as the queen, AI as many small worker-bee agents. [09:59] How to define a workflow in plain language: "tell me the 10 steps," tools aside. [11:00] Paperwork and OCR as a classic example of where generative AI finally unlocks messy, grey-area tasks. [13:50] Using AI first to remove the tasks you hate and identify the links you should outsource to machines. [15:20] Jason's "digital clone" AIs trained on his own content and patterns. [19:00] Building multiple mini-AIs: one for social posts, one for slide decks, one for product requirements. [21:10] Bolt-on AI features everywhere + messy workflows = amplified confusion and risk. [22:10] From shadow IT to shadow AI: why orchestration and shared understanding of workflows is critical. 
[24:40] Startups' speed vs enterprises' risk aversion, and what each can learn from the other. [27:10] Why IT should set guardrails while letting departments experiment and build more on their own. [30:10] Jason's advice to leaders: use AI yourself to see where it really helps and what it really takes. [36:00] Personal-life AI: relearning to code, family apps, cataloguing home items, and a butterfly game for his daughter. [38:00] Susan's idea: vibe-coding a family recipe app as a way to preserve memories and workflows. If your organization has a folder full of prompts but no clear AI workflows, this episode is your sign to pause and rethink. Share it with: The person who keeps buying new AI tools. The leader who thinks "IT will figure it out". The teammate who's already acting like the queen bee and quietly building their own swarm. Then ask as a team: "Where are our 10-step workflows, and which links should really be done by AI?" Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP251 The AI Literacy Divide is Why your AI Adoption is Stalling

    Play Episode Listen Later Dec 1, 2025 35:11


    Most enterprises don't have an AI problem. They have a literacy problem. In this episode, host Susan Diaz breaks down the "AI literacy divide" inside organizations, why it quietly creates haves and have-nots, and what baseline literacy actually looks like in practice. AI literacy should be treated the same way we treat financial or health literacy - as a non-optional, minimum standard for everyone, not a niche skill for "AI people". Susan maps out the current reality in many companies - a small group of confident experimenters, a vocal group of sceptics, and a silent majority stuck in the middle waiting for direction. Then she paints two futures and shows how intentional, organization-wide AI literacy turns curiosity into real innovation instead of resentment, inequity, and stalled adoption.   Key takeaways You don't have an AI tool problem. You have an AI literacy gap. Most people can "open ChatGPT" but don't understand what LLMs are, what they're good at, and where the risk line is. Think "financial literacy" not "prompt engineering". Just like everyone is expected to understand interest, debt, and prevention in health, everyone should understand the basics of everyday AI, not build custom agents on weekends. AI knowledge inside organizations is wildly uneven. A few people experiment confidently. A few are loudly doomsday. Many say nothing, don't feel safe asking questions, and quietly fall behind. That's the divide. Leadership is often the least literate group. Junior staff may be hands-on with tools, while executives and middle managers are too busy or embarrassed to be beginners again - creating a strange power/knowledge mismatch. Stop hunting for "one magic AI tool". AI in your company will look more like the internet than a single CRM. It will run through everything, not live on one platform. Literacy and workflows beat silver bullets. Two things to stop immediately: Stop treating AI as a binary "for or against" issue. 
It's already here, like calculators and the internet. The real question is how you'll adopt it. Stop pretending inequity isn't part of AI adoption. If training only reaches leaders, tech folks, or men who speak up first, you're baking old bias into a new system. Episode highlights [00:01] "Most enterprises don't actually have an AI problem. They have a literacy problem." [00:40] Financial and health literacy as models for what AI literacy should look like. [01:39] The current reality: pockets of brilliance, pockets of panic, and a big silent middle. [06:03] The Star Wars council metaphor: the Yoda faction, the doomscrolling faction, and the quiet middle. [10:16] The first big red flag: leadership has never sat down to talk about AI as a cultural, strategic, and operational shift. [12:13] Two employees in the same company: the confident AI experimenter vs the quietly left-behind colleague. [18:21] When formal power and AI experience don't live in the same people. [19:31] Why there will never be "one tool to rule them all" inside organisations. [26:20] Company A vs Company B: what baseline AI literacy actually looks like. [31:16] The skills every employee needs: plain-language understanding of LLMs, basic prompting, simple workflow mapping, and evaluation. [32:13] Two things to stop doing now: binary thinking about AI and ignoring inequity in who gets to learn. If you suspect your organization is quietly suffering scattered pilots, no shared language, lots of vibes but no vision, start here. Ask your leadership team: "What does baseline AI literacy look like for everyone here, and what's our plan to get there?" Then share this episode with one person in your org who's brave enough to start that conversation. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP 250 - The Hidden Cost of Experiment-Only AI Literacy

    Play Episode Listen Later Nov 30, 2025 24:05


    Lots of teams are playing with AI. Few are documenting, sharing, or governing what actually happens. In this episode, Susan unpacks the hidden cost of experiment-only AI literacy inside enterprises, from duplicate spend to shadow AI, and offers a path from Wild West to structured innovation. Episode summary In this solo episode, Susan looks at what really happens when AI experimentation is encouraged, but never captured or guided. She explains why leadership often only sees one of two AI universes running inside the same company. Then she breaks down how to keep curiosity alive and add just enough structure to protect brand, budgets, and people. Key takeaways AI is already in your organisation, whether it's "approved" or not. Even with blanket bans, people de-identify data and reach for personal tools like ChatGPT or Claude on their phones. You're probably running two parallel AI universes. One official, "enterprise safe" tool stack that leadership can see. One unofficial, personal stack that actually solves problems. Experimentation is good culture. "Experiment-only" is expensive. Without reporting, shared learning, or guardrails, you get duplicate tools, compliance risk, brand drift, and fake efficiency. People are treating AI the way they once treated Google. If they can't get answers inside the firewall, they go around it. That behaviour is normal… but now the stakes are much higher. Stop chasing a single super-agent. AI can replace steps, not entire, multi-step, values-based processes that require judgement, politics, and context. The real leverage is in literacy, not licences. Tools without shared language, playbooks, and training will never compound into competitive advantage. Episode highlights [00:02] The conference metaphor: high inspiration, zero notes, nothing sticks. [01:30] The uncomfortable truth: people are using AI, even if policy says they shouldn't. [03:20] Why internal "safe" chatbots often feel generic and miss political and market nuance. 
[05:22] How smart staff quietly step outside approved tools and into personal LLMs. [10:05] The rise of two AI universes: official vs shadow, and what leadership can actually see. [14:22] Experimentation as a sign of healthy, curious culture, and where it tips into risk. [16:35] Hidden costs: duplicate spend, overlapping capabilities, and tool sprawl. [17:28] Shadow AI, compliance risk, and what happens when sensitive data hits public models. [18:05] Brand voice drift and micro-messaging shifts that compound over time. [20:21] What leaders can do next: audits, simple guardrails, sandboxes, and shared findings. [21:19] What a real AI playbook is (hint: documented workflows, not a buzzword PDF). [22:24] The core question: do you actually know how your people are using AI today? If you suspect there's an invisible AI Wild West running inside your organization, start here. Listen to the full episode and then ask your leadership team one question: "Do we really know how our people are using AI today?" If the honest answer is "not really", that's your starting point for an AI audit and a literacy plan. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    EP249 The Podcast-to-Book with AI Challenge (Day 1 of 30)

    Play Episode Listen Later Nov 28, 2025 23:27


    What if you didn't have to disappear into a cabin for a year to write a meaningful book on AI? In this episode, host Susan Diaz kicks off a 30-day podcast-to-book challenge, sharing why her first book changed everything in her business and how she's now using AI and this podcast as a live "thinking lab" to build her next one. In this solo reflection, Susan: Looks back at how her first book Unboring: Take Your Content Marketing from Blah to Brilliant reshaped her identity, authourity, and client pipeline. Gets honest about why her AI book has been "stuck in a Google Doc" for over a year. Shares how a 30-day podcast challenge (inspired by Dan Sanchez and Ken Friere) will turn daily episodes into the raw material for a new, evergreen book on AI literacy for companies. Key takeaways Books change rooms, not just shelves. Being an author didn't make Susan "book rich" but it did change how decision-makers perceived her, filtered in better-fit clients, and gave her a framework for talks, workshops, and content. A realization that the book doesn't need to chase the news cycle. Instead of writing about tools and updates that age in months, Susan is focusing on evergreen questions: how we think, work, govern, and design AI inside companies. Stuck isn't a lack of ideas. It's a lack of structure and urgency. The AI book already existed as outlines, pillars, and scattered drafts. What was missing was discipline and a public commitment. Podcasting can be a "thinking lab" for your book. Daily episodes will act as live experiments for frameworks, stories, and interviews that can later be shaped into chapters. AI is a collaborator, not a ghostwriter. Susan uses AI to help think, outline, pattern-spot, and structure - while all ideas originate from real conversations, reflections, and lived experience. This is a long game for leaders. AI literacy and adoption inside organizations will take years, just like online banking. 
Some people will resist to the bitter end, but most will eventually adapt. Episode chapters (timestamps) [00:00] Why writing and storytelling still sit at the centre. [01:00] The identity shift of publishing Unboring and how it changed client perception. [03:45] How the first book became a "north star" for talks, workshops, and marketing content. [07:10] The uncomfortable truth: the AI book has been stuck as outlines, half-finished drafts, and scattered notes. [08:20] The fear that an AI book will be obsolete by the time it's finished - and why that thinking is flawed. [09:56] What this new book will be about: humans, companies, culture, governance, and real workflows. [10:53] Enter the catalyst: Dan Sanchez, Ken Friere, and the idea of building a book in public using AI. [12:50] Deciding to do a 30-day podcast challenge… at the end of November… right into the holidays. [14:18] What a previous 30-day Instagram Live challenge did for speaking opportunities and authority. [16:03] How this 30-episode sprint will turn the podcast into a thinking lab for the book. [17:40] The mix of episodes to expect: solo reflection, teaching, futurism, and subject-matter-expert interviews. [18:48] Why AI literacy in companies will mirror the long, messy adoption curve of past technologies. [20:29] The types of guests Susan wants to bring on: innovators, practitioners, futurists, ethicists, and policy voices. [21:25] How AI will be used behind the scenes to turn conversations into chapters and frameworks. [22:10] An invitation: come along for 30 episodes of experiments, rough edges, and real-time learning. Links and resources Get Susan's first book - Unboring: Take your Content Marketing from Blah to Brilliant Connect with Susan Diaz on LinkedIn for behind-the-scenes updates on the challenge. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. 
If this episode sparks something in you, don't just listen - build alongside it. Use these 30 episodes as prompts to ask better questions about AI in your own company. Share this episode with a founder or leader who's "AI-curious" but stuck in planning mode. Hit follow/subscribe so you don't miss the next 29 days of this experiment. If you're leading a team and want help turning your lived experience into AI-powered IP (like a book, frameworks, or talks), send Susan a DM on LinkedIn with the words "podcast to book" and she'll share next steps.

    EP248 AI for Sales Enablement at Founder Scale (without sounding robotic)

    Play Episode Listen Later Nov 13, 2025 20:47


    Founder-led teams can use AI to run effective, specific outreach - without sounding robotic. In this episode of 'AI Literacy for Entrepreneurs', I share a five-part "non-cringe" follow-up, a reusable variable-block system, tone/quality checks, a 5-step SOP you can paste into your AI tool, and a 48-hour challenge to make it real. Inside the episode: The 5-line follow-up that doesn't make you cringe (context → value → ask → next step → grace). A variable-block library (Persona, Pain, Proof, Offer, CTA) so AI can personalize at speed. Three 60-second QC checks to keep tone clean and human. A tiny SOP you can paste into your LLM and ship five follow-ups this week. If referrals aren't enough anymore for your business, this is your nudge to build a simple system and hit send. Want more? Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. Join the Marketing Power Circle (MPC) Connect with Susan Diaz on LinkedIn If this helped, a quick ⭐⭐⭐⭐⭐ keeps the show discoverable for other entrepreneurs.

    EP247 Data, Docs, and your AI Knowledge Base - the 4-Folder System that Makes AI Work

    Play Episode Listen Later Oct 30, 2025 25:21


    "AI can't use what you haven't organized." In this solo teaching episode, host Susan Diaz lays out a lightweight, repeatable structure for an internal knowledge base that actually powers your AI - so custom GPTs, Gems, or projects stop guessing and start producing on-brand, accurate work.  You'll learn the difference between rules (how your AI behaves) and knowledge (what it must know), how to build a four-folder knowledge base, ways to keep it fresh, what not to include for privacy/safety, and a 48-hour challenge to prove it on a real workflow. What you'll learn Rules vs knowledge: rules = behaviour, steps, tone, guardrails; knowledge = the factual assets (offers, pricing, voice, proof) your AI must reference. Use both, or you'll get either generic tone or rambling, off-base outputs. The 4-folder knowledge base: Brand Voice, Product Facts, Policies & Pricing, and Examples - what goes in each, and why this crushes hallucinations. Freshness rhythm and versioning: set a monthly/bi-monthly review, version by date, and keep a simple changelog so quality doesn't decay. Privacy and safety notes: what to exclude (confidential contracts, unreleased IP), how to anonymize examples, and who should have edit vs view access. Live example: how Susan used this exact setup to draft a Northlight landing page that was ~80% right on first pass. 48-hour challenge (do this now): Create the four folders. Drop 1-2 docs in each (rough is fine). Run one real deliverable through your setup; note time saved + edit depth. Bring your folder map to Susan's MPC open house for live feedback. Want more? Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. 
Join the Marketing Power Circle (MPC) Connect with Susan Diaz on LinkedIn Please take a moment to rate and review this podcast: 5⭐ helps more founders find this show  

    EP246 From AI Literacy To Implementation That Ships (Introducing Northlight)

    Play Episode Listen Later Oct 15, 2025 27:49


    Episode summary Three different leaders told me the same thing last quarter: “We tried AI. It felt cool. It didn't change any results.” Northlight exists to fix that gap. In this episode I introduce Northlight - my AI literacy + implementation firm for teams that are done dabbling and ready for workflow wins they can measure. We turn AI from a novelty into a compounding asset using SOPs, custom GPTs/agents, and responsible guardrails. You'll also hear a simple 48-hour challenge to prove the value on one real workflow, plus a founder-friendly launch offer for a rapid diagnostic. What you'll learn Why tools aren't your bottleneck - workflows are. How to move from ad-hoc prompting to repeatable systems. The “calculator → computer → AI” analogy. Each shift frees humans to solve bigger problems (if you redesign the work). Northlight's 5D Flow (our method): Discover → Design → Deploy → Document → Dial-in. What our engagements look like: 2-week diagnostic, 6-week implementation sprint, ongoing enablement/governance. Two mini-case studies: Content ops cut from a day to

    EP 245 AI Meets DEI - Building Inclusive Workplaces in the Age of Automation with Gabby Zuniga

    Play Episode Listen Later Oct 1, 2025 45:16


    How do diversity, equity, and inclusion (DEI) goals evolve in a world being rapidly reshaped by AI? In this episode, I sit down with Gabby Zuniga, founder of InclusiveKind, where she helps organizations across nonprofit and corporate sectors do DEI right through assessments, strategy, training, and policy review. We dig into: The ebb and flow of organizational commitment to DEI since 2020 - and why some companies stick with it while others quietly pull back. Why DEI is not just about race or ethnicity but also about learning styles, generational diversity, and workplace equity at every level. How AI is creating new urgency for DEI conversations - from algorithmic bias to ensuring inclusive adoption of technology. Practical ways founders and leaders can keep DEI at the center, even as priorities shift. Gabby's perspective is both real and hopeful: while some organizations are stepping away, the ones that ground their DEI in values - not headlines - are leading the way. This is an important listen if you're navigating how to keep people and inclusion at the heart of your business while embracing AI as a growth tool.

    EP244 Getting Found on AI (and Why SEO Foundations Still Matter) with Andrew Jenkins

    Play Episode Listen Later Sep 17, 2025 47:09


    AI search is here. People are using ChatGPT and other tools to discover businesses - but my guest today, Andrew Jenkins, shows why the foundations of SEO still matter. Andrew is CEO of Volterra Digital, a top-ranked social media agency, and a long-time member of my Marketing Power Circle (MPC). We dive into: Why getting found on AI isn't a "flip the switch" formula The role of reviews, backlinks, and industry recognition in AI search rankings How Andrew used Clutch.co to build a discoverability flywheel How custom GPTs and vibe coding are transforming small agency workflows The mindset shift from AI as content spam to AI as your second brain

    EP243 Canva, AI, and the Future of Design for Non-Designers with Emily Baillie

    Play Episode Listen Later Sep 4, 2025 38:28


    In this episode of AI Literacy for Entrepreneurs, I sit down with Emily Baillie, founder of Compass Content Marketing and longtime digital marketing strategist turned AI literacy expert. Emily has been helping businesses navigate digital change for over 15 years. When ChatGPT first launched, she quickly saw the impact AI would have and started teaching AI and marketing workshops - which are now her most requested service. We dive into: Canva's AI superpowers: from the one-click background remover to magic resize, magic write, and even language translation. Where to start if you're new to AI in Canva (and which tools to skip). How Canva makes it easier to create high-quality content quickly, even if you're not a designer. Why iterating and refining AI outputs is the secret to avoiding “AI slop”. Creative use cases, from growing email lists with QR codes to recording presentations and sharing with a single link. Canva's free subscription for nonprofits and why more organizations should take advantage of it. Emily reminds us that AI tools are meant to save time, build confidence, and open doors to creativity - not replace the human touch.

    EP242 Prototype > Perfection - Deborah Carraro on Learning AI by Doing

    Play Episode Listen Later Aug 6, 2025 38:11


    This episode dives into the evolving space where entrepreneurship, education, and AI collide. Host Susan Diaz sits down with Deborah Carraro, an educator, AI leader, and founder of ideborah, to unpack how early-stage entrepreneurs can approach AI with creativity, experimentation, and values alignment. Deborah, who also leads AI efforts at Coralus (formerly SheEO), shares her insights from working with founders and students navigating new tech - often for the very first time.

    EP241 Winning Mindset in the Age of AI with Melissa Lloyd

    Play Episode Listen Later Jul 23, 2025 34:35


    In this episode, Melissa Lloyd, founder of Aigility Hub, joins host Susan Diaz to explore the mindset-first approach to adopting AI. We unpack why tools and tactics should follow clarity, confidence, and intentional leadership. This conversation is a must-listen for entrepreneurs and leaders feeling overwhelmed by tech shifts and wondering how to bring human-centred strategy to their AI journey.

    EP240 Why your Custom GPT Won't Work (and the 48-Hour Rescue Mission)

    Play Episode Listen Later Jul 9, 2025 16:35


    Ready to stop cursing at that under-performing GPT (that you lovingly built - only to watch it spit out meh)? In this short episode, host Susan Diaz breaks down five foundations every custom GPT needs before it can truly earn a spot on your team, plus a 48-hour challenge to tune-up (or totally transform) the bot you already have. What's inside

    EP 239 How 3 founders use AI to think 100x

    Play Episode Listen Later Jun 30, 2025 39:56


    In this week's ‘AI Literacy for Entrepreneurs' I tap three founders who turned very human skill-sets - strategy, street-level video, personal style - into AI-fuelled growth engines. What we cover Quiz funnels that work while you sleep Growth strategist Maiko Sakai dissects why most quizzes flop, then shows how she drafts curiosity-packed titles in ChatGPT and lets Interact's new AI builder spin out all the logic branches in minutes. From one hour of phone clips to 40+ assets Dori Adams, founder of Shutterb, explains how she'd turn everyday “content-paparazzi” into content for days. She walks us through the stack that multiplies content raw material into a month of social posts. Confident personal brands Personal-brand stylist Renee Lindo explains why your outfit is the “packaging” of your expertise and how AI is becoming a low-risk playground for trying colour palettes, outfit pairings and mood-board inspo before you buy. I close with three points: Small teams can now run at enterprise speed. Custom GPTs are documented SOPs on autopilot. Your next job is to decide what still needs your brain - and delegate the rest to the bot. Resources mentioned Interact AI Quiz Builder - Maiko's go-to quiz platform ReelTrends - Dori's “what's-popping” audio and format tracker Want deeper implementation? Founders are building these workflows live inside Marketing Power Circle (MPC), my AI-implementation mastermind for founder-led teams. You'll find information here. Rate + review if today's episode sparked an idea - and see you in the next episode!

    EP238 Success, Space and Systems-Rebel Energy (with Bárbara Daroca)

    Play Episode Listen Later Jun 13, 2025 26:36


    ‘AI Literacy for Entrepreneurs' gets a role-reversal this week: host Susan is in the hot-seat, guesting on Bárbara Daroca's Hello, Success! Podcast and the convo was too good not to syndicate here. If you're building a business on your own terms (and wondering how AI fits into that life-by-design plan), save this one for your next walk. Why listen: Susan's origin story - from lifetime marketer, to chocolate-maker, to AI-agency founder The 4am Report pivot, and why podcasting early (and tiny) massively paid off (even inspiring Bárbara to launch her own podcast). How generative AI really changes the writing game (hint: commodity content is toast; strategy wins) Grocery-list automation, Instacart GPTs, and other un-sexy life hacks that buy back brain-space Defining success as 60% white space - and how to protect it while scaling Timestamps:  03:12 - From newborn + nine-to-five to solopreneur experiments (including artisan chocolate)  09:40 - Launching the 4am Report - catching podcasting's "second wave"  15:30 - ChatGPT's light-bulb moment: "This is the next calculator."  19:50 - Will AI replace writers? Only if you let it.  23:45 - Master-prompts: ask the AI to ask you better questions  27:20 - Success = space: designing work and life with 60% white space Links and resources: Bárbara Daroca's podcast Hello, Success! - follow here → https://open.spotify.com/show/71SThCwTzHSFC1F9N5jBKZ Susan's AI agency: https://www.peacefulaimarketing.com Marketing Power Circle - the implementation mastermind for founder-led teams → https://cpdigitalinc.vipmembervault.com/products/courses/view/1157552 If this episode helped you rethink what success could look like, share it with a fellow founder. Drop a rating inside your podcast app. Your five seconds of love keeps these conversations flowing. Stay curious, stay human, and keep 60% of your calendar blissfully blank.

    EP237 AI for Impact - Jesse Clarke on Grant Writing, Equity and Smarter Funding

    Play Episode Listen Later May 28, 2025 30:17


    In this episode of ‘AI Literacy for Entrepreneurs', Susan Diaz chats with Jesse Clarke - founder of JN Clarke Consulting and a strategic advisor to nonprofits and social impact entrepreneurs. With deep roots in both federal government and charitable sectors, Jesse brings a pragmatic, values-first lens to the role AI is beginning to play in nonprofit strategy. What to expect: How AI is quietly transforming grant writing and government funding workflows The ethical red flags nonprofits must consider as they adopt Gen AI Why 30-35% of orgs are already using AI (and more unofficially) How Jesse uses AI in her work - and her personal life The hidden opportunity for small teams to leap ahead before regulation catches up This conversation is packed with real-world use cases, policy-level insight, and warm, witty reminders that critical thinking and values must still lead the way.

    EP236 The Soul of Strategy - Human-Centered Growth in a Tech-Driven World with Liat Horovitz

    Play Episode Listen Later May 14, 2025 35:39


    In this heartfelt and thought-provoking conversation, host Susan Diaz sits down with Liat Horovitz - results coach, founder of Revival Retreats, and host of The Results Club Podcast - to explore what it really means to grow a business (and a life) in the age of AI without losing your humanity. With a background in big tech and marketing, Liat walked away from corporate life to pursue a more connected, values-led path. In this episode, she shares what it takes to make a bold leap - and how we can embrace AI without sacrificing the human essence that makes us impactful leaders.

    EP235 AI 101 for Founder-Led Teams – The Building Blocks

    Play Episode Listen Later May 1, 2025 13:00


    In this foundational solo episode, host Susan Diaz breaks down the three core concepts every founder-led team needs to understand to build a meaningful relationship with AI. Whether you're just dipping your toes into AI or looking to sharpen your implementation strategy, this is your no-fluff starting point. What you'll Learn

    EP234 Websites that Work: Accessibility, SEO, and Essentialism with Steph Sedgwick

    Play Episode Listen Later Apr 2, 2025 45:08


    In this insightful episode of ‘AI Literacy for Entrepreneurs', host Susan Diaz sits down with Steph Sedgwick, founder of Clarity Web Design, to explore how good web design is evolving in the age of AI, accessibility, and changing SEO dynamics. About Steph Sedgwick: Steph specializes in creating websites that genuinely perform - going far beyond aesthetic appeal to ensure they're accessible, user-friendly, and strategically optimized. She highlights how designing for accessibility is not just inclusive but also excellent business strategy, significantly boosting organic traffic. Key Discussion Points: Good Web Design in 2025: Why visual aesthetics alone won't cut it. Good design means functionality, accessibility, and essentialism - focusing intentionally on content that delivers clear, measurable value. Accessibility as a Business Advantage: Steph shares startling statistics - 97% of websites aren't accessibility-friendly. She details how making your website accessible leads to dramatically improved user experience and organic search results (average 450% increase!) Why Pretty Paperweights Don't Work: Websites need to serve practical purposes, not just look appealing. Learn why functionality and accessibility should be your priority. AI and Web Design: Steph demystifies the hype around AI-generated websites. AI can enhance your workflow but can't replace human strategic insights or understanding of customer needs - yet. SEO and AI: Practical advice on using AI to evaluate your web copy for readability and skimmability, ensuring your content appeals to both humans and search engines. Steph's Pro Tips for Entrepreneurs: Avoid "ego-driven" website designs. Your website should appeal to your target audience, not just reflect your personal preferences. Conduct regular checks on basic website functionality (responsive design, working links). 
Use AI wisely - especially for tasks like editing and improving clarity, rather than expecting fully autonomous website creation. Time Stamps for Key Highlights in this Episode: 01:27 - The importance of moving from "pretty paperweights" to functional web design 02:32 - How accessibility boosts organic traffic by 450% 06:29 - Defining “good design” in 2025 11:21 - Why responsive design remains critical 27:10 - AI-generated websites: myth vs. reality 35:26 - Practical SEO tips using AI 39:14 - How Steph personally leverages AI for productivity Connect with Steph Sedgwick on LinkedIn. Website: claritywebdesign.ca  Instagram: @claritywebdesign Upcoming Roundtable Event: Steph hosts regular, insightful roundtables. Her next one is "From Invisible to Unmissable: SEO and Accessibility Intersection", diving deeper into how to effectively blend SEO and accessibility for maximum business impact.

    EP233 Personal Brand and Style in the Age of AI with Renee Lindo

    Play Episode Listen Later Mar 20, 2025 31:05


    In this episode, Susan Diaz sits down with Renee Lindo, personal and brand stylist, to chat about why personal style is crucial for entrepreneurs building strong brands in the age of AI. Renee shares how style acts as a powerful communication tool, setting the stage before you even say a word. Key Takeaways: Your Style as Personal Branding Renee emphasizes how your style is your "packaging" and directly influences first impressions, branding consistency, and professional opportunities. Capsule Wardrobe 101 for Entrepreneurs Practical tips to create a personalized capsule wardrobe, saving entrepreneurs valuable time and mental energy. The Intersection of Style, Energy, and Confidence Insights on how dressing intentionally can shift your mindset, elevate confidence, and open doors to new opportunities. Common Style Challenges for Entrepreneurs Renee discusses overcoming blocks like feeling camera-ready and creating content with ease simply by being intentional about getting dressed each day. Can AI Enhance your Style? Tips on leveraging AI as a starting point for outfit ideas, online shopping recommendations, and even color choices, while still emphasizing the importance of personal touch. Practical Advice: Conduct a style audit: Align your wardrobe with your brand by clarifying your personal style and using tools like Pinterest for inspiration. Invest intentionally: Plan outfits ahead to boost content creation ease and reduce decision fatigue.
Episode Timestamps: 02:45 Why your personal style matters more than ever in the age of AI 07:12 How your wardrobe directly impacts your confidence and opportunities 12:30 Creating a capsule wardrobe to simplify entrepreneurial life 17:05 Overcoming style blocks and feeling camera-ready effortlessly 21:50 Leveraging AI to elevate your personal style without losing your authenticity 28:15 Conducting a practical style audit to align your wardrobe with your brand Guest Spotlight: Renee Lindo specializes in empowering entrepreneurs through style and confidence. Connect with her on LinkedIn and explore her insights on personal branding and styling. Connect with Renee on LinkedIn: Renee Lindo Tune in and discover how to amplify your entrepreneurial success through intentional personal branding and styling.

    EP232 Can AI Help you Show Up for yourself in a 24/7 World? Featuring Paige Percival

    Play Episode Listen Later Mar 6, 2025 37:15


    In today's episode, I sit down with Paige Percival, a health and life coach, to talk about self-care, balance, and how AI may be able to help us step away from screens instead of keeping us tethered to them.

    EP231 How AI is Making High-Quality Storytelling Accessible to Everyone with Dori Adams

    Play Episode Listen Later Feb 20, 2025 33:23


    In this episode of AI Literacy for Entrepreneurs, host Susan Diaz heads into the world of storytelling and content creation with Dori Adams, founder of shutterb. Dori is revolutionizing the way businesses capture content by making high-quality storytelling accessible to everyone through a unique gig platform that functions like the "Uber of content creators".

    EP230 The Future is Now – Navigating AI's Role in Education and Business with Leigh Mitchell

    Play Episode Listen Later Feb 6, 2025 40:10


    EP229 The Master Prompt - The AI Question That Changes Everything

    Play Episode Listen Later Jan 22, 2025 16:35


    EP228 Use ChatGPT to Plan Your Dream Life

    Play Episode Listen Later Dec 11, 2024 9:18


    Have you ever thought about designing your dream life but felt overwhelmed by where to start? In this episode of ‘AI Literacy for Entrepreneurs', we explore how ChatGPT can help you map out a vision for your ideal life - one thoughtful question at a time. Host Susan Diaz walks you through:  ✨ The impressive reasoning capabilities of ChatGPT o1 and how you can have it function as a pocket strategic life coach. ✨ Step-by-step examples of using AI to clarify values, define goals, and take actionable steps. ✨ Why narrowing down prompts and asking smarter questions unlocks deeper insights. ✨ How this AI tool goes beyond surface-level advice to uncover insights you might not have considered on your own. Susan also shares personal stories about discovering new hobbies, aligning values with daily actions, and creating a roadmap for meaningful change.  Whether it's enhancing personal growth, planning career moves, or tackling business challenges, this tutorial gives you an inside look at how AI can empower self-reflection and planning in ways we couldn't imagine before.

    EP227 Growth vs Scale in the Age of AI with Maiko Sakai

    Play Episode Listen Later Nov 19, 2024 46:55


    In this episode of AI Literacy for Entrepreneurs, I sit down with brilliant business consultant Maiko Sakai to talk about the difference between growth and scale for founder-led businesses

    EP226 AI and Creativity: How to Collaborate with AI Without Losing Your Voice

    Play Episode Listen Later Oct 30, 2024 11:47


    In this episode of ‘AI Literacy for Entrepreneurs', host Susan Diaz breaks down the dynamic between AI and creativity - specifically, how to collaborate with AI without compromising your own authentic voice.  Too often, we hear that AI-generated content can sound generic, robotic, or even "too ChatGPT-ish". Susan unpacks the core reasons why this happens and shares strategic steps you can take to ensure your AI-assisted content truly sounds like you. Key Takeaways: AI as a Strategic Partner: Treat AI as your second brain. Instead of assigning it junior-level tasks, elevate your thinking and have AI help you brainstorm at the strategic level. Use it to ask the right questions that guide your creativity. Iterative Collaboration: AI works best when you work iteratively. Instead of expecting perfect results from the first draft, treat it like any other team member - provide feedback, refine, and iterate. Define your Brand Voice Clearly: If AI isn't getting your voice, it's likely because you haven't defined your voice clearly enough. Develop a “brand book” that encompasses your tone, mood, and creative intentions. Use AI to help create this, ensuring that future outputs are aligned with your vision. Why this Matters:  Understanding how to make AI work for you while maintaining your unique style is crucial for entrepreneurs and creators. Whether you're a founder, a marketer, or just AI-curious, this episode provides a practical framework for using AI as a tool for creativity - without letting it overshadow your human touch. Resources Mentioned: ChatGPT, Claude, and Gemini as tools to explore for strategic collaboration. The idea of using AI as your “second brain” for creative expansion. Enjoyed the episode? Please leave a 5-star rating and review - it helps others discover AI Literacy for Entrepreneurs. Subscribe to stay updated on future episodes and share it with your entrepreneurial friends who are exploring AI! 
Further AI Learning Gen AI Myths Busted: Debunking 3 Damaging AI Misconceptions: Want to understand what generative AI really can and can't do? This article clears up some damaging myths to help you leverage AI effectively. 100x Mindset: How AI Empowers Small Businesses to Think Bigger: Discover how AI can help you think bigger and expand faster - moving from scarcity to a mindset of abundance. This piece is essential reading if you're ready to level up your small business with AI. Join Us:  Ready to implement AI at quantum speed? Join the Marketing Power Circle to get the support, frameworks, and community needed to take your AI game to the next level.

    EP225 AI and the Oppenheimer Paradox: Hope, Fear, and the Future of Work with Jesse Adams

    Play Episode Listen Later Oct 16, 2024 34:23


    In this thought-provoking episode of AI Literacy for Entrepreneurs, host Susan Diaz welcomes Jesse Adams, CEO and founder of Ember Experience, to explore the dual nature of AI and its implications for the future of work. The discussion points to a compelling parallel in history with Oppenheimer and the development of nuclear technology, bringing both hope and fear to the fore. Key Topics Covered: The Paradox of AI: Susan and Jesse talk about how AI represents both tremendous opportunities and risks, much like the invention of nuclear technology.  Human vs. AI Decision-Making: Insights into the differences between human-controlled tools and AI's autonomous capabilities. While humans decided how to deploy atomic bombs, AI's potential involves it making decisions on its own - a powerful shift that requires careful stewardship. Cultural and Ethical Implications: There is a critical need for creating a value-driven approach to how AI evolves, ensuring that it serves humanity in a positive way, rather than just driving profit and power. Psychological Safety and AI: Jesse and Susan talk about the impact of AI on workplace psychological safety. As AI tools drive efficiency, there's an inherent tension between maximizing productivity and fostering a safe and supportive work culture. Jesse discusses how organizations can balance these pressures by focusing on intention and values. Hope for the Future: Despite the risks, we remain hopeful about AI's potential to be a force for good. Leaders must actively engage in the development of AI tools with ethical considerations at the forefront - fostering creativity, community, and the well-being of individuals. Highlights: AI's power is akin to that of nuclear technology—capable of immense good or bad. Decision-making shifts: AI can independently generate ideas and make impactful choices. Organizations must re-center on core values to guide AI implementation. 
The dual forces of hope and fear surrounding AI are shaping our approach to technology in the workplace. Guest Bio: Jesse Adams is the CEO and founder of Ember Experience, an organization focused on improving culture within workplaces through leadership development and creating environments of psychological safety. Jesse's unique background spans from working with Olympic athletes to corporate wellness, and now leading teams to think deeply about how to navigate the complex, nuanced relationship between technology and humanity. Connect with host Susan Diaz on LinkedIn Connect with guest Jesse Adams on LinkedIn As you hit play on this episode, please leave us a 5-star review!

    EP224 AI Implementation in a Mastermind with Tara MacIntosh

    Play Episode Listen Later Oct 2, 2024 26:17


    In this episode of AI Literacy for Entrepreneurs, I am excited to introduce you to Tara MacIntosh, our new Director of Membership Growth for the Marketing Power Circle. Tara joins me and we go deep into the work we do and the impacts we achieve inside the Marketing Power Circle (MPC), our AI implementation mastermind designed for founders of small teams. Tara and I discuss: Tara's Journey into AI Tara shares her experience of discovering AI and highlights how being part of the MPC has accelerated her understanding of AI tools like ChatGPT and Perplexity, empowering her to use AI for deeper research, and writing. The Power of Community in MPC Tara speaks about the value of connecting with fellow entrepreneurs in the group who are at different stages of their AI journey. The sense of community, regular meetings, and accountability homework provide an environment for learning, experimenting, and implementing AI tools in real-time. She highlights the unique “early adoption” advantage of MPC members, with hands-on support from other AI-driven business owners. Why You Should Embrace Early Adoption AI isn't just for large enterprises. As small businesses, we have the agility and the power to implement AI faster than most. Tara and I discuss why it's essential to start now and build your AI literacy - gaining the competitive edge that's available to you today. Five Reasons to Join the MPC Early adoption of AI will set you ahead of your competition. The community of like-minded entrepreneurs allows for collective learning. Developing an organized system for AI literacy and strategy implementation. AI will enhance your productivity and cost-efficiency without replacing your team. The MPC is fun! We enjoy genuine conversations and laughs while learning. Key Tools we Discussed ChatGPT and Perplexity are at the core of our AI discussions. 
We also teach about tools like Canva and automation platforms, breaking down how they help small teams integrate AI into daily operations. Our discussions focus on actionable steps, not just theory - ensuring you can implement these tools with your team. Join the Marketing Power Circle If you are a founder or lead a small team and are curious about AI implementation, the Marketing Power Circle could be the place for you. We meet twice a month to collaborate, share ideas, and implement AI strategies in real-time. Want to learn more? Reach out to Tara or me for a prospectus. Resources Mentioned: ChatGPT Perplexity Canva Subscribe to our podcast today and stay updated with more episodes on AI literacy for entrepreneurs! Connect with us: Connect with host, Susan Diaz, on LinkedIn. Connect with Tara MacIntosh on LinkedIn. Website: peacefulaimarketing.com

    EP222 How I used AI to rebrand my AI podcast (A Behind-the-Scenes Look + AI Powered Rebranding Framework)

    Play Episode Listen Later Sep 4, 2024 25:49


    EP221 Gen AI Myths Busted - Debunking 3 (Damaging) AI Misconceptions

    Play Episode Listen Later Aug 21, 2024 13:55


    Welcome back to the 4am Report AI Literacy Podcast! As we transition our podcast name to better reflect our focus on AI literacy for entrepreneurs, I'm diving deep into debunking some of the most damaging AI myths that often surface in the entrepreneurial community. What I Cover in This Episode:

    EP220 Kickstarting your AI Journey: First Steps for AI-Newbie Entrepreneurs

    Play Episode Listen Later Aug 7, 2024 18:43


    Welcome to Episode 220 of The 4am Report, a practical guide for entrepreneurs at the beginning of their AI journey. In this episode, I'll give you the foundational steps necessary for assessing and integrating AI into your business operations. I'll demystify the process and highlight actionable strategies that will have you moving in days and weeks, not months and years! What You'll Learn: ⚡ Understanding AI and Its Potential Impacts on Small Businesses: We'll start with a basic introduction to AI, including its common forms such as machine learning and natural language processing, and discuss how it can enhance business operations, reduce costs, and improve decision-making.

    EP219 3 Lessons from a Year of Teaching AI to Small Businesses

    Play Episode Listen Later Jul 10, 2024 11:56


    EP218 Beyond Blogging - 6 Unique AI Strategies for Small Businesses

    Play Episode Listen Later Jun 26, 2024 9:08


    EP217 The Mindset of Thinking 100x with AI for Small Businesses

    Play Episode Listen Later Jun 12, 2024 12:30


    Welcome to The 4am Report, where we talk AI literacy for founder-led brands with me, your host, Susan Diaz. Today I will be exploring what it means to think exponentially in the age of artificial intelligence and how this mindset can drastically shift your business landscape. On this podcast, we no longer do intros and outros on audio, so here's a short note on who I am. I am Susan Diaz, a lifetime marketer with over 20 years of experience spanning startups to government sectors. I am the founder of a content marketing firm, and I am focused on how artificial intelligence can turbocharge small business operations. In this episode you should expect to hear about:
