The Last Touch: Why AI Will Never Be an Artist

I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.

Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.

His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.

That final touch is everything.

Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.

AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.

This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.

We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.

That's not a being we can relate to. And that's the point.

Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.

The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, experience. AI starts from zero. Every time.

Andrea put it simply: AI contextualization today is close to zero.

I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.

Stay curious.
Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
On this episode 191 of the Disruption Now podcast: What happens when an algorithm knows more about your health than your doctor ever will? When AI can process threats faster than any human operator? When China, Russia, Iran, and North Korea are probing our systems 24/7?

Dr. Richard Harknett has spent 30+ years answering these questions at the highest levels. As the first Scholar-in-Residence at US Cyber Command and NSA, a key architect of the US Cybersecurity Strategy 2023, and Fulbright Professor in Cyber Studies at Oxford, he's one of the few people who's seen how cyber threats actually unfold—and what we're doing (or not doing) about them.

In this conversation, Richard breaks down:
If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical. We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue, it can damage trust, disrupt livelihoods, and undermine confidence in an institution. A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation. We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control. As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability. If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?
Bitcoin and blockchain are reshaping money, trust, and the future of AI. In Episode 189 of the Disruption Now Podcast, Rob Richardson sits down with Andrew Burchwell, Executive Director of the Ohio Blockchain Council, to break down why blockchain matters more than ever—and why understanding it now is critical for anyone navigating the next decade of technology.

This conversation dives into Bitcoin's core value as a trust engine, why blockchain is essential for the AI era, how decentralized systems empower individuals and communities, and the massive economic transformation coming to states like Ohio. Andrew shares the personal story behind his leap from a secure energy-tech career into full-time blockchain advocacy, why his faith guided the transition, and how local policy can unlock global innovation.

We unpack the realities behind Bitcoin's volatility, long-term value, inflation, the S-curve of exponential tech adoption, and why blockchain should be seen as a utility—not a gamble. You'll learn how agentic AI will depend on blockchain rails for payments, how on-chain verification combats deepfakes, and why crypto is a once-in-a-generation opportunity for regions left behind by globalization.

Andrew also shares why privacy matters, what people misunderstand about crypto, where regulation should (and shouldn't) go, and why the next five years will be the fastest technological pivot in human history.

If you've been curious, skeptical, or overwhelmed by crypto, this is the conversation that makes it all click.

What You'll Learn:
Why Bitcoin is "better, faster, more secure money"
How blockchain + AI together solve trust, speed, and verification gaps
Why inflation quietly erodes wealth and how Bitcoin counters it
The real difference between gambling memes vs. real digital assets
How agentic AI will need blockchain for payments and micro-transactions
Why Ohio is emerging as a national leader in blockchain policy
How decentralized tech can help rebuild forgotten communities
Where privacy, transparency, and security intersect in Web3

Chapters:
00:00 Welcome & Andrew's story
03:15 Why he left a secure career for blockchain
09:45 The meaning of Bitcoin as sound money
14:20 Inflation, trust, and why blockchain matters
19:30 Blockchain + AI: the critical connection
26:40 Privacy, regulation & misuse: what's real
33:10 Meme coins vs. real utility
38:20 The next 5 years: "The Pivot"
42:15 What's ahead for Ohio & crypto

Quick Q&A:
Q: Why does Bitcoin matter today?
A: It creates trust, speed, and financial sovereignty in a system where inflation and centralization reduce purchasing power.
Q: How do AI and blockchain work together?
A: AI creates speed; blockchain creates trust and verification. Together they enable secure agentic automation.
Q: What do people misunderstand most about crypto?
A: They confuse speculation with utility. Blockchain's long-term value is in its function, not its hype.

Connect with Andrew Burchwell:
Website / Organization: https://ohioblockchain.org/
X (Twitter): https://x.com/AndrewBurchwell
LinkedIn: https://www.linkedin.com/in/andrew-burchwell-a7284994/
Ohio Blockchain Council (organization page): https://ohioblockchain.org/

Be a guest on the podcast or Subscribe to our newsletter
All our links - https://linktr.ee/disruptionnow
#Blockchain #Bitcoin #Web3 #aiagents

Music credit: calm before storm - moñoñaband
Dr. Kelly Cohen is a Professor of Aerospace Engineering at the University of Cincinnati and a leading authority in explainable, certifiable AI systems. With more than 31 years of experience in artificial intelligence, his research focuses on fuzzy logic, safety-critical systems, and responsible AI deployment in aerospace and autonomous environments. His lab's work has received international recognition, with students earning top global research awards and building real-world AI products used in industry.

In this episode 190 of the Disruption Now Podcast,
John Gamba is Entrepreneur in Residence at Catalyst @ Penn GSE, where he mentors education entrepreneurs and leads the Milken-Penn GSE Education Business Plan Competition. Over 17 years, the competition has awarded $2M to ventures that have gone on to raise more than $200M in follow-on funding, with a strong focus on equity and research-to-practice impact.
AI is moving fast in global health and development, but good intentions alone don't guarantee good outcomes. In this episode of High-Impact Growth, we sit down with Genevieve Smith, Founding Director of the Responsible AI Initiative at UC Berkeley's AI Research Lab, to unpack what it really means to build and deploy AI responsibly in contexts that matter most.

Drawing on a decade of experience in international development and cutting-edge research on AI bias, Genevieve explains why labeling a project "AI for good" isn't enough. She introduces five critical lenses – fairness, privacy, security, transparency, and accountability – that program managers and product leaders must apply to avoid reinforcing existing inequalities or creating new risks.

The conversation explores real-world examples, from AI-driven credit assessments that unintentionally disadvantage women, to the challenges of deploying generative AI in low-resource and multilingual settings. Genevieve also shares emerging alternatives, like data cooperatives, that give communities governance over how their data is used, shifting power toward trust, agency, and long-term impact.

This episode offers practical insights for anyone navigating the hype, pressure, and promise of AI in development, and looking to get it right.

Responsible AI Initiative – UC Berkeley AI Research Lab – A multidisciplinary initiative advancing research and practice around responsible, trustworthy AI.
Mitigating Bias in Artificial Intelligence – A playbook for business leaders who build & use AI to unlock value responsibly & equitably.
UC Berkeley AI Research (BAIR) – A leading AI research lab focused on advancing the science and real-world impact of artificial intelligence.
Fairness, Accountability, and Transparency (FAccT) Conference – A major interdisciplinary conference on ethical and responsible AI systems.
UN Women – An organization referenced in Genevieve's background, focused on gender equality and women's empowerment globally.
International Center for Research on Women (ICRW) – A research organization mentioned in the episode, specializing in gender, equity, and inclusive development.

Sign up to our newsletter, and stay informed of Dimagi's work.
We are on social media – follow us for the latest from Dimagi: LinkedIn, Twitter, Facebook, Youtube.
If you enjoy this show, please leave us a 5-Star Review and share your favorite episodes with friends.

Hosts: Jonathan Jackson and Amie Vaccaro
Gretchen Stewart knows she doesn't know it all, always asks why, challenges oversimplified AI stories, champions multi-disciplinary teams and doubles down on data. Gretchen and Kimberly discuss conflating GenAI with AI, data as the underpinning for all things AI, workflow engineering, AI as a team sport, organizational and data silos, programming as a valued skill, agentic AI and workforce reductions, the complexity inherent in an interconnected world, data volume vs. quality, backsliding on governance, not knowing it all and diversity as a force multiplier.

Gretchen Stewart is a Principal Engineer at Intel. She serves as the Chief Data Scientist for the public sector and is a member of the enterprise HPC and AI architecture team. A self-professed human-to-geek translator, Gretchen was recently nominated as a Top 100 Data and AI Leader by OnConferences.

A transcript of this episode is here.
As AI tools become essential for efficiency, they also introduce new vulnerabilities for founders and companies. Tina Lampe, a cybersecurity expert and Responsible AI advocate, joins host Natalie Benamou to discuss risks and data exposure. They reveal why AI note-takers should never be used for any board meeting and the legal risks of data discoverability. Tina shares how to balance these risks against all the potential AI can create in the world.

Listen in to future-proof your business while staying secure.

Keep shining your light bright. The world needs you.

About Our Guest Tina Lampe
Tina Lampe is a Technology and Digital Risk Executive with extensive expertise as a speaker, board director and AI and Cybersecurity leader. Tina is an AI Luminary in the NEXT2LEAD AI Council powered by HerCsuite®.
Tina Lampe on LinkedIn

HerCsuite® is a leadership network where women build what's next. Our members land board roles, grow businesses, lead the AI conversation, and live their best portfolio career with our programs. Join us at HerCsuite.com. Connect with host Natalie Benamou on LinkedIn.
This week, we're sharing a fan-favorite replay, an episode that ranks in the top ten of our all-time most listened-to episodes. In this week's replay episode we unlock the secrets of building adaptive, personalized robots with Dr. Randi Williams, a leading figure in AI and robotics, as she shares her journey from a math-obsessed child inspired by Jimmy Neutron to a pioneering expert aiming to make technology fairer and more inclusive. Dr. Williams takes us behind the scenes of her work at the Algorithmic Justice League (AJL), discussing the triumphs and challenges of creating robots that can truly engage with humans. Through the lens of projects like PopBots, you'll discover how even preschoolers can grasp foundational AI concepts and start innovating from an early age. Hear the inspiring story of a young learner who programmed a multilingual robot, and explore the engaging tools and platforms like MIT's Playground that make learning AI fun and accessible. Finally, we tackle the crucial issue of algorithmic bias and the importance of diverse data sets in AI training. This episode underscores how creativity and a passion for learning can drive meaningful advancements in AI and robotics.

Resources for parents and kids:
Preschool-Oriented Programming (POP) Platform
PopBots
Playground (RAISE MIT)
Day of AI
Turing Test Game
Unmasking AI
Coded Bias
Personal Robots Group
Scratch
National Coding Week

Support the show

Hey parents and teachers, if you want to stay on top of the AI news shaping your kids' world, subscribe to our weekly AI for Kids Substack: https://aiforkidsweekly.substack.com/
Help us become the #1 podcast for AI for Kids and best AI podcast for kids, parents, teachers, and families.
Buy our debut book "AI… Meets… AI"
Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Books on Amazon or Free AI Worksheets
Listen, rate, and subscribe! Apple Podcasts, Amazon Music, Spotify, YouTube
Rick DeLisi is an author and Lead Research Analyst at Glia, an online leader in Digital Customer Service. Rick shares his expertise on integrating AI into work processes to achieve effortless interaction. Enjoy the listen. Along the way we discuss AI for All (5:00), AI and new product launch (10:00), AI Pre-Op (15:30), the Chainsaw Analogy (17:30), Communicating with Bots (19:00), flying the plane (21:00), dealing with the skeptics (21:45), Glia, Data Security, and Responsible AI (25:15), and the AI 24/7 Focus Group (31:15).

Empower your teams and drive revenue @ Glia, AI Built for Community Impact.

This podcast is partnered with LukeLeaders1248, a nonprofit that provides scholarships for the children of military Veterans. Send a donation, large or small, through PayPal @LukeLeaders1248; Venmo @LukeLeaders1248; or our website @ www.lukeleaders1248.com.

Music intro and outro from the creative brilliance of Kenny Kilgore. Lowriders and Beautiful Rainy Day.
CES 2026 Just Showed Us the Future. It's More Practical Than You Think.

CES has always been part crystal ball, part carnival. But something shifted this year.

I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.

Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.

The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.

Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.

Then there's energy—the quiet enabler hiding behind the AI headlines.

Korea Hydro & Nuclear Power demonstrated small modular reactors. Next-generation nuclear that could cleanly power cities with minimal waste. A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.

Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.

Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a 2030s defining technology, but we're in the back half of the 2020s now. The runway is shorter than we thought.

His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.

Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.

CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle.

That's a future worth watching.

This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage.
Subscribe to stay informed as technology and humanity continue to intersect.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
OpenAI launches ChatGPT Health, Amazon takes Alexa+ to the web, and Google reinvents your inbox—6 game-changing AI updates for operators in under 5 minutes.
AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.Isom, former director of AI and technology at the U.S. Department of Energy (DOE) and now founder and CEO of IsAdvice & Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.
This and all episodes at: https://aiandyou.net/. What's going on with getting AI education into America's classrooms? We're finding out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT's Responsible AI for Social Empowerment and Education institute. And they are mounting a campaign called Responsible AI for America's Youth, which is now running across all 50 states and will run an event called America's Youth AI Festival in July 2026 in Boston. Jeff holds master's degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises. In our conclusion, we talk about how Jeff sees the AI education of teachers evolving, responsible use of AI by students, differentiated learning, making AI in classrooms safe for teachers and students, the impact of AI on educational inequalities, the future of educational reform, and how you can get involved in AI in schools. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Introduction

In this Deep Dive episode, we dive into PwC's latest AI Business Predictions — a roadmap offering insight into how companies can harness artificial intelligence not just for efficiency, but as a strategic lever to reshape operations, workforce, and long-term growth. We explore why "AI adoption" is now about more than technology: it's about vision, leadership, and rethinking what work and human potential look like in a rapidly shifting landscape.

Key Insights from PwC

AI success is as much about vision as about adoption. According to PwC, what separates companies that succeed with AI from those that merely dabble is leadership clarity and strategic alignment. Firms that view AI as central to their business model — rather than as an add-on — are more likely to reap measurable gains.

AI agents can meaningfully expand capacity — even double workforce impact. One bold prediction: with AI agents and automation, a smaller human team can produce work at a scale that might resemble having a much larger workforce — without proportionally increasing staff size. For private firms especially, this means you can "leapfrog" traditional growth limitations.

From pilots to scale: real ROI is emerging — but requires discipline. While many organizations experimented with AI in 2023–2024, PwC argues that 2025 and 2026 are about turning experiments into engines of growth. The companies that succeed are those that pick strategic high-impact areas, double down, and avoid spreading efforts too thin.

Workforce composition will shift — rise of the "AI-generalist". As AI agents take over more routine, data-heavy or repetitive tasks, human roles will trend toward design, oversight, strategy, and creative judgment. The "AI-generalist" — someone who can bridge human judgment, organizational culture, and AI tools — will become increasingly valuable.

Responsible AI, governance, and sustainability are non-negotiables. PwC insists that success with AI isn't just about technology rollout; it's also about embedding ethical governance, sustainability, and data integrity. Organizations that treat AI as a core piece of long-term strategy — not a flashy add-on — will be the ones that unlock lasting value.

What This Means for Leaders, Culture & Burnout (Especially for Humans, Not Just AI)

Opportunity to reimagine roles — more meaning, less drudgery. As AI takes over repetitive, transactional work, human roles can shift toward creativity, strategy, mentorship, emotional intelligence, and leadership. That aligns with your mission around workplace culture and "Burnout-Proof" leadership: this could reduce burnout if implemented thoughtfully.

Culture becomes the strategic differentiator. As more companies adopt similar AI tools, organizational vision, values, psychological safety, and human connection may become the real competitive edge. Leaders who "get culture right" will be ahead — not because of tech, but because of people.

Upskilling, transparency and trust are essential. With AI in the mix, employees need clarity, training, and trust. Mismanaged adoption could lead to fear, resistance, or misalignment. Leaders must shepherd not just technology, but human transition.

AI-driven efficiency must be balanced with empathy and human-centered leadership. The automation and "workforce multiplier" potential is seductive — but if leaders lose sight of human needs, purpose, and wellbeing, there's a risk of burnout, disengagement, or erosion of cultural integrity.
For small & private companies: a chance to leapfrog giants — but only with clarity and discipline. Smaller firms often lack the resources of large enterprises, but according to PwC, those constraints may shrink when AI is used strategically. For mission-driven companies (like yours), this creates an opportunity to scale impact — provided leadership stays grounded in purpose and values.

Why This Topic Matters for the Breakfast Leadership Network & Our Audience

Given your work in leadership development, burnout prevention, workplace culture, and coaching, PwC's predictions offer a crucial lens. Ignoring AI is no longer an option for organizations. The question isn't "Will we use AI?" but "How will we use AI — and who do we become in the process?" For founders, people-leaders, HR strategists: this is a call to be intentional. To lead with vision, grounded in human values. To design workplaces that thrive in the AI era — not suffer.

Questions for Reflection

What parts of your organization's workflow could be transformed by AI — and what human strengths should those tools free up rather than replace?
How might embracing AI shift your organizational culture and the expectations for leaders?
What ethical, psychological, or human-impact considerations must you address before "going all in" on AI?
As a leader, how will you ensure the "AI-generalists" — employees blending tech fluency with empathy, creativity, and human judgment — are cultivated and supported?
How do you prevent burnout and disconnection while dramatically increasing capacity and output via AI?

Learn more at https://BreakfastLeadership.com/blog
Research: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology.

That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly.

The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI. But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep pace. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely.
We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow. Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns.

Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Fresh out of the studio, Jay Parikh, Executive Vice President of Core AI at Microsoft, joins us to explore how Microsoft is fundamentally transforming software development by placing AI at the center of every stage of the development lifecycle. He shares his career journey from scaling the internet at Akamai Technologies during the dot-com boom, to leading infrastructure at Facebook through the mobile revolution, and now driving Microsoft's AI-first transformation where the definition of "developer" itself is rapidly evolving. Jay explains that Microsoft's Core AI team is moving beyond traditional tiered architecture to a new paradigm where large language models can think, reason, plan, and interact with tools—shifting developer time from typing code to specification and verification while enabling parallel project execution through specialized AI agents. He highlights how organizations like Singapore Airlines cut project timelines from 11 weeks to 5 weeks using GitHub Copilot and challenges both individuals and enterprises to raise their level of ambition: moving from being amazed by AI to being frustrated it can't do more, while building cultural experiments that unlock this exponential technology. Closing the conversation, Jay shares what great looks like for Microsoft's Core AI to enable AI transformation for every organization around the world.

"There's this set of people that are using these AI-powered tools and they're like, 'Wow, that's amazing!' Stunned as to how incredible the response is from AI. Then there's another set of people that have these experiences when they work with AI—they're frustrated with it because they're just like, 'Why can't it do this for me yet?' And they're pushing the envelope of what this LLM or what this system can do, what this tool can do. If you are in the former group, then you need to raise your level of ambition. You need to delegate harder things to it. And if you're in the second group, then you need to learn more about how these things work." - Jay Parikh

Episode Highlights:
[00:00] Quote of the day by Jay Parikh
[01:00] Introducing Microsoft's Core AI strategy and transformation
[02:34] Career philosophy: pursuing hard problems and discomfort
[04:08] Core AI team's mission: empowering every developer
[06:00] Reinventing the entire software development lifecycle
[09:17] Parallel projects and agents transforming development workflows
[12:12] AI first strategy across Microsoft's product ecosystem
[15:37] GitHub platform beyond code: context and orchestration
[20:33] Building AI platforms: lessons from scale experience
[21:00] Two mindsets: amazement versus frustration with AI
[22:15] Raising ambition and pushing AI tool boundaries
[25:00] Enterprise adoption challenges: tools and cultural transformation
[28:00] Learning loops: shrinking circles to accelerate growth
[31:00] Alignment without tight coupling across global teams
[36:56] Concrete trends: use tools, understand model development
[40:27] Responsible AI and security built from start
[43:30] Asia innovation: two thirds of developers here
[46:19] Raising ambition to unlock human creativity collaboration
[48:35] Goal: AI transformation for every global organization

Profile: Jay Parikh, Executive Vice President, Core AI, Microsoft
LinkedIn: https://www.linkedin.com/in/jayparikh/

Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format.
How do you scale generative AI in healthcare without sacrificing trust, transparency, or governance? In this podcast hosted by Mphasis Vice President of Products Chenny Solaiyappan, Cognizant Senior Director Elliot Papadakis shares how Cognizant is building and operationalizing an AI gateway that sits at the center of its responsible AI strategy. Elliot discusses embedding generative AI into payer workflows, designing human-in-the-loop guardrails, and using AI orchestration to unlock productivity gains across complex healthcare systems while keeping accountability and patient impact front and center.
In this special, we reflect on insights from 2025 and what leaders should prepare for in 2026. From the realities behind last year's AI hype, to the need for emphasizing execution, data quality, culture, and leadership. We also touch on experiences gained from producing over 400 podcast episodes. We also look ahead to 2026 as the focus shifts to responsible and agentic AI moving from theory into practice, and to AI integration across business functions. We even lay out some of our commitments for the year ahead—tying AI to measurable business outcomes, investing in learning frameworks, and taking more calculated risks—while reinforcing a core belief: technology adoption without mindset and culture change will not stick.

Key Highlights
AI in 2025 was all the rage and yet execution and data quality were true differentiators.
Action beats indecision, and sustainability beats speed.
Growth comes from focus, not endless expansion.
Culture and mindset must lead technology adoption.
2026 will emphasize Responsible AI, agentic workflows, and outcome-driven benchmarks.
Continued commitment to provide resources, including our AI simulation, courses and books.
Be intentional, stay informed, and be bold as you build for the future.

Become a sponsor, Patreon member and pick up our latest book entitled, Identically Opposite: Find Your Voice and SPEAK.

Timestamps:
Leverage AI 6:02
Identically Opposite SPEAK framework 9:12
Responsible AI 23:22
Dr. Chris Marshall analyzes AI from all angles including market dynamics, geopolitical concerns, workforce impacts, and what staying the course with agentic AI requires.

Chris and Kimberly discuss his journey from theoretical physics to analytic philosophy, AI as an economic and geopolitical concern, the rise of sovereign AI, scale economies, market bubbles and expectation gaps, the AI value horizon, why agentic AI is harder than GenAI, calibrating risk and justifying trust, expertise and the workforce, not overlooking Rodney Dangerfield, foundational elements for success, betting on AIOps, and acting in teams.

Dr. Chris L Marshall is a Vice President at IDC Asia/Pacific with responsibility for industry insights, data, analytics and AI. A former partner and executive at companies such as IBM, KPMG, Oracle, FIS, and UBS, Chris's mission is to translate innovative technologies into industry insights and business value for the digital economy.

Related Resources
Data and AI Impact Report: The Trust Imperative (IDC Research)

A transcript of this episode is here.
This and all episodes at: https://aiandyou.net/. What's going on with getting AI education into America's classrooms? We're going to find out from Jeff Riley, former Commissioner of Education for the state of Massachusetts and founder of a new organization – Day of AI, started by MIT's Responsible AI for Social Empowerment and Education institute. And they are mounting a campaign called Responsible AI for America's Youth, which is now running across all 50 states and will run an event called America's Youth AI Festival in July 2026 in Boston. Jeff holds master's degrees from Johns Hopkins and Harvard. He was a Boston school principal and, as commissioner, successfully navigated Massachusetts schools through Covid and other crises. We talk about what the campaign is doing and how teachers are responding to it, risks of AI and social media to kids, what to do about cheating and AI detectors, and much more! All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Rakesh Doddamane is a seasoned technology leader with over 25 years of experience specializing in Generative AI, UX Design, and Digital Transformation. Currently serving as Leader of Gen AI & UX at Philips, he has established the Generative AI Centre of Excellence and spearheaded AI governance frameworks across global organizations.

On The Menu:
Value-driven approach to scaling generative AI solutions
Strategic AI investments across Philips' business functions
Cloud infrastructure governance and cost optimization frameworks
Gen AI Ninja Certification: three-tier upskilling program
Customer insights leveraging AI for product innovation
Future of autonomous agents and orchestration governance
Navigating EU AI Act compliance in regulated industries
A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025 alongside some glad and not so glad tidings (ok, predictions) for AI in 2026.

In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.

Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.

A transcript of this episode is here.
In Season 6, Episode 6, I am delighted to welcome Graham Walker, MD to the podcast. Dr Walker is an emergency physician by clinical background who practices in San Francisco, having completed his medical school training at Stanford University School of Medicine. He also majored in social policy as an undergraduate and has had a lifelong interest in software development and digital technology. Today he combines these interests and skills in his clinical and non-clinical roles.

Graham is the co-director of Advanced Development at The Permanente Medical Group (TPMG), which delivers care for Kaiser Permanente's 4.6 million members in Northern California. At TPMG, Graham works on practical AI development, commercialization, partnerships, and technology transformation under the Chief Innovation Officer. As a clinical informaticist, he also leads emergency and urgent care strategy for KP's electronic medical record. With an appetite and aptitude for tech innovation and entrepreneurship, Dr Walker created and founded MDCalc as a resident, a free online clinical decision support tool that many clinicians, including myself, have been using for years; indeed, MDCalc turned 20 this month.

More recently, Graham co-founded the platform and community Offcall, with a mission to bring joy back to medicine and facilitate contract transparency and financial literacy for US medical practitioners. These two online resources are dedicated to helping physicians globally. In his free time Graham writes about the intersection of AI, technology, and medicine, and created The Physicians' Charter for Responsible AI, a practical guide to implementing safe, accurate, and fair AI in healthcare settings.

In this conversation I have an opportunity to explore Dr Walker's perspective at the intersection of AI, technology and medicine and revisit some emerging themes from season 6 in relation to leading change and innovation in complex, siloed healthcare systems. We discuss the challenges of communicating change efforts in this context, and Graham shares his practical pearls for elevating clinical voices and translating information across disparate stakeholder groups. Given his unique leadership role in this space within his own organisation, I was keen to explore his perspectives on creating opportunities for intrapreneurship. Dr Walker is a globally respected clinical voice on AI in healthcare, and we discuss the perils and promise of AI, the Physicians' Charter for responsible use of AI in medicine, and his Offcall survey and pending report on physicians' views on the technology's application and implementation in medicine (now published and linked below).

Finally, it is Graham's mission to bring joy and humanity back to medicine, and to use technology responsibly to augment the clinician experience and skillset, enabling safer and higher quality care, that is the major draw for me. Thank you, Dr Walker, for your work, which has global relevance and reach.

Links / References:
https://drgrahamwalker.com
https://www.offcall.com

The Mind Full Medic Podcast is proudly sponsored by the MBA NSW-ACT. Find out more about the charitable organisation supporting doctors and their families and/or donate today at www.mbansw.org.au

Disclaimer: The content in this podcast is not intended to constitute or be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of your doctor or other qualified health care professional.
Moreover, views expressed here are our own and do not necessarily reflect those of our employers or other official organisations.
Recorded live at Calgary's SocialWest 2025, this episode of the Marketing News Canada podcast features guest host Laila Hobbs, Co-Founder of Social Launch Labs, in conversation with Hiba Amin, Co-Founder of Creative Little Planet.Hiba shares a candid look at navigating imposter syndrome throughout her marketing career, from being the sole marketer during a startup downturn to finding confidence through community, conversation, and lived experience. She also discusses the evolution of content creation, why authenticity resonates more than polished perfection, and how marketers can build meaningful connections with their audiences.The conversation dives into the responsible use of AI in content marketing, including where it can support creative work, where it falls short, and why strong foundational ideas must come before scale. Packed with thoughtful insights and real-world perspective, this episode is a must-listen for marketers navigating growth, creativity, and confidence in a rapidly changing industry.
What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy? In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines.

Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead.

We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution. The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed.

There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward.

This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future?

Useful Links
Connect With Diya Wynn
AWS Responsible AI

Tech Talks Daily is sponsored by Denodo
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence. Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter
In this episode, host Sandy Vance sits down with Hadas Bitran, Partner General Manager of Health AI at Microsoft Health & Life Sciences, for a deep dive into the rapidly evolving world of healthcare agents. Together, they explore how agentic technologies are being used across clinical settings, where they're creating value, and why tailoring these tools to the specific needs of users and audiences is essential for safety and effectiveness. Well-designed healthcare agents can reinforce responsible AI practices (like transparency, accountability, and patient safety) while also helping organizations evaluate emerging solutions with greater clarity and confidence.

In this episode, they talk about:
How agents are used in healthcare and use cases
The risks if a healthcare agent is not tailored to the needs of users and audiences
How healthcare agents support responsible AI practices, such as safety, transparency, and accountability, in clinical settings
How healthcare organizations should evaluate healthcare agent solutions
Bridging the gaps in access, equity, and health literacy; empowering underserved populations and democratizing expertise
The impact of AI on medical professionals and healthcare staff, and how they should prepare for the change

A Little About Hadas:
Hadas Bitran is Partner General Manager, Health AI, at Microsoft Health & Life Sciences. Hadas and her multi-disciplinary R&D organization build AI technologies for health & life sciences, focusing on Generative AI-based services, Agentic AI, and healthcare-adapted safeguards. They shipped multiple products and cloud services for the healthcare industry, which were adopted by thousands of customers worldwide.

In addition to her work at Microsoft, Hadas previously served as a Board Member at SNOMED International, a not-for-profit organization that drives clinical terminology worldwide.

Before Microsoft, Hadas held senior leadership positions managing R&D and Product groups in tech corporations and in start-up companies. Hadas has a B.Sc. in Computer Science from Tel Aviv University and an MBA from the Kellogg School of Management, Northwestern University in Chicago.
Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

Masheika and Kimberly discuss her path from law to AI; AI as embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; the narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; and staying positive and finding the thing you can do.

Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

Related Resources
Taps Run Dry Initiative (Website)
Data Center Advocacy Toolkit (Website)
Eat Your Frog (Substack)
AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

A transcript of this episode is here.
In this episode of My EdTech Life, Jeff Riley breaks down the mission behind Day of AI and the work of MIT RAISE to help schools, districts, families, and students understand artificial intelligence safely, ethically, and with purpose.

Jeff brings 32 years of experience as a teacher, counselor, principal, superintendent, and former Massachusetts Commissioner of Education. His transition to MIT RAISE reveals why AI literacy, student safety, and clear policy matter more than ever.

Timestamps
00:00 Welcome & Sponsor Shoutouts
01:45 Jeff Riley's Background in Education
04:00 Why MIT RAISE and Day of AI
06:00 The Challenge: AI Policy, Safety & Equity
08:30 How AI Can Transform Teaching & Learning
10:30 Differentiation, Accessibility & Student Support
12:30 Helping Teachers Feel Confident Using AI
15:00 Leading AI Adoption at the District Level
18:00 What AI Literacy Should Mean for Students
20:00 Teaching Healthy Skepticism & Bias Awareness
23:00 Student Voice in AI Policy
26:00 Parent Awareness & Common Sense Media Toolkit
29:00 Responsible AI for America's Youth
31:00 America's Youth AI Festival & Student Leadership
34:30 National Vision for AI in Education
37:00 Closing Thoughts + 3 Signature Questions
41:00 Stay Techie

Resources Mentioned
Day of AI Curriculum: https://dayofai.org
MIT RAISE: https://raise.mit.edu

Sponsors
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.

Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials - Yash's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
AI has exploded across the legal industry, but for many lawyers it still feels overwhelming, risky, or simply "not for them." Today's guest has made it his mission to change that. Joining us is Robert Eder, an intellectual property lawyer, legaltech educator, and one of Europe's leading voices on AI prompting for lawyers. Robert designs legal automation solutions and teaches lawyers around the world how to use AI safely, effectively, and creatively. He has trained hundreds of lawyers across Europe and is one of the clearest voices on how to use AI responsibly, safely, and with real legal precision.

Here are a few standout takeaways:
Lawyers aren't bad at prompting; they're undersold. Their analytical mindset actually gives them an advantage.
Most people still treat AI like Google. Adding structure through XML tags, roles, and answer-levelling changes everything.
The first AI skill every lawyer should learn isn't drafting; it's controlling output. Structure before substance.
Hallucinations aren't a deal-breaker. Responsible AI frameworks give you quality control, not guesswork.
You don't need 70% of the AI tools on the market. With the right prompting, one model plus the right workflow beats shiny software every time.
Legal prompting is not the same as general prompting. Law has edge cases, nuance, and risk; your prompts must reflect that.

Two general points to reflect on:
Lawyers don't need to become engineers. They need to become better communicators with machines.
If you don't understand prompting, you'll always think AI is unreliable, when in reality it's only as clear as the instructions you give it.

It's practical, hands-on, and genuinely career-shifting. AI isn't replacing lawyers. Lawyers who understand AI are replacing the ones who don't.
This is AI x Multilateralism, a mini-series on The Next Page, where experts help us unpack the many ideas and issues at the nexus of AI and international cooperation. AI has the dual potential to transform our world for the better, while also deepening serious inequalities. In this episode we speak to Dr. Rachel Adams, Founder and CEO of the Global Center on AI Governance and author of The New Empire of AI: The Future of Global Inequality. She shares why Africa-led and Majority World-led research and policy are essential for equitable AI governance that's grounded in the realities of people everywhere.

She reflects on:
why the Center's flagship Global Index on Responsible AI and its African Observatory on Responsible AI are bringing much-needed research and evidence to ensure AI governance is fair and inclusive
her thoughts on the UN General Assembly's 2025 resolutions to establish an International Scientific Panel on AI and a Global Dialogue on AI Governance, urging true inclusion of diverse voices, indigenous perspectives, and public input
why we need to treat AI infrastructure as an AI Global Commons
and the power of local-language AI and public literacy in ensuring we harness the most transformative aspects of AI for our world.

Resources mentioned:
The Global Center on AI Governance
The Center's Global Index on Responsible AI
The Center's African Observatory on Responsible AI, and its research series Africa and the Big Debates on AI

Production:
Guest: Dr. Rachel Adams
Host, production and editing: Natalie Alexander Julien
Recorded & produced at the Commons, United Nations Library & Archives Geneva

Podcast Music credits:
Sequence: https://uppbeat.io/track/img/sequence
Music from Uppbeat (free for Creators!): https://uppbeat.io/t/img/sequence
License code: 6ZFT9GJWASPTQZL0

#AI #Multilateralism #UN #Africa #AIGovernance
In this inaugural episode of AI at Work, Greg Demers, an employment partner, is joined by Meg Bisk, head of the employment practice, and John Milani, an employment associate, to explore how employers across industries can harness artificial intelligence responsibly while mitigating legal and operational risk.

They discuss the most common pitfalls of employee AI use, including inadvertent disclosure of confidential information, model "hallucinations," and source opacity, issues that undermine auditability and accuracy. Turning to HR functions, they examine emerging regulatory frameworks and compliance expectations, such as bias auditing requirements for automated employment decision tools and accessibility obligations, alongside practical steps for vetting vendors, embedding human oversight, and enforcing contractual protections. Listeners will come away with pragmatic strategies to update policies, document decisions, and foster a transparent culture of accountability, positioning their organizations to use AI in ways that are smarter, not riskier.

Stay tuned for future episodes, where we will explore the use of AI in human resources, privacy implications and cybersecurity issues, and AI in executive compensation and employee benefits, among other topics.
Today's guest is Lauren Tulloch, Vice President and Managing Director at CCC (Copyright Clearance Center). CCC provides collective copyright licensing services for corporate and academic users of copyrighted materials, and, as one can imagine, the advent of AI has exposed a large number of businesses to copyright risks they've never considered before. Lauren joins us to discuss where copyright exposure arises in financial services, from the growth of AI development to more commonplace employee use. With well over a decade at the company, she dives into the urgent need for proactive copyright strategies, ensuring firms avoid litigation, regulatory scrutiny, and reputational damage, all while maximizing the value of AI.

This episode is sponsored by CCC. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
The rapid evolution of generative AI has led to increased adoption but also raises significant compliance challenges. ZS Principal Michael Shaw joins the podcast to discuss the importance of responsible and ethical AI adoption in the biopharma industry, particularly as it relates to compliance, risk management and improving patient outcomes.

Highlights include:
The importance of developing a comprehensive framework for responsible AI, focusing on principles like fairness, safety, transparency and accountability
Why effective AI governance requires cross-functional collaboration and continuous trade-off assessment
How leveraging AI to enhance workflows can drive efficiency and effectiveness but must be implemented thoughtfully with the right controls in place
Most people run from government bureaucracy. Pavan Parikh ran toward it—and decided to rewrite the system from the inside.

He believes public service should move like a startup: fast, transparent, and built around people, not process. But when tradition, power, and red tape pushed back, he didn't fold—he went to the statehouse to fight for reform. So how do you disrupt a 200-year-old system that was never built for speed or equity?

In Episode 188 of the Disruption Now Podcast, Pavan breaks down how he's modernizing Hamilton County's court systems, digitizing paper-heavy workflows, and using AI and automation to reduce barriers to justice rather than create new ones. Whether you work in government, policy, law, or tech, you'll see how startup tools and mindsets can create real impact, not just buzzwords.

Pavan Parikh is the elected Hamilton County Clerk of Courts in Ohio, focused on increasing access to justice, improving customer service, and modernizing one of the county's most important institutions. In this episode, we talk about what happens when a startup mindset collides with decades-old court processes, why culture eats technology for breakfast, and how AI can help everyday people navigate civil cases, evictions, and protection orders more effectively.

You'll also hear Pavan's personal journey—from planning a career in medicine, to 9/11 shifting him toward law and public service, to ultimately leading one of the most prominent offices in Hamilton County. We get into fear of AI, job-loss anxiety within government, and how he's reframing AI as a teammate that frees staff for higher-value work rather than replacing them.

If you've ever looked at the justice system and thought "there has to be a better way," this deep dive into startup thinking for government will show you what that better way can look like—and what it takes to build it from the inside.

What you'll learn in this episode:
How startup thinking for government can reduce friction and errors in court processes
Why Pavan is obsessed with access to justice and the end-user experience for everyday residents
How Hamilton County is digitizing records, streamlining evictions, and modernizing civil protection order filing
Where AI and automation can safely support court staff and help-center attorneys
Why change management is the real challenge—not the technology
How local government can be a faster "lab" for responsible AI than federal agencies
What it really looks like to design systems around people, not paperwork

Chapters:
00:00 Why the government needs startup thinking
03:15 Pavan's path from medicine to law and 9/11's impact
10:45 Modernizing Hamilton County courts and killing paper workflows
22:10 AI, access to justice, and reimagining the Help Center
35:30 Careers, values, and becoming a disruptor in public service

Quick Q&A (for searchers):
Q: What does "startup thinking for government" mean in this episode?
A: Treating residents as end users, iterating on systems, and using tech and AI to automate low-value tasks so staff can focus on service and justice outcomes.
Q: How is Hamilton County using technology to improve access to justice?
A: By digitizing records, expanding the Help Center, improving online access to cases, limiting or removing outdated eviction records, and building easier online processes for civil protection orders.
Q: Will AI replace court jobs?
A: Pavan argues AI should handle repetitive questions and data lookups so humans can spend more time problem-solving, doing quality control, and helping people with complex issues.
Connect with Pavan Parikh (verified/public handles):
Website: PavanParikh.com
X (Twitter): @KeepPavanClerk
Facebook: Pavan Parikh for Clerk of Courts / @KeepPavanClerk
Instagram: @KeepPavanClerk
Office channel: Hamilton County Clerk of Courts – @HamCoClerk on YouTube

Disruption Now resources:
Subscribe to YouTube for more conversations at the intersection of AI, policy, government, and impact.
Join the newsletter for weekly trends in AI and emerging tech for people who want to change systems, not just complain about them. bit.ly/newsletterDN

#StartupThinking #GovTech #AccessToJustice

Disruption Now: Disrupting the status quo, making emerging tech human-centric and accessible to all.
Website: https://disruptionnow.com/podcast
Apply to get on the Podcast: https://form.typeform.com/to/Ir6Agmzr?typeform-source=disruptionnow.com

Music credit:
Embrace - Evgeny Bardyuzha
Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

Related Resources
The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)

A transcript of this episode is here.
Security and privacy leaders are under pressure to sign off on AI, manage data risk, and answer regulators' questions while the rules are still taking shape and the data keeps moving. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Trevor Hughes, President & CEO of the IAPP, to unpack how decades of privacy practice can anchor AI governance, why the shift from consent to data stewardship changes the game, and what it really means to "know your AI" by knowing your data. Together, they break down how CISOs, privacy leaders, and risk teams can work from a shared playbook to assess AI risk, apply practical controls to data, and get ahead of emerging regulation without stalling progress.

In this episode, you'll learn:
Why privacy teams already have methods that can be adapted to oversee AI systems
Why boards and executives want simple, defensible stories about risk from AI use
How the strongest programs integrate privacy, security, and ethics into a single strategy

Things to listen for:
(00:00) Meet Trevor Hughes
(01:39) The IAPP's mission and global privacy community
(03:45) What AI governance means for security leaders
(05:56) Responsible AI and real-world risk tradeoffs
(08:47) Aligning privacy, security, and AI programs
(15:20) Early lessons from emerging AI regulations
(18:57) Know your AI by knowing your data
(22:13) Rethinking consent and data stewardship
(28:05) Vendor responsibility for AI and data risk
(31:26) Closing thoughts and how to find the IAPP
In the final Coffee Nº5 episode of the year, Lara Schmoisman breaks down the marketing ecosystem of 2026—an environment defined by AI clarity, human-led storytelling, micro-experts, privacy-first data practices, and integrated teams. This episode explains what it truly takes to operate, grow, and connect in a world where everything is interconnected.

We'll talk about:
The 2026 ecosystem: why everything in marketing is now interconnected—and why working in silos will break your growth.
Generative Engine Optimization (GEO): clarity as the new currency for AI.
AI agents as shoppers: what it means when software researches, compares, and negotiates for consumers.
Responsible AI: why governance, rules, and human oversight will define how brands use technology.
Content in 2026: real storytelling, crafted value, SEO-backed captions, and the end of shallow posting.
The rise of micro-experts: why niche credibility beats mass follower counts.
Privacy & first-party data: what owning your customer information really means.

Subscribe to Lara's newsletter.

Also, follow our host Lara Schmoisman on social media:
Instagram: @laraschmoisman
Facebook: @LaraSchmoisman
In this episode of the HR Leaders Podcast, we sit down with Michiel van Duin, Chief People Technology, Data and Insights Officer at Novartis, to discuss how the company is building a human-centered AI ecosystem that connects people, data, and technology.

Michiel explains how Novartis brings together HR, IT, and corporate strategy to align AI innovation with the company's long-term workforce and business goals. He shares how the team built an AI governance framework and a dedicated AI and innovation function inside HR, ensuring responsible use of AI while maintaining trust and transparency.

From defining when AI should step in and when a "human-in-the-loop" is essential, to upskilling employees and creating the first "Ask Novartis" AI assistant, Michiel shows how Novartis is making AI practical, ethical, and human.