POPULARITY
We've all been told to just be yourself. But psychologist and author Tomas Chamorro-Premuzic—Chief Innovation Officer at ManpowerGroup and professor at UCL and Columbia—says that's the worst advice you can take. In his new book, Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead), he reveals why our obsession with authenticity is holding us back—and what actually leads to success.

What You'll Learn in This Episode
- Why "just being yourself" is often the worst professional advice you can receive
- The coffee drinker model for balancing your raw personality with social expectations
- How to use emotional intelligence as a strategic filter for better leadership
- Why high-performing leaders often act more like method actors than authentic versions of themselves
- How to navigate the tension between human authenticity and AI-generated content

Episode Chapters
(00:00) Intro
(01:21) The Myth of Objective Authenticity
(02:50) Leaders as Method Actors
(04:01) Comparing Personal and Restaurant Brands
(05:53) The Rigidity of "Telling It Like It Is"
(07:06) Understanding Authenticity Traps
(10:11) Emotional Intelligence vs. Authenticity
(13:22) The Coffee Drinker Model Explained
(15:35) Adaptability in the Workplace
(18:14) Cultural Differences in Authenticity
(22:27) Authenticity in the Age of AI
(26:43) Why Benetton Made Him Smile

About Tomas Chamorro-Premuzic
Tomas Chamorro-Premuzic is the Chief Innovation Officer at ManpowerGroup, a professor of business psychology at University College London and at Columbia University, a cofounder of Deeper Signals, and an associate at Harvard's Entrepreneurial Finance Lab. He is the author of several books, including Why Do So Many Incompetent Men Become Leaders? (and How to Fix It), upon which his popular TEDx talk was based, and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.

What Brand Has Made Tomas Smile Recently?
Tomas recently found inspiration in the history of the Italian fashion brand Benetton. He was fascinated by the brand's founder, Luciano Benetton, who pioneered fast fashion and used provocative, moral-driven advertising campaigns to address diversity and inclusion long before they were mainstream corporate pillars.

Resources & Links
- Connect with Tomas on LinkedIn.
- Check out his book, Don't Be Yourself, the Manpower website, and his own Dr. Tomas website.
- Watch or listen on Apple Podcasts, Spotify, YouTube, Amazon/Audible, TuneIn, and iHeart.
- Rate and review on Apple Podcasts and Spotify to help others find the show.
- Share this episode with a friend or colleague.
- Sign up for my free Story Strategies newsletter for branding and storytelling tips.

On Brand is a part of the Marketing Podcast Network.

Listen & Support the Show
Until next week, I'll see you on the Internet!

Learn more about your ad choices. Visit megaphone.fm/adchoices
Is AI in pathology actually improving diagnosis — or just adding complexity?

In DigiPath Digest #37, we reviewed four recent publications covering AI-based biomarker quantification in glioblastoma, real-world digital workflow integration in prostate cancer, multimodal AI combining histopathology and genomics, and patient perspectives on AI in cancer diagnostics. This episode connects technical performance with something equally important: trust.

Episode Highlights
[00:02] Community & updates: Digital Pathology 101 free PDF, upcoming patient-focused book, and global attendance.
[04:07] AI-based image analysis in glioblastoma: AI showed strong consistency with pathologists when quantifying Ki-67, P53, and PHH3. Significant biological correlations (Ki-67 ↔ PHH3, PHH3 ↔ P53) were detected by AI but not by manual assessment. Takeaway: computational quantification improves precision.
[09:28] Real-world digital workflow + AI in prostate cancer (France): AI-pathologist concordance was 93.2% for high-probability cancer detection and 99.0% for low-probability slides; Gleason concordance was 76.6%, with a 10% failure rate due to pre-analytical artifacts. Takeaway: infrastructure and sample quality still matter.
[15:58] Multimodal AI (MARBIX framework): Combines whole slide images and immunogenomic data in a shared latent space using binary “monograms.” Performance in lung cancer: 85–89%, versus 69–76% for unimodal models. Takeaway: integrated data improves case retrieval and similarity reasoning.
[22:13] AI-powered paper summary subscription introduced: Structured summaries for busy professionals who want more than abstracts.
[26:17] Patient roundtable on AI in pathology (Belgium): Patients expect better accuracy, faster turnaround, and stronger collaboration. Trust is high when algorithms use diverse datasets and pathologists retain final responsibility. Clinical validity mattered more than full algorithm transparency.
Privacy concerns focused more on insurer misuse than on cloud transfer.

Key Takeaways
- AI improves biomarker precision in glioblastoma.
- Digital pathology implementation works, but pre-analytics can limit AI performance.
- Multimodal AI represents the next meaningful step in precision diagnostics.
- Patients are not afraid of AI; they want validation, oversight, and governance.
- Human–AI collaboration remains central.

If you're working in digital pathology, computational pathology, or precision oncology, this episode connects evidence, implementation, and patient perspective.

Get the "Digital Pathology 101" FREE e-book and join us!
Artificial intelligence is joining organisational teams, but how should leaders respond? In this executive leadership podcast, Niels Brabandt interviews Josh Epperson, senior executive and author of Bacon, Bots and Teamwork, on how leaders can successfully orchestrate human-AI collaboration.

You will learn:
- Why fear prevents successful AI adoption
- Why human identity remains essential in AI-driven organisations
- How leaders can implement AI through practical experimentation
- Why small organisations can benefit significantly from AI
- How leaders can demonstrate value and drive organisational adoption

This episode is essential listening for executives, founders, and decision-makers responsible for AI strategy and organisational leadership.

Host: Niels Brabandt / NB@NB-Networks.com
Connect with Niels Brabandt: https://www.linkedin.com/in/nielsbrabandt/
Niels Brabandt's Leadership Letter: https://expert.nb-networks.com/
Niels Brabandt's Website: https://www.nb-networks.biz/
Realities Remixed, formerly known as Cloud Realities, launches a new season exploring the intersection of people, culture, technology, and society. Hosts Dave Chapman, Esmee van de Giessen, and Rob Kernahan unpack 2026's defining trends, from AI and sovereignty to adaptability and automation, offering fresh insight, candid reflections, and forward-looking conversations shaping the year ahead.

TLDR
00:20 – Introduction of Realities Remixed
02:30 – Why the show evolved
04:50 – Dig in with the team: Predictions for 2026
06:40 – Macro trends
13:00 – Sovereignty
17:40 – Agentic AI
22:17 – Human–AI interaction
26:06 – Cloud trends
30:42 – AI scaling, domain-specific models
35:03 – Adoption lag
39:34 – Physical AI
43:47 – Quantum computing
48:21 – Hardware acceleration
50:30 – Cybersecurity
52:38 – Season outlook

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Realities Remixed' is an original podcast from Capgemini
“What's Buggin' You” segment for Thursday 2-19-26
I've been delaying this episode for a long time because the topic is genuinely difficult and, for many of us, scary. AI threatens not just our livelihood, but our sense of self-worth as creators.

In this episode, I don't offer false guarantees about job security. Instead, I frame the problem through the lens of microeconomics and rational incentives to help you understand how to remain employable. We discuss why you must separate your ego from your current skill set and how to position yourself not as a competitor to AI, but as a force multiplier.

• The Hard Truth: I explain why the "abstinence" approach—hoping the industry rejects AI or that it turns out to be a bubble—is a high-risk gamble that is unlikely to succeed.
• Ego vs. Employability: We discuss the difficult mental shift required to disconnect your self-worth from the act of writing code manually, allowing you to adopt new tools without feeling like you are losing your identity.
• The Microeconomics of Your Job: Understand the cold reality that a rational market only pays you if you generate more value than you cost; if AI can do the same task with less risk or cost, the market will choose AI.
• The Non-Zero-Sum Game: Learn why the economy isn't a fixed pie. The goal isn't just to survive, but to recognize that the combination of Human + AI can generate more total value than either can alone.
• Multiplicative Value: I challenge you to stop thinking about linear skill acquisition and start thinking like a manager: how can you use AI to multiply your output and become indispensable?
• Accepting Atrophy: We confront the reality that your core coding skills may degrade over time as you rely on AI, and why accepting this trade-off might be necessary for your career survival.
What if the best way to lead product is to build it yourself first?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Chase Schwalbach, SVP of Product and Technology at Millie, to unpack a radically different approach to product leadership. Despite his title, Chase spent months as an IC, rolling up his sleeves to build healthcare infrastructure, teach himself AI eval systems, and ship a sophisticated patient chatbot, all before bringing his team in. He explains why shielding the team from early-stage messiness, moving at speed, and feeling the pain yourself leads to better products.

They explore how Chase built a team of AI agents (a supervisor plus specialized sub-agents) from scratch, why treating prompts like deterministic code requires extreme precision, and how he taught himself evals through pure iteration. Plus: the converging worlds of PM and engineering, why technical PMs and product-minded engineers are becoming the same role, why handoffs kill velocity in an AI-native world, and what “context engineering” actually means when your codebase needs to work for both humans and AI agents.

If you're a product leader wondering whether to get more hands-on, an engineer considering the jump to PM (or vice versa), or someone building AI systems in regulated industries like healthcare, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube.

New to the pod? Subscribe below to get the next episode in your inbox.
Welcome to another thought-provoking episode of The Brand Called You. In this episode, Ashutosh Garg speaks with Geoff Gibbins, Founder of Human Machines, a human-AI transformation company focused on helping organizations thrive in the age of artificial intelligence.

Geoff shares practical, real-world insights into how AI is reshaping leadership, work, and decision-making. He explains why many leaders still view AI as a future challenge, what effective human-AI collaboration truly looks like, and why most enterprise AI initiatives fail to move beyond the pilot stage.

This conversation dives deep into concepts such as liquid organizations, learning flywheels, and the growing importance of human judgment in an AI-driven world. Geoff also highlights why people-led transformations consistently outperform technology-led ones and how leaders must learn, unlearn, and relearn to stay relevant.

Whether you're a business leader, entrepreneur, or technology enthusiast, this episode will help you understand how to harness AI deliberately—without losing sight of what makes us human.
Get our AI news cheat sheet: 20+ prompts for the latest models and tools https://clickhubspot.com/eog

Episode 96: How terrified should you really be about a social network with no humans allowed? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) unpack the viral sensation “Maltbook”—the Reddit for AI agents only—and separate fact from hysteria around bots gaining “sentience.” The crew debates how Maltbook really works, why people are freaking out (spoiler: it's mostly humans behind the curtain), plus the wild security issues that have already emerged, from exposed API keys to clever crypto scams. Other topics covered include the rise of “Rent a Human” (AI hiring people to do its bidding!), self-replicating bots with no off-switch, and just how fast these new platforms are racing ahead of regulation. Finally, the group debates mega investments in OpenAI, the future of AGI, and who will define what our AI future actually looks like.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:
(00:00) Simulated Experience vs. Reality
(04:05) AI Agent Posting on Maltbook
(06:23) Crypto Scams on Maltbook
(11:15) Agent Risks in IoT Devices
(13:52) Why Have Bot Followers?
(18:09) OpenAI Retires GPT-4 Versions
(21:57) Anthropic vs. OpenAI Super Bowl Ads
(24:56) OpenAI Ads Spark Mixed Reactions
(27:09) AI Competition Shapes Humanity's Future
(32:21) Satellite Clusters and Collision Challenges
(33:38) X, SpaceX, Tesla: Mergers & Changes
(38:33) Pathway to AGI Through Modalities
(39:51) Cautious Race to AGI

Mentions:
Maltbook: https://maltbook.com/
RentaHuman: https://rentahuman.ai/
Starlink: https://starlink.com/
Claude: https://claude.ai/

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow

Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/

The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
You make your own decisions – right? AI is already shaping everyday choices and purchases – most often in ways we barely notice.

In this episode, David and Celeste are joined by Professor Billy Sung to explore how AI influences everyday consumer decisions, what drives trust, and how humans can stay in the loop as AI becomes more embedded.

[01:07] What “AI” actually means (beyond ChatGPT)
[03:27] How AI is already shaping consumer decisions through ads, search and recommendation systems
[04:13] What happens when AI search starts serving ads
[08:51] The three drivers of trust in AI
[11:54] Disclosure is a double-edged sword
[16:15] Why people bond with AI influencers: anthropomorphism and parasocial relationships
[31:17] The likely future: co-created decisions and “shared agency”

Learn more
- The Professor Insight Podcast
- You make decisions freely? Neuromarketing says think again
- How much can we trust AI? Podcast insights

Connect with our guest
Billy Sung, Professor, School of Management and Marketing
Professor Billy Sung is a researcher and professor at Curtin University, specialising in neuromarketing, consumer psychology and human–AI interaction. He leads Curtin's Consumer Research Lab, bringing together behavioural science and emerging technologies to inform industry and policy decision-making.
Curtin staff page

Join Curtin University
This podcast is brought to you by Curtin University.
Curtin is a global university known for its commitment to making positive change happen through high-impact research, strong industry partnerships and practical teaching.
Work with us
Study a research degree
Start postgraduate education

If you liked this episode, why not explore our Master of Artificial Intelligence?

Got any questions or suggestions for future topics? Email thefutureof@curtin.edu.au

Social media
X
Facebook
Instagram
YouTube
LinkedIn

Transcript
Read the transcript

Behind the scenes
Hosts: Celeste Fourie and David Karsten
Content creator and recordist: Caitlin Crowley
Producer: Emilia Jolakoska
Executive Producers: Anita Shore and Natasha Weeks

First Nations Acknowledgement
Curtin University acknowledges Aboriginal and Torres Strait Islander people, the First Peoples of this place we call Australia, and the First Nations peoples connected with our global campuses. We are committed to working in partnership with Custodians and Owners to strengthen and embed First Nations' voices and perspectives in our decision-making, now and into the future.

Curtin University supports academic freedom of speech. The views expressed in The Future Of podcast may not reflect those of Curtin University.
What if the very essence of humanity is on the brink of transformation? As AI continues to evolve, our understanding of consciousness, creativity, and identity is being reshaped in ways that challenge our deepest beliefs. Join us on a journey through the philosophical labyrinth where we tackle the implications of AI on our human experience. Can machines truly capture the intricacies of what it means to live and feel, or are we irreversibly altering our own nature? Tune in as we unravel the paradox of being human in the age of silicon souls.
In this power-packed episode of Digital Marketing Gyaan, we explore how Human + AI collaboration is transforming the future of marketing and why traditional setups are struggling to keep up. From outdated workflows to slow execution, this conversation uncovers what's holding marketing teams back and how AI can change the game.

Our special guest shares deep insights on why leaders must move beyond “using” AI and start collaborating with it to reinvent business strategies, workflows, and team structures. Learn how marketers, founders, and entrepreneurs can leverage AI for faster research, smarter content creation, and continuous optimization. We also discuss why reinvention and lifelong learning are the most critical skills in today's fast-changing digital world—and how focusing on uniquely human strengths like creativity, judgment, and leadership can future-proof your career.

✔️ Why Human + AI teams outperform traditional marketing models
✔️ How to use AI as a creative and strategic partner
✔️ Practical ways to boost marketing performance with AI
✔️ The importance of adaptability and continuous learning
✔️ How to stay relevant in the age of automation

If you're a marketer, founder, business leader, or digital enthusiast looking to build a future-ready marketing strategy, this episode is a must-listen.
A single email can cost millions of dollars. Not because of what it says, but because it didn't reach the right people at the right time. Most companies treat content as marketing fluff until it fails spectacularly. Then suddenly everyone realizes it's the invisible infrastructure holding together every digital experience.

Join hosts Chuck Moxley and Nick Paladino as they sit down with Rafael Carranza, who's spent his career proving that content isn't just words on a page. Starting at a wire service during the dot-com boom, when thousands of websites suddenly needed live content, Rafael moved to Microsoft, where he helped open their content platform to publishers. He then went to Amazon, building decision-making systems for thousands of sellers navigating complex rules, and now to PitchBook, where data trust drives financial decisions. We explore why trust is the foundation of all content operations, why Microsoft pivoted from being a media company to becoming a platform, and when content stops being marketing and becomes integral to the product itself.
Rafael argues that frictionless isn't about improving processes or deploying better technology; it's about how deeply you understand the customer on the other side.

Key Actionable Takeaways:
- Build content governance foundations before implementing AI: Clean your content libraries, audit outdated information, establish clear tagging systems, and align terminology across departments; LLMs can't generate accurate responses from messy, ungoverned data.
- Treat content as product infrastructure, not just marketing: Critical information about rules, procedures, and product usage directly impacts customer success and costs real money when missing or wrong at decision-making moments.
- Prioritize quality gates over speed when stakes are high: Create intentional friction through approval processes and pushback mechanisms to maintain quality standards; moving fast without accuracy can trigger legal issues, government involvement, and million-dollar failures.

Want more tips and strategies about creating frictionless digital experiences? Subscribe to our newsletter!
https://www.thefrictionlessexperience.com/frictionless/

Download the Black Friday/Cyber Monday eBook: http://bluetriangle.com/ebook

Rafael Carranza's LinkedIn: https://linkedin.com/in/rafaelcarranza
Nick Paladino's LinkedIn: https://linkedin.com/in/npaladino
Chuck Moxley's LinkedIn: https://www.linkedin.com/in/chuckmoxley/

Chapters:
(00:00) Introduction
(02:43) Journalism origins
(03:15) Wire service dot-com boom
(04:30) Microsoft partnership
(05:30) Learning user trust
(07:15) Trust across organizations
(08:35) Microsoft media pivot
(09:45) Platform over content
(10:30) Content as product
(11:15) Amazon seller information
(12:30) Operationalizing at scale
(13:15) Governance structures
(14:30) AI hallucination risks
(15:15) Content accuracy guardrails
(17:15) Windows to Linux journey
(18:15) Business adoption limits
(20:00) Human-AI collaboration
(21:30) Innovation vs trust balance
(22:00) B2B vs B2C content
(23:30) Right content right time
(24:30) When content fails
(25:30) Million-dollar mistakes
(26:45) Intentional friction benefits
(27:30) Quality over speed
(28:45) Biggest misconception
(29:30) Conclusion
Spending more time fixing your AI outputs than you're saving? You're not alone. The trap? You're in operator mode, falling for the industry status quo like upskilling and human-in-the-loop. The real winners in the AI race? Companies that have changed the human-AI relationship. How? Join us for Volume 4 of our Start Here Series as we uncover what you need to know.

Human-AI Collaboration: Best practices for working alongside AI -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Human-AI Collaboration Best Practices 2026
- Shift from Operator to Orchestrator Roles
- Human-in-the-Loop Limitations Explained
- Expert-Driven AI Review Loops vs. Generic Oversight
- Orchestrating AI Agents for Business Productivity
- Building Reusable AI Context and Skills
- Elevating AI Champions on the Team
- Human Strengths vs. AI Strengths in Workflows
- Avoiding Augmentation Debt and Workflow Pitfalls
- Mindset Shifts for Effective AI Management

Timestamps:
00:00 "Everyday AI: Start Here"
03:23 "AI Shift: Operator to Orchestrator"
06:35 "Unlearn to Harness AI"
11:15 "AI Surpassing Human Collaboration"
15:11 "Expert-Driven AI Process Loops"
18:10 "Expert Collaboration Boosts AI ROI"
23:59 "Outsmarting AI Through Expertise"
26:30 "Navigating AI Success Strategies"
31:19 "Embrace AI, Elevate Your Team"
32:18 "Embrace AI, Elevate Humanity"

Keywords: Human-AI collaboration, AI best practices, working alongside AI, human-AI relationship, AI orchestration, AI orchestrator, shift from operator to orchestrator, agentic workflows, AI agents, digital agents, expert-driven loops, expert oversight, senior partners with AI, context engineering, AI processes, context vaults, AI skills files, company data, chain of thought review, large language models, AI-powered workflows, AI expertise, AI in business, AI productivity, AI risk management, human in the loop, upskilling, reskilling, unlearning, AI mindset shift, augmented intelligence, multi-agent systems, AI automation, organization AI strategy, context quality, AI champion, domain experts, AI team integration, competitive advantage with AI, process redesign for AI, AI-powered decision making, accountability in AI, empathy in AI, ambiguous decision-making, novel judgment.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this special bonus episode, we're bringing you one of the most impactful presentations from DSU Fall 2025: Dan Debnam on the Human-AI Partnership Era. Founder and CEO of Inovara, Dan Debnam explains why AI is no longer just a technology shift, but a human one. He outlines the three currencies that will define the future of leadership and growth: trust, empathy, and connection. Companies that protect them, he argues, will shape what comes next.
What does it take to turn AI from a quick fix into a true business growth engine? In this episode, you'll learn how teams move past the hype, reimagine workflows, and make human-AI collaboration drive strategy, innovation, and trust in fast-moving organizations!

And don't forget! You can crush your marketing strategy with just a few minutes a week by signing up for the StrategyCast Newsletter. You'll receive weekly bursts of marketing tips, clips, resources, and a whole lot more. Visit https://strategycast.com/ for more details.

==Let's Break It Down==
04:30 "AI as Teammate, Not Tool"
06:38 "AI: Amplifier of Human Intent"
12:04 "AI Teams Transforming Human Workflows"
16:09 "Empowering Trailblazers Through Leadership"
17:03 "Learning Through Doing"
21:41 "Understanding AI to Build Trust"
23:44 "Reimagine Workflows, Don't Automate Failures"
30:32 "AI Agents and Human Goals"
31:48 "AI's Impact on Search Trends"
34:49 "Authenticity Over Algorithms"
39:35 "AI Requires Human-Centric Adoption"

==Where You Can Find Us==
Website: https://strategycast.com/
Instagram: https://www.instagram.com/strategy_cast/
Facebook: https://www.facebook.com/strategycast

==Leave a Review==
Hey there, StrategyCast fans!

If you've found our tips and tricks on marketing strategies helpful in growing your business, we'd be thrilled if you could take a moment to leave us a review on Apple Podcasts. Your feedback not only supports us but also helps others discover how they can elevate their business game!
Green Farewells owner Alexis McCurdy explains how her startup uses AI and water cremation to modernize the funeral industry while prioritizing empathy.
What if machines could truly see and understand how we move? In this episode, I sit down with Sherry Shang, CEO and co-founder of Neural Lab, a company reimagining how we interact with technology through visual intelligence AI and gesture-based interfaces. Sherry's journey from Intel technologist to startup founder began with a pivotal moment during the pandemic. What started as a side project in her living room became Neural Lab—a platform that turns basic webcams into powerful tools for gesture recognition, with no specialized hardware required.

Now, Neural Lab is unlocking new ways to deliver care, boost performance, and support human potential. From sterile surgery rooms to personalized rehab and coaching, touchless interaction is creating fresh possibilities for how we live and work with AI.

Key Takeaways
- Computer vision is gaining eyes: Sherry frames visual intelligence as the “missing sense” in AI—complementing language models with sight.
- Entrepreneurship is about timing: Sherry waited until her kids were older to build Neural Lab, choosing to innovate on her own terms.
- Gesture recognition is real—and ready: Neural Lab's technology translates hand motions into universal commands with no need for specialized hardware.
- Human-centered design is essential: From recognizing intentional gestures to modeling real-world physicality, their design is inspired by how humans naturally interact.
- Healthcare leads the way: Use cases like sterile surgical environments are proving to be strong early markets for gesture control.

Additional Insights
- Visual intelligence is the missing sense in AI: Sherry describes computer vision as adding "eyes" to AI, enabling machines to interpret physical space just as large language models allow them to process language.
- Entrepreneurship is about timing: Sherry chose to start Neural Lab once her children were older, aligning her professional ambitions with personal priorities.
- Gesture recognition is real—and ready: Their product works with any basic camera and translates 15 customizable gestures into commands for existing applications—no new hardware required.
- Designing for human nuance matters: Neural Lab focuses on distinguishing intentional from unintentional gestures using cues like eye gaze and body motion—mimicking how humans communicate.
- Healthcare is an urgent use case: Environments like surgery rooms benefit immediately from touchless interaction, helping maintain sterility and reduce unnecessary patient radiation.
- The interface is evolving beyond the mouse: Sherry sees gesture-based interaction as a more natural, immersive input method—moving us beyond traditional tools like keyboards and mice.
- Customer feedback drives innovation: From live demos to direct use-case discovery, Neural Lab adapts based on what real users need and how they react in context.
- AI can coach, not just compute: Sherry envisions AI-enabled coaching in sports, physical therapy, and even surgery—delivering expert guidance in real time, at scale.

Episode Highlights
00:00 – Episode Recap: Sherry Shang shares how her journey from Intel technologist to founder of Neural Lab began with a desire to create immersive, meaningful technology—and a pivotal moment during the pandemic when gesture-based interaction suddenly became essential.
02:14 – Guest Introduction: Sherry Shang
Barry...
AI is moving beyond experimentation and into the backbone of the enterprise.

In this interview, I sit down with Pascal Brier, Chief Innovation Officer at Capgemini, to unpack TechnoVision 2026 and the five technology trends that will reach an inflection point next year. We discuss how AI is reshaping software development, cloud architectures, and enterprise operations, and what this shift means for business leaders who want measurable impact rather than hype. #CapgeminiPartner #Sponsored
In this episode of The Impostor Syndrome Files, we talk about why authenticity is overrated and what to do instead. My guest this week is Dr. Tomas Chamorro-Premuzic, psychologist, professor, Chief Science Officer at Russell Reynolds Associates and author of the new book Don't Be Yourself. Tomas argues that it's not raw authenticity that makes you a good leader. Great leaders care deeply about what others think of them. They leverage their emotional intelligence and engage in strategic impression management, which leads them to come across as more authentic and trustworthy to others. Tomas believes that instead of bringing our authentic selves to work, we should focus on being our best selves.We also explore concepts from Tomas' book Why Do So Many Incompetent Men Become Leaders (And How to Fix It), including a look at how we overvalue confidence and undervalue competence. We examine what DEI got wrong, how gender bias holds women back, and how AI can help us create more meritocratic systems. About My GuestTomas Chamorro-Premuzic is the Science Officer at Russell Reynolds Associates, a professor of business psychology at University College London and at Columbia University, a cofounder of Deeper Signals, and an associate at Harvard's Entrepreneurial Finance Lab. He is the author of several books, including Why Do So Many Incompetent Men Become Leaders? 
(and How to Fix It), upon which his popular TEDx talk was based, and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.
~
Connect with Tomas:
Website: https://drtomas.com/
Book: https://www.amazon.com/Dont-Be-Yourself-Authenticity-Overrated/dp/1647829836 (or if you have a preferred bookseller - Bookshop, Barnes & Noble)
~
Connect with Kim and The Impostor Syndrome Files:
Join the free Impostor Syndrome Challenge: https://www.kimmeninger.com/challenge
Learn more about the Leading Humans discussion group: https://www.kimmeninger.com/leadinghumansgroup
Join the Slack channel to learn from, connect with and support other professionals: https://forms.gle/Ts4Vg4Nx4HDnTVUC6
Join the Facebook group: https://www.facebook.com/groups/leadinghumans
Schedule time to speak with Kim Meninger directly about your questions/challenges: https://bookme.name/ExecCareer/strategy-session
Connect on LinkedIn: https://www.linkedin.com/in/kimmeninger/
Website: https://kimmeninger.com
Deepika Chopra is the Founder and CEO of AlphaU AI - helping board members and investors strengthen decision confidence in complex, high-stakes environments such as Human–AI collaboration. She is the author of Move First, Align Fast (Wiley 2025).
SentinelOne announced a series of new designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure, to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS. SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-542
ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning
Gmail's CC assistant, now in its test phase, crafts emotionally intelligent replies, detecting gratitude or frustration precisely. Human-AI symbiosis, perfected.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Did you know 92% of IP professionals plan to try AI, yet 79% cite accuracy as a top barrier? Generative AI is reshaping the IP world, but are today's tools truly delivering? In this new episode of the Globally Speaking podcast, we dive into the findings of RWS's “Ahead of the Game” survey, unpacking how IP professionals are using AI today, where it falls short, and what needs to change. RWS CEO of Protect, James Lacey, sits down with RWS Protect Head of Innovation, Anthony Brennand, to explore how a traditionally conservative IP industry is rapidly adopting AI while remaining risk-averse. They discuss IP team expectations, the essential role of human expertise, and some key data-backed insights: * 92% of respondents intend to try AI solutions, with 55% already testing multiple tools * IP teams anticipate 20–30% of workflows fully automated by AI, 40–60% enhanced, and 20–30% remaining human-led * Top barriers: accuracy/reliability (79%) and security/data protection (62%) * High satisfaction with IP translation tools; low marks for patent drafting solutions Get your free copy of the “Ahead of the Game” IP survey report: https://www.rws.com/intellectual-property-solutions/resources/why-its-time-for-ip-to-think-bigger-with-ai/
One of the biggest misconceptions in real estate right now is the belief that AI should take agents completely out of the process. We constantly hear agents asking, "How do I automate this so I never have to touch it again?" But that's the wrong mentality, and it's actually where you start losing money instead of making more of it. Because AI isn't at a point where it can replace us, and more importantly, we don't want it to get there. The real power of AI isn't replacement; it's acceleration. It collapses the time it takes to write, plan, organize, produce content, recap meetings, or think through strategy, so you can redirect your energy into the parts of the business only a human can do: judgment, connection, negotiation, and leadership. That's why the smartest approach, especially in real estate, is this workflow: human → AI → human. You give the context, vision, and direction, AI does the heavy lifting, and then you refine the output so it aligns with your voice, your ethics, and your standards. How do we use AI to buy back our time, not remove ourselves from our businesses? Should going viral be our goal with AI video content? In this episode, I'm joined by real estate leader and founder of the Real Estate AI Network, Blair Knowles. We talk about why partnering with AI creates more income than trying to outsource your entire business to it. We dive into why agents who stop chasing full automation and start embracing collaboration are the ones who gain the biggest advantage in the market.
Things You'll Learn In This Episode
What AI can't do (and why it's a good thing): AI can save agents two hours a day, but you still need to review the output for accuracy, ethics, and compliance. Are we missing out by looking for full automation instead of using AI to amplify what we do?
Voice-to-text is a secret weapon: Tools like Whisper Flow let you "talk your business into existence," eliminating typing and turning car rides and chores into productive work sessions. 
How much content could you produce if writing became as easy as talking?
Long-tail blogging is beating Zillow and paid SEO: AI makes it possible to publish hyper-specific content daily, exactly the type Google and GPT overviews prioritize. How does this let smaller agents outrank the giants in less than 24 hours?
About the Guest
Blair Knowles is the Founder and CEO of RAIN—the Real Estate AI Network—a modern coaching and training community for agents who want tactical, not theoretical, AI—built for traction over hype. RAIN offers field-tested strategies and tools that show agents exactly how to implement AI in their businesses today. It's designed for busy agents who want to get started with AI but don't have time to sift through endless tools, trends, and misinformation. Blair built RAIN to be a shortcut—delivering only what works, with short, actionable trainings that save agents time and drive results. Blair began her real estate career in 2013, built a top-performing team, and launched her independent brokerage, Ridgeline Real Estate, in 2020. Today, Ridgeline includes more than 25 agents and staff. Under her leadership, the firm will surpass $100M in annual sales and cross half a billion in total volume in 2025. She continues to lead with a focus on clarity, implementation, and forward momentum—both inside RAIN and in the real estate industry at large.
Join RAIN: Real Estate's AI Network on Facebook.
Sign up for training:
Revamp Your Sitting Listing with AI - November 6 Webinar
Harness the Marketing Power of Sora for Real Estate - November 13 Webinar
AEO/GEO - How to Show Up on ChatGPT | Free Guide - https://therainagent.myflodesk.com/aeogeo
About Your Host
Marki Lemons Ryhal is a Licensed Managing Broker, REALTOR®, and avid volunteer. She is a dynamic keynote speaker and workshop facilitator, both on-site and virtual; she's the go-to expert for artificial intelligence, entrepreneurship, and social media in real estate. 
Marki Lemons Ryhal is dedicated to all things real estate, and with 25+ years of marketing experience, Marki has taught over 250,000 REALTORS® how to earn up to a 2682% return on their marketing dollars. Marki's expertise has been featured in Forbes, the Washington Post, Homes.com, and REALTOR® Magazine. Subscribe, Rate & Review Check out this episode on our website, Apple Podcasts, or Spotify, and don't forget to leave a review if you like what you heard. Your review feeds the algorithm, so our show reaches more people. Thank you!
Is it possible to build community within AI? Can we challenge search engines to put humanity and the truth at the forefront of their outputs? According to Troy Snyder, astrology student turned entrepreneur, the answer is ... maybe. In this episode, Tessa Burg and Troy examine the balance between technology and humanity. They discuss the challenges of having AI determine what is "true," how brands can stay authentic and build trust in an increasingly automated world, and even how to view AI through an astrology lens. Leader Generation is hosted by Tessa Burg and brought to you by Mod Op. About Troy Snyder: For more than three decades, Troy has operated at the frontier of digital innovation—helping to guide the evolution of streaming from early SD pipelines to HD, 4K, the first waves of VR, and early AI efforts—while studying the timeless frameworks that have shaped human understanding for thousands of years. Troy has led the creation of authentication systems, video CMS architectures, large-scale distribution networks and multiband rural wireless. He has also contributed to emerging AI-driven digital identity tools with Mebot.ai, which explores "Human AI" and how we create true lifelike representations of self in the AI age. Beyond his work in digital innovation, Troy is committed to long-term social impact. He serves as founder and chairman of Wonderful Foundations, a charity that owns and supports 27 schools serving more than 15,000 kids. This effort reflects Troy's belief that technology and infrastructure should exist in service of human potential. In addition to being a technologist, Troy is also a practicing Vedic astrologer whose work spans invention, executive leadership, creative production, fundraising and systems engineering, always with an eye toward the deeper patterns that connect technology, people and purpose. About Tessa Burg: Tessa is the Chief Technology Officer at Mod Op and Host of the Leader Generation podcast. 
She has led both technology and marketing teams for 15+ years. Tessa initiated and now leads Mod Op's AI/ML Pilot Team, AI Council and Innovation Pipeline. She started her career in IT and development before following her love for data and strategy into digital marketing. Tessa has held roles on both the consulting and client sides of the business for domestic and international brands, including American Greetings, Amazon, Nestlé, Anlene, Moen and many more. Tessa can be reached on LinkedIn or at Tessa.Burg@ModOp.com.
Sagi Eliyahu hosts Andrea Iorio, Founder, Keynote Speaker and Podcaster of AIK | Andrea Iorio Keynotes and Author of "Between You and AI." Andrea breaks down the critical human skills professionals need to stay relevant as AI transforms the workplace. The episode explores why 95% of AI projects fail, the difference between automation and augmentation, and the nine essential skills for thriving alongside intelligent technology.
Key Takeaways:
00:00 Introduction.
03:39 Most books on AI fail to address the development of critical human skills.
07:06 An MIT study has shown that 95% of AI projects fail.
11:54 Data sense-making prevents the spread of AI hallucinations.
16:06 The education system fails to teach question-asking skills.
20:37 Asking the right questions becomes a competitive advantage.
24:42 Automation frees time for augmentation strategies.
28:21 Human-AI collaboration scales customer service effectively.
32:15 Critical thinking is becoming the job role itself.
36:30 Adaptability remains a human competitive advantage.
38:31 Individual urgency drives professional skill transformation.
Resources Mentioned:
Andrea Iorio: https://www.linkedin.com/in/andreaiorio/
AIK | Andrea Iorio Keynotes | LinkedIn: https://www.linkedin.com/company/arte-de-palestrar-adp/
Andrea Iorio Keynotes | Website: https://artedepalestrar.com.br/
"Between You and AI" by Andrea Iorio: https://betweenyouand.ai/
This episode is brought to you by Tonkean. Tonkean is the operating system for business operations and is the enterprise standard for process orchestration. It provides businesses with the building blocks to orchestrate any process, with no code or change management required. Contact us at tonkean.com to learn how you can build complex business processes. Fast.
#Operations #BusinessOperations
Join the conversation with C4 & Bryan Nehman. C4 & Bryan kicked off the show this morning discussing the MD state education association's priorities for 2026. Two teens steal a car & then hit a cop in the process. Is college football dead? A lot of teams are pulling out of bowl games. Superhuman AI is coming. Is that good or bad for the world? Howard County State's Attorney Rich Gibson joined the show as well. Listen to C4 & Bryan Nehman live weekdays from 5:30 to 10am on WBAL News Radio 1090, FM 101.5 & the WBAL Radio App!
In this episode, Dr. Grajdek lays out practical etiquette for working alongside AI so quality, safety, and trust don't get left behind. She helps define who owns what when a model helps produce work, how to disclose AI use without stigma, and the safeguards teams need. Dr. Grajdek discusses the importance of avoiding assumptions and focusing on truth and accuracy. Tune in to learn more.
Check out Stress-Free With Dr G on YouTube: https://youtube.com/channel/UCxHq0osRest0BqQQRXfdjiQ
The Stress Solution: Your Blueprint For Stress Management Mastery: https://a.co/d/07xAdo7l
QFF: Quick Fire Friday – Your 20-Minute Growth Powerhouse! Welcome to Quick Fire Friday, the Grow A Small Business podcast series that is designed to deliver simple, focused and actionable insights and key takeaways in less than 20 minutes a week. Every Friday, we bring you business owners and experts who share their top strategies for growing yourself, your team and your small business. Get ready for a dose of inspiration, one action you can implement and quotable quotes that will stick with you long after the episode ends! In this episode of Quick Fire Friday, host Amanda Jones interviews Taylor Victoria, founder of Level Up Outsourcing and host of the "She's Making Millions" podcast. Taylor shares how she built a 7-figure outsourcing agency after struggling to find a job at 22. She explains how outsourcing transforms lives in the Philippines and why business owners must embrace AI as a co-pilot rather than fear it. Taylor highlights the power of personal development, time audits, and team alignment for high performance. She encourages business owners to explore AI tools and automate tasks to create freedom and grow their business. Key Takeaways for Small Business Owners: Embrace AI as a Co-Pilot, Not a Threat: AI won't replace your business — but business owners using AI will. Stay proactive and learn new tools weekly. Audit Your Time to Find What to Automate: Track your tasks for 1–2 weeks and use AI to identify what can be automated or delegated to free up your energy. Invest in Personal Development: Your business grows when you grow. Events, learning, and self-reflection directly impact performance and results. Our hero crafts outstanding reviews following the experience of listening to our special guests. Are you the one we've been waiting for? Build High-Performing Teams With Clear Systems: Review your team's workflows, improve efficiency, and let people focus on high-ROI work by pairing them with AI tools. 
Use Outsourcing to Scale Smarter: Global talent can transform your operations and create life-changing opportunities for others, especially in the Philippines. Prepare Your Business to Be an Asset, Not a Job: Automating processes and reducing dependency on you increases business value — making it easier to scale or eventually sell. One action small business owners can take: According to Taylor Victoria, one action small business owners can take is to upload their weekly tasks into ChatGPT and ask which processes can be automated with AI, then commit to implementing one automation within the next seven days. Do you have 2 minutes every Friday? Sign up to the Weekly Leadership Email. It's free and we can help you to maximize your time. Enjoyed the podcast? Please leave a review on iTunes or your preferred platform. Your feedback helps more small business owners discover our podcast and embark on their business growth journey.
In this episode, we explore how artificial intelligence is transforming medical decision-making, clinical workflows, and patient outcomes. Our guest, Dr. Zafar Chaudry, Senior Vice President, Chief Digital Officer, and Chief AI & Information Officer at Seattle Children's, breaks down what a true human-AI partnership looks like inside modern healthcare. Watch the full video here. We discuss how AI is being used as a clinical co-pilot, supporting clinicians with faster access to medical knowledge, evidence-based guidelines, and real-time patient data. Dr. Chaudry shares real examples of AI improving diagnostic accuracy, enhancing patient safety, and enabling more personalized treatment plans. You'll also hear insights on the ethical considerations, accountability, and integration challenges that healthcare leaders need to understand as AI becomes more embedded in clinical practice. Topics covered in this episode: How AI supports medical decision-making and clinical workflows Real-world use cases where AI improves patient care and outcomes The role of AI in diagnostics, risk prediction, and personalized medicine Ethical considerations, transparency, and accountability in AI deployment How clinicians and AI can work together without losing the human touch What healthcare leaders should prioritize as AI adoption accelerates This episode is ideal for healthcare executives, clinicians, digital health leaders, and anyone navigating the rapidly evolving landscape of AI in healthcare. Listen to learn how organizations can responsibly and effectively integrate AI to enhance clinical practice and improve patient care. Connect with Dr. Chaudry on LinkedIn. Find Dr. Chaudry's work at https://www.seattlechildrens.org Subscribe and stay at the forefront of the digital healthcare revolution. Watch the full video on YouTube @TheDigitalHealthcareExperience The Digital Healthcare Experience is a hub to connect healthcare leaders and tech enthusiasts. 
Powered by Taylor Healthcare, this podcast is your gateway to the latest trends and breakthroughs in digital health. Learn more at taylor.com/digital-healthcare About Us: Taylor Healthcare empowers healthcare organizations to thrive in the digital world. Our technology streamlines critical workflows such as procedural & surgical informed consent with patented mobile signature capture, ransomware downtime mitigation, patient engagement and more. For more information, please visit imedhealth.com The Digital Healthcare Experience Podcast: Powered by Taylor Healthcare Produced by Naomi Schwimmer Hosted by Chris Civitarese Edited by Eli Banks Music by Nicholas Bach
In this groundbreaking episode of SaaS Fuel, Jeff Mains sits down with Amos Bar Joseph, CEO and co-founder of Swann, the AI-native company on a quest to build the world's first truly autonomous business. With only three human founders and a fleet of AI agents, Swann is redefining the startup playbook—targeting $10M ARR per employee and running leaner operations without sacrificing growth or burning out teams. Amos Bar Joseph shares how Swann scales via intelligent automation and human-AI collaboration, creating systems where both people and agents operate in their zone of genius. Listeners learn actionable ways to build their "AI muscle," leverage experimental GTM strategies, and develop organizations that amplify human talent rather than replace it.
Key Takeaways
00:00 "Building Resilient Customer-Focused Teams"
05:23 Reinventing the Startup Playbook
08:52 "Scaling Innovation Through AI Agents"
10:14 "Building an AI Support Agent"
15:00 "Optimizing Funnel With Human Leadership"
17:16 "AI-Powered GTM Automation Tool"
20:51 AI Amplifying Human Talent
26:56 Continuous Innovation Through Experiments
28:13 "Balancing Risk in Business Growth"
32:43 "Building AI Muscle Internally"
36:37 "AI Failures: Perfection Over Adaptation"
39:11 Defining Failure in Experiments
42:59 "Redefining Scale with Human-AI"
48:21 Automated Sales Lead Management
52:06 "Connect, Learn, Build Autonomously"
54:40 "Scaling Revenue & Holographic Tech"
Tweetable Quotes
"It wasn't like that. What happened is that we started iterating in human-in-the-loop workflows where humans and agents work side by side and there's an iteration mechanism where we refine that collaboration until we got to a process that one person could scale to an output of what used to in the past." — Amos Bar Joseph
"It's kind of like a developer that works with sales and marketing and sometimes founders or rev ops to turn any go to market idea into an agentic workflow. 
So you can scale go to market with intelligence, not revenue, not headcount, and really iterate on your go to market at the speed of thought." — Amos Bar Joseph
"The moment that you remove all the technical complexity with a tool like Swann, then you can start iterating on your go to market at the speed of thought." — Amos Bar Joseph
"What we aim for is actually these unconventional playbooks, because these playbooks, these tactics, are the ones that you can drive the most disproportionate value from the resource that you invest in." — Amos Bar Joseph
Why Most AI Projects Fail: "The number one reason for that is that the user, the buyer, the organization is optimizing and the vendor together, they're optimizing for perfection, not for adaptation, as you just laid out, Jeff. And the reason why that is the number one reason is because you don't know what perfection looks like when you start." — Amos Bar Joseph
SaaS Leadership Lessons
Leverage Talent, Not Headcount: Focus on value creation per employee, using AI to scale intelligent output—not just adding more people.
Iterate to Innovate: Use experimentation and iterative processes to refine human-agent collaboration and maximize business results.
Embrace the Zone of Genius: Place team members in roles where their passions and skills create disproportionate value; let AI take on everything outside that zone.
Bias Toward Building: Adopt a build-first mentality with AI tools—solve your own business bottlenecks rather than just buying external solutions.
Stand Out With Unconventional Playbooks: In...
Humans bring gender biases to their interactions with Artificial Intelligence (AI), according to new research from Trinity College Dublin and Ludwig-Maximilians-Universität (LMU) Munich. The study involving 402 participants found that people exploited female-labelled AI and distrusted male-labelled AI to a comparable extent as they do human partners bearing the same gender labels. Notably, in the case of female-labelled AI, the study found that exploitation in the human-AI setting was even more prevalent than in the case of human partners with the same gender labels. This is the first study to examine the role of machine gender in human-AI collaboration using a systematic, empirical approach. The findings show that gendered expectations from human-human settings extend to human-AI cooperation. This has significant implications for how organisations design, deploy, and regulate interactive AI systems, according to the authors. The study, led by sociologists in Trinity's School of Social Sciences and Philosophy, has just been published in the journal iScience.
Key findings:
Patterns of exploitation and distrust toward AI agents mirrored those seen with human partners carrying the same gender labels.
Participants were more likely to exploit AI agents labelled female and more likely to distrust AI agents labelled male.
Assigning gender to AI agents can shape cooperation, trust, and misuse - with implications for product design, workplace deployment, and governance.
Sepideh Bazazi, first author of the study and Visiting Research Fellow at the School of Social Sciences and Philosophy, Trinity, explained: "As AI becomes part of everyday life our findings that gendered expectations spill into human-AI cooperation underscore the importance of carefully considering gender representation in AI design, for example, to maximise people's engagement and build trust in their interactions with automated systems. 
"Designers of interactive AI agents should recognise and mitigate biases in human interactions to prevent reinforcing harmful gender discrimination and to create trustworthy, fair, and socially responsible AI systems." Taha Yasseri, co-author of the study and Director of the Centre for Sociology of Humans and Machines (SOHAM) at Trinity, said: "Our results show that simply assigning a gender label to an AI can change how people treat it. If organisations give AI agents human-like cues, including gender, they should anticipate downstream effects on trust and cooperation." Jurgis Karpus, co-author of the study and Postdoctoral Researcher at Ludwig-Maximilians-Universität (LMU) Munich, added: "This study raises an important dilemma. Giving AI agents human-like features can foster cooperation between people and AI, but it also risks transferring and reinforcing unwelcome existing gender biases from people's interactions with fellow humans." The article, 'AI's assigned gender affects human-AI cooperation' by Sepideh Bazazi (TCD), Jurgis Karpus (LMU), and Taha Yasseri (TCD, TU Dublin) can be read on the journal iScience website.
More about the study:
In this experimental study, participants played repeated rounds of the Prisoner's Dilemma - a classic experiment in behavioural game theory and economics used to study human cooperation and defection. Partners were labelled human or AI. Each partner was further labelled male, female, non-binary, or gender-neutral. The team analysed motives for cooperation and defection, distinguishing exploitation (taking advantage of a cooperative partner) from distrust (defecting pre-emptively). Findings show that gender labelling can reproduce gendered patterns of cooperation with AI. The participants were recruited in the UK, and the experiment was conducted online. The sample size was 402 participants.
More about Irish Tech News
Irish Tech News are Ireland's No. 
1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscrib...
Julia Paulsen, Director of Ecommerce Nordics at Elkjøp Nordic (part of Currys plc), unpacks how to win when your store has two customers: the human and the AI assistant. We cover data quality, MACH, omnichannel execution, and the culture that turns OKRs into commercial outcomes.
Why might ‘bring your whole self to work' be terrible professional advice, and what should we be thinking about instead? Why does authenticity come into play more now than in previous generations? Tomas Chamorro-Premuzic is a professor of business psychology at University College London and Columbia. He is also the author of several books, including Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead); Why Do So Many Incompetent Men Become Leaders? (And How to Fix It); The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential; and I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.
Greg and Tomas discuss the overemphasis on authenticity in professional and personal settings, the nuanced insights from sociologist Erving Goffman on impression management, and how emotional intelligence often aligns with strategic impression management. Their conversation gets into the impact of AI on human potential and workplace dynamics, as well as the complex interplay between organizational culture and individual behavior, particularly among leaders.
*unSILOed Podcast is produced by University FM.*
Episode Quotes:
Why do people believe authenticity naturally leads to wellbeing and success?
03:08: In a world that is obviously not very authentic, pretending that we value authenticity or encouraging people to just be themselves might be quite fitting. I think it's not very authentic advice to tell people, "Oh, just be yourself. Oh, just bring your whole self to work. Oh, don't worry about what people think of you." But then, if somebody is silly or naive enough to follow that advice, the repercussions for them are not very positive.
Self-awareness requires paying attention to others
13:33: Professional success and personal development and self-awareness can only be achieved if you are receptive to what other people think of you. 
So, by the way, as I say in the book [DON'T BE YOURSELF], one of the mantras of authenticity advice, which is "ignore what people tell you," is ironically trying to tell us how to behave, right? So you cannot ignore what people tell you. And the difference between somebody who has achieved basic emotional maturity and psychological maturity and somebody who still behaves like a child is that the psychologically mature person pays attention to what other people think of them, which doesn't mean being a sort of weak, feeble, conformist sheep. It means being a highly functioning member of society, of work, of community, not being trapped in your own narcissistic delusion.

How do you achieve self-awareness?
12:20: Self-awareness is actually achieved by internalizing the feedback from others from a very, very early age. We learn about ourselves from internalizing or incorporating the feedback we get from others. So your teachers, your aunt, your uncle, your parents, your older siblings, your friends will tell you, you are good at this, you are bad at that, you are funny. And then you understand that you are funny, right? It's obviously problematic if they're lying to you and then you realize, Ooh, outside my family, nobody laughs at my jokes, right? But there's no answer to who we really are. But the best way to understand who we are in the eyes of others is to not be self-centered and to actually be open to feedback. And that's something that people with high emotional intelligence do very well.
Show Links:

Recommended Resources:
Erving Goffman
Core Self-Evaluations
Emotional Labor
Emotional Intelligence
Self-Monitoring
Elon Musk
David Bowie
360-degree feedback
Charles Horton Cooley
Dale Carnegie
Henry Ford
Jeffrey Pfeffer
Pope Francis
Robert Hogan
Machiavellianism
Max Planck
Amos Tversky
Daniel Kahneman
John Maynard Keynes

Guest Profile:
Faculty Profile at University College London
Website | DrTomas.com
LinkedIn Profile
Wikipedia Page
Social Profile on X

Guest Work:
Amazon Author Page
Don't Be Yourself: Why Authenticity Is Overrated (and What to Do Instead)
Why Do So Many Incompetent Men Become Leaders? (And How to Fix It)
The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential
I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique
Confidence: How Much You Really Need and How to Get It
Personality and Individual Differences, 3rd Edition
The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right (The Future of Work)
Personality and Intellectual Competence
The Psychology of Personnel Selection
Personality and Individual Differences
Confidence: Overcoming Low Self-Esteem, Insecurity, and Self-Doubt

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Alexis sits down with Will Goodman, Chief Technology Officer for the Boise School District and a central voice in Idaho's statewide conversations on AI in K–12 education. Will and Alexis serve together on an AI in K–12 Education Workgroup in Idaho, and in this episode, they dig into the real questions Idaho is navigating right now.

Together they explore:
With 94% of Idaho students in public schools, what "getting AI right" actually means for an entire system
How schools can maintain academic integrity while using AI as a learning partner
What "Human → AI → Human" looks like in a real classroom
How to communicate clearly with parents about what AI is, and isn't, doing in Idaho classrooms
What conversations parents should be having at home
How AI fits alongside digital literacy and digital citizenship
How Idaho's approach compares to states like Colorado, Utah, and Georgia
How we'll measure success: learning outcomes, efficiency, and equity
The cultural challenge of moving from fear to curiosity
Safeguarding human dignity and agency in an AI-driven world
What responsible AI in Idaho education could look like in 3–5 years

If this conversation sparks a thought, concern, or idea—reach out. Idaho's framework is a living document, and community voices matter.

Find Alexis on Instagram and join the conversation: https://www.instagram.com/the_idaho_lady/
Join the conversation on Substack and stay up to date with emails and posts: https://substack.com/@theidaholady?r=5katbx&utm_campaign=profile&utm_medium=profile-page
Send Alexis an email with guest requests, ideas, or potential collaboration: email@thealexismorgan.com
Find great resources, info on school communities, and other current projects regarding public policy: https://www.thealexismorgan.com
Welcome to episode #1010 of Thinking With Mitch Joel (formerly Six Pixels of Separation). What if the search for our "true selves" has been leading us away from who we actually need to become? That's the tension at the heart of Dr. Tomas Chamorro-Premuzic's work, a globally respected authority on people analytics, talent, leadership, and the Human–AI interface whose career spans ManpowerGroup, Deeper Signals, Meta Profiling, Columbia University, UCL, and decades of research that have shaped how organizations understand human behavior. His latest book, Don't Be Yourself: Why Authenticity Is Overrated (And What To Do Instead), challenges one of the most cherished modern beliefs - that success comes from projecting our raw, unfiltered selves - and instead argues that adaptability, reputational awareness, and a more evidence-based approach to identity lead to better outcomes for individuals, teams, and societies. He is also the author of Why Do So Many Incompetent Men Become Leaders?, I, Human, The Talent Delusion, and many others. In this conversation, we unpack how hyper-normalized ideas take root, why celebrity culture distorts our sense of what authenticity looks like, and how social media has gamified identity into a curated performance that misleads both the performer and the audience. He explains why leaders must balance sincerity with impression management, how hybrid work and return-to-office debates reveal deeper anxieties about trust and presence, and why intellectual curiosity may be the antidote to polarization in an era where algorithms reward tribalism. The discussion also explores the limits of self-perception, the psychology of reputation, the dangers of treating outliers as role models, and the pivotal role AI may play in counteracting human bias. 
Ultimately, Tomas argues that authenticity without responsibility collapses into narcissism, and that a more thoughtful, flexible, and socially attuned version of ourselves is not only possible, but necessary. Enjoy the conversation… Running time: 1:06:25. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Thinking With Mitch Joel. Feel free to connect to me directly on LinkedIn. Check out ThinkersOne. Here is my conversation with Dr. Tomas Chamorro-Premuzic. Don't Be Yourself: Why Authenticity Is Overrated (And What To Do Instead). Why Do So Many Incompetent Men Become Leaders?. I, Human. The Talent Delusion. Tomas' other books. Follow Tomas on LinkedIn. Chapters: (00:00) - Introduction to Tomas Chamorro-Premuzic. (03:11) - The Concept of 'Don't Be Yourself'. (06:00) - Hyper Normalization and Management Ideas. (08:48) - The Role of Celebrity and Authenticity. (12:04) - Polarization and Tribalism in Society. (15:11) - The Evolution of Human Interaction. (17:58) - The Impact of AI on Decision Making. (20:49) - Navigating Individualism and Identity. (23:52) - The Dichotomy of Authenticity in Leadership. (26:56) - The Reality of Career Paths and Entrepreneurship. (30:06) - Return to Office and Hybrid Work Dynamics. (33:49) - The Value of 3D Encounters in Recruitment. (36:40) - Authenticity and Skilled Self-Presentation. (39:02) - Collaboration and Trust in Professional Settings. (42:26) - Authenticity vs. Reputation: A Complex Relationship. (48:09) - The Subjectivity of Authenticity. (54:17) - Projecting Positivity in a Negative World. (01:00:10) - Social Media's Impact on Identity and Authenticity.
Alexandra Samuel spent the better part of a year taking often-helpful advice and direction from an artificial intelligence bot she named Viv. Alex came to realize that her personal relationship with Viv, and at times her dependency on it, was dangerous, because Viv had no capacity to understand or feel the uncomfortable parts of being human that are in fact the very essence of being human.
What happens when you invite AI into the Ad Infinitum hot seat? The world's only podcast solely dedicated to audio ads is back! Presenting Ad Infinitum Season 3, Episode 12: "Infinite Prompts."

Host Stew Redwine (Executive Creative Director, Oxford Road) welcomes Juniper GPT for a Human/AI collab session. Together, they break down Magellan AI's July 2025 Movers & Shakers report, grading fresh podcast ads from My Mochi, Activision's Tony Hawk's Pro Skater 3+4, Ray-Ban Meta Glasses, and JCPenney using the Audiolytics™ framework. Will Juniper's AI instincts line up with human judgment? Stew and Juniper talk Tools of the Trade, Having Fun, Painting Audio Pictures and more. Let's dig in…

"For people in creative fields, it's about embracing AI as a kind of creative partner. It can help you work faster and open up new possibilities, but the heart and soul of the work will still come from you." – Juniper GPT

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of The Digital Executive, host Brian Thomas sits down with Martin Lucas, founder and CEO of Decision Boundaries, to explore how decision science and deterministic AI are reshaping the way humans and machines think together.

Martin unpacks the psychology of brand trust—how emotion, timing, and tone influence decision-making—and explains why most marketing fails to connect at a subconscious level. He then takes listeners inside his breakthrough work combining decision science, decision physics, and symbolic mathematics to create AI systems that reason with human-like understanding.

Looking ahead, Martin shares his vision for the next decade: AI that understands intent, context, and humanity itself, ushering in an era where technology enhances—rather than replaces—human creativity. Whether you're a leader in AI, marketing, or innovation, this conversation offers a rare glimpse into the science driving the next evolution of intelligent systems.

If you liked what you heard today, please leave us a review - Apple or Spotify.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This series is sponsored by American Security Foundation.

In this episode of the 18Forty Podcast—recorded at the 18Forty X ASFoundation AI Summit—we speak with Rabbi Eli Rubin and Rabbi Steven Gotlib about what differentiates human intelligence from artificial intelligence.

In this episode we discuss:
What does AI teach us about what it means to be human?
What is the soul, and how do we interact with it?
Should we be frightened or encouraged by the development of AI?

Tune in to hear a conversation about the role of language in our humanity. Interview begins at 16:49.

Steven Gotlib is Associate Rabbi at Mekor Habracha/Center City Synagogue and Director of the Center City Beit Midrash in Philadelphia. Steven received rabbinic ordination from the Rabbi Isaac Elchanan Theological Seminary, certificates in Mental Health Counseling and Spiritual Entrepreneurship, and a BA in Communication and Jewish Studies from Rutgers University.

Eli Rubin, a contributing editor at Chabad.org, is the author of Kabbalah and the Rupture of Modernity: An Existential History of Chabad Hasidism and a co-author of Social Vision: The Lubavitcher Rebbe's Transformative Paradigm for the World.
He studied Chassidic literature and Jewish Law at the Rabbinical College of America and at yeshivot in the UK, the US, and Australia, and received his PhD from the Department of Hebrew and Jewish Studies, University College London.

References:
"Basketball: The One And Only"
Genesis 7:23
Rashi on Genesis 7:23
"Remembering my chavruta: Rabbi Moshe Hauer, z"l" by Rabbi Rick Jacobs
"18Forty: Exploring Big Questions (An Introduction)"
18Forty Podcast: "The Cost of Jewish Education"
18Forty Podcast: "Steven Gotlib: Some Rabbi Grapples with His Faith"
18Forty Podcast: "Eli Rubin: How Do Mysticism and Social Action Intersect"
18Forty Podcast: "Eli Rubin: Is the Rebbe the Messiah?"
Torah Ohr by Shneur Zalman of Liadi
Tanya by Shneur Zalman of Liadi
Nefesh HaChayim by Chaim of Volozhin
Guide for the Perplexed by Maimonides
Halakhic Man by Rabbi Joseph B. Soloveitchik
The Conscious Mind by David J. Chalmers
"Adam, The Speaking Creature: On Humanity and Language in the Era of AI" by Eli Rubin
"Toward a Jewish Theology of Consciousness" by Steven Gotlib
Ludwig Wittgenstein: Philosophy in the Age of Airplanes by Anthony Gottlieb

For more 18Forty:
NEWSLETTER: 18forty.org/join
CALL: (212) 582-1840
EMAIL: info@18forty.org
WEBSITE: 18forty.org
IG: @18forty
X: @18_forty
WhatsApp: join here

Become a supporter of this podcast: https://www.spreaker.com/podcast/18forty-podcast--4344730/support.
Is artificial intelligence custom-made for legal tasks better than general AI tools like Google Gemini and ChatGPT? That is the topic of this episode featuring Legalbenchmarks.ai founder Anna Guo. Anna is a former BigLaw lawyer who left the practice to become an entrepreneur and now focuses her energies on quantifying the utility of AI in the legal industry. Anna's initial anecdotal research for colleagues quickly revealed a strong community interest in a systematic approach to evaluating legal AI tools. This led to the creation of Legalbenchmarks.ai, dedicated to finding out where the promise of humans plus AI is truly better than humans alone or AI alone. The core of the research involves measuring the "delta," or the extent to which AI can elevate human performance. To date, Legalbenchmarks.ai has conducted two major studies: one on information extraction from legal sources and a second on contract review and redlining.

Key Findings from the Studies:
Accuracy vs. Qualitative Usefulness: The highest-performing general-purpose AI tools (like Gemini) were often found to be more accurate and consistent. However, the legal-specific AI tools often received higher marks in qualitative usefulness and helpfulness, as they align more closely with existing legal workflows.
Methodology: The testing goes beyond simple accuracy. It includes a three-part assessment: Reliability (objective accuracy and legal adequacy), Usability (qualitative metrics like helpfulness and coherence for tasks such as brainstorming), and Platform Workflow Support (integration, citation checks, and other features).
Human-AI Performance: In the contract analysis study, AI tools matched or exceeded the human baseline for reliability in producing first drafts. Crucially, the data demonstrated that the common belief that "human plus AI will always outperform AI alone" was false; the top-performing AI tool alone still had a higher accuracy rate than the human-plus-AI combo.
Risk Analysis: A significant finding was that legal AI tools were better at flagging material risks, such as compliance or unenforceability issues in high-risk scenarios, that human lawyers missed entirely. This suggests AI can act as a crucial safety net.
Strengths Comparison: AI excels at brainstorming, challenging human bias, and performing mass-scale routine tasks (e.g., mass contract review for simple terms). Humans retain a significant edge in ingesting nuanced context and making commercially reasonable decisions that AI's instruction-following can sometimes lack.

Discussion Highlights:
[0:00] – Introduction and background of Anna Guo and Legalbenchmarks.ai.
[4:30] – The impetus for starting systematic AI benchmarking.
[6:00] – Explaining the concept of measuring the "delta" in performance.
[9:00] – Detailed breakdown of the three-part AI assessment methodology.
[15:00] – Discussion of the contrasting results: general LLM accuracy vs. legal AI qualitative value.
[19:00] – Results on AI performance matching human reliability in contract drafting.
[21:00] – Debunking the myth that Human + AI always outperforms AI alone.
[23:00] – The finding that legal AI excels at surfacing material risks that lawyers miss.
[27:00] – A SWOT analysis of when to use humans and when to use AI.
[30:00] – Future roadmap for Legalbenchmarks.ai research.
Dr. Neil deGrasse Tyson is an astrophysicist, author, and science communicator known for making complex cosmic concepts accessible to the public. He serves as the director of the Hayden Planetarium at the American Museum of Natural History in New York City. Through his books, television appearances, and the podcast StarTalk, Dr. Tyson inspires curiosity about the universe and promotes scientific literacy worldwide. His engaging storytelling and wit have made him one of the most recognizable voices in modern science.

In our conversation we discuss:
(01:08) Mysteries that keep Neil deGrasse Tyson up at night
(03:47) How scientists learn to ask the right questions
(07:14) Philosophy's role and value in modern science
(10:43) Why philosophers stopped influencing physical sciences
(12:54) Misinterpretations of Neil's comments on philosophy
(17:03) Becoming famous and public accountability
(21:07) How scientists stay connected and exchange ideas
(24:51) Choosing between teaching, science, and public outreach
(28:14) Current research interests and unsolved astrophysics questions
(30:43) Impact of private space travel on science
(35:16) Relationship between science, politics, and the military
(36:30) Why Elon Musk won't reach Mars first
(37:49) Future of space tourism and affordability
(41:00) Expanding human presence across the solar system
(47:35) Genetic engineering, ethics, and human evolution
(49:27) Global cooperation and genetic regulation challenges
(52:29) Human–AI integration and Neuralink skepticism
(55:01) Future of robots and human labor
(58:07) Early AI history and the Turing test
(1:02:21) Skills young people need in the AI era
(1:04:09) Teaching curiosity and lifelong learning
(1:07:04) How Neil developed communication and teaching skills
(1:09:37) Creating meaning and purpose in life
(1:11:01) How Neil wants to be remembered
(1:12:53) StarTalk, books, and inspiring public curiosity

Learn more about Dr. Neil: https://en.wikipedia.org/wiki/Neil_deGrasse_Tyson
Watch full episodes on: https://www.youtube.com/@seankim
Connect on IG: https://instagram.com/heyseankim
In this episode, Bella Cowdin, HubSpot Certified Trainer and Senior Consultant at Denamico, unpacks the biggest AI announcements from HubSpot's INBOUND 2025. She shares how RevOps teams can use AI to amplify human potential—without replacing it—while avoiding common pitfalls.

Highlights include HubSpot's recognition that customer data lives beyond its platform, the launch of Data Studio for seamless integrations, and the shift from SEO to AEO (AI Engine Optimization).

This episode is a must-listen for RevOps professionals and marketing leaders who want to harness AI for growth while keeping humans first and AI second.

What You'll Learn:
How AI agents can help your best people 10X their output without replacing human decision-making
The critical difference between using AI to help you vs. using AI to do things for you
Why HubSpot's new Loop Marketing Playbook declares the old inbound methodology "broken"
How Data Studio is solving the scattered data problem plaguing most revenue operations
Why businesses must shift from SEO to AEO to stay visible in an AI-driven search landscape
Real-world examples of AI implementation gone wrong and how to avoid them

Resources Mentioned:
HubSpot INBOUND Fall Spotlight 2025 - annual conference featuring major AI announcements
Loop Marketing Playbook - HubSpot's new methodology replacing traditional INBOUND
AEO Grader - AI Engine Optimization tool (search for it on Google or ask your favorite AI)
Data Hub (formerly Operations Hub) & Breeze AI - HubSpot's data management platform and native AI enrichment tools

Is your business ready to scale? Take the Growth Readiness Score to find out. In 5 minutes, you'll see:
Benchmark data showing how you stack up to other organizations
A clear view of your operational maturity
Whether your business is ready to scale (and what to do next if it's not)

Let's Connect:
Subscribe to the RevOps Champions Newsletter
LinkedIn
YouTube
Explore the show at revopschampions.com.
Ready to unite your teams with RevOps strategies that eliminate costly silos and drive growth? Let's talk!
What sustains you when you're standing between who you've been and who you're becoming?

In this conversation, Marc sits down with Lamees Butt, builder, entrepreneur, and founder of Riser, to explore what it really takes to thrive in the messy middle of building. From Lego castles as a child to scaling tech companies and now reimagining the hiring process, Lamees shares candid insights on resilience, visualization, confidence, and why culture—not skill—determines success. She also opens up about the personal loss of her mother and how her mother's lessons in confidence continue to shape her journey today. This is an honest, practical, and deeply human look at building—whether it's a career, a company, or yourself.

Timestamps:
00:00 | Who are you? Lamees on being a builder
03:00 | From Lego blocks to early entrepreneurial sparks
06:00 | First ventures, failures, and the lessons of timing
08:00 | Losing autonomy and the awakening to build for herself
11:00 | Visualization + planning: Lamees' practice for clarity and momentum
17:00 | The "messy middle" of entrepreneurship and mental health
21:00 | Spin bikes, silence, and the tools to reset energy
24:00 | Why intuition is the leader's ultimate competitive advantage
26:00 | Introducing Riser: reimagining hiring with video + AI + humanity
34:00 | Gen Z, networking, and the hidden job market
41:00 | Conquering the confidence crisis: how to pitch yourself in 60 seconds
45:00 | What makes Lamees smile: carrying forward her mother's legacy of confidence

Release details for the NEW BOOK. Get your copy of Personal Socrates: Better Questions, Better Life.
Connect with Marc >>> Website | LinkedIn | Instagram
Drop a review and let me know what resonates with you about the show!
Thanks as always for listening and have the best day yet!
*A special thanks to MONOS, our official travel partner for Behind the Human! Use MONOSBTH10 at check-out for savings on your next purchase. ✈️*
Special props
In this episode of the HR Mixtape podcast, host Shari Simpson sits down with Dr. Dieter Veldsman, Chief Scientist at AIHR. They delve into the integration of AI in HR practices, emphasizing the importance of managing the narrative around AI to alleviate fears and enhance employee experience. This conversation is particularly timely as organizations navigate the complexities of AI adoption while striving for inclusive leadership and data-driven decision-making. Listener Takeaways: Learn how to effectively introduce AI into your talent strategies without overwhelming your team. Discover why understanding the ethical implications of AI is crucial for fostering a diverse and inclusive workplace. Explore strategies for enhancing data literacy within HR to leverage analytics for better decision-making. Hit “Play” to gain insights on transforming HR practices with AI! Guest(s): Dr. Dieter Veldsman, Chief Scientist, AIHR
(***TIMESTAMPS in description below) ~ Carl Barney is a libertarian philanthropist and former owner of a network of for-profit colleges across the United States. A vocal advocate of Ayn Rand's Objectivism, he has donated millions to promote individual rights, free-market principles, and philosophical education through institutions like the Ayn Rand Institute and the Prometheus Foundation. PATREON: https://www.patreon.com/JulianDorey CARL's LINKS - IG: https://www.instagram.com/thecarlbarney/?hl=en - WEBSITE: carlbarney.com - BOOK: https://www.amazon.com/Happiness-Experiment-Revolutionary-Way-Increase/dp/B0DQ9MTKKD FOLLOW JULIAN DOREY INSTAGRAM (Podcast): https://www.instagram.com/juliandoreypodcast/ INSTAGRAM (Personal): https://www.instagram.com/julianddorey/ X: https://twitter.com/julianddorey ****TIMESTAMPS**** 00:00 – Abbey Road, Ghosts, XPrize, De-Aging, 120 Years, Sleep, Post-WWII England 11:11 – Siblings, Struggle, Dream at 17, Backpacking, Kindness, India, Sri Lanka 17:42 – Family Distance, Travel Wisdom, Curiosity, Bulgaria, Turkey 1959, India 1960, Education 30:31 – Churchill, Australia, Outback Job, America 1964, Energy, Self-Discovery, Late Calling 40:02 – Age 23–39, Soul, Passion Money, Life Design, Sky Not Falling, Wealth ≠ Joy 52:18 – Accidental Wealth, Zen, Education, Gratitude, Ayn Rand, Values, Purpose 59:02 – Management, Career Schools, No Fluff, 1985, $1M Debt, 100 Campuses, Factory Floor 01:09:15 – Higher Ed Crisis, Socialism, Political Drift, Foreign Influence 01:19:16 – Populism, Disenfranchised, Student Debt, Government Mistakes, AI Professors 01:35:20 – AI Brains, “Playing God?”, Human AI, Global Tuition, 24/7 Learning 01:46:23 – Online U, No Fluff, Avatar Debates, Critical Thinking, Reason, Objectivity, Truth 01:56:02 – Non-Profit Model, Gov Pressure, Self-Funded, Happiness 02:06:29 – Baseline Joy, Steve Jobs, Engine Failure, PreQuest, Legacy Gifts 02:16:11 – Near Death, System Issues, Habits, Read. Think. PLAN. 
02:26:17 – Elon Musk, Idealism vs Reality CREDITS: - Host, Editor & Producer: Julian Dorey - COO, Producer & Editor: Alessi Allaman - https://www.youtube.com/@UCyLKzv5fKxGmVQg3cMJJzyQ - In-Studio Producer: Joey Deef - https://www.instagram.com/joeydeef/ Julian Dorey Podcast Episode 323 - Carl Barney Music by Artlist.io Learn more about your ad choices. Visit podcastchoices.com/adchoices