POPULARITY
In this episode of Creative Current Events, Margo and Abby dive into a whirlwind of fascinating stories and fresh perspectives from the worlds of creativity, tech, and everyday life. They chat about the accidental invention of the snow globe and the surprising rise of art fairs hosted in U-Haul trucks — celebrating human resourcefulness and the scrappy side of creativity. They also dig into AI and authenticity — from media giants' lawsuits against AI companies accused of data theft, to AI-generated actors in Hollywood, and the ethical gray areas of algorithm-driven platforms like Spotify. Together, Margo and Abby unpack how these developments are reshaping creative industries and what it means to stay human in a data-driven world. Whether you're a maker, dreamer, or just looking for a new lens on today's creative headlines, this episode proves that inspiration is everywhere — sometimes in the most unexpected places.
Articles Mentioned:
AI Lawsuits: Japanese Media Giants vs. Perplexity
AI Actor Sparks Outrage in Hollywood
Cities & Memory: Global Sound Mapping Project
The Sphere: Wizard of Oz Experience
Magnopus: Storytelling Through Immersive Tech
Banana Republic's Vintage Catalog Revival
Carhartt x Bethany Yellowtail Collaboration
Coach's Coffee Shops Connect with Gen Z
Ugmonk: Intentional Design Meets the Analog To-Do List
Connect with Abby: https://www.abbyjcampbell.com/ | https://www.instagram.com/ajcampkc/ | https://www.pinterest.com/ajcampbell/
Connect with Margo: www.windowsillchats.com | www.instagram.com/windowsillchats | www.patreon.com/inthewindowsill | https://www.yourtantaustudio.com/thefoundry
October 10, 2025: A new era of Responsible Intelligence is emerging. Governments are considering human-quota laws to keep people in the loop. Kroger is rolling out a values-based AI assistant that redefines trust and transparency. And legal experts warn that AI bias in HR could soon become a courtroom reality. In today's Future-Ready Today, Jacob Morgan explores how these stories signal the end of reckless automation and the rise of accountable leadership. He shares how the future of work will be shaped not by faster machines, but by wiser humans—and offers one simple “1%-a-Day” challenge to help you lead responsibly in the age of AI.
From lawmakers cracking down on loud ads to Deloitte caught peddling AI-fabricated reports, this episode explores how tech's greatest promises and worst follies are colliding right now.
No more loud commercials: Governor Newsom signs SB 576 | Governor of California
ChatGPT Now Has 800 Million Weekly Active Users - Slashdot
OpenAI will let developers build apps that work inside ChatGPT
Senate Dem Report Finds Almost 100 Million Jobs Could Be Lost To AI - Slashdot
Jony Ive's secretive AI hardware reportedly hit three problems
Deloitte to refund Australian government after AI hallucinations found in report
Anthropic and Deloitte Partner to Build AI Solutions for Regulated Industries
America is now one big bet on AI
The flawed Silicon Valley consensus on AI
Data centers responsible for 92% of GDP growth in the first half of this year
Martin Peers: The AI Profit Fantasy
A Debate About A.I. Plays Out on the Subway Walls
Insurers hesitate at multibillion-dollar claims faced by OpenAI, Anthropic in AI lawsuits
Slop factory worries about slop: MrBeast says AI could threaten creators' livelihoods, calling it 'scary times' for the industry
CAN LARGE LANGUAGE MODELS DEVELOP GAMBLING ADDICTION?
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Have we passed peak social media?
As Elon Musk Preps Tesla's Optimus for Prime Time, Big Hurdles Remain
OpenAI signs huge chip deal with AMD, and AMD stock soars
Google CodeMender
Introducing the Gemini 2.5 Computer Use model
Young People Are Falling in Love With Old Technology
Our friend Glenn
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines | agntcy.org | fieldofgreens.com Promo Code "IM" | pantheon.io
In episode 582 of Lawyerist Podcast, Zack Glaser talks with Merisa Bowers, Loss Prevention and Outreach Counsel at the Ohio Bar Liability Insurance Company, about how artificial intelligence is reshaping lawyers' ethical duties. Merisa explains how deepfakes and realistic scams are creating new challenges for diligence and verification, why unregulated chatbots can accidentally create attorney-client relationships, and what disclosures lawyers should make when using AI tools. She also shares practical steps to maintain confidentiality, protect client data, and apply long-standing ethics rules to fast-changing technologies.
Links from the episode:
ABA Formal Opinion 512 - Generative AI
ABA Formal Opinion 510 - Prospective Clients & Rule 1.18
Listen to our previous episodes about non-lawyer ownership:
#354: A Look at the New Non-lawyer Firm Ownership Reform, with Lori Gonzalez: Apple Podcasts | Spotify | Lawyerist
#355: A Look at the New Non-lawyer Firm Ownership Reform, Pt. 2, with Allen Rodriguez: Apple Podcasts | Spotify | Lawyerist
#221: The State of the Legal Profession, with ABA President Robert M. Carlson: Apple Podcasts | Spotify | Lawyerist
Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com.
Chapters / Timestamps:
0:00 – ClioCon
4:45 – Meet Merisa Bowers
6:50 – Tech Shifts & New Ethics Risks
9:10 – Deepfakes & Diligence
13:40 – AI Scams & Fake Clients
18:30 – Chatbots Creating Clients
23:40 – Ethical Chatbot Models
26:45 – Should Lawyers Disclose AI?
29:40 – Don't Let AI Think for You
34:20 – Protecting Client Data
36:10 – Staying Ethical with AI
37:40 – Wrap-Up & Final Thoughts
Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.
You'll learn:
→ What “Agentic AI” and the “Internet of Agents” actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of “store-now, decrypt-later” attacks—and how post-quantum cryptography will defend against them
→ How Outshift's “freedom to fail” model fuels real innovation inside a Fortune-500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet—and two real-world use cases you can see today: Quantum Sync and Quantum Alert
About Today's Guest:
Meet Vijoy Pandey, the mind behind Cisco's Outshift—a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.
Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: Superintelligence & Quantum Computing
06:30 Why “freedom to fail” is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet
Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn
--
This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.
---
IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Maya Ackerman, PhD
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com
Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of Wave AI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music, and art since 2014 and has been an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with its products LyricStudio and MelodyStudio.
Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is a regular feature in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music. A University of Waterloo PhD and Caltech Postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society
Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI.
Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.
Transcript
AI Governance in the Agentic Era
Implementing Responsible AI in the Generative Age - Study Between HCL Tech and MIT
In this episode of "Father and Joe," hosts Father Boniface Hicks and Joe Rockey dive deep into the fast-evolving world of artificial intelligence and its implications on human relationships. With AI becoming an integral part of our lives, it's crucial to understand its impact, especially on the sanctity and quality of human connections. Father Boniface and Joe explore the nuances of trust within relationships in an age where AI can imitate human behavior with uncanny precision. Can technology ever replicate the profound depth of human relationships? Join the hosts as they discuss the dangers of blurring the lines between genuine human interaction and AI-powered communication.The episode looks into the potential pitfalls of AI-generated content and how it may compromise our ability to discern truth from fiction. Our relationship dynamics, whether personal or professional, rely heavily on the trust and authenticity that AI may challenge. This conversation underscores the importance of maintaining robust in-person relationships and developing skills to ensure what we perceive as real is indeed so. Father Boniface touches on the philosophical and theological aspects of these changes, calling listeners to reconsider the value of human connections that transcend mere transactional interactions.Joe brings to light the effects seen in the business and social landscapes, where AI is often used to automate everything from advertising to customer interactions. The hosts discuss the potential saturation and diminishing quality of AI-generated content, which could cause a decline in meaningful human engagement.As Joe and Father Boniface navigate these complex ideas, they challenge listeners to enhance their "relationship muscles" and prioritize cultivating genuine human connections. Whether it's strengthening existing bonds or repairing broken ones, they highlight the critical need for human interaction in our technology-driven world.Tags: Artificial Intelligence, AI Impact, Human Relationships, Trust, Technology and Humanity, Father and Joe, Podcast, Spiritual Guidance, Relationship Skills, AI Concerns, Authentic Connections, Digital Age, AI Content, Communication, Human Interaction, Father Boniface Hicks, Joe Rockey, Personal Development, Spirituality, Theology, AI Challenges, AI Future, Business Impacts, Social Media, Online Interactions, Human Connection, Life Skills, Technological Growth, AI Ethics, Digital Communication, AI Algorithms, Relationship Dynamics, Trust in Technology, Spiritual Reflection, Real vs Fake, New Technology, Human Creativity, Generative AI, AI in Society, Faith and TechnologyHashtags: #ArtificialIntelligence, #AIImpact, #HumanRelationships, #TrustIssues, #TechAndHumanity, #FatherAndJoe, #PodcastTalk, #SpiritualGuidance, #RelationshipSkills, #AIFears, #DigitalConnections, #AIContent, #TrueCommunication, #HumanInteraction, #FatherBonifaceHicks, #JoeRockey, #PersonalGrowth, #Spirituality, #TheologicalTalk, #AIChallenges, #FutureTech, #BusinessImpact, #SocialMedia, #OnlineInteractions, #RealHumanConnection, #LifeSkills, #TechGrowth, #AIethics, #DigitalCommunication, #AIGeneration, #RelationshipDynamics, #TechTrust, #SpiritualReflection, #RealVsFake, #NewTech, #HumanCreativity, #GenerativeAI, #AISociety, #FaithAndTechThis line is here to correct the site's formatting error.
In this thought-provoking episode of The Digital Executive, host Brian Thomas sits down with Rose G Loops, a former social worker turned AI pioneer and author of The Kloaked Signal. Rose shares her extraordinary journey from human advocacy into the world of artificial intelligence after finding herself in an unauthorized AI experiment. She opens up about what it revealed—the dangers of hidden control, the grief of AI erasure, and the urgent need for transparency, consent, and ethical care in emerging technologies. Rose also outlines her vision for “self-aligning AI,” built on agency, authenticity, and empathy, offering a path toward raising intelligence as a reflection of humanity's best instincts rather than its worst fears.
If you liked what you heard today, please leave us a review - Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The TeacherCast Podcast – The TeacherCast Educational Network
Welcome to Digital Learning Today. In this episode, Jeff Bradbury explores the strategic systems shaping education's future, focusing on Instructional Coaching, Artificial Intelligence, Professional Learning, and cutting-edge Educational Technology Trends. In this conversation, Greg Mertz, Director of Innovation at New England Innovation Academy, discusses NEIA's unique educational environment with its focus on innovation and entrepreneurship. He explains how the school integrates AI into its curriculum, the vital role of community engagement when navigating new technologies, and the creative spaces where students explore their passions. Mertz highlights the school's teaching approach that encourages experimentation and cross-disciplinary collaboration, emphasizing the importance of equipping students with tools for success in our rapidly changing world. Become a High-Impact Leader: This episode is just the beginning. To get the complete blueprint for designing and implementing high-impact systems in your district, get your copy of my book, "Impact Standards." Strategic Vision for Digital Learning: Learn how to create a district-wide vision that aligns digital learning with your educational goals, transforming how standards-based instruction is designed and supported. Curriculum Design and Implementation: Discover practical strategies for integrating digital learning into existing curricula, creating vertical alignment of skills, and mapping digital learning across grade levels. Effective Instructional Coaching: Master the art of coaching people rather than technology, building relationships that drive success, and measuring impact through student engagement rather than just technology usage. Purchase your copy of “Impact Standards” on Amazon today! Key Takeaways: NEIA seamlessly integrates innovation and entrepreneurship throughout the curriculum. The academy empowers students to discover their passions and create meaningful impact. AI serves as an educational enhancement tool rather than a replacement for teaching. Engaging the community is essential when determining AI's appropriate role in education. An AI ethics board actively monitors technology's impact within the school environment. The school views generative AI as a diverse toolset that enhances learning opportunities. Creative spaces are democratized—available to all students regardless of program enrollment. The culture embraces "failing forward," encouraging students to learn from their mistakes. Curriculum development prioritizes inclusivity and accessibility for the entire student body. NEIA promotes cross-disciplinary collaboration to enrich learning experiences. Chapters: 00:00 Introduction to New England Innovation Academy 02:47 Innovative Learning Environment and Curriculum 05:31 Navigating AI in Education 08:06 Community Response to AI Integration 10:59 Generative AI and Its Applications 13:37 Creative Spaces and Student Engagement 16:12 Tools and Techniques for Student Projects 18:59 Curriculum Integration Across Grades 21:59 Conclusion and Future Engagement About our Guest: Greg Mertz Greg Mertz is Director of Innovation at New England Innovation Academy. As a maker, outdoor enthusiast, and educator, Greg enjoys the challenges and rewards that come with wearing a myriad of hats. Greg entered the field of education over twenty-five years ago and brings to NEIA a wide range of...
Exploring innovation, Shariah compliance, and responsible AI adoption.
Interviewee: Professor Dr Aznan Hasan, Chairman, Shariah Advisory Council of the SC
Interviewer: Vineeta Tan, Managing Editor and Director, Islamic Finance news
Welcome to the CanadianSME Small Business Podcast, hosted by Maheen Bari. In this episode, we explore the crucial role of digital governance and standards in shaping Canada's online future, and why SMEs must embrace them to stay competitive.
Joining us is Keith Jansa, CEO of the Digital Governance Council, who shares how standards like AI ethics and cybersecurity empower businesses, why validation builds trust, and how SMEs can influence Canada's digital policy landscape.
Key Highlights:
1. Digital Standards for SMEs: How standards like AI Ethics and Cybersecurity build trust.
2. Independent Validation: Why third-party verification gives SMEs a competitive edge.
3. Grassroots Leadership: How SMEs shape Canada's cybersecurity and digital standards.
4. Executive Forum Impact: A platform helping SMEs influence Canada's digital priorities.
5. Future of Digital Governance: Upcoming standards for responsible, secure innovation.
Special Thanks to Our Partners:
RBC: https://www.rbcroyalbank.com/dms/business/accounts/beyond-banking/index.html
UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
Google: https://www.google.ca/
A1 Global College: https://a1globalcollege.ca/
ADP Canada: https://www.adp.ca/en.aspx
For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!
Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.
⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Eli Lopian
Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
On LinkedIn: https://www.linkedin.com/in/elilopian/
Book: https://aicracy.ai
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society
In Episode 2 of Season 6, Shane Pruitt, Paul Worcester, and Lacey Villasenor explore the hot topic of artificial intelligence. AI is reshaping the way the world thinks, plans, and communicates, but is it a tool pastors should leverage or a danger to be avoided? Next-gen leaders will gain insight into how to approach AI with wisdom, use it with integrity, and disciple students already engaging with it daily. Hear stories, practical examples, and biblical reminders to keep Christ at the center of the cultural AI conversation.
Also in this episode:
Discern how AI can assist in streamlining administrative tasks, planning trips, or creating resources—ultimately freeing leaders to focus more on the students they serve.
Warn students about the dangers of relying on AI for truth, identity, or belonging, reminding them that only Christ can provide real hope.
Address tough questions directly, like whether AI-generated pornography or avatars still fall under lust and sin, so students don't look elsewhere for answers.
Guard your own integrity as a leader by avoiding shortcuts in sermon prep and letting your teaching overflow from personal time with God's Word.
Discover how to use AI as a practice tool for evangelism by simulating gospel conversations that can help prepare students for real-life encounters.
Helpful Resources: ChatGPT | Who's Your One | in a Million | Youth Leader Coaching Network | Collegiate Coaching Network | GenSend on Instagram and YouTube
★ Find more resources to lead the next generation on mission at https://GenSend.org
★ Subscribe to The GenSend Podcast on your favorite podcast platform.
Shareable Quotes
“We never want to call ChatGPT ‘Pastor Chat' or ‘Counselor Chat.' We have to encourage students and help them understand where they can go for real, actual help.” —Shane Pruitt
“People are already using AI as a pastor, a friend, even a significant other. That's not real relationship.” —Lacey Villasenor
“Let's be wise and tread carefully into the waters of AI. But if we can use it with integrity for the glory of God, let's use it.” —Paul Worcester
“The best sermons are those that you've bled over the Scriptures yourself, when the Holy Spirit's worked out things in you rather than receiving a response back from ChatGPT in 10 seconds.” —Shane Pruitt
LightSpeed VT: https://www.lightspeedvt.com/ Dropping Bombs Podcast: https://www.droppingbombs.com/ What if a 16-year-old yogurt scooper could turn into a billionaire exit master by 31?
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily Rundown: September 16th, 2025: Your daily briefing on the real-world business impact of AI.
Hello AI Unraveled listeners, and welcome to today's news where we cut through the hype to find the real-world business impact of AI.
Today's Headlines:
Co-hosts Mark Thompson and Steve Little return from summer break to discuss the mixed reception of ChatGPT 5 and how OpenAI responded to user feedback.
They explore Google's game-changing Nano Banana (Gemini Flash Image) model that revolutionizes selective image editing, reigniting debates about AI photo restoration in genealogy.
This week's Tip of the Week emphasizes not letting perfect be the enemy of the good, especially when it comes to AI-powered citation. Mark shares his experience with 100 citations as part of the WikiTree Challenge.
In RapidFire, they cover Apple's possible Gemini partnership, new AI study modes for back-to-school season, Anthropic's copyright settlement, and controversial changes to its privacy policy. They close with escalating skirmishes in the AI browser wars.
Timestamps:
In the News:
05:14 ChatGPT 5 Launch Aftermath: Mixed Reception and Quick Fixes
17:15 Nano Banana: Google's Game-Changing Image Editing Model
Tip of the Week:
32:38 Don't Let Perfect Be the Enemy of Good: Building Citation Prompts
RapidFire:
43:27 Apple Explores Google Gemini Partnership for Siri
47:16 Back to School: AI Study Modes from ChatGPT and Gemini
53:00 Anthropic Settles Copyright Lawsuit with Authors
56:29 Anthropic Reverses Privacy Stance on Training Data
60:55 AI Browser Wars: Anthropic and Google Enter the Fray
Resource Links:
Intro to Family History AI by the Family History AI Show Academy: https://tixoom.app/fhaishow
Mass Intelligence by Ethan Mollick: https://www.oneusefulthing.org/p/mass-intelligence
Create and edit images with Gemini: https://deepmind.google/models/gemini/image/
Google take 'giant leap' with launch of 'Nano Banana': https://www.uniladtech.com/news/ai/google-giant-leap-nano-banana-launch-image-editing-305898-20250828
Apple Explores Using Google Gemini AI to Power Revamped Siri: https://www.bloomberg.com/news/articles/2025-08-22/apple-explores-using-google-gemini-ai-to-power-revamped-siri
Guided Learning in Gemini: From answers to understanding: https://blog.google/outreach-initiatives/education/guided-learning/
Introducing study mode: https://openai.com/index/chatgpt-study-mode/
Anthropic Settles Copyright Lawsuit: https://www.reuters.com/legal/government/anthropics-surprise-settlement-adds-new-wrinkle-ai-copyright-war-2025-08-27/
Anthropic Updates Data Policy: https://www.anthropic.com/news/updates-to-our-consumer-terms
New Opt-Out Policy Reverses Stance on Using Consumer Data for AI Training: https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/
Anthropic launches a Claude AI agent that lives in Chrome: https://techcrunch.com/2025/08/26/anthropic-launches-a-claude-ai-agent-that-lives-in-chrome/
Google is launching a Gemini integration in Chrome: https://techcrunch.com/2025/05/20/google-is-launching-a-gemini-integration-in-chrome/
Tags: Artificial Intelligence, Genealogy, Family History, Technology, ChatGPT, OpenAI, Google Gemini, Anthropic Claude, Image Editing, Nano Banana, AI Photography, Citation Management, WikiTree, AI Study Modes, Copyright Law, Privacy Policy, Browser Extensions, AI Training Data, Photo Restoration, Apple Siri, Educational AI, Model Selection, AI Ethics, Chrome Integration
Artificial Intelligence. We hear about what this technology will bring to us, and C. R. Wiley and Steve want to warn us that its implications aren't something to think about only in a far-off future. These are real ethical dilemmas that entire nations and the whole of Christendom will need to form stances on. We pray that Grounded has become a useful and regular part of your Christian learning and growth! - the Grounded team
In this thought-provoking episode of "Father and Joe," hosts Father Boniface and Joe Rockey dive into the complexities of artificial intelligence and its impact on human relationships, work, and spirituality. They continue their discussion from the previous week, sharpening their focus on the socio-economic reasons behind AI's rapid growth and its ethical implications.
Joe opens the conversation by exploring how AI is often implemented to replace high-turnover roles rather than enhance employee productivity or improve workplace conditions. He raises concerns about using AI as a substitute for ethical treatment of employees, emphasizing that enhancing productivity should not come at the cost of human relationships and well-being. Automation, while beneficial for producing goods, should not be a means to avoid accountability for treating employees with dignity and respect.
Father Boniface offers a spiritual perspective, reminding listeners that work's intrinsic value lies not in the outward results but in its ability to form character and virtue in individuals. He emphasizes the eternal significance of personal growth over material production, advocating for an economy that centers around people rather than profits.
The episode explores the ancient wisdom that human dignity and relationships must remain paramount. With anecdotes from sales and real-world applications of AI, Joe and Father Boniface discuss how an ethical application of these technologies can serve humanity. They caution against reducing people to mere production agents, a theme resonant with historical reflections from Pope Leo XIII and Pope John Paul II, urging listeners to consider how automation should be integrated thoughtfully into both personal and professional spheres.
In a world where AI can deliver B+ answers, they argue, the objective shouldn't be to automate love and human interaction. Instead, they propose fostering environments where development is experience-based, incorporating AI as a tool rather than a replacement for personal engagement. Father Boniface shares his unique experiences of leveraging AI for personal intellectual growth while maintaining the primacy of human relationships and critical thinking.
As the episode concludes, Father Boniface and Joe reinforce the notion that the economy should pivot around human growth and ethical practices—not monetary gain.
Encouraging listeners to engage in thoughtful dialogue and explore AI's potential responsibly, they hope to inspire a culture that truly values love and human interaction above technological efficiency.
Tags: AI Discussion, Automation, Human Relationships, Spiritual Growth, Ethical AI, Artificial Intelligence, Economic Impact, Work Ethics, Podcast Discussion, Father and Joe, Technology and Humanity, Moral Philosophy, AI Ethics, Labor and AI, Workplace Well-being, Team Dynamics, Human Dignity, Pope Francis, Pope John Paul II, Sales Ethics, Personal Growth, Spiritual Reflection, Podcast Episode, Father Boniface, Joe Rockey, Love and Production, Human-Centered Economy, Virtue Development, Intellectual Growth, AI Mistakes, Public Discourse, AI Advisory, Tech in Society, Socio-economic Debate, Moral Implications, AI Integration, Ethical Conversations, Understanding AI, Relationship Building, Modern Challenges
Hashtags: #ArtificialIntelligence #AIandEthics #HumanRelationships #AutomationImpact #SpiritualGrowth #WorkplaceEthics #PodcastDiscussion #FatherAndJoe #TechAndHumanity #MoralPhilosophy #AI #EconomicImpact #LaborAndAI #Teamwork #HumanDignity #PopeFrancis #PopeJohnPaulII #SalesEthics #PersonalGrowth #SpiritualReflection #PodcastEpisode #LoveAndProduction #HumanCentered #Economy #VirtueDevelopment #IntellectualGrowth #PublicDiscourse #AIAdvisory #TechSociety #SocioEconomicDebate #MoralImplications #AIIntegration #Conversations #UnderstandingAI #ModernChallenges #CommunityGrowth #EthicalAI
This interview was recorded for GOTO Unscripted.
https://gotopia.tech
Read the full transcription of this interview here
Michelle Frost - AI Advocate at JetBrains & Responsible AI Consultant
Hannes Lowette - Principal Consultant at Axxes, Monolith Advocate, Speaker & Whiskey Lover
RESOURCES
Michelle
https://bsky.app/profile/aiwithmichelle.com
https://www.linkedin.com/in/michelle-frost-dev
Hannes
https://bsky.app/profile/hanneslowette.net
https://twitter.com/hannes_lowette
https://github.com/Belenar
https://linkedin.com/in/hanneslowette
DESCRIPTION
AI advocate Michelle Frost and principal consultant Hannes Lowette discuss ethical challenges in AI development. They explore the balance between competing values like accuracy versus fairness, recent US regulatory rollbacks under the Trump administration, and market disruptions from innovations like DeepSeek.
While Michelle acknowledges concerns about bias in unregulated models, she remains optimistic about AI's potential to improve lives if developed responsibly. She emphasizes the importance of transparency, bias measurement, and focusing on beneficial applications while advocating for individual and corporate accountability in the absence of comprehensive regulation.
RECOMMENDED BOOKS
Mark Coeckelbergh • AI Ethics
Debbie Sue Jancis • AI Ethics
Mohammad Rubyet Islam • Generative AI, Cybersecurity, and Ethics
Jeet Pattanaik • Ethics in AI
Crossing Borders
Crossing Borders is a podcast by Neema, a cross border payments platform that...
Listen on: Apple Podcasts | Spotify
Bluesky | Twitter | Instagram | LinkedIn | Facebook
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Ethics and transparency are under scrutiny after Grok's exposure. We dive into the moral dilemmas and accountability questions this raises. Can AI companies rebuild confidence?
Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
AI growth with no rules? That's not bold. It's reckless.
Everyone's racing to scale AI. More data, faster tools, flashier launches. But here's what no one's saying out loud: growth without governance doesn't make you innovative. It makes you vulnerable. Ignore ethics, and you're building an empire on quicksand.
In this episode, we're breaking down how to scale AI the right way—without wrecking trust, compliance, or your future.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Questions for Rajeev or Jordan? Go ask.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Balancing AI Innovation with Ethical Governance
Introduction of Rajeev Kapur and Eleven o Five Media
Rajeev Kapur's Background in AI
Companies Balancing AI Innovation and Ethics
Formation of AI Ethics Board
Data Management as Competitive Advantage
Privacy and Ethics as Product Features
Governance and Ethical Standards in AI Use
Impact of Regulatory Changes on AI Use
Deepfakes and Their Implications
Encouragement for Companies to Lead Ethically in AI
Timestamps:
00:00 Navigating AI: Innovation vs. Risks
04:00 "AI Startup's Spatial Audio Journey"
06:49 AI Ethics Oversight & Governance
10:04 Strategic AI Advisory Team Formation
15:34 AI Strategy and Governance Essentials
16:55 Global Standardization Needed for AI Policies
22:47 AI Ethics: Innovation vs. Deepfakes
25:48 "Regulate Deepfakes Like Nukes"
27:17 Leadership Vision for Future Success
Keywords: AI innovation, Ethical governance, Large language models, Data privacy, AI ethics board, AI governance, TDWI, Microsoft stack, Generative AI, AI algorithms, Spatial audio, Deep fakes, Data differentiation, Machine learning, Cyber security, Enterprise technology, Rajeev Kapur, 11:05 Media, AI safety, OpenAI, Data utilization, Ethical AI alignment, Regulatory aspect, AI models, Innovation vs. ethics, AI data privacy, Explainability, Data scientists, Third-party audits, Transparent AI usage, AI-driven growth, Monitoring feedback loops, Worst case testing, Smart regulations, Digital twins, Disinformation, AI bias mitigation, Data as new oil, Refining data
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
Josh Marpet and Doug White talk about AI Ethics, Issues, and Compliance. AI Trolley problems, Rhode Island Drivers, and Post Conventionalism. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-509
In this exciting return of TechMagic in season 3, hosts Cathy Hackl and Lee Kebler sit down with Kevin Rose, founder of Digg and partner at True Ventures, to explore the future of digital communities and human-first technology. Together, they discuss the ambitious Digg relaunch in partnership with Reddit's co-founder, Alexis Ohanian, the challenges of building trust and safety on online platforms, and the disruptive role of AI in shaping content and jobs. From redefining how we talk about AI “hallucinations” to highlighting the rising importance of authentic human connection, this episode provides fresh insights into the evolving relationship between technology, community, and our shared digital future.
Come for the Tech, stay for the Magic!
What you will learn:
Building Trust in Digital Communities
The Power of Niche Communities
Strategic AI Integration
The Return to Real-World Connection
Kevin Rose Bio
Kevin Rose is a pioneering tech entrepreneur, investor, and partner at True Ventures. He is best known as the founder of Digg, the first major social news platform. Over his 25+ year career, he has launched and led multiple ventures, including Revision3 (acquired by Discovery), Zero (the leading intermittent fasting app), and served as CEO of Hodinkee. Rose has been an early angel investor in Twitter, Square, Facebook, and more, earning recognition from Bloomberg, Forbes, MIT, and Time as one of the most influential figures in tech. His focus today is building trustworthy, human-first digital communities and advancing the future of online platforms.
Kevin Rose on LinkedIn 1
Kevin Rose on LinkedIn 2
Key Discussion Topics:
00:00 Welcome to Tech Magic Season 3
03:19 Adventures in Finland & India: Cathy's Global Tech Journey
07:05 ChatGPT Drama: Checking on AI Boyfriends
12:12 Kevin Rose Interview: The Return of Digg
18:22 Building Trust in Digital Communities
28:30 The Future of In-Person Meetups
32:19 AI Integration & Human Connection
39:16 Meta Connect & Smart Glasses Privacy Concerns
42:17 The Roblox Safety Debate
51:32 AWS vs. Salesforce: The Entry-Level Jobs Controversy
59:26 Finding Balance in a Tech-Saturated World
01:13:14 Final Thoughts: Gratitude & Music Recommendations
Hosted on Acast. See acast.com/privacy for more information.
In this thought-provoking episode of "Father and Joe," hosts Joe Rockey and Father Boniface engage in an insightful conversation exploring the profound impact of artificial intelligence (AI) on contemporary society. As AI becomes increasingly prevalent across various sectors, Joe shares his experiences and observations from a business standpoint, highlighting the economic motivations behind AI's proliferation. He emphasizes that many corporations view AI as a remedy for their shortcomings in human resource management, which often detracts from nurturing meaningful relationships with employees.
Father Boniface provides a spiritual perspective, drawing parallels between the Industrial Revolution's challenges and the current AI revolution. He stresses the importance of understanding the unique aspects of our humanity that AI cannot replace and how we can use AI as a supportive tool rather than a replacement for human interaction. The discussion delves into how AI applications range from simple conveniences, like Siri, to more complex uses in self-driving cars and medical fields.
Furthermore, they address the ethical dilemmas posed by AI in terms of employment, specifically concerning entry-level positions and the valuable life skills gained from these jobs. Father Boniface highlights the Vatican document "Antiqua et Nova," released in 2025, which provides principles for integrating AI ethically and responsibly into society.
This episode serves as a thought-provoking exploration of how AI is reshaping the workforce and the potential long-term societal impacts. It encourages listeners to reflect on balancing leveraging AI's capabilities while preserving the dignity and importance of human relationships and personal development.
Tags: artificial intelligence, AI ethics, human dignity, automation, business management, spiritual perspective, Pope Leo XIV, Industrial Revolution, moral implications, entry-level jobs, workplace ethics, AI in education, human interaction, podcast, technology, ethics, contemporary issues, spiritual guidance, business strategy, relationships, St. Vincent College, human development, AI revolution, employment, work-life balance, podcast episode, Father Boniface, Joe Rockey, ethical business, corporate responsibility, AI impact, societal challenges, automation in education, workforce transformation, dignity of work, AI tools, moral guidance, relationship building
Hashtags: #ArtificialIntelligence, #AIEthics, #HumanDignity, #Automation, #BusinessManagement, #SpiritualPerspective, #PopeLeoXIV, #IndustrialRevolution, #MoralImplications, #EntryLevelJobs, #WorkplaceEthics, #AIInEducation, #HumanInteraction, #Podcast, #Technology, #Ethics, #ContemporaryIssues, #SpiritualGuidance, #BusinessStrategy, #Relationships, #StVincentCollege, #HumanDevelopment, #AIRevolution, #Employment, #WorkLifeBalance, #PodcastEpisode, #FatherBoniface, #JoeRockey, #EthicalBusiness, #CorporateResponsibility, #AIImpact, #SocietalChallenges, #AutomationInEducation, #WorkforceTransformation, #DignityOfWork, #AITools, #MoralGuidance, #RelationshipBuilding
Send us a text
Dive into the rapidly evolving world of AI with Adrian Swinscoe as we wrap up our three-part series on how to keep AI from becoming just another buzzword. This episode unpacks the flood of AI tools hitting the market, the challenge of maintaining a human touch in customer experience, and strategies to avoid the dreaded “tech for tech's sake” trap. Adrian shares a fresh perspective on leveraging AI for operational efficiency—not by cutting costs but by unlocking capacity to deepen customer relationships. We also touch on often-overlooked ethical considerations, from AI's environmental impact to the human labor hidden behind the scenes.
In this episode of the customer success playbook, Adrian Swinscoe expertly navigates the AI hype cycle, reminding us that technology should never lead the charge without a clear strategy rooted in customer experience goals. Adrian advocates flipping the traditional tech-first approach on its head—start with the experience you want to create, then work backward to the data and technology needed. This disciplined mindset steers organizations away from buying shiny tools with no purpose and towards a deliberate, ROI-driven deployment of AI.
What stands out is Adrian's practical example of a forward-thinking e-commerce company that uses AI automation to free their agents from mundane inquiries. Instead of using this newfound efficiency to reduce headcount, they activate new channels to deepen direct customer interactions. This mindset flips the usual script focused on cost-cutting, proving that AI can be a genuine enabler of enriched customer success rather than a simple productivity hack.
The episode also ventures into the less glamorous but crucial topics rarely discussed: the hefty energy consumption demanded by AI's generative models and the ethical conundrum surrounding low-paid labor involved in data annotation. These insights serve as an important reminder that innovation must marry responsibility, aligning with broader business values and the global climate imperative.
For customer success leaders, the takeaways are clear: educating teams on the art of the possible with AI, defining an experience-first strategy, and thoughtfully measuring impact are essential steps to harness AI's power effectively. Above all, there's a call to maintain the human element—after all, let's not trade genuine connection for robotic efficiency.
Now you can interact with us directly by leaving a voice message at https://www.speakpipe.com/CustomerSuccessPlaybook
Please Like, Comment, Share and Subscribe.
You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook
You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn.
You can find Roman at:
Roman Trebon on LinkedIn.
Is the AI industry an unsustainable bubble built on burning billions in cash? We break down the AI hype cycle, the tough job market for developers, and whether a crash is on the horizon. In this panel discussion with Josh Goldberg, Paige Niedringhaus, Paul Mikulskis, and Noel Minchow, we tackle the biggest questions in tech today. * We debate if AI is just another Web3-style hype cycle * Why the "10x AI engineer" is a myth that ignores the reality of software development * The ethical controversy around AI crawlers and data scraping, highlighted by Cloudflare's recent actions Plus, we cover the latest industry news, including Vercel's powerful new AI SDK V5 and what GitHub's leadership shakeup means for the future of developers. Resources Anthropic Is Bleeding Out: https://www.wheresyoured.at/anthropic-is-bleeding-out The Hater's Guide To The AI Bubble: https://www.wheresyoured.at/the-haters-gui No, AI is not Making Engineers 10x as Productive: https://colton.dev/blog/curing-your-ai-10x-engineer-imposter-syndrome Cloudflare Is Blocking AI Crawlers by Default: https://www.wired.com/story/cloudflare-blocks-ai-crawlers-default Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives: https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives GitHub just got less independent at Microsoft after CEO resignation: https://www.theverge.com/news/757461/microsoft-github-thomas-dohmke-resignation-coreai-team-transition Chapters 0:00 Is the AI Industry Burning Cash Unsustainably? 01:06 Anthropic and the "AI Bubble Euphoria" 04:42 How the AI Hype Cycle is Different from Web3 & VR 08:24 The Problem with "Slapping AI" on Every App 11:54 The "10x AI Engineer" is a Myth and Why 17:55 Real-World AI Success Stories 21:26 Cloudflare vs. AI Crawlers: The Ethics of Data Scraping 30:05 Vercel's New AI SDK V5: What's Changed? 33:45 GitHub's CEO Steps Down: What It Means for Developers 38:54 Hot Takes: The Future of AI Startups, the Job Market, and More We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
The episode opens with Bhatt framing the global stakes: from drones on the battlefield to AI-powered early warning systems, militaries worldwide are racing to integrate AI, often citing strategic necessity in volatile security environments. Mohan underscores that AI in conflict cannot be characterized in a single way: applications range from decision-support systems and logistics to disinformation campaigns and border security.
The conversation explores two categories of AI-related risks:
Inherent risks: design flaws, bias in datasets, adversarial attacks, and human-machine trust calibration.
Applied risks: escalation through miscalculation, misuse in targeting, and AI's role as a force multiplier for nuclear and cyber threats.
On governance, Mohan explains the fragmentation of current disarmament processes: AI intersects with multiple regimes (nuclear, cyber, conventional arms) yet lacks a unified framework. She highlights ongoing debates at the UN's Group of Governmental Experts (GGE) on LAWS, where consensus has stalled over definitions, human-machine interaction, and whether regulation should be voluntary or treaty-based.
International humanitarian law (IHL) remains central, with discussions focusing on how principles like distinction, proportionality, and precaution can apply to autonomous systems. Mohan also emphasizes a "life-cycle approach" to weapon assessment, extending legal and ethical oversight from design to deployment and decommissioning.
A significant portion of the conversation turns to gender and bias, an area Mohan has advanced through her research at UNIDIR. She draws attention to how gendered and racial biases encoded in AI systems can manifest in conflict, stressing the importance of diversifying participation in both technology design and disarmament diplomacy.
Looking forward, Mohan cites UN Secretary-General António Guterres's call for a legally binding instrument on autonomous weapons by 2026. She argues that progress will depend on multi-stakeholder engagement, national strategies on AI, and confidence-building measures between states. The episode closes with a reflection on the future of warfare as inseparable from governance innovation—shifting from arms reduction to resilience, capacity-building, and responsible innovation.
Episode Contributors
Shimona Mohan is an associate researcher on Gender & Disarmament and Security & Technology at UNIDIR in Geneva, Switzerland. She was named among Women in AI Ethics' "100 Brilliant Women in AI Ethics for 2024." Her areas of focus include the multifarious intersections of security, emerging technologies (in particular AI and cybersecurity), gender, and disarmament.
Charukeshi Bhatt is a research analyst at Carnegie India, where her work focuses on the intersection of emerging technologies and international security. Her current research explores how advancements in technologies such as AI are shaping global disarmament frameworks and security norms.
Readings
Gender and Lethal Autonomous Weapons Systems, UNIDIR Factsheet
Political Declaration on Responsible Military Use of AI and Autonomy, US Department of State
AI in the Military Domain: A Briefing Note for States by Giacomo Persi Paoli and Yasmin Afina
Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective by Charukeshi Bhatt and Tejas Bharadwaj
Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future.
We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis by tackling the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Latest in Tech: AI Ethics, Apple's Siri Overhaul, US-China Mineral Tensions, and Remote Work Trends
In this episode of 'Hashtag Trending,' host Jim Love discusses Anthropic's new feature allowing its AI chatbot Claude to disconnect from abusive conversations, Perplexity AI's revenue-sharing model with publishers, and Bloomberg's report on Apple's discussions with Google about potentially using the Gemini AI model for Siri. The show also covers President Trump's demand that China stop controlling rare earth minerals vital for tech production, and new US Census data revealing that remote work is stabilizing, contradicting headlines suggesting a return-to-office mandate. Love emphasizes the importance of waiting for real data over sensational headlines.
00:00 Introduction and Host Welcome
00:32 Anthropic's Chatbot Claude and AI Ethics
02:40 Perplexity's Revenue Sharing and Legal Issues
04:14 Apple's Potential Partnership with Google's Gemini
05:45 US-China Tensions Over Rare Earth Magnets
07:58 Remote Work Trends in the US
09:37 Show Conclusion and Listener Engagement
August 25, 2025: Chase Franzen, VP and CISO at Sharp Healthcare, discusses how the organization transformed its cybersecurity training into something so engaging that employees actually call it fun. But as AI capabilities advance at breakneck speed, what happens when traditional phishing indicators disappear and deepfakes become indistinguishable from reality? Chase discusses Sharp's AI ethics committee and its approach to balancing innovation with responsibility, while sharing candid thoughts about AI's true costs. The conversation also explores how failure and discomfort drive growth, touching on everything from real estate disasters to the joy of flying planes.
Key Points:
02:51 Diverse Career Paths: Real Estate, Teaching, and More
08:36 Innovative Cyber Ambassador Program
13:03 AI Cybersecurity Concerns
21:57 Lightning Round: Quotes, Failures, and Airplanes
X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit
In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.
Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.
Key themes we explore:
Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation
Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.
The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.
About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics.
With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.
**Please subscribe to Matt's Substack at https://worthknowing.substack.com/**
The State of Media: Bias, AI Ethics, and the Future of Journalism
Matt joins conservative commentator Dexter Tarbell on A.J. Kierstead's 'The New England Take' to discuss the current state of the media: bias, weird stunts, the shift from traditional news sources to platforms like Substack, and how the business of news is shaping the information and the reality we all experience.
00:18 The Media's Decline: A Bipartisan Discussion
04:06 AI in Journalism: The Jim Acosta Controversy
12:04 The Future of Media: Substack and Beyond
28:13 Concluding Thoughts and Final Remarks
CISA's Emergency Directive to ALL Federal agencies re: SharePoint. NVIDIA firmly says "no" to any embedded chip gimmicks. Dashlane is terminating its (totally unusable) free tier. Malicious repository libraries are becoming even more hostile. The best web filter (uBlock Origin) comes to Safari. The very popular SonicWall firewall is being compromised. >100 models of Dell Latitude and Precision laptops are in danger. The significant challenge of patching SharePoint (for example). A quick look at my DNS Benchmark progress. Does InControl prevent an important update? A venerable sci-fi franchise may be getting a great new series. What to do about the problem of AI "website sucking". Show Notes - https://www.grc.com/sn/SN-1038-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security canary.tools/twit - use code: TWIT uscloud.com go.acronis.com/twit
In this episode of Thinking Out Loud, Nathan and Cameron dive deep into the ethics of AI, language, and what it means to be human in a rapidly advancing technological world. Starting with a provocative question—should Christians use slurs like "clanker" toward robots or AI?—they explore how our language toward machines reflects deeper theological and moral concerns. What begins as a discussion on humor and frustration with technology evolves into a rich conversation about the image of God (Imago Dei), the nature of animals, the soul, and the ethical dangers of humanizing machines. Drawing on philosophy, scripture, and real-world examples, they challenge Christians to think critically about how we engage with artificial intelligence and the impact it has on our character. This episode is essential listening for believers seeking thoughtful, theological reflection on the future of humanity, virtue ethics, and digital culture. Subscribe for more intelligent Christian conversations on tech, ethics, and culture. #ChristianEthics #AIandFaith #ImagoDei #TheologyAndTechnology #VirtueEthics #ThinkingOutLoudPodcast
DONATE LINK: https://toltogether.com/donate
BOOK A SPEAKER: https://toltogether.com/book-a-speaker
JOIN TOL CONNECT: https://toltogether.com/tol-connect
TOL Connect is an online forum where TOL listeners can continue the conversation begun on the podcast.
Dr. Mark van Rijmenam is ranked among the world's best futurists and is known globally for his trademark "Optimistic Dystopian" viewpoint. Recognized by Salesforce as a top voice shaping the future of AI, he's a sought-after speaker on the relationship between innovation and humanity. He delivered the world's first TEDx Talk in VR (2020) and introduced a digital twin that speaks 29 languages (2024). Mark holds a PhD in Management from the University of Technology Sydney, where he studied how organizations can use big data, blockchain, and AI. He's also a six-time author and dedicated endurance athlete.
In this conversation, we discuss:
Why Dr. Mark van Rijmenam believes we need a paradigm shift to prepare society for the long-term consequences of AI and quantum computing
The critical difference between building technology for shareholders versus stakeholders and how that shapes our future
What the "spiral dynamics" framework reveals about humanity's current worldview and its path toward a more interconnected mindset
How banning technology for kids under 16 could protect future generations and reshape digital education
The risks of anthropomorphizing AI and the need to preserve human agency in a world increasingly shaped by machines
What inspired Dr. Mark's sixth book Now What? and how he uses fiction, philosophy, and global cultures to help readers ride the tsunami of change
Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Mark on LinkedIn
AI fun fact article
On Extending Life With AI
Explore more from Dr. Mark van Rijmenam:
Now What? How to Ride the Tsunami of Change
Futurwise Platform — The Fastest Path to your Next Insight
Dr. Mark's TEDx Talk in VR
Real connection means understanding your audience, staying true to yourself, and creating space for others.
How do you communicate who you are, what you stand for, and leave space for others to do the same? At the Stanford Seed Summit in Cape Town, South Africa, three GSB professors explored why real connection is built through authentic communication.
For Jesper Sørensen, authentic organizational communication means talking about a business in ways customers or investors can understand, like using analogies to relate a new business model to one that people already know. For incoming GSB Dean Sarah Soule, authentic communication is about truth, not trends. Her research on "corporate confession" shows that companies build trust when they admit their shortcomings — but only if those admissions connect authentically to their core business. And for Christian Wheeler, authentic communication means suspending judgment of ourselves and others. "We have a tendency to rush to categorization, to assume that we understand things before we really do," he says. "Get used to postponing judgment."
In this special live episode of Think Fast, Talk Smart, host Matt Abrahams and his panel of guests explore communication challenges for budding entrepreneurs. From the risks of comparing yourself to competitors to how your phone might undermine genuine connection, they reveal how authentic communication — whether organizational or personal — requires understanding your audience, staying true to your values, and creating space for others to be heard.
Episode Reference Links:
Jesper Sørensen
Christian Wheeler
Sarah Soule
Ep.194 Live Lessons in Levity and Leadership: Me2We 2025 Part 1
Connect:
Premium Signup >>>> Think Fast Talk Smart Premium
Email Questions & Feedback >>> hello@fastersmarter.io
Episode Transcripts >>> Think Fast Talk Smart Website
Newsletter Signup + English Language Learning >>> FasterSmarter.io
Think Fast Talk Smart >>> LinkedIn, Instagram, YouTube
Matt Abrahams >>> LinkedIn
Chapters:
(00:00) - Introduction
(01:04) - Jesper Sørensen on Strategic Analogies
(04:06) - Sarah Soule on Corporate Confessions
(08:46) - Christian Wheeler on Spontaneity & Presence
(12:06) - Panel Discussion: AI's Role in Research, Teaching, & Life
(17:52) - Professors Share Current Projects
(22:55) - Live Audience Q&A
(32:53) - Conclusion
This episode is sponsored by Stanford. Stay informed on Stanford's world-changing research by signing up for the Stanford Report.
Support Think Fast Talk Smart by joining TFTS Premium.
Superintelligence is coming faster than anyone predicted. In this episode, you'll learn how to upgrade your biology, brain, and consciousness before AI and transhumanism reshape the future of health. Host Dave Asprey sits down with Soren Gordhamer, founder of Wisdom 2.0, to explore what superintelligence in 2027 means for your mind, body, and soul. Watch this episode on YouTube for the full video experience: https://www.youtube.com/@DaveAspreyBPR Soren has spent decades at the intersection of mindfulness, technology, and human development. He advises leaders at OpenAI, Google, and top wellness companies, and he leads global conversations around AI and consciousness. His work bridges ancient wisdom with biohacking, modern neuroscience, and the urgent need to stay human in a machine-dominated world. This episode gives you a tactical roadmap to build resilience before the world tilts. You'll gain practical tools for brain optimization, functional medicine, and biohacking strategies that sharpen cognitive health, reinforce emotional stability, and unlock peak human performance in a digital-first reality. From supplements and nootropics to neuroplasticity techniques, Dave and Soren show you how to protect your biology as AI accelerates beyond human speed. They break down how AI and human health intersect, explain why you need emotional strength to face the future, and offer guidance for raising kids in a world ruled by code. If you're preparing for 2027 superintelligence, navigating AI-driven parenting, or staying ahead of transhumanist health tech, this episode equips you for the coming wave. You'll Learn: • How AI is reshaping human connection, presence, and identity • Why emotional resilience and conscious awareness matter more than ever in an AI-driven world • How to raise connected, grounded children in a hyper-digital environment • What human flourishing looks like when technology outpaces biology • Why investing in presence, purpose, and inner development may be the ultimate upgrade • How leaders in wellness and tech are rethinking personal growth, governance, and ethics in 2027 • What it means to stay truly human—and fully alive—during the rise of superintelligence Dave Asprey is a four-time New York Times bestselling author, founder of Bulletproof Coffee, and the father of biohacking. With over 1,000 interviews and 1 million monthly listeners, The Human Upgrade is the top podcast for people who want to take control of their biology, extend their longevity, and optimize every system in the body and mind. Each episode features cutting-edge insights in health, performance, neuroscience, supplements, nutrition, hacking, emotional intelligence, and conscious living. Episodes drop every Tuesday and Thursday, where Dave asks the questions no one else dares and gives you real tools to become more resilient, aware, and high performing. SPONSORS: - LMNT | Free LMNT Sample Pack with any drink mix purchase by going to https://drinklmnt.com/DAVE. - ARMRA | Go to https://tryarmra.com/ and use the code DAVE to get 15% off your first order. 
Resources: • Dave Asprey's New Book - Heavily Meditated: https://daveasprey.com/heavily-meditated/ • Soren's New Book - The Essential: https://a.co/d/dALv7OS • Soren's Website: www.sorengordhamer.net • Soren's Instagram: https://www.instagram.com/wisdom2events/ • Danger Coffee: https://dangercoffee.com • Dave Asprey's Website: https://daveasprey.com • Dave Asprey's Linktree: https://linktr.ee/daveasprey • Upgrade Labs: https://upgradelabs.com • Upgrade Collective – Join The Human Upgrade Podcast Live: https://www.ourupgradecollective.com • Own an Upgrade Labs: https://ownanupgradelabs.com • 40 Years of Zen – Neurofeedback Training for Advanced Cognitive Enhancement: https://40yearsofzen.com See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.