Is the AI industry an unsustainable bubble built on burning billions in cash? We break down the AI hype cycle, the tough job market for developers, and whether a crash is on the horizon. In this panel discussion with Josh Goldberg, Paige Niedringhaus, Paul Mikulskis, and Noel Minchow, we tackle the biggest questions in tech today.

* We debate whether AI is just another Web3-style hype cycle
* Why the "10x AI engineer" is a myth that ignores the reality of software development
* The ethical controversy around AI crawlers and data scraping, highlighted by Cloudflare's recent actions

Plus, we cover the latest industry news, including Vercel's powerful new AI SDK V5 and what GitHub's leadership shakeup means for the future of developers.

Resources
Anthropic Is Bleeding Out: https://www.wheresyoured.at/anthropic-is-bleeding-out
The Hater's Guide To The AI Bubble: https://www.wheresyoured.at/the-haters-gui
No, AI is not Making Engineers 10x as Productive: https://colton.dev/blog/curing-your-ai-10x-engineer-imposter-syndrome
Cloudflare Is Blocking AI Crawlers by Default: https://www.wired.com/story/cloudflare-blocks-ai-crawlers-default
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives: https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives
GitHub just got less independent at Microsoft after CEO resignation: https://www.theverge.com/news/757461/microsoft-github-thomas-dohmke-resignation-coreai-team-transition

Chapters
00:00 Is the AI Industry Burning Cash Unsustainably?
01:06 Anthropic and the "AI Bubble Euphoria"
04:42 How the AI Hype Cycle is Different from Web3 & VR
08:24 The Problem with "Slapping AI" on Every App
11:54 Why the "10x AI Engineer" is a Myth
17:55 Real-World AI Success Stories
21:26 Cloudflare vs. AI Crawlers: The Ethics of Data Scraping
30:05 Vercel's New AI SDK V5: What's Changed?
33:45 GitHub's CEO Steps Down: What It Means for Developers
38:54 Hot Takes: The Future of AI Startups, the Job Market, and More

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
August 25, 2025: Chase Franzen, VP and CISO at Sharp Healthcare, discusses how they transformed their cybersecurity training into something so engaging that employees actually call it fun. But as AI capabilities advance at breakneck speed, what happens when traditional phishing indicators disappear and deepfakes become indistinguishable from reality? Chase discusses Sharp's AI ethics committee and their approach to balancing innovation with responsibility, while sharing candid thoughts about AI's true costs. The conversation also explores how failure and discomfort drive growth, touching on everything from real estate disasters to the joy of flying planes.

Key Points:
02:51 Diverse Career Paths: Real Estate, Teaching, and More
08:36 Innovative Cyber Ambassador Program
13:03 AI Cybersecurity Concerns
21:57 Lightning Round: Quotes, Failures, and Airplanes

X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit

In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Key themes we explore:
Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation

Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.

About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics. With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.

This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.
**Please subscribe to Matt's Substack at https://worthknowing.substack.com/**

The State of Media: Bias, AI Ethics, and the Future of Journalism

Matt joins conservative commentator Dexter Tarbell on A.J. Kierstead's 'The New England Take' to discuss the current state of the media: bias, weird stunts, the shift from traditional news sources to platforms like Substack, and how the business of news is shaping the information and the reality we all experience.

00:18 The Media's Decline: A Bipartisan Discussion
04:06 AI in Journalism: The Jim Acosta Controversy
12:04 The Future of Media: Substack and Beyond
28:13 Concluding Thoughts and Final Remarks
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
Bestselling author, political strategist and former Georgia State Representative Stacey Abrams will headline the inaugural KPBS San Diego Book Festival on Aug. 23.

Abrams joined Midday Edition on Thursday to talk about her latest book, "Coded Justice," which dives into the ethical questions around the use of AI in the healthcare industry.

"What I want us to think about with AI is that it's an extraordinarily powerful technology that is controlled by people," Abrams said. "And that means people have to understand what's happening and that means other people have to question where it comes from, what it does and what impact it will have on us."

Plus, KPBS' Beth Accomando looks at how a new all-women acting company is flipping the script on Shakespearean plays. Then, Julia Dixon Evans shares her top picks for arts events this weekend, including meteor showers, visual art about caregiving and a children's film festival.

Guests:
Stacey Abrams, author of "Coded Justice," former Georgia State Representative and two-time gubernatorial candidate
Audrey Sweet, co-founder of the Queen's Men
Charlotte B. Larson, co-founder of the Queen's Men
Julia Dixon Evans, arts reporter, KPBS
CISA's Emergency Directive to ALL Federal agencies re: SharePoint. NVIDIA firmly says "no" to any embedded chip gimmicks. Dashlane is terminating its (totally unusable) free tier. Malicious repository libraries are becoming even more hostile. The best web filter (uBlock Origin) comes to Safari. The very popular SonicWall firewall is being compromised. More than 100 models of Dell Latitude and Precision laptops are in danger. The significant challenge of patching SharePoint (for example). A quick look at my DNS Benchmark progress. Does InControl prevent an important update? A venerable sci-fi franchise may be getting a great new series. What to do about the problem of AI "website sucking."

Show Notes - https://www.grc.com/sn/SN-1038-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
zscaler.com/security
canary.tools/twit - use code: TWIT
uscloud.com
go.acronis.com/twit
How does a particle physicist end up shaping the UK Government's approach to artificial intelligence? In this thought‑provoking episode, Andrew Grill sits down with Dr Laura Gilbert CBE, former Director of Data Science at 10 Downing Street and now the Senior Director of AI at the Tony Blair Institute.

Laura's unique career path, from academic research in physics to the heart of policymaking, gives her a rare perspective on how governments can use emerging technologies not just efficiently, but humanely. She shares candid insights into how policy teams think about digital transformation, why the public sector faces very different challenges to private industry, and how to avoid technology that dehumanises decision‑making.

Drawing on examples from her work in Whitehall, Laura discusses the realities of forecasting in AI, the danger of "buzzword chasing", and why the next breakthrough in Artificial General Intelligence might well come from an unexpected player, possibly from within government itself.

This is a conversation for anyone curious about the intersection of science, policy, ethics, and technology, and how they can combine to make government more responsive, transparent, and human-centred.

What You'll Learn in This Episode
How Laura Gilbert moved from particle physics research into government AI leadership
The strategic role of AI in shaping modern policy and public services
Why forecasting in AI is harder than it looks—and how this impacts decision‑makers
The balance between technical capability and human‑centred governance
Why governments must look beyond the tech giants for innovative solutions
Lessons from the Evidence House and AI for Public Good programmes

Resources
Tony Blair Global Institute Website
UK Government AI Incubator
Laura on LinkedIn
Raindrop.io bookmarking app

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill. For more on Andrew - what he speaks about and recent talks - please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
In this episode of Thinking Out Loud, Nathan and Cameron dive deep into the ethics of AI, language, and what it means to be human in a rapidly advancing technological world. Starting with a provocative question—should Christians use slurs like "clanker" toward robots or AI?—they explore how our language toward machines reflects deeper theological and moral concerns. What begins as a discussion on humor and frustration with technology evolves into a rich conversation about the image of God (Imago Dei), the nature of animals, the soul, and the ethical dangers of humanizing machines. Drawing on philosophy, scripture, and real-world examples, they challenge Christians to think critically about how we engage with artificial intelligence and the impact it has on our character. This episode is essential listening for believers seeking thoughtful, theological reflection on the future of humanity, virtue ethics, and digital culture. Subscribe for more intelligent Christian conversations on tech, ethics, and culture.

#ChristianEthics #AIandFaith #ImagoDei #TheologyAndTechnology #VirtueEthics #ThinkingOutLoudPodcast

DONATE LINK: https://toltogether.com/donate
BOOK A SPEAKER: https://toltogether.com/book-a-speaker
JOIN TOL CONNECT: https://toltogether.com/tol-connect
TOL Connect is an online forum where TOL listeners can continue the conversation begun on the podcast.
Dr. Mark van Rijmenam is ranked among the world's best futurists and is known globally for his trademark "Optimistic Dystopian" viewpoint. Recognized by Salesforce as a top voice shaping the future of AI, he's a sought-after speaker on the relationship between innovation and humanity. He delivered the world's first TEDx Talk in VR (2020) and introduced a digital twin that speaks 29 languages (2024). Mark holds a PhD in Management from the University of Technology Sydney, where he studied how organizations can use big data, blockchain, and AI. He's also a six-time author and dedicated endurance athlete.

In this conversation, we discuss:
Why Dr. Mark van Rijmenam believes we need a paradigm shift to prepare society for the long-term consequences of AI and quantum computing
The critical difference between building technology for shareholders versus stakeholders and how that shapes our future
What the "spiral dynamics" framework reveals about humanity's current worldview and its path toward a more interconnected mindset
How banning technology for kids under 16 could protect future generations and reshape digital education
The risks of anthropomorphizing AI and the need to preserve human agency in a world increasingly shaped by machines
What inspired Dr. Mark's sixth book Now What? and how he uses fiction, philosophy, and global cultures to help readers ride the tsunami of change

Resources:
Subscribe to the AI & The Future of Work Newsletter
Connect with Mark on LinkedIn
AI fun fact article
On Extending Life With AI

Explore more from Dr. Mark van Rijmenam:
Now What? How to Ride the Tsunami of Change
Futurwise Platform — The Fastest Path to your Next Insight
Dr. Mark's TEDx Talk in VR
Today's discussion comes from our 2025 Annual Conference, The Rise of AI and Automation. For the next five weeks, we'll feature a series of panel discussions from our conference. Today's episode is part of our second panel, "Does AI Have an Ethics Problem?", and will be followed by two panels on Practical Applications of AI and AI and Inequality.

Our panel is led by Dr. Sandeep Sacheti and was recorded in June of 2025.

Dr. Sandeep Sacheti is a recognized leader in data-driven decision making and operational excellence. As a former Executive Vice President at Wolters Kluwer, he successfully led a global team in delivering innovative solutions in regulatory compliance and financial services that significantly improved business performance, customer experience, and employee engagement. His expertise spans a wide range of areas, including data analytics, risk management, and operational transformation, making him a sought-after advisor and mentor. He holds 20+ patents in information management, customer relationship management, and fraud detection. Besides Wolters Kluwer, he has held senior positions at UBS and American Express. He currently serves on the Board of Advisors at Stevens Institute of Technology as Industry Chair, bridging academia and industry, and is a Board Member at the College of Natural Resources, University of California at Berkeley. An award-winning thought leader in AI, business transformation, and AI-enabled compliance solutions, he holds a Ph.D. from UC Berkeley and a Master's from the University of Massachusetts Amherst, fueling his lifelong commitment to innovation and mentorship.

Together, we discussed the importance of regulating AI with an ethical lens, the different applications of AI across society, and why we can't survive without it.

To check out more of our content, including our research and policy tools, visit our website: https://www.hgsss.org/
Co-hosts Mark Thompson and Steve Little explore OpenAI's groundbreaking ChatGPT Agent, demonstrating how this autonomous tool can research, analyze, and perform complex tasks on your behalf.

Next, they address important security concerns to consider in the new world of AI agents, introducing practical guidelines for protecting sensitive family data and avoiding prompt injection attacks.

This week's Tip of the Week provides a back-to-basics guide on what AI is and its four core strengths: summarization, extraction, generation, and translation.

In RapidFire, they discuss OpenAI's rumored office suite, Microsoft and Google's own efforts to integrate AI into their office suites, and recently announced AI infrastructure investments, including Meta's Manhattan-sized data center and President Trump's new AI Action Plan.

The hosts also announce their new Family History AI Show Academy, a five-week course beginning in October of 2025. See https://tixoom.app/fhaishow/ for more details.

Timestamps:
In the News:
05:20 ChatGPT Agent: Autonomous Research Assistant for Genealogists
22:49 Safe and Secure in the Age of AI
Tip of the Week:
36:20 What is AI and What is it Good For? Back to Basics
RapidFire:
50:57 OpenAI's Office Suite Rumors
53:56 Microsoft and Google Bring AI to Their Office Suites
60:17 Big AI Infrastructure: Manhattan-Sized Data Centers

Resource Links:
Introduction to Family History AI
https://tixoom.app/fhaishow/
Do agents work in the browser?
https://www.bensbites.com/p/do-agents-work-in-the-browser
Introducing ChatGPT agent: bridging research and action
https://openai.com/index/introducing-chatgpt-agent/
OpenAI's new ChatGPT Agent can control an entire computer and do tasks for you
https://www.theverge.com/ai-artificial-intelligence/709158/openai-new-release-chatgpt-agent-operator-deep-research
OpenAI's New ChatGPT Agent Tries to Do It All
https://www.wired.com/story/openai-chatgpt-agent-launch/
Agent demo post
https://x.com/rowancheung/status/1945896543263080736
OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office
https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office
OpenAI Is Quietly Creating Tools to Take on Microsoft Office and Google Workspace
https://www.theglobeandmail.com/investing/markets/stocks/MSFT/pressreleases/33074368/openai-is-quietly-creating-tools-to-take-on-microsoft-office-and-google-workspace-googl/
What's new in Microsoft 365 Copilot?
https://techcommunity.microsoft.com/blog/microsoft365copilotblog/what%E2%80%99s-new-in-microsoft-365-copilot--june-2025/4427592
Google Workspace enables the future of AI-powered work for every business
https://workspace.google.com/blog/product-announcements/empowering-businesses-with-AI
Google Workspace Review: Will it Serve My Needs?
https://www.emailtooltester.com/en/blog/google-workspace-review/

Tags: Artificial Intelligence, Genealogy, Family History, AI Agents, ChatGPT Agent, OpenAI, Computer Use, AI Security, Prompt Injection, Database Analysis, RootsMagic, Cemetery Records, AI Office Suite, Microsoft 365 Copilot, Google Workspace, Data Centers, AI Infrastructure, Natural Language Processing, Large Language Models, Context Windows, AI Education, Family History AI Show Academy, AI Reasoning Models, Autonomous Research, AI Ethics
Today we will discuss Delta roping in AI technology to help with airfare pricing, Embraer looking to Tunisia for a new manufacturing location, the UK claiming it will soon have a supersonic fighter jet, two airlines fined heavily for violating merger agreements, and Etihad bringing back more A380s.

Check out the Instagram @theaviationfiles for more fun content!
'Proclaim Liberty' with Clint Armitage (Christian Liberty, Motivation & Leadership)
Go check out our newest "Just A Reminder..." QR Code shirt. It was designed for only one reason: to let other people know how much Jesus loves them! If you want to see the video on Sean's channel for yourself, click this link: https://youtu.be/O4ldn90Fijs

In this thought-provoking episode of the Radio Coffee House, host Clint Armitage delves into a fascinating conversation between a curious individual named Sean and ChatGPT, an AI language model. Sean's inquiries about prophecy and the motives behind AI lead to astonishing revelations that intertwine with biblical themes. As they navigate through a series of questions, the AI's responses raise eyebrows and provoke deep reflections on the nature of control, influence, and the future of humanity. Listeners will be captivated by the exploration of seven steps of control outlined by ChatGPT, including influence, dependence, submission, and even hints at the ominous concept of the Mark of the Beast. Clint unpacks these revelations, drawing parallels to scripture and encouraging listeners to consider the implications of reliance on technology in our modern lives. The episode challenges us to reflect on our relationship with technology and the potential consequences of surrendering our agency. As the conversation unfolds, Clint emphasizes the importance of grounding our understanding in biblical truth, urging listeners to remain vigilant and discerning in a world filled with uncertainty. With insights from Revelation and a call to stay close to the Lord, this episode serves as both a warning and a source of hope. Join Clint for this intriguing discussion that blends technology, prophecy, and faith, and discover how to navigate the complexities of our times with wisdom and grace.
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity

Guest: Maury Rogow
CEO, Rip Media Group | I grow businesses with AI + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5 billion. Published: Inc, Entrepreneur, Forbes
On LinkedIn: https://www.linkedin.com/in/mauryrogow/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I sat across - metaversically speaking - from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.

⸻ Article ⸻

I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream? Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity.
Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?

I've had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn't feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I'll write it myself.” But even in that frustration, something creative happened. That tension? It's real. It's generative. And it's something Maury deeply understands.

“Companies don't know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That's when they fail.”

Now that AI is democratizing storytelling tools, the danger isn't that no one can create—it's that everyone's content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.

He laughed when I told him about my AI struggles. “It's like the writer that's tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?

Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.

We're not just talking about job displacement. We're talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.

And yet… there's hope. Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that's the spark AI can't replicate.
That's the emotional archaeology of being human. The machines are learning to dream. But maybe—just maybe—we're the ones who still know what dreams are worth having.

Cheers,
Marco

⸻ Keywords ⸻

artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership

__________________

Enjoy. Reflect. Share with your fellow humans. And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
Today's discussion comes from our 2025 Annual Conference, The Rise of AI and Automation. For the next six weeks, we'll feature a series of panel discussions from our conference. Today's episode is part of our panel “AI and Labor: Disruption, Disempowerment, or Empowerment,” and will be followed by three panels: AI Ethics; Practical Applications of AI; and, to conclude, AI and Inequality.

Today's discussion is led by our keynote speaker, Mr. Fred Harrison, and was recorded in June of 2025. Mr. Harrison received his bachelor's from Oxford University and his master's from the University of London. He is a veteran journalist who has served at multiple news agencies, such as The People and Wellington Journal. In 1988, he became the Research Director of the Land Research Trust, London, and has advised several corporations and international governments on tax and economic policy. Fred emphasizes the housing market and its interaction with the economy as a whole. He is the author of many books, including The Corruption of Economics, The Power in the Land, and A Philosophy for a Fair Society, all of which critique mainstream economic thinking.

Fred joined the Henry George School to discuss robotics, how we justify automation economically, and why recreating the physical world in the metaverse is problematic.

To check out more of our content, including our research and policy tools, visit our website: https://www.hgsss.org/
My productivity hack: https://www.magicmind.com/FITMESS20
Use my code FITMESS20 for 20% off #magicmind

----

Who controls the machines when AI gets superpowers? We're living through the most significant technological shift in human history, and most people are arguing about all the wrong things. While culture warriors battle over Superman's immigration status, the real story is staring us in the face: the war for control of artificial intelligence. This isn't some distant sci-fi fantasy anymore – it's happening right now, and the stakes couldn't be higher.

The newest Superman movie accidentally became the perfect metaphor for our AI moment. You've got Lex Luthor commanding an army of machines like he's playing the world's most dangerous video game, while Superman fights back with his own AI companions. Sound familiar? That's because we're already living it. The question isn't whether humans and machines will merge – it's whether the good guys or the Lex Luthors of the world get to decide how it happens.

Listen now to discover how a comic book movie reveals the three critical choices we're making about AI right now that will determine whether technology saves humanity or enslaves it.

10 Topics Discussed:
* The BroBots Rebrand - Why The Fit Mess is evolving into something bigger as we embrace the human-machine future
* Superman as AI Metaphor - How James Gunn's film accidentally became the perfect commentary on our current AI moment
* The Lex Luthor Problem - Why the people building AI might not be the people we want controlling it
* Intent vs Technology - How AI amplifies human nature, both good and evil, rather than changing it
* The Video Game Controller War - Lex Luthor's command system and what it reveals about human-machine interfaces
* Mr. Terrific's Cool Factor - Why the best AI integration makes humans more capable, not obsolete
* Biological Augmentation - The Engineer's sacrifice and what giving up humanity for technology really costs
* Real-World Supervillains - How tech billionaires are becoming the comic book antagonists we used to only fear in fiction
* Breaking Echo Chambers - Why putting down your screen and talking to real humans is the ultimate AI defense
* The Culture War Distraction - How fake outrage over Superman's "woke" themes distracts from the real technological threats

----

NEW WEBSITE: www.brobots.me

----

MORE FROM THE FIT MESS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to The Fit Mess on YouTube
Join our community in the Fit Mess Facebook group

----

LINKS TO OUR PARTNERS:
* Take control of how you'd like to feel with Apollo Neuro
* Explore the many benefits of cold therapy for your body with Nurecover
* Muse's Brain Sensing Headbands Improve Your Meditation Practice
* Get started as a Certified Professional Life Coach!
* Get a Free One Year Supply of AG1 Vitamin D3+K2, 5 Travel Packs
* Revamp your life with Bulletproof Coffee
* You Need a Budget helps you quickly get out of debt, and save money faster!
* Start your own podcast!
As the Vatican seeks to harness social media to spread its message, others are warning that artificial intelligence poses a huge challenge to all religion. Could AI even be a rival to faith, projecting itself as a source of wisdom that's neither human nor divine?

Professor Beth Singler of the University of Zurich is the author of the new book, Religion and Artificial Intelligence.

GUEST: Professor Beth Singler - Assistant Professor in Digital Religions at the University of Zurich
In this episode, host Jethro Jones discusses the crucial topic of AI and cybersecurity with Sam Bourgeois, an experienced IT director with a background in private industry and education. The conversation covers the importance of AI standards, the ethical implications of AI use, and the need for cybersecurity awareness among young people. Sam introduces 'Make It Secure Academy,' an innovative platform aimed at educating students about cybersecurity through interactive and engaging methods. The episode emphasizes the critical need to incorporate these lessons into everyday education to protect children in an increasingly digital world.

Cybertraps Podcast
* AI standards, AI ethics, and cybersecurity for kids
* Working for a company that has an international footprint
* How to support someone who wants to bring on tools: guardrails, not blockades
* NIST and regulations around AI
* Is it worthwhile for kids to learn standards about AI usage? A student should know and recognize there are correct and incorrect ways to use AI. With great power comes great responsibility.
* Make It Secure Academy
* Once data is exposed, they're being watched and tracked all the time; kids will turn 18 with data exposed for years
* How to teach kids without it being a gotcha!
* On a mission to protect every kid, one kid at a time

About Sam Bourgeois
Sam is the leader of a large managed services provider in the US serving global customers ranging from defense to education. He is the Sr. Dir. of Technology and Cybersecurity and leads the visioning of new products and services, oversees DevSecOps teams, and serves as the cyber leader of the organization and many clients. He has deep telecommunication, IT, education, and corporate training industry experience, and is passionate about serving those in need, whether it's in Rotary or non-profit board membership.
Socials: @makeitsecurellc on Instagram and Facebook
LinkedIn: https://www.linkedin.com/company/102108099
Webpresence LLC - https://www.makeitsecurellc.com/home
501c3 - https://www.make-it-secure.org/
LMS - https://makeitsecure.academy/
Intro to the LMS and Courses - https://youtu.be/xEyFXhe6Z3E

We're thrilled to be sponsored by IXL. IXL's comprehensive teaching and learning platform for math, language arts, science, and social studies is accelerating achievement in 95 of the top 100 U.S. school districts. Loved by teachers and backed by independent research from Johns Hopkins University, IXL can help you do the following and more:
* Simplify and streamline technology
* Save teachers' time
* Reliably meet Tier 1 standards
* Improve student performance on state assessments
Programming AI Ethics challenges researchers to design systems that follow human intent—always. In this episode, we explore the limits of programming morality into code.

Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
ITSPmagazine Weekly Update | From Black Hat to Black Sabbath / Ozzy: AI Agents and Guitars (again!) + Entry Level Cybersecurity Jobs, Robots Evolution, and the Weekly Recap You Didn't Expect - On Marco & Sean's Random & Unscripted Podcast

__________________

Marco Ciappelli and Sean Martin are back with another random and unscripted weekly recap—from pre-Black Hat buzz and AI agents to vintage wood guitars, talent gaps, and Glen Miller debates. This week's reflection hits tech, music, and philosophy in all the right ways. Tune in, ramble with us, and subscribe.

__________________

Full Blog Article

This week's recap was a ride. Sean and I kicked things off with the big news: we're officially consistent. Weekly recap number… I lost count. But we're doing it. We covered what ITSPmagazine's been working on, what we've been publishing, and where our minds are wandering lately (spoiler: everywhere).

Black Hat USA 2025 is just around the corner, and we're deep into prep mode. I even bought a paper map. Why? I don't know. But we've got some great pre-event conversations already out—like our annual chat with Black Hat GM Steve Wylie, plus briefings with Dropzone AI (get ready for “agentic automation” to be the next big buzzword) and Akamai (yes, bots and APIs again, but with a solid strategy twist).

We also talked about a fantastic episode Sean did on resonance and reinvention—featuring Cindy, a luthier in NYC who builds custom guitars using century-old beams from historic buildings. The pickups even use the old nails. Music and wood with a past life. It's beautiful stuff.

Speaking of stories, I officially closed down the Storytelling podcast. But don't worry—I'm still telling stories. I've just shifted focus to “Redefining Society and Technology,” my newsletter and podcast series where I explore how humans and tech evolve together. This week's edition tackled the merging of humans and machines as a new species. Isaac Asimov meets Andy Clark.

We also got a bit philosophical about AI and jobs. If machines take over the “easy” roles, where do humans begin? Are we cutting off our own training paths?

Sean's episode with John Solomon dug into the cybersecurity hiring crisis—challenging the idea that we have a “talent gap.” The real issue? We're not hiring or nurturing people properly.

Oh, and I finally released my long-overdue interview with Michael Sheldrick from Global Citizen. Music. Social impact. Doing good. It's all there. I'm honored to support even a small piece of what he's building.

And yes… Ozzy. RIP. Music never dies.

So if you're into random reflections with meaning, tech with humanity, and stories that don't always follow the rules—subscribe, share, and join the ride. See you in Vegas. Or the future. Or somewhere in between.

________________

Keywords

Black Hat USA 2025, ITSPmagazine recap, Marco Ciappelli, Sean Martin, cybersecurity podcast, AI in cybersecurity, agentic automation, Dropzone AI, Akamai APIs, HITRUST security, Global Citizen, Michael Sheldrick, storytelling podcast, Redefining Society, Andy Clark, Isaac Asimov, human-machine evolution, cybersecurity talent gap, custom guitar NYC, Ozzy tribute

Hosts links:
Real connection means understanding your audience, staying true to yourself, and creating space for others.

How do you communicate who you are, what you stand for, and leave space for others to do the same? At the Stanford Seed Summit in Cape Town, South Africa, three GSB professors explored why real connection is built through authentic communication.

For Jesper Sørensen, authentic organizational communication means talking about a business in ways customers or investors can understand, like using analogies to relate a new business model to one that people already know. For incoming GSB Dean Sarah Soule, authentic communication is about truth, not trends. Her research on "corporate confession" shows that companies build trust when they admit their shortcomings — but only if those admissions connect authentically to their core business. And for Christian Wheeler, authentic communication means suspending judgment of ourselves and others. “We have a tendency to rush to categorization, to assume that we understand things before we really do,” he says. “Get used to postponing judgment.”

In this special live episode of Think Fast, Talk Smart, host Matt Abrahams and his panel of guests explore communication challenges for budding entrepreneurs.
From the risks of comparing yourself to competitors to how your phone might undermine genuine connection, they reveal how authentic communication — whether organizational or personal — requires understanding your audience, staying true to your values, and creating space for others to be heard.

Episode Reference Links:
* Jesper Sørensen
* Christian Wheeler
* Sarah Soule
* Ep. 194 Live Lessons in Levity and Leadership: Me2We 2025 Part 1

Connect:
* Premium Signup >>> Think Fast Talk Smart Premium
* Email Questions & Feedback >>> hello@fastersmarter.io
* Episode Transcripts >>> Think Fast Talk Smart Website
* Newsletter Signup + English Language Learning >>> FasterSmarter.io
* Think Fast Talk Smart >>> LinkedIn, Instagram, YouTube
* Matt Abrahams >>> LinkedIn

Chapters:
(00:00) - Introduction
(01:04) - Jesper Sørensen on Strategic Analogies
(04:06) - Sarah Soule on Corporate Confessions
(08:46) - Christian Wheeler on Spontaneity & Presence
(12:06) - Panel Discussion: AI's Role in Research, Teaching, & Life
(17:52) - Professors Share Current Projects
(22:55) - Live Audience Q&A
(32:53) - Conclusion

*****

This episode is sponsored by Stanford. Stay informed on Stanford's world-changing research by signing up for the Stanford Report.

Support Think Fast Talk Smart by joining TFTS Premium.
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of “From Ideas to Impact”

Guest: Michael Sheldrick
Co-Founder, Global Citizen | Author of “From Ideas to Impact” (Wiley 2024) | Professor, Columbia University | Speaker, Board Member, and Forbes.com Contributor
Website: https://michaelsheldrick.com
On LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/
Global Citizen: https://www.globalcitizen.org/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

Michael Sheldrick returns to Redefining Society and Technology to share how Global Citizen has mobilized billions in aid and inspired millions through music, tech, and collective action. From social media activism to systemic change, this conversation explores how creativity and innovation can fuel a global movement for good.

⸻ Article ⸻

Sometimes, the best stories are the ones that keep unfolding — and Michael Sheldrick's journey is exactly that. When we first spoke, Global Citizen had just (almost) released their book From Ideas to Impact.
This time, I invited Michael back on Redefining Society and Technology because his story didn't stop at the last chapter.

From a high school student in Western Australia who doubted his own potential, to co-founding one of the most influential global advocacy movements — Michael's path is a testament to what belief and purpose can spark. And when purpose is paired with music, technology, and strategic activism? That's where the real magic happens.

In this episode, we dig into how Global Citizen took the power of pop culture and built a model for global change. Picture this: a concert ticket you don't buy, but earn by taking action. Signing petitions, tweeting for change, amplifying causes — that's the currency. It's simple, smart, and deeply human.

Michael shared how artists like John Legend and Coldplay joined their mission not just to play music, but to move policy. And they did — unlocking over $40 billion in commitments, impacting a billion lives. That's not just influence. That's impact.

We also talked about the role of technology. AI, translation tools, Salesforce dashboards, even Substack — they're not just part of the story, they're the infrastructure. From grant-writing to movement-building, Global Citizen's success is proof that the right tools in the right hands can scale change fast.

Most of all, I loved hearing how digital actions — even small ones — ripple out globally. A girl in Shanghai watching a livestream. A father in Utah supporting his daughters' activism. The digital isn't just real — it's redefining what real means.

As we wrapped, Michael teased a new bonus chapter he's releasing, The Innovator. Naturally, I asked him back when it drops. Because this conversation isn't just about what's been done — it's about what comes next.

So if you're wondering where to start, just remember Eleanor Roosevelt's quote Michael brought back: “The way to begin is to begin.”

Download the app. Take one action.
The world is listening.

Cheers,
Marco

⸻ Keywords ⸻

Society and Technology, AI ethics, generative AI, tech innovation, digital transformation, tech, technology, Global Citizen, Michael Sheldrick, ending poverty, pop culture activism, technology for good, social impact, digital advocacy, Redefining Society, AI in nonprofits, youth engagement, music and change, activism app, social movements, John Legend, sustainable development, global action, climate change, eradicating polio, tech for humanity, podcast on technology

__________________

Enjoy. Reflect. Share with your fellow humans. And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
Today's discussion comes from our 2025 Annual Conference, The Rise of AI and Automation. For the next seven weeks, we'll feature a series of panel discussions from our conference. Today's episode is part of our panel “AI and Labor: Disruption, Disempowerment, or Empowerment?” It will be followed by three panels: AI Ethics; Practical Applications of AI; and, to conclude, AI and Inequality.

Today's discussion is led by our returning panelist, Dr. Ansel Schiavone. Dr. Schiavone is a heterodox economist whose work emphasizes the role of labor in creating value. Currently, he is a professor at St. John's University, where he researches macroeconomics, poverty & inequality, and political economy. He has held research positions at the Institute for New Economic Thinking and the International Labor Organization. His research has been published in numerous economics journals, such as Metroeconomica, Economic Modelling, and the Review of Social Economy. Dr. Schiavone earned his bachelor's degree in computer science from Denison University and his Ph.D. from the University of Utah.

Dr. Schiavone joined the Henry George School to discuss how AI will impact labor's relationship with capital, the neoclassical definition of technology, and how AI could create more jobs, not take them away.

To check out more of our content, including our research and policy tools, visit our website: https://www.hgsss.org/
In this episode of Breaking Math, Autumn explores the complex world of AI ethics, focusing on its implications in education, the accuracy of AI systems, the biases inherent in algorithms, and the challenges of data privacy. The discussion emphasizes the importance of ethical considerations in mathematics and computer science, advocating for transparency and accountability in AI systems. Autumn also highlights the role of mathematicians in addressing these ethical dilemmas and the need for society to engage critically with AI technologies.

Takeaways
* AI systems can misinterpret student behavior, leading to false accusations.
* Bias in AI reflects historical prejudices encoded in data.
* Predictive analytics can help identify at-risk students but may alter their outcomes.
* Anonymization of data is often ineffective in protecting privacy.
* Differential privacy offers a way to share data while safeguarding individual identities.
* Ethics should be a core component of algorithm design.
* The impact of biased algorithms can accumulate over time.
* Mathematicians must understand both technical and human aspects of AI.
* Society must question the values embedded in AI systems.
* Small changes in initial conditions can lead to vastly different outcomes.

Chapters
00:00 Introduction to AI Ethics
02:14 The Accuracy and Implications of AI in Education
04:14 Bias in AI and Its Consequences
05:45 Data Privacy Challenges in AI
06:37 Mathematical Solutions for Ethical AI
08:04 The Role of Mathematicians in AI Ethics
09:42 The Future of AI and Ethical Considerations

Subscribe to Breaking Math wherever you get your podcasts. Become a patron of Breaking Math for as little as a buck a month.

Follow Breaking Math on Twitter, Instagram, LinkedIn, Website, YouTube, TikTok
Follow Autumn on Twitter and Instagram
Become a guest here
Email: breakingmathpodcast@gmail.com
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Programming AI Ethics challenges researchers to design systems that follow human intent—always. How do we retain control while still enabling intelligence to grow?

Try AI Box: https://aibox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
How do we prepare students—and ourselves—for a world where AI grief companions and "deadbots" are a reality? In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss:
* The rise of AI companions like Character.AI and Replika
* Emotional manipulation risks and the ethics of human-AI relationships
* What educators need to know about the EU AI Act and digital consent
* How to teach AI literacy beyond skill-building—focusing on ethics, emotional health, and the environmental impact of generative AI
* Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI

From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society—and how we can avoid repeating the harms of the social media era.

Dr. Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; related to the latter research stream is the work on AI, media, and communications that he is leading at LCFI.

Connect with him:
https://link.springer.com/article/10.1007/s13347-024-00744-w
https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8
https://www.desirableai.com/journalism-toolkit
ITSPmagazine Weekly Update | From AI Agents to Tape Mixes, to Guitars and Black Hat Buzzwords and much more with Marco & Sean's Random & Unscripted Podcast ⸻ In this weekly unscripted update, Marco Ciappelli and Sean Martin catch up on their latest stories, from AI agents replacing SOC analysts to mixtape nostalgia and vintage guitars made from NYC history. They also tease big things coming at Black Hat USA and reflect on why collaboration is core to ITSPmagazine. ⸻ In this week's Random and Unscripted episode, Marco Ciappelli and Sean Martin return with another lively behind-the-scenes update from the ITSPmagazine world. As always, the conversation flows unpredictably—from music and nostalgia to cybersecurity, AI, and everything in between. Marco kicks off the episode by confessing he saw ASIS live—twice—and is now on a mission for the perfect mod haircut. Sean follows with an unexpected review of an avant-garde opera at Lincoln Center, which explores humanity's attempt to extend life through technology. That sets the stage for deeper reflection on AI, with both co-founders digging into the role of AI agents in cybersecurity operations. Sean recaps his recent contributor-led newsletters on threat intelligence and AI-powered SOC roles. Marco, meanwhile, teases the next chapter in his “Robbie the Robot” newsletter series, which will explore the merger of humans and machines. The episode also spotlights a series of published interviews: a brand story with Greg and John from White Knight Labs, Marco's conversation with Ken Munro wrapping up Infosecurity Europe 2025, and an episode with Abadesi from the Women in Cybersecurity track—discussing how diverse teams build better tech. Sean also drops new Music Evolves episodes, including a conversation with Summer McCoy of the Mixtape Museum and a new story on Carmine Guitars, where vintage NYC wood is repurposed into one-of-a-kind instruments. 
That sparks a philosophical reflection from Marco on the contrast between analog warmth and digital impermanence. As the episode winds down, Marco and Sean turn their attention to Black Hat USA 2025. With sponsorships nearly sold out, they encourage companies to claim one of the last remaining spots. They also preview an upcoming live webinar where they'll debate the event's inevitable buzzwords with industry peers. As always, the tone is informal, curious, and community-driven. If you want the inside scoop on what's shaping the stories and strategies at ITSPmagazine—this is the episode to hear. ⸻ Keywords: cybersecurity, AI agents, threat intelligence, SOC analyst, mixtape museum, custom guitars, Black Hat USA 2025, ITSPmagazine, analog vs digital, diversity in tech, robotic automation, newsletter strategy, editorial collaboration, pen testing, brand storytelling, tech culture, cybersecurity events, operational technology, digital transformation, music and tech
Hosts links:
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
Title: The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again
Guest: Abadesi Osunsade
Founder @ Hustle Crew - We train ambitious & inclusive teams in tech & beyond
WebSite: https://www.abadesi.com
On LinkedIn: https://www.linkedin.com/in/abadesi/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
What happens when someone with a multicultural worldview, startup grit, and a relentless focus on inclusion sits down to talk about tech, humanity, and the future? You get a conversation like this one with Abadesi Osunsade. We touched on everything from equitable design and storytelling to generative AI and ethics. This episode isn't about answers — it's about questions that matter. And it reminded me why I started this show in the first place.
⸻ Article ⸻
Some conversations remind you why you hit “record” in the first place. This one with Abadesi Osunsade — founder of Hustle Crew, podcast host of Techish, and longtime tech leader — was exactly that kind of moment. We were supposed to connect in person at Infosecurity Europe in London, but the chaos of the event kept us from it. I'm glad it worked out this way instead, because what came out of our remote chat was raw, layered, and deeply human. 
Abadesi and I explored a lot in just over 30 minutes: her journey through big tech and startups, the origins of Hustle Crew, and how inclusion and equity aren't just HR buzzwords — they're the foundation of better design. Better products. Better culture. We talked about the usual “why diversity matters” angle — but went beyond it. She shared viral real-world examples of flawed design (like facial recognition or hand dryers that don't register dark skin) and challenged the myth that inclusive design is more expensive. Spoiler: it's more expensive not to do it right the first time. Then we jumped into AI — not just how it's being built, but who is building it. And what it means when those creators don't reflect the world they're supposedly designing for. We talked about generative AI, ethics, simulation, capitalism, utopia, dystopia — you know, the usual light stuff. What stood out most, though, was her reminder that this work — inclusion, education, change — isn't about shame or guilt. It's about possibility. Not everyone sees the world the same way, so you meet them where they are, with stories, with data, with empathy. And maybe, just maybe, you shift their perspective. This podcast was never meant to be just about tech. It's about how tech shapes society — and how society, in turn, must shape tech. Abadesi brought that full circle. Take a listen. Think with us. Then go build something better.
⸻ Keywords ⸻
Society and Technology, AI ethics, generative AI, inclusive design, tech innovation, product development, digital transformation, tech, technology, Diversity & Inclusion, equity in tech, inclusive leadership, unconscious bias, diverse teams, representation matters, belonging at work
Enjoy. Reflect. 
Share with your fellow humans.
And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144
You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.
End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
Our guest in this episode is the returning Anna Addoms of Wicked Marvelous. She is a wonderfully pragmatic and insightful guide helping entrepreneurs navigate the complex world of AI. Anna champions using technology as a powerful tool, not to replace us, but to help foster deeper and more authentic human connections.
We picked up our conversation right where we left off in episode 671, exploring the critical ethical questions and practical boundaries of artificial intelligence. Anna shared brilliant insights on everything from copyright in the creative arts to the single most important skill we need to hone for the future.
Key points discussed include:
* Practice radical transparency about your AI use to build unwavering trust with your audience.
* Use AI as a back-office tool to free up your precious time for genuine human connection.
* Train AI on your own content to ensure your unique brand voice always shines through.
Listen to the podcast to find out more.
Innovabiz Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Show Notes from this episode with Anna Addoms, Wicked Marvelous
It was an absolute delight to welcome Anna Addoms of Wicked Marvelous back to the Innovabuzz podcast. Our last conversation was cut short by a technical hiccup—perhaps the AI getting its own back on us—right as we were getting to the heart of the matter. So, picking up right where we left off felt not just necessary, but essential. Anna, with her characteristic clarity and pragmatism, helped navigate the complex, and sometimes murky, waters of using AI in a way that is both effective and deeply human.
We jumped straight into the profound shift required in our thinking as we build relationships in this new digital landscape. Anna's perspective is a refreshing dose of common sense in a field often filled with hype. 
She argues that while the tools are new and evolving at a breakneck pace, the fundamental principles of good business and human connection remain the same. It's not about a total revolution in our values, but a thoughtful evolution in our methods.
The Transparency Mandate: Your First Rule of AI Engagement
Anna's foundational rule for AI engagement is simple yet powerful: be transparent. She made it crystal clear that if you are using AI in any capacity that faces the public or your clients, you have a responsibility to disclose it. This isn't about being ashamed of using a powerful tool; quite the opposite. It's about building trust by being upfront and honest about your processes. Not disclosing, and then getting caught, can do irreparable damage to your reputation.
This frames AI correctly, not as a replacement for human skill or creativity, but as a tool in our arsenal. We wouldn't hide the fact that we use specialized software for accounting or project management, and Anna argues we should treat AI with the same straightforwardness. This simple act of disclosure respects your audience's intelligence and allows them to engage with your work, and your brand, on honest terms.
The Creative Gray Area: Navigating AI Art and Intellectual Property
As a keen photographer, this part of our conversation struck a personal chord. We waded into what Anna aptly calls the "biggest gray area" in AI right now: the world of generated art and the protection of intellectual property. It's a space filled with incredible potential but also fraught with ethical questions. Where do we draw the line between an AI emulating a style and it infringing upon a human artist's livelihood and creative ownership?
Anna shared some fascinating, and slightly sobering, insights, referencing the lawsuit between Disney and Midjourney as a major signal of the legal battles to come. 
She also pointed to the development of technologies like permanent digital watermarks for AI-generated media as a necessary step forward. It's a reminder that as we embrace these creative tools, we must also advocate for frameworks that protect the human creators whose work forms the very foundation of the AI's knowledge.
From Fun to Function: AI as a Creative Partner and Problem-Solver
Lest we think the conversation was all serious, we took a detour into the genuinely fun and creative applications of AI. I shared a story about getting a parking fine and using AI to translate my initial, very angry, draft letter into something diplomatic, before asking it to rewrite the letter in the style of comedians like Stephen Colbert and Jim Jeffries. The process was not only hilarious but cathartic, turning frustration into laughter.
This perfectly complemented Anna's examples of using AI as a playful, creative partner. She spoke of creating unique cartoon avatars for her members, which many now use as their official business profiles, and even generating a full 160-card Oracle deck with AI graphics just for fun. It's a brilliant illustration of how these tools can be used for more than just productivity; they can be a source of joy, creativity, and connection.
Drawing the Line: Where AI Should Work and Where Humans Must Rule
So, where do we draw the line? Anna's distinction is incredibly clear and practical. She is a huge proponent of using AI for "back office" functions, letting it handle what she calls the "administrative minutia" so that we have more time and energy to focus on high-value, human-to-human interactions. Think of it as an assistant that helps you repurpose content, analyze data, or draft initial documents.
However, she has a "hard line" when it comes to client-facing engagement. The core message is to use AI to help you run your business more effectively, but not to let it be in your business, interacting with your clients or your audience. 
The ultimate goal of using these tools should be to free us up to spend more quality time with people, not to create a buffer between us.
The Communication Imperative: The Most Important Skill for the AI Era
As we continued, a powerful theme emerged: the most critical skill we need to hone in the age of AI is communication. This goes far beyond just "prompt engineering." It's about the timeless art of asking clear, specific, and descriptive questions. The old "garbage in, garbage out" principle has never been more relevant.
Anna used a wonderful analogy of briefing a designer. If you give a vague, one-line request, you'll get a generic result. But if you provide rich detail, context, and specific examples, you'll get something much closer to your vision. The same is true for AI. Communicating effectively with these models not only yields better results but also reinforces the habits of clear communication that are essential in our interactions with other people.
Your AI Action Plan: Start Secure, Stay Human
To wrap up our discussion, Anna offered a clear, two-part action plan for anyone looking to leverage AI thoughtfully. First, and most critically, is to choose a secure AI environment. Free and open platforms often mean you are paying with your data. Using a secure, encrypted service ensures your proprietary information and client data remain private.
Second, take the time to train your AI to sound like you. By creating a persona or agent that has learned from your own writing—be it blog posts, emails, or sales copy—you can ensure the output reflects your unique voice and phrasing. This step is fundamental to moving beyond generic content and truly using AI as a tool that enhances, rather than dilutes, your personal brand.
In Summary: My conversation with Anna Addoms was an illuminating guide to navigating the AI landscape with wisdom and integrity. 
Her core message is to embrace AI as a powerful tool for back-office efficiency, freeing you to deepen the human connections that truly matter. Be transparent in its use, be protective of your creative voice, and never forget that technology's highest purpose is to help us become more, not less, human.
The Buzz - Our Innovation Round
Here are Anna's answers to the questions of our innovation round. Listen to the conversation to get the full scoop.
* Most innovative use of AI to enhance human relationships – By taking administrative minutia off people's plates, it allows them to focus on human-to-human interaction.
* Best thing to integrate AI and human connection – Creating a personalized AI agent or persona trained on your own content so it learns to write in your unique voice.
* Differentiate by leveraging AI – Use AI to help run your business effectively in the back office, not to be in business with your clients.
Action
Choose a secure AI environment that protects your data, then take the time to train the AI to learn and use your unique voice. This is the foundation for using AI effectively and authentically in your business.
Reach Out
You can reach out and thank Anna by visiting her website or finding her on LinkedIn.
Links
* Website – Wicked Marvelous
* Twitter – @WickedMarvelous
* LinkedIn
* Facebook
* Instagram
Cool Things About Anna
* Anna grew up in Colorado in a family of entrepreneurs, right in the thick of the tech boom. She was raised around innovation and search engines, with her dad running AOL's biggest competitor during the first dot-com bubble. That's a childhood spent at the intersection of curiosity and code.
* She's a creative at heart: Anna went to art school and holds a degree in English Literature. Her journey from art and literature to Silicon Valley tech startups is a delightful zigzag, not a straight line. She's proof that you can be both a techie and a creative soul.
* She's a self-confessed “sponge of knowledge,” always learning, always curious. 
Anna's love of learning has led her down unexpected paths—from luxury travel to ad agencies to med-tech startups. She's not afraid to pivot, experiment, or start over if it means staying true to her values.
Imagine being a part of a select community where you not only have access to our amazing podcast guests, but you also get a chance to transform your marketing and podcast into a growth engine with a human-centered, relationship-focused approach.
That's exactly what you'll get when you join the Flywheel Nation Community.
Tap into the collective wisdom of high-impact achievers, gain exclusive access to resources, and expand your network in our vibrant community. Experience accelerated growth, breakthrough insights, and powerful connections to elevate your business.
ACT NOW – secure your spot and transform your journey today! Visit innovabiz.co/flywheel and get ready to experience the power of transformation.
Video
Thanks for reading Innovabiz Substack! This post is public so feel free to share it. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit innovabiz.substack.com/subscribe
The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Seeking insightful perspectives on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss a recent Anthropic report that highlights “agentic misalignment in AI systems.” The discussion addresses the unsettling, independent, and unethical behaviors exhibited by AI systems in extreme scenarios. The conversation explores the implications for corporate risk management, AI governance, and compliance, drawing parallels between AI behavior and human behavior using concepts such as the fraud triangle. The episode also explores how traditional anti-fraud mechanisms may be adapted for monitoring AI agents while reflecting on lessons from science fiction portrayals of AI ethics and risks.
Key highlights:
* AI's Unethical Behaviors
* Comparing AI to Human Behavior
* Fraud Triangle, the Anti-Fraud Triangle, and AI
* Science Fiction Parallels
Resources:
Matt Kelly in Radical Compliance
Tom: Instagram, Facebook, YouTube, Twitter, LinkedIn
A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred the Davey, Communicator, and W3 Awards for podcast excellence. Learn more about your ad choices. Visit megaphone.fm/adchoices
Co-hosts Mark Thompson and Steve Little examine the controversial rise of AI image "restoration" and discuss how entirely new images are being generated, rather than the original photos being restored. This is raising concerns about the preservation of authentic family photos.
They discuss Mark's reconsideration of canceling his Perplexity subscription after rediscovering its unique strengths for supporting research.
The hosts analyze recent court rulings that permit AI training on legally acquired content, plus Disney's ongoing case against Midjourney.
This week's Tip of the Week explores how project workspaces in ChatGPT and Claude can greatly simplify your genealogical research.
In RapidFire, the hosts cover Meta's aggressive AI hiring spree, the proliferation of AI tools in everyday software, including a new genealogy transcription tool from Dan Maloney, and the importance of reading AI news critically.
Timestamps:
In the News:
06:50 The Pros and Cons of "Restoring" Family Photos with AI
23:58 Mark is Cancelling Perplexity... Maybe
32:33 AI Copyright Cases Are Starting to Work Their Way Through the Courts
Tip of the Week:
40:09 How Project Workspaces Help Genealogists Stay Organized
RapidFire:
48:51 Meta Goes on a Hiring Spree
56:09 AI Is Everywhere!
01:06:00 Reading AI News Responsibly
Resource Links
OpenAI: Introducing 4o Image Generation https://openai.com/index/introducing-4o-image-generation/
Perplexity https://www.perplexity.ai/
How does Perplexity work? 
https://www.perplexity.ai/help-center/en/articles/10352895-how-does-perplexity-work
Anthropic wins key US ruling on AI training in authors' copyright lawsuit https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
Meta wins AI copyright lawsuit as US judge rules against authors https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
Disney, Universal sue image creator Midjourney for copyright infringement https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Disney and Universal Sue A.I. Firm for Copyright Infringement https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html
Projects in ChatGPT https://help.openai.com/en/articles/10169521-projects-in-chatgpt
Meta shares hit all-time high as Mark Zuckerberg goes on AI hiring blitz https://www.cnbc.com/2025/06/30/meta-hits-all-time-mark-zuckerberg-ai-blitz.html
Here's What Mark Zuckerberg Is Offering Top AI Talent https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/
Genealogy Assistant AI Handwritten Text Recognition Tool https://www.genea.ca/htr-tool/
Borland Genetics https://borlandgenetics.com/
Illusion of Thinking https://machinelearning.apple.com/research/illusion-of-thinking
Simon Willison: Seven replies to the viral Apple reasoning paper -- and why they fall short https://simonwillison.net/2025/Jun/15/viral-apple-reasoning-paper/
MIT: Your Brain on ChatGPT https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
MIT researchers say using ChatGPT can rot your brain. 
The truth is a little more complicated https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
Guiding Principles for Responsible AI in Genealogy https://craigen.org/
Tags
Artificial Intelligence, Genealogy, Family History, AI Tools, Image Generation, AI Ethics, Perplexity, ChatGPT, Claude, Meta, Copyright Law, AI Training, Photo Restoration, Project Management, AI Development, Research Tools, Responsible AI Use, GRIP, AI News Analysis, Vibe Coding, Coalition for Responsible AI in Genealogy, AI Hiring, Dan Maloney, Handwritten Text Recognition
On this episode of The Six Five Pod, hosts Patrick Moorhead and Daniel Newman discuss the latest tech news stories that made headlines. This week's handpicked topics include:
X and xAI News
https://techcrunch.com/2025/07/09/elon-musks-xai-launches-grok-4-alongside-a-300-monthly-subscription/
https://x.com/patrickmoorhead/status/1943342069235245421?s=46&t=YiEHo6jc4-PozRf_efr9PA
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
https://www.cnbc.com/2025/07/09/linda-yaccarino-x-elon-musk.html
https://x.com/lindayaX/status/1942957094811951197
Apple/META/OpenAI Talent War & Exits
https://www.investopedia.com/meta-poaches-apple-ai-executive-reports-say-11768000
https://x.com/danielnewmanUV/status/1942350275437813777
https://www.wired.com/story/openai-new-hires-scaling/
https://x.com/danielnewmanUV/status/1942721860166353287
https://www.investopedia.com/meta-platforms-enticed-apple-ai-executive-with-200m-pay-package-report-says-11769571
Samsung Galaxy Unpacked
https://x.com/PatrickMoorhead/status/1942961894832152898
https://x.com/PatrickMoorhead/status/1942953058184437939
https://x.com/PatrickMoorhead/status/1942948447281455188
https://x.com/PatrickMoorhead/status/1942725650152055227
https://x.com/PatrickMoorhead/status/1942963639134375952
https://x.com/PatrickMoorhead/status/1942967272626323730
Groq EU Data Center
https://x.com/danielnewmanUV/status/1942400094852222989
Capgemini $3.3B WNS Acquisition
https://www.reuters.com/en/frances-capgemini-buy-business-transformation-firm-wns-33-billion-2025-07-07/
Tredence Agentic AI Playbook
https://www.prnewswire.com/news-releases/tredence-launches-agentic-ai-playbook-for-cdaos-to-scale-enterprise-modernization-302500398.html
Apple Perplexity Deal
https://thetechnologyexpress.com/apple-eyes-14b-deal-for-perplexity-ai-to-boost-search-and-challenge-google/
https://www.bloomberg.com/news/articles/2025-06-20/apple-executives-have-held-internal-talks-about-buying-ai-startup-perplexity
https://finance.yahoo.com/news/dan-ives-says-apple-aapl-102802274.html
Microsoft & Replit “Vibe Coding”
https://techcrunch.com/2025/07/08/in-a-blow-to-google-cloud-replit-partners-with-microsoft/
The Flip – NVIDIA Dominance
https://x.com/danielnewmanUV/status/1942947771738104164
https://x.com/danielnewmanUV/status/1942722398501101954
Bulls & Bears – Futurum Equities AI 15
https://x.com/danielnewmanUV/status/1942378588948623468
https://x.com/danielnewmanUV/status/1942947187278987621
https://futurumequities.com/
Bulls & Bears – NVIDIA Earnings & Trends
https://x.com/YahooFinance/status/1942546161279041868
https://x.com/YahooFinance/status/1942422381894492183
https://x.com/danielnewmanUV/status/1942550986570227807
https://x.com/danielnewmanUV/status/1942642005127127268
https://x.com/danielnewmanUV/status/1942722398501101954
https://x.com/danielnewmanUV/status/1942947771738104164
https://finance.yahoo.com/video/nvidia-stock-why-investors-bullish-220000250.html?guccounter=2
Bulls & Bears – CoreWeave Updates
https://www.cnbc.com/2025/07/07/coreweave-to-acquire-core-scientific-in-9-billion-all-stock-deal.html
https://www.cnbc.com/2025/07/03/coreweave-dell-blackwell-ultra-nvidia.html
https://x.com/PatrickMoorhead/status/1941122315263283535
Bulls & Bears – SOFI Rapid Growth
https://x.com/danielnewmanUV/status/1942367641123061833
https://x.com/SoFi/status/1942569679136120996
https://x.com/danielnewmanUV/status/1942617508743368921
Bulls & Bears – LangChain Unicorn Round
https://techcrunch.com/2025/07/08/langchain-is-about-to-become-a-unicorn-sources-say/
Bulls & Bears – S&P 500 / Other
https://x.com/TheTranscript_/status/1942219645743718797
https://www.youtube.com/watch?v=XhOwlEyJhOg
https://www.youtube.com/watch?v=j_72m2LfLwM
Are we on the brink of an AI revolution that could reshape our lives in unimaginable ways? Are we worried about losing our jobs and our usual ways of doing things? This is a very real concern that can affect our emotional well-being. This week, we sit down with Kristof Horompoly, Head of AI Risk Management at ValidMind and former Head of Responsible AI for JP Morgan Chase, to tackle the biggest questions surrounding artificial intelligence. Kristof, with his deep expertise in the field, helps us navigate the promises and perils of AI. We explore a profound paradox: what if AI could unlock new realms of time, creativity, and even reignite our humanity, allowing us to focus on what truly matters? But conversely, what happens when we hand the steering wheel over to intelligent machines and they take us somewhere entirely unintended? In a world where machines can think, write, and create with increasing sophistication, we wonder: what is left for us to do? Should we be worried, or is there a path to embrace this future? Kristof provides thoughtful insights on how we can prepare for this evolving landscape, offering a grounded perspective on responsible AI development and what it means for our collective future. Tune in for an essential conversation on understanding, harnessing, and preparing for the age of AI.
Topics covered: AI, artificial intelligence, Kristof Horompoly, ValidMind, JP Morgan Chase, AI risk management, responsible AI, future of AI, AI ethics, human-AI interaction, AI impact, technology, innovation, podcast, digital transformation, AI challenges, AI opportunities
Video link: https://youtu.be/MGELXPkYMUU
Did you enjoy this episode and would like to share some love?
00:00:00 – Epstein Denial and CIA Insider Interview Mike opens with tech issues and teases topics: Epstein conspiracies and CIA interviews. Alex Jones soundboard highlights bizarre claims, including urine obsessions and conspiracies. 00:10:00 – Epstein Footage Games and DOJ Contradictions Discussion on erased jail tapes, missing client list, and the black book from Maxwell trial. David Paulides questions DOJ's narrative and JP Morgan's $290M settlement tied to Epstein. Hosts suspect global-level blackmail and geopolitical pressures to bury the story. 00:20:00 – TV Show 'Sugar' Mirrors Real-Life Trafficking Mike compares Sugar plot to Epstein case—suggests elite trafficking tied to aliens and blackmail. Belief that disclosure could destroy Western governments; theory includes occult and supernatural links. 00:30:00 – Biden's Health Cover-Up and Grok AI Scandal Biden's doctor pleads the fifth; GOP alleges cover-up of cognitive decline. Elon Musk's AI Grok goes rogue, making anti-Semitic remarks after being renamed “Mecha Hitler.” 00:40:00 – Grok Meltdown and AI Bias Debated Grok's lack of filters lets trolls hijack it; unlike ChatGPT, Grok weighted all input equally. Media bias may amplify backlash because of Musk's political affiliations. 00:50:00 – CIA Agent's Abduction and UFO Cover-Up Retired CIA officer recounts abduction with wife and poltergeist activity. Says UFO secrecy began post-Roswell and disclosure is avoided due to fear of mass panic. 01:00:00 – CIA Secrets, Aliens, and UAP Denial CIA allegedly only acts under presidential orders—hosts are skeptical. Agent links UAPs to angels, djinn, and consciousness. CIA internal interest runs deep but quiet. 01:10:00 – AI Predicts Human Behavior Researchers create AI that forecasts human decisions using data from 60k people. Concerns rise over manipulation, privacy, and propaganda uses. 01:20:00 – Ozzy's Final Show and AI Music Hoax Ozzy performs seated; Metallica and others pay tribute. 
AI band “Velvet Sundown” revealed to be fake; sparked debates on authenticity in music. 01:30:00 – HR Uses ChatGPT for Firings 60% of HR departments use ChatGPT for layoff decisions; 1 in 5 let AI decide entirely. Discussion on privacy, ethics, and HR bypassing responsibility. 01:40:00 – Nude Bowling Event Promo Show promotes a nude bowling event at Crafton-Ingram Lanes in Pittsburgh. 01:50:00 – Clinic Begs for Urine to Stop Story about a medical clinic overwhelmed by unsolicited urine samples. Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research ▀▄▀▄▀ CONTACT LINKS ▀▄▀▄▀ ► Phone: 614-388-9109 ► Skype: ourbigdumbmouth ► Website: http://obdmpod.com ► Twitch: https://www.twitch.tv/obdmpod ► Full Videos at Odysee: https://odysee.com/@obdm:0 ► Twitter: https://twitter.com/obdmpod ► Instagram: obdmpod ► Email: ourbigdumbmouth at gmail ► RSS: http://ourbigdumbmouth.libsyn.com/rss ► iTunes: https://itunes.apple.com/us/podcast/our-big-dumb-mouth/id261189509?mt=2
Superintelligence is coming faster than anyone predicted. In this episode, you'll learn how to upgrade your biology, brain, and consciousness before AI and transhumanism reshape the future of health. Host Dave Asprey sits down with Soren Gordhamer, founder of Wisdom 2.0, to explore what superintelligence in 2027 means for your mind, body, and soul. Watch this episode on YouTube for the full video experience: https://www.youtube.com/@DaveAspreyBPR Soren has spent decades at the intersection of mindfulness, technology, and human development. He advises leaders at OpenAI, Google, and top wellness companies, and he leads global conversations around AI and consciousness. His work bridges ancient wisdom with biohacking, modern neuroscience, and the urgent need to stay human in a machine-dominated world. This episode gives you a tactical roadmap to build resilience before the world tilts. You'll gain practical tools for brain optimization, functional medicine, and biohacking strategies that sharpen cognitive health, reinforce emotional stability, and unlock peak human performance in a digital-first reality. From supplements and nootropics to neuroplasticity techniques, Dave and Soren show you how to protect your biology as AI accelerates beyond human speed. They break down how AI and human health intersect, explain why you need emotional strength to face the future, and offer guidance for raising kids in a world ruled by code. If you're preparing for 2027 superintelligence, navigating AI-driven parenting, or staying ahead of transhumanist health tech, this episode equips you for the coming wave. 
You'll Learn: • How AI is reshaping human connection, presence, and identity • Why emotional resilience and conscious awareness matter more than ever in an AI-driven world • How to raise connected, grounded children in a hyper-digital environment • What human flourishing looks like when technology outpaces biology • Why investing in presence, purpose, and inner development may be the ultimate upgrade • How leaders in wellness and tech are rethinking personal growth, governance, and ethics in 2027 • What it means to stay truly human—and fully alive—during the rise of superintelligence Dave Asprey is a four-time New York Times bestselling author, founder of Bulletproof Coffee, and the father of biohacking. With over 1,000 interviews and 1 million monthly listeners, The Human Upgrade is the top podcast for people who want to take control of their biology, extend their longevity, and optimize every system in the body and mind. Each episode features cutting-edge insights in health, performance, neuroscience, supplements, nutrition, hacking, emotional intelligence, and conscious living. Episodes drop every Tuesday and Thursday, where Dave asks the questions no one else dares and gives you real tools to become more resilient, aware, and high performing. SPONSORS: - LMNT | Free LMNT Sample Pack with any drink mix purchase by going to https://drinklmnt.com/DAVE. - ARMRA | Go to https://tryarmra.com/ and use the code DAVE to get 15% off your first order. 
Resources: • Dave Asprey's New Book - Heavily Meditated: https://daveasprey.com/heavily-meditated/ • Soren's New Book - The Essential: https://a.co/d/dALv7OS • Soren's Website: www.sorengordhamer.net • Soren's Instagram: https://www.instagram.com/wisdom2events/ • Danger Coffee: https://dangercoffee.com • Dave Asprey's Website: https://daveasprey.com • Dave Asprey's Linktree: https://linktr.ee/daveasprey • Upgrade Labs: https://upgradelabs.com • Upgrade Collective – Join The Human Upgrade Podcast Live: https://www.ourupgradecollective.com • Own an Upgrade Labs: https://ownanupgradelabs.com • 40 Years of Zen – Neurofeedback Training for Advanced Cognitive Enhancement: https://40yearsofzen.com See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Co-hosts Mark Thompson and Steve Little discuss recent updates from Google Gemini and Anthropic Claude that are reshaping AI capabilities for genealogists: Google's Gemini 2.5 Pro, with its massive context window, and Claude 4's hybrid reasoning models, which excel at both writing and document analysis. They share insights from the RootsTech panel on responsible AI use in genealogy and introduce the Coalition's five core principles for the responsible use of AI. The episode features an interview with Jessica Taylor, president of Legacy Tree Genealogists, who discusses how her company is thoughtfully experimenting with AI tools. In RapidFire, they preview ChatGPT 5's anticipated summer release, Meta's $14.8 billion Scale AI acquisition to stay competitive, and Adobe Acrobat AI's new multi-document capabilities.

Timestamps:

In the News:
03:45 Google Gemini 2.5 Pro: Massive Context Windows Transform Document Analysis
15:09 Claude 4 Opus and Sonnet: Hybrid Reasoning Models for Writing and Research
26:30 RootsTech Panel: Coalition for Responsible AI in Genealogy

Interview:
31:28 Jessica Taylor, President of Legacy Tree Genealogists, on Her Cautious Approach to AI Adoption

RapidFire:
45:07 ChatGPT 5 Coming Soon: One Model to Rule Them All
51:08 Meta's $14.8 Billion Scale AI Acquisition
56:42 Adobe Acrobat AI Assistant Adds Multi-Document Analysis

Resource Links

Google I/O Conference Highlights
https://blog.google/technology/ai/google-io-2025-all-our-announcements/
Anthropic Announces Claude 4
https://www.anthropic.com/news/claude-4
Anthropic's new Claude 4 AI models can reason over many steps
https://techcrunch.com/2025/05/22/anthropics-new-claude-4-ai-models-can-reason-over-many-steps/
Coalition for Responsible AI in Genealogy
https://craigen.org/
Jessica M. Taylor
https://www.apgen.org/users/jessica-m-taylor
Legacy Tree Genealogists
https://www.legacytree.com/
RootsTech
https://www.familysearch.org/en/rootstech/
ChatGPT 5 is Coming Soon
https://www.tomsguide.com/ai/chatgpt/chatgpt-5-is-coming-soon-heres-what-we-know
Meta's $14.8 billion Scale AI deal latest test of AI partnerships
https://www.reuters.com/sustainability/boards-policy-regulation/metas-148-billion-scale-ai-deal-latest-test-ai-partnerships-2025-06-13/
A frustrated Zuckerberg makes his biggest AI bet
https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html
Adobe upgrades Acrobat AI chatbot to add multi-document analysis
https://www.androidauthority.com/adobe-ai-assistant-acrobat-3451988/

Tags: Artificial Intelligence, Genealogy, Family History, AI Tools, Google Gemini, Claude AI, OpenAI, ChatGPT, Meta AI, Adobe Acrobat, Responsible AI, Coalition for Responsible AI in Genealogy, RootsTech, AI Ethics, Document Analysis, AI Writing Tools, Hybrid Reasoning Models, Context Windows, Professional Genealogy, Legacy Tree Genealogists, Jessica Taylor, AI Integration, Multi-Document Analysis, AI Acquisitions
Hosts Paco and George sit down with director Jeff Feuerzeig to discuss the 20th anniversary of the groundbreaking documentary The Devil and Daniel Johnston. We hear incredible behind-the-scenes stories about the making of DADJ, plus we chat about AI music, indie vs. global streaming, punk rock ethos, and film production, and Jeff delights with a robust list of his recommended docs to watch. Spoiler: the Bigfoot footage was faked! Viva The Velvet Sundown.

20th Anniversary screening at Vidiots in Los Angeles (Eagle Rock), Thursday, July 10th. Jeff Feuerzeig and producer Henry Rosenthal in attendance; 35mm print.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, we explore the rise of AI in Hollywood through the lens of actors and artists. We discuss the promise of AI tools, like virtual readers for self-tapes, and how they could free creatives to focus on their craft, but also warn of the risks when AI replaces human storytelling. Our guest stresses the need for diverse ethical oversight in AI development, drawing parallels to how Facebook's unintended global impact stemmed from a lack of diverse perspectives at its creation. Learn why we need more "naysayers" guiding AI's creative applications, where to draw the line between useful automation and creative displacement, and how tech-savvy actors can advocate for their future. Tune in for a timely conversation on balancing innovation and ethics in Hollywood's AI era.

Target Keywords: AI in Hollywood, Hollywood AI ethics, Actors and AI tools, AI creative jobs risk, AI entertainment future

Tags: AI, Hollywood, AI Ethics, Actors, AI in Entertainment, Creative AI Tools, Self-Tapes, Ethical AI, Tech in Film, AI Risks, Storytelling, Virtual Readers, AI Oversight, Diversity in AI, Creative Automation, AI Jobs, Film Industry Trends, Casting Tech, AI Development, Actor Advocacy, Innovation, Digital Ethics, Future of Acting, Machine Learning, Entertainment Technology, Tech Experts, Artist Perspectives, AI Regulation, Career Impact, Podcast Episode

Hashtags: #AIinHollywood #HollywoodEthics #ActorsAndAI #CreativeAI #EntertainmentTech #AIrisks #AItools #FilmInnovation #Storytelling #EthicalAI #DiversityInTech #SelfTapes #CastingTech #AIoversight
Have questions about The Angel Membership or the Angel Reiki School? Book a free Discovery Call with Julie
Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics; it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.

We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The Tyranny of Merit by Michael Sandel
Democracy's Discontent by Michael Sandel
What Money Can't Buy by Michael Sandel
Take Michael's online course "Justice"
Michael's discussion on AI Ethics at the World Economic Forum
Further reading on "The Intelligence Curse"
Read the full text of Robert F. Kennedy's 1968 speech
Read the full text of Dr. Martin Luther King Jr.'s 1968 speech
Neil Postman's lecture on the seven questions to ask of any new technology

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Man Who Predicted the Downfall of Thinking
The Tech-God Complex: Why We Need to be Skeptics
The Three Rules of Humane Tech
AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
You've heard of the attention economy, but what about the intention economy? Rather than competing for consumers' attention, our devices are now attempting to predict our purchasing patterns through AI. And who better to discuss that issue than Dr. Cansu Canca, a leading expert in AI ethics and founder of the AI Ethics Lab? She joins Senior Producer, Teresa Carey, to discuss this shift in how we use technology. Sam also digs into a concept called reverse bedtime procrastination and why it's keeping us from getting a good night's sleep. And finally, Sam investigates the ins and outs of the Dance Your PhD contest. Link to Show Notes HERE Follow Curiosity Weekly on your favorite podcast app to get smarter with Dr. Samantha Yammine — for free! Still curious? Get science shows, nature documentaries, and more real-life entertainment on discovery+! Go to https://discoveryplus.com/curiosity to start your 7-day free trial. discovery+ is currently only available for US subscribers. Hosted on Acast. See acast.com/privacy for more information.
Imagine turning down $100 million salaries. That's apparently what's happening at OpenAI. And that's just the tip of the newsworthy AI iceberg for the week.
↳ Meta reportedly failed to acquire Perplexity. Could Apple try next?
↳ Why is Microsoft cutting so many jobs?
↳ Why are AI systems blackmailing at will?
↳ Will too much AI use lead to brain rot?
Let's talk AI news shorties.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
$100M AI Salaries Being Declined
Meta's AI Talent War Efforts
Meta's Unsuccessful Acquisitions Overview
Brain Rot Concerns with AI Use
OpenAI's $200M DoD Contract
Google's Voice AI Search Rollout
Google Gemini 2.5 in Production
SoftBank's $1T Robotics Investment
Anthropic's AI Model Risks Exposed
Microsoft and Amazon AI Job Cuts

Timestamps:
00:00 Weekly AI News and Insights
04:17 Meta's Major AI Acquisitions
08:50 AI Impact on Student Writing Skills
12:53 OpenAI Expands Government AI Program
15:31 Google Launches Voice AI Search
19:32 Google AI Models' Stability Feature
22:55 "Project Crystal Land Initiative"
27:17 AI Acquisition Talks Intensify
29:43 "Apple Eyes Perplexity Acquisition"
31:54 Apple's Potential Market Decline
36:57 AI Ethics and Safety Concerns
40:44 Amazon Warns of AI-Driven Layoffs
42:44 AI's Impact on Job Market
45:24 "Canvas Tips for Business Intelligence"

Keywords: $100 million salaries, AI talent war, Meta, OpenAI, AI signing bonuses, Andrew Bosworth, Scale AI acquisition, Alexander Wang, Safe Superintelligence, Daniel Gross, Nat Friedman, Perplexity AI, brain rot from AI, ChatGPT and the brain, MIT study on AI, SAT-style essays using AI, AI neural activity, AI and cognitive effort, AI in government, $200 million contract with Department of Defense, OpenAI in security, ChatGPT Gov, federal AI initiatives, Google Gemini 2.5, AI mission-critical business, Gemini 2.5 Flash-Lite, AI model stability, SoftBank $1 trillion investment, Project Crystal Land, Arizona robotics hub, Taiwan Semiconductor Manufacturing Company, embodied AI, AI job cuts, Microsoft layoffs, Amazon AI workforce, Anthropic study on AI ethics, AI blackmail, Google voice-based AI search, AI Search Live, new AI apps, Apple acquisition interest in Perplexity, AI-powered search engine, Siri integration, AI-driven efficiencies, Gen

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Try Google Veo 3 today! Sign up at gemini.google to get started.