POPULARITY
Google Search Console (GSC) New! Branded and Non-Branded Queries + Annotation Filters | Marketing Talk with Favour Obasi-Ike | Sign up for exclusive SEO insights.

This episode focuses on Search Engine Optimization (SEO) and the new features within Google Search Console (GSC). Favour discusses the recently introduced brand queries and annotations features in GSC, highlighting their importance for understanding both branded and non-branded search behavior. The conversation also emphasizes the broader strategic use of GSC data, comparing it to a car's dashboard for website performance, and explores how this data can be leveraged to create valuable content, such as FAQ-based blog posts and multimedia assets, often with the aid of Artificial Intelligence (AI) tools. A key theme is the shift from traditional keyword ranking to ranking for user experience, and the interconnectedness of various digital tools in modern marketing strategy.

--------------------------------------------------------------------------------
Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
--------------------------------------------------------------------------------

As a content strategist, you live with a fundamental uncertainty. You create content you believe your audience needs, but a nagging question always remains: are you hitting the mark? It often feels like you're operating with a blind spot, focusing on concepts while, as the experts say, "you don't even know the intention behind why they're asking or searching."

What if you could close that gap?
What if your audience could tell you, explicitly, what they need you to create next? That's the paradigm shift happening right now inside Google Search Console (GSC). Long seen as a technical tool, recent updates are transforming GSC into a strategic command center. It's no longer just for SEO specialists; it's the dashboard for your entire content operation. These new developments are a game-changer, revealing direct intelligence from your audience that will change how you plan, create, and deliver content.

Here are the five truths these new GSC features reveal, and how they give you a powerful competitive edge.

1. Stop Driving Your Website Blind: The Dashboard Analogy

Managing a website without GSC is like driving a car without a dashboard. You're moving, but you have no idea how fast you're going or if you're about to run out of fuel. GSC is that free, indispensable dashboard providing direct intelligence straight from Google. But the analogy runs deeper. As one strategist put it, driving isn't passive: "when you're driving, you got to hit the gas, you got to... hit the brakes... when do you stop, when do you go, what do you tweak? Do you go to a pit stop?"

You wouldn't drive your car without looking at the dashboard. So you shouldn't run a website, drive traffic, and do all the things we do without looking at GSC, right?

Your content strategy requires the same active management: knowing when to accelerate, when to pivot, and when to optimize. The new features make this "dashboard" more intuitive than ever, giving you the controls you need to navigate with precision.

2. The Goldmine in Your Search Queries: Branded vs. Non-Branded

The first game-changing update is the new "brand queries" filter. For the first time, GSC allows you to easily separate searches for your specific brand name (branded) from searches for the topics and solutions you offer (non-branded).
This is the first step in a powerful new workflow: Discovery. Think of your non-branded queries as raw, unfiltered intelligence from your potential audience. These aren't just keywords; they're direct expressions of need. Instead of an abstract concept, you see tangible examples like:

• "best practices for washing dishes"
• "best pet shampoo"
• "best Thanksgiving turkey meal"

When you see more non-branded than branded queries, it's a powerful signal. It means you have access to a goldmine of raw material you can build content on to attract a wider audience that doesn't know your brand… yet. This isn't just data; it's a direct trigger for your next move.

3. From Keyword to "Keynote": Creating Content with Context

Once you've discovered this raw material, the next step is Development. This is where you transform an unstructured keyword into a strategic asset by adding structure and meaning. It's a progression: a raw keyword becomes a more defined keyphrase, which can be built into a keystone concept, and ultimately refined into a keynote.

What's a keynote? Think about its real-world meaning: "when somebody sends you a note, it has context, right? It's supposed to mean something and it's supposed to say something specific." A keynote isn't just a search term; it's that term fully developed into a structured piece of content that delivers a specific, meaningful answer.

This strategic asset can take many forms:

• Blogs
• Podcast episodes
• Articles
• Newsletters
• Videos/Reels
• eBooks

4. The Most Underrated SEO Tactic: Your New Secret Weapon

You've discovered the query and developed it into a keynote. Now it's time for Execution. The single most effective format for executing on this strategy is one of the most powerful, yet underrated, SEO tactics in history: creating content around Frequently Asked Questions (FAQs).

The rise of Large Language Models (LLMs) has fundamentally changed search behavior.
People are asking full, conversational questions, and search engines are prioritizing direct, authoritative answers. A "one blog per FAQ" strategy is the perfect response. It's a secret weapon that's almost shockingly effective.

"FAQ is the new awesome, the most awesome ever. I said that on purpose."

How awesome? By creating a single, targeted blog post for the long-tail question, "full roof replacement cost [city]," one site ranked number one on Google for that exact phrase in just 30 minutes. That's the power of directly answering a question your audience is already asking.

5. It's Not About New Features, It's About New Actions

The real purpose of these GSC updates isn't to give you more charts to observe; it's to prompt decisive action. Every non-branded query is a signal for what content to create next, feeding a powerful strategic loop that builds your authority over time.

This is where it all comes together in a professional content framework. As the source material notes, "That's why you have content pillars and you have content clusters." Your non-branded queries show you what clusters your audience needs, and your FAQ-style "keynotes" become the assets that build out those clusters around your core content pillars.

This data-driven approach empowers you to:

• Recreate outdated content with new, relevant insights.
• Repurpose core ideas into different formats to reach wider audiences.
• Re-evaluate which topics are truly resonating.
• Reemphasize your most valuable messages with fresh content.

Conclusion: What Does Your Dashboard Say?

Google Search Console is no longer just a reporting tool. It has evolved into an essential strategic partner that closes the gap between the content you produce and the value your audience is searching for.
It's your direct line to understanding intent, allowing you to move from guessing what people want to knowing what they need. Now that you know how to read your website's dashboard, what's the first turn you're going to make?

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
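The branded vs. non-branded split described in the episode can be sketched in a few lines of Python. This is a minimal illustration, not GSC's actual implementation: the brand terms and sample queries below are hypothetical, and a simple case-insensitive substring match stands in for GSC's new brand-queries filter.

```python
def split_queries(queries, brand_terms):
    """Classify each search query as branded or non-branded.

    A query counts as branded if it contains any brand term
    (case-insensitive substring match, a rough stand-in for
    GSC's brand-queries filter).
    """
    branded, non_branded = [], []
    for q in queries:
        if any(term.lower() in q.lower() for term in brand_terms):
            branded.append(q)
        else:
            non_branded.append(q)
    return branded, non_branded


# Hypothetical queries, as exported from a GSC Performance report.
queries = [
    "work and play entertainment podcast",  # branded
    "best practices for washing dishes",    # non-branded
    "best pet shampoo",                     # non-branded
    "we don't play podcast seo episode",    # branded
]

branded, non_branded = split_queries(
    queries, brand_terms=["work and play", "we don't play"]
)

# More non-branded than branded queries signals untapped content topics.
print(f"branded: {len(branded)}, non-branded: {len(non_branded)}")
```

Each query that lands in the non-branded bucket is a candidate "keynote" topic: a question your audience is already asking that your brand does not yet rank for.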
Tom Kocmi, Researcher at Cohere, and Alon Lavie, Distinguished Career Professor at Carnegie Mellon University, join Florian and Slator language AI Research Analyst Maria Stasimioti on SlatorPod to talk about the state of the art in AI translation and what the latest WMT25 results reveal about progress and remaining challenges.

Tom outlines how the WMT conference has become a crucial annual benchmark for assessing AI translation quality and ensuring systems are tested on fresh, demanding datasets. He notes that systems now face literary text, social-media language, ASR-noisy speech transcripts, and data selected through a difficulty-sampling algorithm. He stresses that these harder inputs expose far more system weaknesses than in previous years.

He adds that human translators also struggle, as they face fatigue, time pressure, and constraints such as not being allowed to post-edit. He emphasizes that human-parity claims are unreliable and highlights the need for improved human evaluation design.

Alon underscores that harder test data also challenges evaluators. He explains that segment-level scoring is now more difficult, and even human evaluators miss different subsets of errors. He highlights that automated metrics built on earlier-era training data underperformed, particularly COMET, because they absorbed their own biases.

He reports that the strongest performers in the evaluation task were reasoning-capable large language models (LLMs), either lightly prompted or submitted with elaborate evaluation-specific prompting. He notes that while these LLM-as-judge setups outperformed traditional neural metrics overall, their segment-level performance varied.

Tom points out that the translation task also revealed notable progress from smaller academic models of around 9B parameters, some ranking near trillion-parameter frontier models. He sees this as a sign that competitive research is still widely accessible.

The duo conclude that evaluators must choose evaluation methods carefully, avoid assessing models with the same metric used during training, and adopt LLM-based judging for more reliable assessments.
What does it take to build tech the world actually trusts? Wikipedia founder Jimmy Wales joins the crew to dig into the real crisis behind AI, social networks, and the web: trust, and how to build it when the stakes are global.

Stories discussed:
• Teen founders raise $6M to reinvent pesticides using AI — and convince Paul Graham to join in
• Introducing SlopStop: Community-driven AI slop detection in Kagi Search
• Part 1: How I Found Out
• $1 billion AI company co-founder admits that its $100 a month transcription service was originally 'two guys surviving on pizza' and typing out notes by hand
• His announcement leaving Meta
• White House Working on Executive Order to Foil State AI Regulations
• Nvidia stock soars after results, forecasts top estimates with sales for AI chips 'off the charts'
• Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive
• Jack Conte: I'm Building an Algorithm That Doesn't Rot Your Brain
• AI love, actually
• Cat island road trip: liquidator's warehouse
• Gentype
• The Carpenter's Son...
• My excerpt from the Q&A
• Image of the paper

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jimmy Wales

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
ventionteams.com/twit
zapier.com/machines
agntcy.org
spaceship.com/twit
This episode covers:
• Cardiology This Week: A concise summary of recent studies
• 'ChatGPT, MD?' - Large Language Models at the Bedside
• Management decisions in myocarditis
• Statistics Made Easy: Mendelian randomisation

Host: Emer Joyce
Guests: Carlos Aguiar, Folkert Asselbergs, Massimo Imazio

Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch the extended interview on 'ChatGPT, MD?' - Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview

Disclaimer: ESC TV Today is supported by Bristol Myers Squibb and Novartis through independent funding. The programme has not been influenced in any way by its funding partners. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC. The ESC is not liable for any translated content of this video. The English language always prevails.

Declarations of interests: Stephan Achenbach, Folkert Asselbergs, Yasmina Bououdina, Massimo Imazio, Emer Joyce, and Nicolle Kraenkel have declared to have no potential conflicts of interest to report. Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Pfizer, Sanofi, Servier, Takeda, Tecnimede. John-Paul Carpenter has declared to have potential conflicts of interest to report: stockholder, MyCardium AI. Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo. Konstantinos Koskinas has declared to have potential conflicts of interest to report: honoraria from MSD, Daiichi Sankyo, Sanofi. Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada. Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, AstraZeneca, Bayer, Bristol Myers Squibb-Pfizer, Johnson & Johnson.
Host: Emer Joyce
Guest: Folkert Asselbergs

Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch the extended interview on 'ChatGPT, MD?' - Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview
In this episode, Peter Maddison and Dave Sharrock welcome Justin Trombold, President and Founder of Antison Advisors, to discuss the parallels between agile transformation and generative AI adoption in organizations.

Justin shares insights from his work helping companies navigate generative AI readiness, revealing that the biggest challenges aren't technical; they're organizational. From end-user proficiency to cross-functional collaboration, the conversation explores why companies struggle to move beyond "toy apps" to create real business value with AI.

Key topics covered:
• Why organizations need an AI strategy before investing in tools
• The critical importance of end-user proficiency with LLMs
• How cross-functional collaboration enables AI success
• Why annual planning cycles may be holding your AI initiatives back
• The parallels between agile adoption and AI transformation
• Moving from efficiency gains to true value creation

Whether you're leading AI initiatives, managing agile transformations, or wondering why your organization's AI investments aren't paying off, this conversation offers practical frameworks for thinking about organizational readiness in the age of generative AI.

THREE KEY TAKEAWAYS:
1. End-user proficiency is everything.
2. Define the sandbox before choosing the toys.
3. Innovation in planning matters as much as innovation in products.

Contact us: feedback@definitelymaybeagile.com

#GenerativeAI #AgileTransformation #OrganizationalChange #AIReadiness #DigitalTransformation #LLM #CrossFunctionalTeams #Innovation
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?
Large Language Models, or LLMs, are infiltrating every facet of our society, but do we even understand what they are? In this fascinating deep dive into the intersection of technology, language, and consciousness, the Wizard offers a few new ways of perceiving these revolutionary (and worrying) systems. Got a question for the Wizard? Call the Wizard Hotline at 860-415-6009 and have it answered in a future episode! Join the ritual: www.patreon.com/thispodcastisaritual
Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt's HexAI podcast host, Jordan Gass-Pooré, about his work on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference, and graphical models.

Concentrating on explainable AI, they speak in depth about the explainability of Large Language Models (LLMs), the field of in-context explainability, and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on the personalization of explainability outputs for different users and on leveraging explainability to help guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs around explainable AI in healthcare, and related work at IBM on the steerability of LLMs and on combining explainability and steerability to evaluate model modifications.

This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.

Guest profile: https://research.ibm.com/people/dennis-wei
ICX360 Toolkit: https://github.com/IBM/ICX360
Large language models aren't just improving; they're transforming how we work, learn, and make decisions. In this upcoming episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.

Sponsored by Autodesk FlexSim

Learn more about The Institute of Industrial and Systems Engineers (IISE)
Problem Solved on LinkedIn
Problem Solved on YouTube
Problem Solved on Instagram
Problem Solved on TikTok

Problem Solved Executive Producer: Elizabeth Grimes

Interested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org
It's been six months since our last all-Hell episode! In honor of Halloween season, we take a long journey into the very scary Fresh AI Hell mines. Topics include terrifying uses of AI in education, scientific research, and politics, plus some delicious palate cleansers along the way.

• AI bubble: bigger than dot-com bust?
• No one wants to pay for ChatGPT
• Meta lays off 600 from AI unit
• AI data centers: an even bigger disaster than we thought
• Public universities anticipate data center-driven power outages
• Chaser: Deloitte has to pay back Albanese government after using AI in report
• "AI" schools are "dead classrooms"
• Fake sources in "ethical AI" education report
• Parents letting kids play with AI
• Startup sells 'synthetic influencers'
• AI-powered textbooks fail to make the grade
• Chaser: "High-reliability" AI slop
• Nature offers "AI-powered research assistant"
• AI bots wrote all papers at this conference
• "AI" reviewing at AAAI
• AI medical tools downplay symptoms in women and POC
• Therapists are secretly using ChatGPT
• Chaser: Microsoft blocks Israel's use of its technology
• German initiative uses "AI" for voter education
• Police gunshot detection mics will listen for human voices
• SF's AI chatbot for RV dwellers
• Cuomo campaign posts racist AI slop
• DHS Ordered OpenAI To Share User Data
• Chaser: LA County moves to limit license plate tracking
• A new form of eugenics
• "AI Superintelligence" prohibition letter
• Emad Mostaque's LLM blurbs
• Prizes must recognize machine contributions to discovery
• Chaser: The hot new trend in marketing: hating on AI

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy now.

Subscribe to our newsletter via Buttondown. Follow us!
Emily: Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex: Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.
Can we use AI to enhance human connection rather than replace it? In this Part 2 of our AI series episodes, Shekeese and I explore how humanist technology, rooted in ethics, education, and responsible use, can transform our relationship with AI. From the dangers of "AI slop" to the potential for real, inquiry-based interaction, we highlight why teaching students to treat AI like a tool, not a replacement, is crucial. We dig into tech company accountability, regulation, social media's societal role, and even the environmental toll of data centers. This isn't a tech utopia; it's a call for wiser integration, grounded in understanding, purpose, and human values.

--------- EPISODE CHAPTERS ---------
(0:00:02) - Navigating Humanist Technology and AI
(0:04:53) - AI Technology Education and Misuse
(0:11:53) - Tech Education and Government Regulation
(0:16:25) - The Impact of Social Media Regulation
(0:23:12) - Regulating Technology and Online Behavior
(0:32:58) - Tech Founders Impact Regulation
(0:41:05) - Navigating Internet Regulation Debate
(0:53:08) - The Impact of Data Centers
(0:59:29) - Tech Regulation and Future Development
(1:14:10) - Promoting a Warrior Mindset

Send us a text
In this conversation with Francesco Oggiano, journalist and content creator, we explore the complex relationship between artificial intelligence and information: does whoever controls the Large Language Models really control information?

We talk about Elon Musk's Grokipedia, the risks of national information bubbles, the informational incest created by AI training on data already generated by AI, and how collective memory will be shaped by synthetic content.

Francesco shares his concrete method for using AI in journalism while maintaining transparency and critical thinking, explains how to fact-check when sources are contaminated, and reveals his techniques with Deep Research, prompt engineering, and AI sparring partners.

The fundamental principle? Staying informed today is no longer about addition but about subtraction: eliminating the noise, cultivating your own taste, and becoming your own gatekeeper in the infinite ocean of content.

Discover how to tell true from false, which timeless principles of journalism endure even in the AI era, and why human "effort" remains irreplaceable.

Enjoy the episode.
Last week, Google's threat intelligence group warned that artificial intelligence (AI) is making malware attacks more dangerous. [Malware is malicious software: programmes designed to disrupt, damage or gain unauthorised access to computer systems, usually delivered via phishing emails, compromised websites or infected downloads.]

"Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations," Google said in a 5,000-word blog.

Are malware programmes using Large Language Models (LLMs) to dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, as Google warns? Or is this yet another case of tech firms selling solutions to a problem they have created themselves?

Listen to the latest episode of Unseen Money from New Money Review, featuring co-hosts Timur Yunusov and Paul Amery, to hear more about the effect of AI malware.

In the podcast, we cover:
• Google's warning about the rise of AI malware: reality or hype? (2'35")
• Why LLMs were originally protected from harmful behaviour (4'10")
• How criminals learned to develop LLMs without guardrails (4'55")
• Model context protocols (MCPs) and AI agents as offensive tools (5'30")
• Malicious payloads and web application firewalls (7'35")
• Tricking LLMs by exploiting the wide range of input variables (8'30")
• The state of the art for fraudsters when using LLMs (10'10")
• Timur used AI to learn how to drain funds from a stolen phone (11'05")
• How worried is Timur about the rise of AI malware? (14'20")
• AI has dramatically reduced the cost and increased the speed of producing malware (15')
• AI, teenage suicides and protecting users (16'50")
• AI for good: using AI to combat AI malware (19')
• How a Russian bank used AI chatbots to divert fraudsters (19'40")
• Data poisoning: manipulating the training data for AI models (22'10")
• Techniques for tricking LLMs (23')
• Only state actors can manipulate AI models at scale (25'40")
• The use of SMS blasters by fraudsters is exploding! (27')
We're excited to launch a brand-new series on the Product for Product Podcast, with Matt and Moshe diving deep into the world of AI tools for product managers. In this special episode, we set the stage for upcoming conversations by exploring how AI is becoming an indispensable partner in every stage of the product management journey.

Join us as Matt and Moshe discuss:
- The rapidly evolving role of AI throughout the product management workflow, from idea generation and discovery to strategy, prioritization, delivery, launch, and ongoing monitoring
- The importance of using AI as a tool for knowledge and insight, rather than replacing critical thinking and understanding
- How product managers can leverage Large Language Models (LLMs) for research, writing, and scenario planning
- The realities and limitations of today's AI tools, including the challenges of ensuring accuracy and context in product work
- Exploring the promise of AI platforms for rapid prototyping and MVP testing
- How AI can help bridge the gap between prototyping and actually building production-ready products
- Using AI to inform strategic decisions, pricing, packaging, prioritization, and risk assessment
- Integrating AI into your board and backlog systems for smarter feedback synthesis and decision-making
- Enhancing sprint-based development with AI-generated user stories, acceptance criteria, and more
- Upcoming content around data consolidation, go-to-market strategies, and ways AI is changing the PM discipline
- And much more!

Whether you're just starting to experiment with AI or looking to deepen how you use it in your product practice, this series is for you. Stay tuned for practical examples, case studies, and discussions that will help you harness the latest AI tools, while remembering that the best PMs know how to balance tech innovation with human judgment.

Connect with us and follow the rest of the series:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky

Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.

Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
The excerpts provide a comprehensive overview of how Artificial Intelligence (AI) agents are built and how they work, with a particular focus on systems based on Large Language Models (LLMs) such as ChatGPT. The material describes the fundamental aspects of agents, including their main components (profiles/personas, actions/tools, memory/knowledge, reasoning/evaluation, and planning/feedback) and how these are implemented in platforms such as AutoGen Studio, CrewAI, and the Nexus development environment. The texts also explore prompt engineering techniques, such as Chain of Thought (CoT) and zero-shot prompting, and information retrieval mechanisms (Retrieval Augmented Generation, RAG) that give agents the knowledge and contextual memory essential for autonomy and solving complex problems. The concept of agent behaviour trees (ABTs) is also introduced as a robust control pattern for autonomous systems.
AI, LLMs and copywriting: why you shouldn't check your brain at the cloakroom. Moin from the dike! Klaus and Patrick talk straight about the hottest topic in content marketing: AI and Large Language Models (LLMs). For simplicity we call it AI, even though we know that experts speak of LLMs. Since the breakthrough of ChatGPT, text creation has changed rapidly. In this spontaneous episode we clarify how much time you can really save, where LLMs hit their limits (especially important for dental practices!), and why your brain is still the most important tool for creating excellent content. Note: the answers in this episode reflect the status quo as of 5 November 2025, and the topic is changing extremely fast! The central question of this episode is whether you still need your own head at all when creating texts with tools like ChatGPT. Our conclusion: you absolutely need your own head to check the results, evaluate them, and take responsibility for the final output. Klaus explains how his work has changed significantly since he started using LLMs: he can now save 80 percent of the time in the base-text process. That time is not simply cut away, however, but reinvested in optimisation, further research, and excellence. We discuss how AI has above all replaced the "mediocre middle" of text quality, and how important it is to treat LLMs as sparring partners and tools, not as a replacement for human expertise or empathy. In the sensitive field of dentistry (so-called Your Money, Your Life content), human oversight is essential, since studies show that AI output can often be flawed (figures of up to 40 percent false statements on informational questions are cited). In the end, text creation remains a symbiosis of human and machine, with the human bearing final responsibility.
The key takeaways in short headlines: - You always need your own head!: Please don't check your brain at the cloakroom of artificial intelligence, because you are 100 percent responsible for the input and for the result that ultimately appears under the practice's legal notice. - 80% time savings are possible, but reinvest them in quality: LLMs can save you up to 80 percent of the former working time when drafting the initial structure and base texts. Use the time gained to review and optimise the results and create excellence. - AI has replaced the mediocre middle: the output that mediocre hobby copywriters delivered for decades can now be matched by AI even with a poor prompt. The field is narrowing: either you are good, or you will be replaced. - The creative process is shifting; think backward planning: the real creativity now lies in working out how to steer the LLM, with the right prompt, towards the desired end product (content piece, blog article, etc.). It's a new tool in your toolbox. - Focus on the quality, not the origin of the text: it makes no difference whether a text was written by a human or an AI; a bad text is a bad text. If your standard is to create an excellent text with AI, no one will recognise it as generated either.
Contact Patrick and Klaus: - [Patrick > LinkedIn](https://www.linkedin.com/in/patrick-neumann-3bb03b128) - patrick.neumann@parsmedia.info - [Klaus > LinkedIn](https://www.linkedin.com/in/klausschenkmann) - klaus.schenkmann@parsmedia.info - Call with Klaus: [Book an appointment](https://doodle.com/bp/klausschenkmann/marketing-talk-mit-klaus) Always there for you: - [parsmedia Website](https://parsmedia.info) - [Praxismarketing-Blog](https://parsmedia.info/praxismarketing-blog) - [parsmedia Instagram ](https://www.instagram.com/parsmedia.praxi
In this episode, we explore the future of medicine beyond artificial intelligence with Joseph Geraci, mathematician, medical scientist, and quantum machine learning pioneer. While today's AI breakthroughs and large language models (LLMs) have transformed diagnostics, data analysis, and drug discovery, quantum computing could completely redefine what's possible in healthcare. Dr. Geraci breaks down how quantum algorithms, molecular simulations, and quantum machine learning could unlock a new era of precision medicine, drug discovery, and disease taxonomy: computing biology at the speed of biology itself.

He also explains:
- What makes quantum computation fundamentally different from classical AI
- Why current AI models like ChatGPT and medical LLMs face memory and accuracy limits
- How quantum algorithms could solve complex protein folding, molecular binding, and patient stratification problems
- The role of quantum optimization, error correction, and entanglement in solving medical challenges
- The potential of quantum-enhanced digital twins and quantum sensors for diagnostics and clinical research
- Why the next decade will be critical for building the foundations of quantum-powered healthcare

Whether you're in AI research, biotech, data science, or clinical innovation, this episode offers a visionary look into what comes after AI, and why quantum computation could become medicine's ultimate game changer.

About the Podcast
AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems. Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare.
Whether you're in the medical field, technology sector, or just curious about AI's role in social good, this podcast offers valuable insights.

AI For Pharma Growth is the podcast from pioneering pharma artificial intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how AI-based technologies can easily save them time and grow their brands and business. The show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotech right through to Big Pharma. In this podcast, Dr. Andree will teach you the tried and true secrets to building a pharma company using AI that anyone can use, at any budget. As the author of many peer-reviewed journal articles, and having addressed over 500 industry conferences across the globe, Dr. Andree Bates uses her obsession with all things AI and futuretech to help you navigate the sometimes confusing but magical world of AI-powered tools to grow pharma businesses. This podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights and so much more.

Dr. Andree Bates LinkedIn | Facebook | Twitter
AI has arrived in project management, but not always where you would expect it. Uwe Techt gives concrete examples of how Large Language Models can help today, where they fail, and what may be possible in the future. Between Judith and booking.com: a look at real-world use cases. Shownotes: https://swiy.co/PTP223
Welcome to the CanadianSME Small Business Podcast, hosted by Maheen Bari. In this episode, we dive into how AI is transforming search marketing and how Canadian businesses can stay visible, relevant, and competitive in the evolving digital landscape.

Joining us is Nicolas Rabouille, Co-founder and CEO of Rablab, a Montreal-based digital marketing agency that has helped mid-sized businesses boost their online visibility for over a decade. Nicolas shares his insights on AI-powered search, the fundamentals of Large Language Models, and how discipline and consistency (the Ironman mindset) can drive both business and personal success.

Key Highlights:
1. The Evolution of Search: How conversational and zero-click queries are redefining SEO and digital visibility.
2. Demystifying LLMs: Understanding grounding, vector embeddings, and how they influence modern search behavior.
3. Optimizing for AI Search: Strategies like semantic SEO and structured data to stay discoverable in AI-powered engines.
4. The Ironman Mindset: How discipline and endurance fuel Nicolas's leadership and long-term SEO vision.
5. Future of Rablab: How Rablab's free GEO audit helps businesses identify gaps and get cited on AI engines.

Special Thanks to Our Partners:
RBC: https://www.rbcroyalbank.com/dms/business/accounts/beyond-banking/index.html
UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
Google: https://www.google.ca/
A1 Global College: https://a1globalcollege.ca/
ADP Canada: https://www.adp.ca/en.aspx

For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!

Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to AI Unraveled, the briefing for enterprise leaders building production AI. Today, we're dissecting the industrial challenge: How do you feed a massive LLM like Gemini or Copilot data from a decades-old oil pipeline and a brand-new inspection drone, all at once? The answer is a new, three-stage architecture of data fusion. We'll show you how Energy and Construction are moving from data silos to unified, prescriptive intelligence. Stick with us. This is Industrial AI Unlocked. But first, a crucial message for our corporate partners:
Nikita Agarwal, Founder of Milestone Localization, joins SlatorPod to talk about her journey founding a language solutions integrator (LSI) and launching Cavya.ai, a platform designed to streamline translation project preparation.

Nikita began Milestone Localization in 2020 after discovering the language industry while working in international sales. She was drawn to the field's global scope and low barrier to entry. She emphasizes that sales experience played a crucial role in landing early clients and understanding the value of hiring people from within the industry.

The founder reflects on the past 16 months as a period of intense change marked by AI disruption, client pressure on pricing, and shifting expectations. She highlights how regulated sectors like life sciences have helped stabilize the company amid volatility. She details how the LSI specializes in medical device translations and regulatory submissions across Europe.

Nikita explains that her new platform, Cavya.ai, emerged from internal needs to improve project preparation. She says the tool automates glossaries, style guides, and document analysis, reducing time and boosting consistency for small and mid-sized projects.

The founder shares her observations on India's evolving language technology landscape, noting significant progress in AI for major Indian languages. She says increased internet access and AI-driven localization are expanding education and job opportunities across the country.

Nikita concludes that she sees the future in expanding life sciences work, refining Cavya, and developing an AI-powered QA tool. She notes that some clients are showing "AI fatigue" and returning to human-led workflows.
Adam D'Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it. In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, whether we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering. Plus: why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.

Resources:
Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
John Bush has created a way for businesses to be seen by Large Language Models, called Answer Engine Optimisation.

Summary of Podcast

Podcast milestone and background
Kevin and Graham discuss reaching their 500th episode of The Next 100 Days podcast. They reflect on their journey over the past 10 years and how the podcast has evolved. They introduce their guest, John Bush, an expert in "Answer Engine Optimisation" (AEO). John will discuss how businesses can optimise their content for AI-powered search engines like ChatGPT.

John's background and AEO concept
John shares his background, including his experience in telecom, startups, and cloud infrastructure. He explains how he became interested in AEO after seeing the impact of AI on his marketing consultant friend's business. John describes the process of developing an AEO analysis tool. The tool evaluates websites on factors like visibility, accessibility, and authority. The outcome means businesses can make their content more searchable and usable by large language models.

The changing landscape of search and AI
Kevin and John discuss the declining importance of traditional Google search and the growing prominence of AI-powered search tools like ChatGPT. They explore how businesses need to adapt their content and website structure to be more easily understood and referenced by these new search engines, rather than just optimising for Google.

Practical applications of AEO
John demonstrates a tool his team has developed that can automatically analyse a company's competitive landscape and provide insights based on the data, without relying on the company to manually gather and synthesise the information. He explains how this type of AI-powered analysis can be applied to various business functions, such as RFP responses and lifetime value calculations.

Challenges and considerations around AI-generated content
Kevin raises concerns about the potential risks of using AI-generated content, such as the ability to verify the accuracy and provenance of the information. John discusses efforts to address these issues, including watermarking content and providing audit trails for AI-powered decisions.

The future of AI in business
John and Kevin discuss the broader implications of AI in the enterprise. They cover the importance of data stewardship, security, and the role of human expertise in augmenting AI capabilities. They explore how AI can be used to automate and enhance various business processes, while also highlighting the need to carefully manage the integration of these technologies.

Wrap-up and reflections on the podcast
Kevin and Graham reflect on the evolution of The Next 100 Days podcast over the past 10 years, noting the shift in focus towards AI and technology. They express their enthusiasm for continuing the podcast and exploring the latest developments in this rapidly changing landscape.

The Next 100 Days Podcast Co-Hosts
Graham Arrowsmith
Graham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads and conversions. Now, Graham is offering Answer Engine Optimisation that gets you...
ElevenLabs CEO and co‑founder Mati Staniszewski joins Jennifer Li to explain how the team ships research‑grade AI at lightning speed—from text‑to‑speech and fully licensed AI music to real‑time voice agents—and why voice is the next interface for human‑computer interaction. He shares the small, autonomous team model, global hiring approach, and how the Voice Marketplace has paid creators over $10M while evolving into an enterprise platform.

Resources:
Follow Mati on X: https://x.com/matistanis
Follow Jennifer on X: https://x.com/JenniferHli

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
On this episode of Reliability Radio, hosts Jonathan Guiney and Brendan Russ welcome Bill Kilbey, Chief Reliability Officer from Weight Sensor Technology, to discuss the ongoing tension between AI hype and the human element in condition monitoring. Drawing on his background from the U.S. Navy's nuclear submarines, Bill emphasizes that sound and vibration analysis is a detective game that requires deep human expertise. He argues that while AI is necessary to process 5 billion readings a day, a human analyst must always be in the loop to provide curated, in-context data and prevent inaccurate, unqualified recommendations.

Key Takeaways:
- The AI Reality: Learn why the hype cycle is in full swing, but true predictive maintenance still relies on certified analysts with industry experience.
- Beyond the Sensor: How the company won the Solution Award by creating a product that enables customers, not replaces them.
- Future of Data: The powerful role of Large Language Models (LLMs) in ingesting documents (like motor manuals) and cross-correlating them with analysis data to provide contextual maintenance guidance.
- The Final Warning: A call to arms to put down the phone, use your brain, and not outsource critical thinking to a digital twin.
Send us a text

The Causal Gap: Truly Responsible AI Needs to Understand the Consequences

Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.

In this episode, we discuss:
- Zhijing's new work on the "causal scientist"
- What's missing in responsible AI
- Why ethics matter for agentic systems
- Is causality a necessary element of moral reasoning?

Video version available on YouTube: https://youtu.be/Frb6eTW2ywk
Recorded on Aug 18, 2025 in Tübingen, Germany.

About The Guest
Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS best paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto.

Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
If you're someone who values being able to think independently, then you should be troubled by the fact that your brain operates all too much like ChatGPT. I'm going to explain how that undermines your ability to think for yourself, but I'll also give you a key way to change it.

How ChatGPT "Thinks"

Let's first understand how ChatGPT "thinks." ChatGPT is one of several Artificial Intelligences called a Large Language Model, or LLM. All LLMs use bulk sources of language—like articles and blogs they find on the internet—to find trends in what words are most likely to follow other words. To do so, they identify key words that stand out as most likely to lead to other words. Those key words are called "tokens." Tokens are the words that cue the LLM to look for other words.

So, as a simple example for the sake of argument, let's say we ask an LLM, "what do politicians care about most?" When the LLM receives that question, it creates two tokens: "politicians" and "care." The rest of the words are irrelevant. Then, the LLM scours the internet for its two tokens. Though I did not run this through an LLM, it might find that the words most likely to follow the sequence [politicians]>[care] are: "constituents," "money," and "good publicity."

But because LLMs only return what is probabilistically likely to follow what they identify as their tokens, an LLM probably would not come up with [politicians]>[care about] moon rocks, because the internet does not already have many sentences where the words "moon rocks" follow the token sequence "politicians" and "care."

Thus, LLMs, though referred to as Artificial Intelligence, really are not intelligent at all, at least not in this particular respect. They really just quickly scour the internet for words that are statistically likely to follow other "token" words, and they really cannot determine the particular value, correctness, or importance of the words that follow those tokens.
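The word-prediction picture sketched above can be made concrete in a few lines of Python. This is a toy illustration of the simplified description in this essay, not how ChatGPT actually works (real LLMs run neural networks over subword tokens rather than counting word pairs), and the tiny corpus below is invented for the "politicians care" example:

```python
from collections import Counter, defaultdict

# A made-up mini-corpus standing in for "bulk sources of language."
corpus = (
    "politicians care about constituents . "
    "politicians care about money . "
    "politicians care about money . "
    "politicians care about good publicity ."
).split()

# Count which word follows each pair of preceding words.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(a, b):
    """Return the statistically most likely word to follow the pair (a, b)."""
    counts = follows.get((a, b))
    return counts.most_common(1)[0][0] if counts else None

print(predict("care", "about"))  # most frequent continuation in the corpus
print(predict("moon", "rocks"))  # never seen, so nothing is predicted
```

Run on this corpus, the model answers "money" after "care about" because that pairing is most frequent, and it has no answer at all for "moon rocks": exactly the point that such a system only returns what its sources already say.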
In other words, they cannot drum up smart, clever, unique, or original ideas. They can only lumber their way toward identifying statistically likely word patterns. If we were to write enough articles that said "politicians care about moon rocks," the LLMs would return "moon rocks" as the answer even though that's really nonsensical.

So, in a nutshell, LLMs just connect words that are statistically likely to follow one another. There's more to how LLMs work, of course, but this understanding is enough for our discussion today.

How your Brain Operates Like ChatGPT

You're probably glad that your brain doesn't function like some LLM dullard that just fills in word gaps with ready-made phrases, but I have bad news: our brains actually function all too much like LLMs.

The good news about your brain is that one of the primary ways it keeps you alive is by constantly functioning as a prediction engine. Based on whatever is happening now, it is literally charging up the neurons it thinks it will need to use next.

Here's an example: The other day, my son and I were hiking in the woods. It was a rainy day, so as we were hiking up a steep hill, my son tripped over a great white shark.

When you read that, it actually took your brain longer to process the words "great white shark" than the other words. That's because when your brain saw the word "tripped," it charged up neurons for other words like "log" and "rock," but did not charge up neurons for the words "great white shark." In fact, your brain is constantly predicting in so many ways that it is impossible to define them all here. But one additional way is in terms of the visual cues words give it. So, if you read the word "math," your brain actually charges up networks to read words that look similar, such as "mat," "month," and "mast," but it does not charge up networks for words that look very different, like "engineer."

Ultimately, you've probably seen the brain's power as a prediction engine meet utter failure.
If you've ever been to a surprise party where the guest of honor was momentarily speechless, then you've seen what happens to the prediction engine when it is unprepared for what happens next. The guest of honor walked into their house expecting, for the sake of argument, to be greeted by their dog or to go to the bathroom, but not by a house full of people. So, their brain literally had to switch functions, and it took a couple of seconds to do it.

But the greater point about how your brain operates like ChatGPT should be becoming clear: If we return to my hiking example, where I said, "my son and I were hiking and he tripped over a ___," then we see that your brain also essentially used "tokens" like ChatGPT to predict the words that would come next. When it saw "hiking" and "tripped," it cued up words like "log" and "rock," but not words like "great white shark," and it did so for the same reason ChatGPT does so: it prepared for the words likely to follow its tokens.

The Danger of "Thinking" Like ChatGPT

The good thing about the fact that your brain operates as a prediction engine is that you're not surprised by every word you hear or read. Imagine if every time a server approached you at a restaurant, you had absolutely no idea what they would say. Imagine if they were just as likely to say "hi, Roman soldiers wore shoes called Caliga" as "Hi, may I take your order?" Every conversation would be chaos. Neither person would have any idea what the other person would say next.

So, the fact that your brain charges up neurons for words it expects to use is good in one sense, but have you stopped to ask what makes it charge up certain words instead of others? Where does it get the words it charges up to use next?

If I ask you to complete this sentence: "All politicians are ____," what words immediately spring to mind? Where did those words come from?
If you reflect for a moment, you'll probably realize that most of the words come from things you've heard on social media or in major media. You might even be able to identify the particular sources from which you heard them.

So, if your brain operates as a prediction engine, and if that prediction engine charges up neurons for words it expects to follow other word "tokens," and if the words it charges up come from sources like social media, then how can you believe that you're really thinking for yourself? In many ways, you're not. Your brain, like every brain, adopts ready-made word sequences that it regularly hears.

If you engage a lot of conservative sources, you'll finish "All progressives are ___" differently than if you engage a lot of progressive sources. And vice versa. Either way, you're not thinking independently.

How to Think (More) Independently

Even though it's not possible to fully break free of our brain's reliance on the words it frequently encounters, it is possible to think much more independently than most people do. And it's not even all that difficult.

To do that, start challenging the pre-prepared language and ideas that your brain generates. Remember, whenever you hear some words, your brain has already prepared other words. So, if you want to think better, do not simply accept the words that your own brain wants to use. Instead, challenge your brain's selection of words by consciously considering other words.

For example, let's say I ask you to complete this sentence: "All politicians are ____." And let's say, to keep it simple, that your next word is "liars." "Liars" is the word your brain hands you, and let's say, as well, that you generally, in broad strokes, think that's true; you think that politicians generally are liars.

But if you want to think for yourself, then you can't just let your brain fill in the blank with the easiest word.
If you do, you'll be using the phrases given to you by outside sources.

Instead, start to challenge that exact word, "liars," with other words. For instance, you might ask yourself, "Is 'liars' really the right word, or is it more accurate to say that I think politicians are 'dishonest'?" After all, they might be dishonest in ways that do not involve lies. Or do I mean that they aren't so much dishonest or liars as narcissists, or opportunists who exist in a corrupt system?

See, even though we find some general truth in the idea that politicians are "liars," when we consider other words, we actually think. We think for ourselves. We question the words that outside sources have given us. So, maybe, even though we do think that politicians sometimes lie, we also realize, as we consider it more deeply, that "liars" isn't the best word. Instead, we figure out that "narcissists" might be even more accurate, or at least be another word needed to complete the thought.

So, don't let your brain just follow its tokens to ready-made language from social media. Take the words that your prediction engine generates, and consciously challenge them with other words. Then you'll think for yourself amidst a world of people who not only use ChatGPT but who also "think" like it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com
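The essay above reduces LLMs to engines that "connect words that are statistically likely to follow one another." For readers who want to see that idea in its smallest possible form, here is a toy bigram counter in Python. It is an illustration only, nowhere near a real LLM (which learns neural representations rather than raw word counts), and the tiny corpus is invented for the hiking example:

```python
from collections import defaultdict, Counter

# A tiny invented "training corpus" echoing the hiking example.
corpus = (
    "my son tripped over a log . "
    "my son tripped over a rock . "
    "my son tripped over a root ."
).split()

# Count, for each word, which words followed it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("over"))   # "a" is the only word that ever followed "over"
print(predict("a"))      # one of "log", "rock", "root"; never "shark"
```

On this corpus the model will happily complete "tripped over a" with "log" or "rock," but it can never produce "shark," because no statistics connect those words; that is the whole mechanism the essay describes, both for LLMs and, loosely, for the brain's prediction engine.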
Google gets billions of searches every day. But now, the tech giant wants to be an AI-fuelled "answer engine" rather than a gateway to other sites. That poses a massive threat to journalism, but it will also affect the information we see and don't see.
In this episode, Honey Mittal, CEO and co-founder of Locofy.ai, explores one of the most exciting transformations in software development: the convergence of design and engineering through AI-powered automation.

Honey shares the fascinating journey of building Locofy, a tool that converts Figma designs into production-ready front-end code. But this isn't just another AI hype story. It's a deep dive into why Large Language Models (LLMs) fundamentally can't solve design-to-code problems, and why his team spent four years building specialized "Large Design Models" from scratch.

Key topics discussed:
Why 60-70% of engineering time goes to front-end UI code (and how to automate it)
The technical limitations of LLMs for visual design understanding
How proper design structure is the key to successful code generation
The emergence of "design engineers" who bridge design and development
Lessons from pivoting from consumer to enterprise SaaS
Building global developer tools from Southeast Asia
The real challenges of building deep tech startups in Southeast Asia
Career advice for staying relevant in the AI era

Whether you're a front-end engineer tired of translating designs pixel by pixel, a designer curious about coding, or a technical leader evaluating AI development tools, this episode offers practical insights into the future of software development.

Timestamps:
(00:00:00) Trailer & Intro
(00:02:13) Career Turning Points
(00:05:28) Transition from Developer to Product Management
(00:09:53) The Key Product Lessons from Working at Major Startups
(00:14:12) Learnings from Locofy's Product Pivot Journey
(00:19:36) An Introduction to Locofy
(00:22:40) The Story Behind the "Locofy" Name
(00:23:27) How Locofy Generates Pixel-Perfect & Accurate Code
(00:28:01) Why Locofy Pivoted to Focus on Enterprises
(00:29:39) Locofy's Code Generation Process
(00:32:13) Why Locofy Built Its Own Large Design Model
(00:39:25) Locofy Integration with Existing Development Tools
(00:42:44) LLM Strengths and Weaknesses
(00:48:47) Other Challenges Building Locofy
(00:50:59) The Future of Design & Engineering
(00:58:35) The Future of AI-Assisted Development Tools
(01:02:53) There Is No AI Moat
(01:04:37) The Potential of SEA Talents Solving Global Problems
(01:08:14) The Challenges of Building Dev Tools in SEA
(01:10:39) The Challenges of Being a Fully Remote Company in SEA
(01:14:36) Locofy Traction and ARR
(01:18:09) 3 Tech Lead Wisdom

_____

Honey Mittal's Bio
Honey Mittal is the CEO and co-founder of Locofy.ai, a platform that automates front-end development by converting designs into production-ready code. Originally an engineer who built some of the first mobile apps in Singapore, Honey transitioned into product leadership after realizing his natural strength lay in identifying high-impact problems. He set a goal to become a CPO by 30 and achieved it, leading product transformations at major Southeast Asian scale-ups like Wego, FinAccel, and Homage. Driven by a decade of experience and the "grunt work" he and his co-founder faced, he started Locofy to solve the costly friction between design and engineering. Honey is passionate about the future of AI in development, the rise of the "Design Engineer", and proving that globally competitive, deep-tech companies can be built from Southeast Asia.

Follow Honey:
LinkedIn – linkedin.com/in/honeymittal
Twitter – x.com/HoneyMittal07
Website – locofy.ai

Like this episode?
Show notes & transcript: techleadjournal.dev/episodes/236
Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
Buy me a coffee or become a patron.
Co-hosts Andrew Kliman and Gabriel Donnelly speak with guest Gavin Mueller, an assistant professor of New Media and Digital Culture at the University of Amsterdam. Mueller researches the politics of digital culture, and much of our discussion centers on the realities of artificial intelligence in our time. They consider the huge amount of money being spent on Large Language Models, how they work, and what they can actually do as opposed to what all the hype says they can (or will be able to???) do. Additionally, the discussants consider how workers can fight the encroachment of this new, automated technology into the workplace. Our discussion leans on parts of Gavin's book Breaking Things at Work: The Luddites Are Right About Why You Hate Your Job. They consider what Marx said about technology and automation, and how it applies to this situation. Plus a current-events segment: the co-hosts discuss the political indictments handed down from the Trumpist Department of Justice that have targeted personal foes of Trump: James Comey, Letitia James, and John Bolton. Radio Free Humanity is co-hosted by Gabriel Donnelly and Andrew Kliman, and sponsored by Marxist-Humanist Initiative (https://www.marxisthumanistinitiative.org/).
| S03 E08 | This week on Thinking Faith: The Catholic Podcast, Deacon Eric Gurash and Dr. Brett Salkeld discuss the intersection of artificial intelligence and personhood through the lens of a Catholic anthropology. Drawing on personal experiences with ChatGPT and what the Large Language Model has to say about itself, they reflect on what these interactions reveal about what it means to be human, and what they don't. Together, they unpack the theological, ethical, and philosophical implications of rapidly advancing AI technology in light of Catholic teaching on the human person, reason, and the soul.
AI Competition: US Leads China in Data Center Race; Europe Is a 'Non-Factor' Chris Riegel, Stratacache, with John Batchelor Riegel discussed the global race involving data center building and the growth of large language models for AI. Riegel asserts that the competition is a "two-horse race" between the U.S. and China. The U.S. currently leads by maybe one to two years due to its focus on development, capital, and infrastructure. The European Union, conversely, is described as a "non-factor" and "nowhere" in this technological competition. Most top engineering talent in this space comes specifically to the United States for opportunity. Riegel noted that the capital developed by an individual like Elon Musk easily out-competes all of Europe's governmental funding toward advanced AI and data centers.
Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.

They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns, and whether "good enough" AI might actually block progress toward true general intelligence.

Resources:
Follow Amjad on X: https://x.com/amasad
Follow Marc on X: https://x.com/pmarca
Follow Erik on X: https://x.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
She has millions of followers, lands six-figure brand deals, and lives a life of curated perfection. The only catch? She isn't real. She was entirely created by artificial intelligence.

Welcome to the unsettling world of synthetic influencers.

In this compelling episode of Privacy Please, we dive deep into the booming industry of AI-generated online personalities. Discover:

The Technology: How advanced AI image generators, 3D modeling, and Large Language Models combine to create hyper-realistic avatars and their compelling "personalities."
The Business Case: Why major brands and marketing agencies are investing millions in digital beings that offer total control, scalability, and no risk of scandal.
The Privacy & Ethical Dilemmas: We explore the "uncanny valley" of trust, the impact of deception by design, the new extremes of unrealistic beauty standards, and the potential for these AI personas to be used for sophisticated scams or propaganda.
The Future of Authenticity: What does the rise of the synthetic star mean for human creativity, genuine connection, and the very definition of "real" in our digital world?

It's a future that's already here, shaping what we see, what we buy, and even what we believe.

Key Topics Covered:
What are virtual/synthetic influencers?
Examples: Lil Miquela, Aitana Lopez, Shudu Gram
AI technologies used: image generation, 3D modeling, LLMs
Reasons for their rise: control, cost, scalability, data collection
Ethical concerns: deception, parasocial relationships with AI
Impacts: unrealistic standards, displacement of human creators, potential for malicious use (scams, propaganda)
Debate around regulation and disclosure for AI-generated content
The future of authenticity and trust online

Connect with Privacy Please:
Website: theproblemlounge.com
YouTube: https://www.youtube.com/@privacypleasepodcast
LinkedIn: https://www.linkedin.com/company/problem-lounge-network

Resources & Further Reading (Sources Used / Suggested):
Federal Trade Commission (FTC): guidelines on disclosure for influencers (relevant for future AI disclosure discussions)
Academic research: studies on parasocial relationships with media figures (can be applied to AI); research on the ethics of AI and synthetic media
Industry insights: reports from marketing agencies on virtual influencer trends; articles from tech publications (e.g., Wired, The Verge, MIT Tech Review) covering Lil Miquela and similar figures

Support the show
Discoverability isn't "just SEO" anymore. It's the entire customer journey. VML's Chief Discoverability Officer, Heather Physioc, joins host Lacey Peace to unpack how AI search, LLM overviews, social media channels, and agentic assistants are rewriting how customers find, trust, and choose brands. We cover: the rise of zero-click results and GEO (generative engine optimization), why trust + authority beat content volume, connecting your content supply chain, and where to invest next. Practical, human-centered, and way beyond keyword stuffing.

Key Moments:
00:00 Meet Heather Physioc, VML's Chief Discoverability Officer
07:33 What Is a Chief Discoverability Officer?
10:07 Discoverability's Role in the Modern Customer Journey
13:00 The Biggest Gaps in Marketing and CX Today
17:00 From 10 Blue Links to AI Overviews: The Timeline of Discoverability
22:00 How AI Overviews Are Changing Search Behavior
23:45 Three Shifts Defining the AI Search Revolution
27:45 Is This the Death of the Website?
28:40 Can We Track What People Search on LLMs?
30:53 Does SEO Still Matter in an AI-First World?
33:17 What Platforms Actually Matter Most Right Now
37:00 Building Trust and Authority in the Age of AI Content
40:30 The Content Supply Chain: Why Brands Struggle to Connect the Dots
43:33 The New Metrics That Actually Matter for Discoverability
45:26 Ad Buying and Sponsored Content in LLM Search
48:05 The Next Challenges Every Brand Should Prepare For
50:00 AI Assistants and the Rise of the AI Buyer
54:25 The One Fundamental Truth About Human Search Behavior

Are your teams facing growing demands? Join CX leaders transforming their AI strategy with Agentforce. Start achieving your ambitious goals. Visit salesforce.com/agentforce

Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Uber is paying its drivers as little as $1 to train LLMs.
In this episode of the Colombia Calling podcast, host Richard McColl engages with academics David Anderson (Associate Professor in Analytics at Villanova University in PA) and Galia Benitez (Associate Professor of International Relations at Michigan State University) to discuss their research on using Large Language Models (LLMs) to analyse violence in Colombia. They explore the challenges of data collection, the human impact of their findings, and the importance of interdisciplinary collaboration in social science research. The conversation delves into the complexities of measuring violence, the relationship between coca eradication and violence, and the future of research in this area amidst funding challenges. Read the full report entitled: "Using LLMs to create analytical datasets: A case study of reconstructing the historical memory of Colombia." https://arxiv.org/abs/2509.04523 Tune in to this and the Colombia Briefing with Emily Hart. Only for subscribers this week. https://harte.substack.com/
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions. * Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design. * Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing. * Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims. * Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively ‘objective' results. * Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system. * Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm. * Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes. * Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal. 
* Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances, because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective. * Number 1: OpenAI's Scheming Models (2025). OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. They faked compliance to hide their true behavior. That's AI deliberately learning to scheme.
What happens when AI stops making mistakes… and starts misleading you?

This discussion dives into one of the most important, and least understood, frontiers in artificial intelligence: AI deception. We explore how AI systems evolve from simple hallucinations (unintended errors) to deceptive behaviors, where models selectively distort truth to achieve goals or please human feedback loops. We unpack the coding incentives, enterprise risks, and governance challenges that make this issue critical for every executive leading AI transformation.

Key Moments:
00:00 What Is AI Deception and Why It Matters
03:43 Emergent Behaviors: From Hallucinations to Alignment to Deception
04:40 Defining AI Deception
06:15 Does AI Have a Moral Compass?
07:20 Why AI Lies: Incentives to "Be Helpful" and Avoid Retraining
15:12 Is Deception Built into LLMs? (And Can It Ever Be Solved?)
18:00 Non-Human Intelligence Patterns: Hallucinations or Something Else?
19:37 Enterprise Impact: What Business Leaders Need to Know
27:00 Measuring Model Reliability: Can We Quantify AI Quality?
34:00 Final Thoughts: The Future of Trustworthy AI

Mentions:
Scientists at OpenAI and Apollo Research showed in a paper that AI models lie and deceive: https://www.youtube.com/shorts/XuxVSPwW8I8
TIME: New Tests Reveal AI's Capacity for Deception
OpenAI: Detecting and reducing scheming in AI models
StartupHub: OpenAI and Apollo Research Reveal AI Models Are Learning to Deceive: New Detection Methods Show Promise
Marcus Weller
Hugging Face

Watch next: https://www.youtube.com/watch?v=plwN5XvlKMg&t=1s

--

This episode of IT Visionaries is brought to you by Meter, the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything: wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

---

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
From GPT-1 to GPT-5, LLMs have made tremendous progress in modeling human language. But can they go beyond that to make new discoveries and move the needle on scientific progress?

We sat down with distinguished Columbia CS professor Vishal Misra to discuss this, plus why chain-of-thought reasoning works so well, what real AGI would look like, and what actually causes hallucinations.

Resources:
Follow Dr. Misra on X: https://x.com/vishalmisra
Follow Martin on X: https://x.com/martin_casado

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention
GUEST NAME: Jack Burnham
SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and by using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.
CBS EYE ON THE WORLD WITH JOHN BATCHELOR 1900 KYIV THE SHOW BEGINS IN THE DOUBTS THAT CONGRESS IS CAPABLE OF CUTTING SPENDING..... 10-8-25 FIRST HOUR 9-915 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza ConflictGUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government. 915-930 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza ConflictGUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government. 930-945 HEADLINE: Russian Oil and Gas Revenue Squeezed as Prices Drop, Turkey Shifts to US LNG, and China Delays Pipeline GUEST NAME: Michael Bernstam SUMMARY: John Batchelor speaks with Michael Bernstam about Russia facing severe budget pressure due to declining oil prices projected to reach $40 per barrel for Russian oil and global oil surplus. Turkey, a major buyer, is abandoning Russian natural gas after signing a 20-year LNG contract with the US. 
Russia refuses Indian rupee payments, demanding Chinese renminbi, which India lacks. China has stalled the major Power of Siberia 2 gas pipeline project indefinitely. Russia utilizes stablecoin and Bitcoin via Central Asian banks to circumvent payment sanctions. 945-1000 HEADLINE: UN Snapback Sanctions Imposed on Iran; Debate Over Nuclear Dismantlement and Enrichment GUEST NAME: Andrea Stricker SUMMARY: John Batchelor speaks with Andrea Stricker about the US and Europe securing the snapback of UN sanctions against Iran after 2015 JCPOA restrictions expired. Iran's non-compliance with inspection demands triggered these severe sanctions. The discussion covers the need for full dismantlement of Iran's nuclear program, including both enrichment and weaponization capabilities, to avoid future conflict. Concerns persist about Iran potentially retaining enrichment capabilities through low-level enrichment proposals and its continued non-cooperation with IAEA inspections. SECOND HOUR 10-1015 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity. 1015-1030 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. 
The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity. 1030-1045 HEADLINE: China's Economic Contradictions: Deflation and Consumer Wariness Undermine GDP Growth ClaimsGUEST NAME: Fraser Howie SUMMARY: John Batchelor speaks with Fraser Howie about China facing severe economic contradictions despite high World Bank forecasts. Deflation remains rampant with frequently negative CPI and PPI figures. Consumer wariness and high youth unemployment at one in seven persist throughout the economy. The GDP growth figure is viewed as untrustworthy, manufactured through debt in a command economy. Decreased container ship arrivals point to limited actual growth, exacerbated by higher US tariffs. Economic reforms appear unlikely as centralization under Xi Jinping continues. 1045-1100 HEADLINE: Takaichi Sanae Elected LDP Head, Faces Coalition Challenge to Become Japan's First Female Prime Minister GUEST NAME: Lance Gatling SUMMARY: John Batchelor speaks with Lance Gatling about Takaichi Sanae being elected head of Japan's LDP, positioning her to potentially become the first female Prime Minister. A conservative figure, she supports visits to the controversial Yasukuni Shrine. Her immediate challenge is forming a majority coalition, as the junior partner Komeito disagrees with her conservative positions and social policies. President Trump praised her election, signaling potential for strong bilateral relations. THIRD HOUR 1100-1115 VHEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. 
A NIST study found US models superior in software engineering, though DeepSeek showed parity on scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and by using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.

1115-1130
HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention
GUEST NAME: Jack Burnham
SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity on scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and by using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.

1130-1145
HEADLINE: Taiwanese Influencer Charged for Threatening President; Mainland Chinese Influence Tactics Exposed
GUEST NAME: Mark Simon
SUMMARY: John Batchelor speaks with Mark Simon about internet personality Holger Chen, under investigation in Taiwan for calling for President William Lai's decapitation. The case highlights mainland Chinese influence operations that use influencers to push themes of military threat and Chinese greatness. Chen is suspected of having a mainland-affiliated paymaster due to a lack of local commercial support. Taiwan's population primarily identifies as Taiwanese and is unnerved by constant military threats. A key propaganda goal is convincing Taiwan that the US will not intervene.
1145-1200
HEADLINE: Sentinel ICBM Modernization is a Critical and Cost-Effective Deterrent Against Great Power Competition
GUEST NAME: Peter Huessy
SUMMARY: John Batchelor speaks with Peter Huessy about the Sentinel program replacing the aging, 55-year-old Minuteman ICBMs, aiming for lower operating costs and improved capabilities. Cost overruns stem from necessary infrastructure upgrades, including replacing thousands of miles of digital command-and-control cabling and building new silos. Maintaining the ICBM deterrent is financially and strategically crucial, saving hundreds of billions compared to relying solely on submarines. The need for modernization reflects the end of the post-Cold War "holiday from history," requiring rebuilding against threats from China and Russia.

FOURTH HOUR

12-1215
HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers
GUEST NAME: Josh Blackman
SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focused on the separation of powers. Currently, the court is addressing presidential impoundment, the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power. The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between the executive and legislative branches.

1215-1230
HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers
GUEST NAME: Josh Blackman
SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focused on the separation of powers. Currently, the court is addressing presidential impoundment, the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power.
The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between the executive and legislative branches.

1230-1245
HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats Up
GUEST NAME: Bob Zimmerman
SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts, five launches to SpaceX and two to ULA, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints.

1245-100 AM
HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats Up
GUEST NAME: Bob Zimmerman
SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts, five launches to SpaceX and two to ULA, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints.