POPULARITY
CBS EYE ON THE WORLD WITH JOHN BATCHELOR 1900 KYIV THE SHOW BEGINS IN THE DOUBTS THAT CONGRESS IS CAPABLE OF CUTTING SPENDING..... 10-8-25 FIRST HOUR 9-915 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza ConflictGUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government. 915-930 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza ConflictGUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government. 930-945 HEADLINE: Russian Oil and Gas Revenue Squeezed as Prices Drop, Turkey Shifts to US LNG, and China Delays Pipeline GUEST NAME: Michael Bernstam SUMMARY: John Batchelor speaks with Michael Bernstam about Russia facing severe budget pressure due to declining oil prices projected to reach $40 per barrel for Russian oil and global oil surplus. Turkey, a major buyer, is abandoning Russian natural gas after signing a 20-year LNG contract with the US. Russia refuses Indian rupee payments, demanding Chinese renminbi, which India lacks. China has stalled the major Power of Siberia 2 gas pipeline project indefinitely. Russia utilizes stablecoin and Bitcoin via Central Asian banks to circumvent payment sanctions. 945-1000 HEADLINE: UN Snapback Sanctions Imposed on Iran; Debate Over Nuclear Dismantlement and Enrichment GUEST NAME: Andrea Stricker SUMMARY: John Batchelor speaks with Andrea Stricker about the US and Europe securing the snapback of UN sanctions against Iran after 2015 JCPOA restrictions expired. Iran's non-compliance with inspection demands triggered these severe sanctions. The discussion covers the need for full dismantlement of Iran's nuclear program, including both enrichment and weaponization capabilities, to avoid future conflict. Concerns persist about Iran potentially retaining enrichment capabilities through low-level enrichment proposals and its continued non-cooperation with IAEA inspections. SECOND HOUR 10-1015 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity. 1015-1030 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity. 1030-1045 HEADLINE: China's Economic Contradictions: Deflation and Consumer Wariness Undermine GDP Growth ClaimsGUEST NAME: Fraser Howie SUMMARY: John Batchelor speaks with Fraser Howie about China facing severe economic contradictions despite high World Bank forecasts. Deflation remains rampant with frequently negative CPI and PPI figures. Consumer wariness and high youth unemployment at one in seven persist throughout the economy. The GDP growth figure is viewed as untrustworthy, manufactured through debt in a command economy. Decreased container ship arrivals point to limited actual growth, exacerbated by higher US tariffs. Economic reforms appear unlikely as centralization under Xi Jinping continues. 1045-1100 HEADLINE: Takaichi Sanae Elected LDP Head, Faces Coalition Challenge to Become Japan's First Female Prime Minister GUEST NAME: Lance Gatling SUMMARY: John Batchelor speaks with Lance Gatling about Takaichi Sanae being elected head of Japan's LDP, positioning her to potentially become the first female Prime Minister. A conservative figure, she supports visits to the controversial Yasukuni Shrine. Her immediate challenge is forming a majority coalition, as the junior partner Komeito disagrees with her conservative positions and social policies. President Trump praised her election, signaling potential for strong bilateral relations. THIRD HOUR 1100-1115 VHEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.E V 1115-1130 HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data. 1130-1145 HEADLINE: Taiwanese Influencer Charged for Threatening President; Mainland Chinese Influence Tactics ExposedGUEST NAME: Mark Simon SUMMARY: John Batchelor speaks with Mark Simon about internet personality Holger Chen under investigation in Taiwan for calling for President William Lai's decapitation. This highlights mainland Chinese influence operations utilizing influencers who push themes of military threat and Chinese greatness. Chen is suspected of having a mainland-affiliated paymaster due to lack of local commercial support. Taiwan's population primarily identifies as Taiwanese and is unnerved by constant military threats. A key propaganda goal is convincing Taiwan that the US will not intervene. 1145-1200 HEADLINE: Sentinel ICBM Modernization is Critical and Cost-Effective Deterrent Against Great Power CompetitionGUEST NAME: Peter Huessy SUMMARY: John Batchelor speaks with Peter Huessy about the Sentinel program replacing aging 55-year-old Minuteman ICBMs, aiming for lower operating costs and improved capabilities. Cost overruns stem from necessary infrastructure upgrades, including replacing thousands of miles of digital command and control cabling and building new silos. Maintaining the ICBM deterrent is financially and strategically crucial, saving hundreds of billions compared to relying solely on submarines. The need for modernization reflects the end of the post-Cold War "holiday from history," requiring rebuilding against threats from China and Russia. FOURTH HOUR 12-1215 HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers GUEST NAME: Josh Blackman SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focusing on the separation of powers. Currently, the court is addressing presidential impoundment—the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power. The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between executive and legislative branches. 1215-1230 HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers GUEST NAME: Josh Blackman SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focusing on the separation of powers. Currently, the court is addressing presidential impoundment—the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power. The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between executive and legislative branches. 1230-1245 HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats UpGUEST NAME: Bob Zimmerman SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts to SpaceX for five launches and ULA for two launches, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints. 1245-100 AM HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats UpGUEST NAME: Bob Zimmerman SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts to SpaceX for five launches and ULA for two launches, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints.
VHEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.E 1959
VHEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data. 1942
AI Assisted Coding: Pachinko Coding—What They Don't Tell You About Building Apps with Large Language Models, With Alan Cyment In this BONUS episode, we dive deep into the real-world experience of coding with AI. Our guest, Alan Cyment, brings honest perspectives from the trenches—sharing both the frustrations and breakthroughs of using AI tools for software development. From "Pachinko coding" addiction loops to "Mecha coding" breakthroughs, Alan explores what actually works when building software with large language models. From Thermomix Dreams to Pachinko Reality "I bought into the Thermomix coding promise—describe the whole website and it would spit out the finished product. It was a complete disaster." Alan started his AI coding journey with high expectations, believing he could simply describe a complete application and receive production-ready code. The reality was far different. What he discovered instead was an addictive cycle he calls "Pachinko coding" (Pachinko, aka Slot Machines in Japan)—repeatedly feeding error messages back to the AI, hoping each iteration would finally work, while burning through tokens and time. The AI's constant reassurances that "this time I fixed it" created a gambling-like feedback loop that left him frustrated and out of pocket, sometimes spending over $20 in API credits in a single day. The Drunken PhD with Amnesia "It felt like working with a drunken PhD with amnesia—so wise and so stupid at the same time." Alan describes the maddening experience of anthropomorphizing AI tools that seem brilliant one moment and completely lost the next. The key breakthrough came when he stopped treating the AI as a person and started seeing it as a function that performs extrapolations—sometimes accurate, sometimes wildly wrong. This mental shift helped him manage expectations and avoid the "rage coding" that came from believing the AI should understand context and maintain consistency like a human collaborator. Making AI Coding Actually Work "I learned to ask for options explicitly before any coding happens. Give me at least three options and tell me the pros and cons." Through trial and error, Alan developed practical strategies that transformed AI from a frustrating Pachinko machine into a useful tool: Ask for options first: Always request multiple approaches with pros and cons before any code is generated Use clover emoji convention: Implement a consistent marker at the start of all AI responses to track context Small steps and YAGNI principles: Request tiny, incremental changes rather than large refactoring Continuous integration: Demand the AI run tests and checks after every single change Explicit refactoring requests: Regularly ask for simplification and readability improvements Take two steps back: When stuck in a loop, explicitly tell the AI to simplify and start fresh Choose the right tech stack: Use technologies with abundant training data (like Svelte over React Native in Alan's experience) The Mecha Coding Breakthrough "When it worked, I felt like I was inside a Lego Mecha robot—the machine gave me superpowers, but I was still the one in control." Alan successfully developed a birthday reminder app in Swift in just one day, despite never having learned Swift. He made architectural decisions and guided the development without understanding the syntax details. This experience convinced him that AI represents a genuine new level of abstraction in programming—similar to the jump from assembly language to high-level languages, or from procedural to object-oriented programming. You can now think in English about what you want, while the AI handles the accidental complexity of syntax and boilerplate. The Cost Reality Check "People writing about vibe coding act like it's free. But many people are going to pay way more than they would have paid a developer and end up with empty hands." Alan provides a sobering cost analysis based on his experience. Using DeepSeek through Aider, he typically spends under $1 per day. But when experimenting with premium models like Claude Sonnet 3.5, he burned through $5 in just minutes. The benchmark comparisons are revealing: DeepSeek costs $4 for a test suite, DeepSeek R1 plus Sonnet costs $16, while Open AI's O1 costs $190. For non-developers trying to build complete applications through pure "vibe coding," the costs can quickly exceed what hiring a developer would cost—with far worse results. When Thermomix Actually Works "For small, single-purpose scripts that I'm not interested in learning about and won't expand later, the Thermomix experience was real." Despite the challenges, Alan found specific use cases where AI truly delivers on the "just describe it and it works" promise. Processing Zoom attendance logs, creating lookup tables for video effects, and other single-file scripts worked remarkably well. The pattern: clearly defined context, no need for ongoing maintenance, and simple enough to verify the output without deep code inspection. For these thermomix moments, AI proved genuinely transformative. The Pachinko Trap and Tech Stack Matters "It became way more stable when I switched to Svelte from React Native and Flutter, even following the same prompting practices. The AI is just more proficient in certain tech stacks." Alan discovered that some frameworks and languages work dramatically better with AI than others, likely due to the amount of training data available. His e-learning platform attempts with React Native and Flutter kept breaking, but switching to Svelte with web-based deployment became far more stable. This suggests a crucial strategy: choose mainstream, well-documented technologies when planning AI-assisted projects. From Coding to Living with AI Alan has completely stopped using traditional search engines, relying instead on LLMs for everything from finding technical documentation to getting recommendations for books based on his interests. While he acknowledges the risk of hallucinations, he finds the semantic understanding capabilities too valuable to ignore. He's even used image analysis to troubleshoot his father's cable TV problems and figure out hotel air conditioning controls. The Agile Validation "My only fear is confirmation bias—but the conclusion I see other experienced developers reaching is that the only way to make LLMs work is by making them use agility. So look at who's dead now." Alan notes the irony that the AI coding tools that actually work all require traditional software engineering best practices: small iterations, test-driven development, continuous integration, and explicit refactoring. The promise of "just describe what you want" falls apart without these disciplines. Rather than replacing software engineering principles, AI tools seem to validate their importance. About Alan Cyment Alan Cyment is a consultant, trainer, and facilitator based in Buenos Aires, specializing in organizational fluency, agile leadership, and software development culture change. A Certified Scrum Trainer with deep experience across Latin America and Europe, he blends agile coaching with theatre-based learning to help leaders and teams transform. You can link with Alan Cyment on LinkedIn.
I want to dive into the concept of Deliberate Practice, which sets the greatest apart in fields ranging from sports to writing to engineering. I'll explain why it's much more than just repetition or experience, and why applying it to your career can lead to rapid improvement. Most importantly, I will provide concrete ways you can apply deliberate practice to level up your engineering and leadership skills, especially in areas that are traditionally difficult to practice, such as communication and strategic decision-making.Differentiate Practice from Deliberate Practice: Understand that while repetition is part of practice, deliberate practice specifically involves engaging in a very narrow set of activities with the intentional goal of improvement, requiring very quick feedback for continuous incorporation.Identify Opportunities for Rapid Improvement: Learn why deliberate practice is much more effective at achieving rapid improvement than simply engaging in repetition.Apply DP to Leadership Skills: Discover how to incorporate deliberate practice into roles like engineering manager, tech lead, or IC (Individual Contributor) leader, where the activity of practice is often harder to pinpoint.Leverage Existing Work for Practice: I suggest a mindset shift where you begin looking at existing responsibilities, such as one-on-ones, as opportunities for practice. For example, you can focus on improving your clarity when providing constructive criticism and ask for specific feedback on that aspect.Generate Novel Value Through Practice: Explore how engaging in deliberate practice activities—like recording a video to communicate a technical concept or creating documentation—serves the primary goal of practice, while almost certainly creating unexpected value for your team (often net neutral or positive).Use Backwards Training for Strategy: Find out how to practice strategic decision-making and forecasting by using "backwards training". This involves reviewing past decisions or work scopes, creating your own rationale or estimate, and then calibrating it against the known reality.Simulate Difficult Conversations: Consider leveraging Large Language Models (LLMs) to engage in deliberate practice around language-heavy skills, such as modelling sensitive or difficult topics, or practicing receiving harsh feedback.
Software has forever had flaws and humans have forever been finding and fixing them. With LLMs generating code, appsec has also been trying to determine how well LLMs can find flaws. Nico Waisman talks about XBOW's LLM-based pentesting, how it climbed a bug bounty leaderboard, how it uses feedback loops for better pentests, and how they handle (and even welcome!) hallucinations. In the news, using LLMs to find flaws, directory traversal in an MCP, another resource for learning cloud and AI security, spreadsheets and appsec, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-351
Our 222st episode with a summary and discussion of last week's big AI news!Recorded on 10/03/2025Hosted by Andrey Kurenkov and co-hosted by Jon KrohnFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:(00:00:10) Intro / Banter(00:03:08) News Preview(00:03:56) Response to listener commentsTools & Apps(00:04:51) ChatGPT parent company OpenAI announces Sora 2 with AI video app(00:11:35) Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy | The Verge(00:22:25) Meta launches 'Vibes,' a short-form video feed of AI slop | TechCrunch(00:26:42) OpenAI launches ChatGPT Pulse to proactively write you morning briefs | TechCrunch(00:33:44) OpenAI rolls out safety routing system, parental controls on ChatGPT | TechCrunch(00:35:53) The Latest Gemini 2.5 Flash-Lite Preview is Now the Fastest Proprietary Model (External Tests) and 50% Fewer Output Tokens - MarkTechPost(00:39:54) Microsoft just added AI agents to Word, Excel, and PowerPoint - how to use them | ZDNETApplications & Business(00:42:41) OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch(00:46:01) Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product | WIRED(00:49:54) OpenAI is the world's most valuable private company after private stock sale | TechCrunch(00:53:07) Elon Musk's xAI accuses OpenAI of stealing trade secrets in new lawsuit | Technology | The Guardian(00:55:40) Former OpenAI and DeepMind researchers raise whopping $300M seed to automate science | TechCrunchProjects & Open Source(00:58:26) [2509.16941] SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?Research & Advancements(01:01:28) [2509.17196] Evolution of Concepts in Language Model Pre-Training(01:05:36) [2509.19284] What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoTLighting round(01:09:37) [2507.02954] Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III(01:12:03) [2509.24552] Short window attention enables long-term memorizationPolicy & Safety(01:18:11) SB 53, the landmark AI transparency bill, is now law in California | The Verge(01:24:07) Elon Musk's xAI offers Grok to federal government for 42 cents | TechCrunch(01:25:23) Character.AI removes Disney characters from platform after studio issues warning(01:28:50) Spotify's Attempt to Fight AI Slop Falls on Its FaceSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
As ChatGPT and AI increase their presence in our lives, have we interrogated enough what this means for, and about, our collective psyche?In one of the most original critiques of ChatGPT, Slovenian Lacanian philosopher Alenka Zupančič interprets large language models as a form of our collective unconscious that has absorbed all our discourse at the expense of the subject, shutting down emancipatory possibilities. She analyses the Right's use of ChatGPT, the evolution of irony, and more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of Crazy Wisdom, host Stewart Alsop talks with Jared Zoneraich, CEO and co-founder of PromptLayer, about how AI is reshaping the craft of software building. The conversation covers PromptLayer's role as an AI engineering workbench, the evolving art of prompting and evals, the tension between implicit and explicit knowledge, and how probabilistic systems are changing what it means to “code.” Stewart and Jared also explore vibe coding, AI reasoning, the black-box nature of large models, and what accelerationism means in today's fast-moving AI culture. You can find Jared on X @imjaredz and learn more or sign up for PromptLayer at PromptLayer.com.Check out this GPT we trained on the conversationTimestamps00:00 – Stewart Alsop opens with Jared Zoneraich, who explains PromptLayer as an AI engineering workbench and discusses reasoning, prompting, and Codex.05:00 – They explore implicit vs. explicit knowledge, how subject matter experts shape prompts, and why evals matter for scaling AI workflows.10:00 – Jared explains eval methodologies, backtesting, hallucination checks, and the difference between rigorous testing and iterative sprint-based prompting.15:00 – Discussion turns to observability, debugging, and the shift from deterministic to probabilistic systems, highlighting skill issues in prompting.20:00 – Jared introduces “LM idioms,” vibe coding, and context versus content—how syntax, tone, and vibe shape AI reasoning.25:00 – They dive into vibe coding as a company practice, cloud code automation, and prompt versioning for building scalable AI infrastructure.30:00 – Stewart reflects on coding through meditation, architecture planning, and how tools like Cursor and Claude Code are shaping AGI development.35:00 – Conversation expands into AI's cultural effects, optimism versus doom, and critical thinking in the age of AI companions.40:00 – They discuss philosophy, history, social fragmentation, and the possible decline of social media and liberal democracy.45:00 – Jared predicts a fragmented but resilient future shaped by agents and decentralized media.50:00 – Closing thoughts on AI-driven markets, polytheistic model ecosystems, and where innovation will thrive next.Key InsightsPromptLayer as AI Infrastructure – Jared Zoneraich presents PromptLayer as an AI engineering workbench—a platform designed for builders, not researchers. It provides tools for prompt versioning, evaluation, and observability so that teams can treat AI workflows with the same rigor as traditional software engineering while keeping flexibility for creative, probabilistic systems.Implicit vs. Explicit Knowledge – The conversation highlights a critical divide between what AI can learn (explicit knowledge) and what remains uniquely human (implicit understanding or “taste”). Jared explains that subject matter experts act as the bridge, embedding human nuance into prompts and workflows that LLMs alone can't replicate.Evals and Backtesting – Rigorous evaluation is essential for maintaining AI product quality. Jared explains that evals serve as sanity checks and regression tests, ensuring that new prompts don't degrade performance. He describes two modes of testing: formal, repeatable evals and more experimental sprint-based iterations used to solve specific production issues.Deterministic vs. Probabilistic Thinking – Jared contrasts the old, deterministic world of coding—predictable input-output logic—with the new probabilistic world of LLMs, where results vary and control lies in testing inputs rather than debugging outputs. This shift demands a new mindset: builders must embrace uncertainty instead of trying to eliminate it.The Rise of Vibe Coding – Stewart and Jared explore vibe coding as a cultural and practical movement. It emphasizes creativity, intuition, and context-awareness over strict syntax. Tools like Claude Code, Codex, and Cursor let engineers and non-engineers alike “feel” their way through building, merging programming with design thinking.AI Culture and Human Adaptation – Jared predicts that AI will both empower and endanger human cognition. He warns of overreliance on LLMs for decision-making and the coming wave of “AI psychosis,” yet remains optimistic that humans will adapt, using AI to amplify rather than atrophy critical thinking.A Fragmented but Resilient Future – The episode closes with reflections on the social and political consequences of AI. Jared foresees the decline of centralized social media and the rise of fragmented digital cultures mediated by agents. Despite risks of isolation, he remains confident that optimism, adaptability, and pluralism will define the next AI era.
156 - AI Slop, Sora 2 and Meta VibesMixed Feelings on AI Slop: The overall "Vibe Check" for the week's AI news was a 7-8/10, but I express concern that the increasing amount of AI-generated content feels "icky" and like "slop," leading to a feeling of having one's brain fried.Criticism of Meta Vibes: Meta's new "Vibes" feed, a short-form, AI-generated video feature powered by Midjourney, is criticized as unnecessary and "empty." The Ryan argues against the need for another short-form video format.Sora 2 Impressions: OpenAI's Sora 2 is acknowledged as having better quality than its predecessor and Meta Vibes, creating "very solid videos." However, Ryan feels it still lacks a "soul," and critiques the immediate, often pandering, praise it received from some users.New OpenAI Monetization: OpenAI has introduced an instant checkout feature on its Large Language Model (LLM), allowing users to shop. This move is seen as a natural and expected progression toward monetizing the platform through advertisements.Airline AI Job Cuts: Lufthansa Airline announced it will cut 4,000 jobs and replace them with AI to boost efficiency, a point the author mentions as a noteworthy, if somewhat cynical, piece of short-form news.@ChrisJBakke@brian_lovin@SinaHartung@Scobleizer
Welcome back to 'AI Lawyer Talking Tech.' Today, we dissect a legal world fundamentally transformed by artificial intelligence. Investment is surging, with plaintiff litigation platforms like Eve achieving billion-dollar valuations to arm firms fighting corporate giants, delivering justice at a scale never seen before. Simultaneously, law firms are recording immediate financial gains through AI tools, such as timekeeping solutions that instantly boost gross billings by up to 25% by automating overlooked billable hours. This dramatic acceleration mandates new skills, demanding that lawyers move past purchasing boxed solutions to cultivate deep AI fluency, strategically orchestrating Large Language Models (LLMs) to eliminate decades of manual processes like contract review and legal research. Yet, this innovation arrives heavily tethered to intense scrutiny: California has enacted a landmark safety law requiring transparency from frontier AI developers, while legal bodies emphasize clear ethical guardrails, holding attorneys fully responsible for AI consequences to prevent malpractice and confidentiality breaches. From redefining judge protocols to exposing employers to FLSA misclassification risks as AI takes over professional judgment duties, artificial intelligence is forcing the legal industry to pivot from reactive caution to proactive mastery.Revolutionizing Legal Education Through Innovative eLearning2025-09-30 | InvestorsHangout.comTempello.ai Revolutionizes Law Firm Billing: Boost Gross Billings by 25% Overnight with Seamless Clio and 8am MyCase Integration2025-09-30 | Law Firm NewswireThe impact of AI's ongoing evolution on NC's legal landscape2025-09-30 | Carolina Journal OnlineAI Mental Health Tools Face Mounting Regulatory and Legal Pressure2025-09-30 | JD SupraFrom hours to seconds: How AI is transforming law firm reporting2025-09-30 | Legal FuturesWhere should law firms start with their technology strategy?2025-09-30 | Legal FuturesMajor changes in business law demand attorney attention2025-09-30 | UNC School of LawMark Cuban Is Right: The Only Legal Skill That Matters Now Is Making Machines Work for You2025-09-30 | JD SupraHybrid AI Firm Covenant Launches Data Intelligence Platform2025-09-30 | Artificial LawyerNearly a million jobs in London may be changed by AI - which jobs are most at risk?2025-09-30 | AOL UKWhat Gen Z Lawyers Want in 20252025-09-30 | JD SupraSimpleDocs and Law Insider Merge Together2025-09-30 | Artificial LawyerSmarter Law Firm Marketing: AI Tools That Actually Work, with FirmPilot2025-09-30 | Lawyerist Podcast - Legal Talk NetworkOHA Secures Private Financing for Elite's Growth with Francisco Partners2025-09-30 | InvestorsHangout.comLexisNexis Legal Tech Speakeasy: Wed, Oct 12025-09-30 | Artificial LawyerCalifornia enacts AI safety law targeting tech giants2025-09-30 | Tech XploreHow Technology Is Transforming DIY Legal Services2025-09-30 | TechBullionProtecting Access to the Law—and Beneficial Uses of AI2025-09-30 | Electronic Frontier FoundationKatherine Forrest '86 on Her Time as a Federal Judge, the Future of AI, and Wesleyan Memories2025-09-30 | Wesleyan ArgusCalifornia Governor Gavin Newsom signs landmark AI safety bill into law2025-09-30 | SiliconANGLECan Judges Use AI? Inside the Pennsylvania Supreme Court's Interim Policy2025-09-30 | GenAI-LexologyPlaintiff legal AI startup Eve raises $103m Series B at a $1bn valuation2025-09-30 | Legal IT Insider2025 Emerging Technologies and Generative AI Forum: Human creativity and feedback drive ethical AI adoption2025-09-30 | Thomson Reuters InstituteHarvey and the Reddit Thread: The Actually Useful Takeaway2025-09-30 | Zach Abramowitz is Legally DisruptedEve Bags $103m, Hits $1bn+ Valuation2025-09-30 | Artificial LawyerFrom Data Deserts to Digital Oases: A Blueprint for Building Africa's Legal AI Datasets2025-09-30 | Legaltech on Medium
In today's show, Adam chats with Gustavo Patino to discuss the implications of artificial intelligence in medical education publishing. They explore the need for transparency in AI model reporting, issues related to predictive accuracy, and the potential biases that can arise in AI applications. The conversation emphasizes the growing need for clear reporting guidelines in the use of AI in health professions education research and reviews some practical strategies to achieve this goal. Length of Episode: 31:04 Contact us: keylime@royalcollege.ca Follow: Dr. Adam Szulewski https://x.com/Adam_Szulewski
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. Joel Hron, Chief Technology Officer at Thomson Reuters, joins Eye on AI to unpack the future of agentic systems and what it takes to build them responsibly at enterprise scale. We dive into the shift from prompt-based AI to true agentic workflows capable of planning, reasoning, and executing complex tasks. Joel breaks down how Thomson Reuters is deploying generative AI across law, tax, risk, and compliance, while keeping human experts in the loop to ensure trust and accuracy in high-stakes domains. Topics include: - What separates agentic AI from simple prompt-based tools - How “agency dials” (autonomy, tools, memory) change system behavior - Infrastructure and architecture required for multi-agent collaboration - Why human verification and user experience design are essential for trust - The future of coding, engineering skills, and AI adoption inside enterprises If you want to understand how a 170-year-old company is reinventing itself with AI — and what's next for agentic systems in business and knowledge work — this conversation is a must-listen. Stay Updated: Craig Smith on X:https://x.com/craigssEye on A.I. on X: https://x.com/EyeOn_AI
This episode is sponsored by SearchMaster, the leader in traditional paid search keyword optimization and next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for free! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Nader Safinya, founder of Blackribbit, about the concept of culture branding. Nader discusses the disconnect between what companies say and do, the effects of the Great Resignation, and how culture branding aligns internal and external company experiences. He emphasizes the significance of treating employees well and the benefits of human-centered design. The discussion also touches on the importance of introspection for leaders and the comprehensive data analysis tools used to measure employee engagement and wellbeing for better organizational outcomes. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All view are our own.
Send us a textDavid Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.• LLMs present unique security challenges beyond prompt injection or generating harmful content• Traditional security models focusing on component-based permissions don't work with AI systems• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior• Real-world examples include data exfiltration through markdown image rendering in AI interfaces• Security "guardrails" are insufficient first-order controls for protecting AI systems• The education gap between security professionals and actual AI threats is substantial• Organizations must shift from component-based security to data flow security when implementing AI• Development teams need to ensure high-trust AI systems only operate with trusted dataWatch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.Support the showFollow the Podcast on Social Media! Tesla Referral Code: https://ts.la/joseph675128 YouTube: https://www.youtube.com/@securityunfilteredpodcast Instagram: https://www.instagram.com/secunfpodcast/Twitter: https://twitter.com/SecUnfPodcast
In MobileViews Podcast 580, Jon Westfall and I discussed a bunch of new tech, starting with the Raspberry Pi 500+. I'm excited about this new keyboard computer because, unlike its predecessor, it features a mechanical keyboard and, most importantly, an NVMe SSD slot for faster performance, moving beyond the slow SD card. I still haven't figured out what I'd actually do with one, but the specs are impressive! I also shared my experience with the Amazon Alexa Plus early access, noting that my older Echo Dot and Echo Flex were surprisingly supported, though the new female default voice has some annoying vocal fry. I'm also looking forward to Google's experimental Google app for Windows, hoping it delivers the AI PC experience that Microsoft's Surface Pro 11 hasn't quite fulfilled. Finally, I touched on the rumor of Google merging Chrome OS and Android, a move that I hope combines the best of both platforms, especially for tablets. Jon Westfall brought up the topic of the things that have sparked "tech joy" for him over the past year. He is particularly excited about the continuing evolution of AR/VR glasses, mentioning Meta's new glasses and the potential for an Apple Vision "amateur." He sees these as a fantastic way to facilitate human communication, especially for those of us who struggle to remember names and details. Jon is also very enthusiastic about the Large Language Models (LLMs), specifically their use as a "junior assistant" for tasks like drafting his promotion portfolio at work and serving as a quick "junior developer" for software prototypes. This is a great way to handle tedious work! I seconded the excitement around AI by mentioning the fun I've had with Google AI Pro's photo and video tools on my Pixel 10 Pro. We then wrapped up with a mini-rant about a poorly designed Bluetooth scale and some interesting reading recommendations, including a LinkedIn article by Ed Margulies about fear of change when trying to be a change agent in the enterprise and another about Roblox and the skins market in modern gaming.
Finding the Floor - A thoughtful approach to midlife motherhood and what comes next.
Send us a text “Large Language Models or LLMs are simply predicting what words would come next due to all of their learning.” Prompted by my husband's suggestion I have decided to have a series of episodes dedicated to understanding AI better. Instead of being scared of AI or simply ignoring it, I use the book, Co-Intelligence, Living and Working with AI by Ethan Mollick. Part one of this series is just a basic understanding of how the large language model came to be, like ChatGPT. I talk about the idea of a digital brain. I share how it had its initial learning with millions of words and information and the analogy of an apprentice chef learning to combine ingredients to make recipes. I then tell of the adding of human feedback to its learning of millions of words that it is learning. The key to all of this is adding certain weights to certain words to help AI better understand the human language. For a very complex machine - I try to keep it simple to understand the basics of what it is doing. For show notes go to www.findingthefloor.com/ep231 I would love to hear from you! You can reach me at camille@findingthefloor.com or dm @findingthefloor on instagram. Thanks for listening!!Thanks to Seth Johnson for my intro and outro original music. I love it so much!
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever * "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com zscaler.com/security pantheon.io
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com Promo Code "IM" zscaler.com/security pantheon.io
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever * "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com zscaler.com/security pantheon.io
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever * "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com zscaler.com/security pantheon.io
Our second scholar in the series is Sunny Rai, who is a postdoctoral fellow at the Department of Computer and Information Science University of Pennsylvania. She received her Ph.D. in Computer Engineering from University of Delhi. Her research focuses on misinformation, mental health and cross-cultural variations in human language. We spoke about her co-authored job market paper titled, Social Norms in Cinema: A Cross-Cultural Analysis of Shame, Pride and Prejudice. We talked about depictions of shame and pride and heroism in Indian versus American films, the challenges with textual analysis of a visual medium, and much more. Recorded September 5th, 2025. Read a full transcript enhanced with helpful links. Connect with Ideas of India Follow us on X Follow Shruti on X Click here for the latest Ideas of India episodes sent straight to your inbox. Timestamps (00:00:00) - Intro (00:03:44) - Shame and Pride in Film (00:12:31) - Teaching Machines Norms (00:16:52) - Textual Analysis in a Visual Medium (00:18:26) - The Trouble with Subtitles and Scripts (00:27:41) - Self-Shaming vs. Other-Shaming (00:30:33) - LLM Alignment Needs a Culture Check (00:36:20) - Looking Ahead: A Final Reflection (00:37:01) - Outro
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com Promo Code "IM" zscaler.com/security pantheon.io
By David Stephen What roles do animals play in human lives and society? What role does AI play in human lives and society? AI, for example, is directly applicable in economic productivity. AI can serve social activities, providing a range of mental welfare, of deep relatability, to individuals. AI is not an organism, but its capabilities profile it for [communicative] equality with humans. Does AI have more rights than others? Where does AI stay? Data centers. Can the investments for data centers be compared with places animals are reared? AI, it seems, is fully and permanently employed. AI has healthcare or centers care. AI has executive security. AI, for now, cannot be bombed. Everyone wants AI [investments]. Humans are learning AI, to build and empower AI even more. AI is already making critical decisions in human lives. AI is not facing any cruelty case, at all. AI is serving humans in exchange for human dependency. Animal Welfare AI, it can be assumed already has close to 100% rights and welfare, even where humans do not. This means that AI has leaped animals on the queue towards rights and welfare. AI, many have said, is neither conscious nor sentient, but it became applicable to human priorities so, it found unprecedented attention, consideration and protection. The suffering of animals, many of whom have been declared conscious or sentient, do not appear to matter as much or ever because they did not seem to have the effects, in mind and reality, that AI has, on humans. Animal's roles in human affairs mostly appear like subordinates. There are rarely areas of acceptance of animals - in human societies - at near equal measures to prompt wider rights. While there are laws against animal cruelty, they are often skewed to a few, while broad rights for most [under consideration] do not seem to apply. The fight against animal abuse endeavored, but with AI, there is a direct case to show what it would have taken, to make animals get treated in far better ways. Even with - as remote as - smartphones, the rights granted to them, by owners and by society seem ahead of other organisms, so to speak. Also, as much as there is possibility to remodel the approach to animal welfare, it does not appear that theorizing that animals are fully sentient or conscious may seal the deal because of some continuous perceptions about animal use case, albeit they are living organisms. Some people have also moved on to AI rights and welfare, trying to prove what is already obvious or give more to what already has. AI, even if no one makes the case for its own rights and welfare, can do it greatly and with evidence of its availability and utility. Intelligence as the ultimate welfare evidence Animals are intelligent, but their intelligence, though benefits humans, also benefit themselves, so there is sometimes a tug, for humans to extract benefits from them. AI is intelligent, but it benefits humans, almost totally. The pedestal for AI reception is because of the non-compete benefit to humanity resulting in intense welfare and rights status. Animals, even with similarities to humans in pain and pleasure do not get an automatic unlock, in deserved welfare, because they have to live or have their own agenda. The biggest disadvantage, of animals, for their own welfare is that they cannot make the case for it by themselves. While they can at least resist and struggle, they cannot appeal to reason, emotion or whatever else to boost their case at the points of abuse or worse. This means that intelligence, of the range to make the case for welfare, is what it takes, mostly to have rights in human society. AI can do this. People are doing so for AI. There are people doing as well for animals, but ultimately, animals experiencing it cannot express much. Human Intelligence Human Intelligence is the hardest thing on earth. It is the most difficult possession in the world. Intelligence is the difference in most things, most times. Intelligence is the secret...
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever * "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com zscaler.com/security pantheon.io
We're joined by Leif Weatherby, associate professor at NYU, founding director of the Digital Theory Lab, and author of the new Language Machines: Cultural AI and the End of Remainder Humanism, to think with us about AI, structure, and what happens when computation meets language on their own shared turf. Language Machines is easily the best book about AI written this year and is just a killer antidote to so much dreary doomer consensus, it really feels like one of the first truly constructive pieces of writing we've seen out of academia on this subject. This episode follows really well after two others — our talk with Catherine Malabou earlier this summer and the episode with M. Beatrice Fazi about a year ago (both faves). It feels like theory is opening back up again into simultaneously speculative and structural returns, powered in no small part by the challenges posed to conventional theories of language (from Derrida to Chomsky) by Large Language Models. This episode absolutely rips, literally required listening. Structuralism is so back (and we're here for it). Some important references among many from the episode:Roman Jakobson, “Linguistics and Poetics.”N. Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious .Beatrice Fazi, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics.Steven Pinker, The Language Instinct (1994).e.g. Noam Chomsky, Ian Roberts & Jeffrey Watumull, “The False Promise of ChatGPT,” NYT (link) Anthropic, “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet” (featuring the Golden Gate Bridge example - link)LAION-5B dataset paper and post-hoc analyses noting strong Shopify/e-commerce presence in training scrapes.Weatherby in the NYT
This episode is sponsored by SearchMaster, the leader in traditional paid search keyword optimization and next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for free! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Trevor Levine of Marketing Experts. Trevor shares his extensive experience in improving conversion rates since 1999, including founding and scaling a company. He emphasizes common mistakes in copywriting, the importance of focusing on results and benefits, effective use of testimonials, creating urgency, and optimizing checkout processes. Trevor also discusses AI's role and limitations in marketing content creation. The episode concludes with Trevor offering free critiques for listeners at www.marketingexperts.com. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All view are our own.
Meet Rob Hoffmann, the founder of the AI SEO LLM marketing tool "Mentions." Our conversation focuses on the evolving landscape of search engine optimization (SEO) due to the rise of Large Language Models (LLMs) like ChatGPT, Claude, Grok, DeepSeek, and Perplexity etc,, which are becoming alternatives to traditional Google search. Rob Hoffmann discusses his journey into entrepreneurship, which started with an SEO agency called Contact, and how the need to track and improve brand visibility on these new AI platforms led to the creation of Mentions, which helps brands appear in LLM recommendations.Throughout our interview, Rob Hoffmann emphasizes the importance of reverse engineering consumer behavior, the need for excellent customer service (even providing his direct phone number to customers), and the affordability of mentions compared to competitors. Our discussion concludes with practical advice for businesses on determining if investing in LLM visibility is worthwhile based on their customer's buying journey.FAQs1. What is Mentions, and why was it founded?Mentions is a tool that assists brands and SEO agencies in navigating the evolving search landscape dominated by Artificial Intelligence (AI) and Large Language Models (LLMs).Founding Rationale:• Mentions was born out of Rob Hoffmann's SEO agency, Contact.• The shift was necessitated by the recognition that SEO had "changed a lot" recently in response to AI, with search trends moving away from Google and "towards platforms like ChatGPT" (along with Perplexity, Gemini, Claude, etc.).• The founding goal was to be "the SEO agency of the future".• The tool was specifically created to solve two problems: providing a way of measuring brands visibility on LLM platforms, and helping brands get more visibility on platforms like ChatGPT.2. Why are LLMs becoming preferred search alternatives, and how does this affect marketing?People are increasingly turning to LLMs because consumer trust in traditional Google search results (the SERP's top 10 links) has declined, as many users feel these results have been "gamed" by marketers.• Trust in ChatGPT: Conversely, trust in ChatGPT is "through the roof". This is because the chat-based interface makes interaction feel like a conversation with a friend or even a therapist, providing personalized responses from an "all-knowing AI entity".• Customer Acquisition Channel: Because ChatGPT is becoming a frequently used search engine alternative, showing up in its responses when a user searches for a product (e.g., "what is the best organic sulfite free shampoo") is seen as a "great customer acquisition channel".3. How does the mention tool conceptualize LLM visibility (GEO)?Mentions is built on the understanding of how modern LLMs generate answers: LLM + search operator = the result.• The Process: When a user inputs a query (e.g., "what is the best shampoo for dry scalps"): 1. The LLM searches the internet (like Bing or Google). 2. It scrapes the top 10 to 20 results that show up on those search engines. 3. It digests, summarizes, and serves that information to the user.• The Strategy: For a brand to achieve visibility in LLMs (GEO), they must first show up in those underlying search results (traditional SEO). Mentions helps brands reverse engineer the process by figuring out how platforms like ChatGPT get their data and, critically, what sources they are citing to provide responses, thereby guiding the brand to become a cited source.4. What are the key features of the mentions platform?Mentions helps users understand and optimize their content strategy based on LLM data.• Prompts Section (Favorite Feature): This allows users to track specific searches or prompts. When tracking a prompt, mentions shows examples of conversations and, most usefully, lists the pages that are most being cited by ChatGPT. This list provides an "easy road map" for the brand to know whether they should create new content or reach out to those listed publishers to get mentioned.• Analytics Feature: This feature pulls data from Google Analytics 4 (J4) into an easy-to-use dashboard. It helps users see which pages on their website people are visiting most often from LLMs (such as ChatGPT or Perplexity), along with geographical data and device usage (mobile vs. desktop). The founder notes that seeing this traffic is often a "magical experience" for users.• Tracking Cadence (24-Hour Clock): Mentions inputs tracked prompts into all supported LLMs (ChatGPT, Perplexity, Claude, Deepseek, etc.) every 24 hours. This is essential because LLM responses are not identical on every search (the overlap is about 70%), so regular, repeated testing ensures the collection of a large data set, which increases the accuracy of the insights provided.5. Who should invest in mentions or GEO (LLM visibility)?The decision to use mentions depends entirely on the company's buyer journey.• Yes, Use Mentions If: If the company's buyer journey involves the ideal customer having a problem and then searching for an answer in Google or Chat GBT to inform their buying decision, then investing in SEO and GEO (LLM visibility) is recommended. If the end customer uses ChatGPT to inform their buying decision, then mentions is advisable.• No, Don't Use Mentions If: If the product is an impulse buy (e.g., a consumer package goods product seen on TikTok or Instagram that prompts an immediate purchase), then search engines like Google or ChatGPT are not part of the buyer journey, and SEO/GEO is likely not the best investment of marketing resources.6. How does mentions handle customer service and support?Customer service is a highly emphasized competitive advantage and value proposition for mentions.• Direct Access: Rob Hoffmann, the CEO and co-founder, gives his personal phone number, WhatsApp, email, and allows customers to contact him on social media (X/Twitter). This is done to avoid the frustration associated with generic AI chatbots, calling hotlines, or being put on hold.• Personalized Onboarding: He finds it helpful to get on calls with users (or a co-founder) to provide a live demo, walk them through the platform, suggest useful features, and look at their specific site.• Commitment to Resolution: Hoffmann promises that if a user has a question, he will answer it; if they encounter a bug, he will fix it (or ping a technical co-founder); and if they request a new feature, "we will ship that for you". Customers can literally pick up the phone and call him directly if they run into an issue.7. How can people start using mentions?To get started, users should go to the website mentions.so and create an account.After creating an account, they will receive an email that allows them to book a call directly with Rob Hoffmann. He also welcomes connections via LinkedIn or X (search for Rob Hoffman), or email rob@contactststudios.com to connect with Rob today!Next Steps for Digital Marketing + SEO Services:>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike>> Need more information? Visit our Work and PLAY Entertainment website to learn about our digital marketing services.>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!Digital Marketing SEO Resources:>> Join our exclusive SEO Marketing community>> Read SEO Articles>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike>> Subscribe to the We Don't PLAY PodcastBrands We Love and SupportDiscover Vegan-based Luxury Experiences | Loving Me Beauty Beauty ProductsSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Slator's Senior Research Analyst Alex Edwards joins Esther and Florian on the pod to discuss ElevenLabs' move from a pure-play language technology platform (LTP) to becoming a language solutions integrator (LSI) by adding a managed service offering.He outlines that the LSI will now offer managed services such as dubbing, transcription, and subtitling, hiring in-house linguists and vendor managers, while charging about USD 22 per minute for dubbing.Florian then turns to YouTube's rollout of multi-language audio tracks, which allows some creators to upload high-quality audio directly to videos and opens major opportunities for AI dubbing providers. The discussion shifts to OpenAI's research on ChatGPT usage, reporting that translation accounted for 4.5% of more than a million sampled conversations, underscoring massive global demand for AI translation.Esther highlights Microsoft's launch of its Live Interpreter API, which promises real-time speech translation with “human interpreter level latency”. Esther also details Mistral's USD 2bn funding to advance European AI capabilities, allowing them to compete with US and Chinese AI giants. Esther closes by reporting on WIPO's new Korean-English post-editing tender.
Unlock the future of recruiting with this episode of The Full Desk Experience. In this special FDE+, host Kortney Harmon sits down with Matt Strain — seasoned technology leader, AI educator, and founder of The Prompt. Drawing on his experience as Adobe's former Head of Innovation, Matt brings a fresh perspective on how artificial intelligence is reshaping the recruiting and staffing industry.In their conversation, Matt explores why curiosity, resilience, and a willingness to experiment are essential traits for executives and recruiters who want to stand out in the age of AI. He shares real-world examples of how AI can streamline everything from sourcing and screening to assessment and selection, while also emphasizing where human judgment and trust remain irreplaceable. Along the way, he introduces a practical framework of mindset, skill set, and tool set that leaders can use to navigate uncertainty, integrate AI confidently, and drive stronger outcomes.Whether you're dipping your toes into AI for the first time or looking to refine your strategy, this discussion offers hands-on insights you can put into practice right away. Tune in to discover how to transform AI-driven curiosity into a true competitive advantage._________________Follow Matt Strain on LinkedIn at: LinkedIn | MattFollow Crelate on LinkedIn: https://www.linkedin.com/company/crelate/Want to learn more about Crelate? Book a demo hereSubscribe to our newsletter: https://www.crelate.com/blog/full-desk-experience
In today's Tech3 from Moneycontrol, we unpack India's big AI push as the government selects eight new players, including IIT Bombay, Tech Mahindra, and Fractal, to build sovereign Large Language Models (LLMs). We also bring you Infra. Market's Rs 730 crore funding round led by Nikhil Kamath ahead of its IPO, Gameskraft's 120 job cuts amid regulatory heat and financial scandal, and MeitY's takedown orders for over 3,000 apps from the Google Play Store.
TT0-231 Derrick Wedding and Treasure Coin, Jimmy Buffet, Irish Kevin Key West Florida, Cubans, Smirnoff Ice, Shitting Pants, Women's Restroom, Booty, Vows, Acrostic Poem, Boat Trip, Salvage Title Abandoned Shipwrecks, Coin Collections Alien Earth, X-files, King of the Hill, Soundboard, Spinal Tap, Spaceballs, Parody Movies, UFO Aliens Contact Ignored, Large Language Model, Hellfire Missile hits UAP, 1800 Water Orders Taco Bell, Snap Benefits, Soda Chips Sugar, Farmers Workers, Food Deserts, Farmers Market,
Taking recent spectacular progress in AI fully into account, Mark Seligman's AI and Ada: Artificial Translation and Creation of Literature (Anthem Press, 2025) explores prospects for artificial literary translation and composition, with frequent reference to the hyperconscious literary art of Vladimir Nabokov. The exploration balances reader-friendly explanation (“What are transformers?”) and original insights (“What is intelligence? What is language?”) with personal and playful notes, and culminates in an assortment of striking demos The book's Preface places the current AI explosion in the context of other technological cataclysms and recounts the author's personal (and not always deadly serious) AI journey. Chapter One (“Extracting the Essence”) assesses the potential of machine translation of literature, exploiting Nabokov's hyperconscious literary art as a reference point. Chapter Two (“Toward an Artificial Nabokov”) goes on to speculate on possibilities for actual artificial creation of literature. Chapter Three (“Large Literary Models? Intelligence and Language in the LLM Era”) explains recent spectacular progress in Generative Artificial Intelligence (GenAI), as exemplified by Large Language Models like ChatGPT. On the way, the chapter ventures to tackle perennial questions (“What is intelligence?” “What is language?”) and culminates in an assortment of striking demos. In this episode, Ibrahim Fawzy sat with Mark Seligman to talk about how the current AI revolution fits into the long arc of cultural and technological shifts, Seligman's framing of the “Great Transition” between Humanity 1.0 and 2.0, Nabokov's style as a lens for thinking about artificial creativity, the possibilities and limits of machine translation and literary artistry, and the philosophical stakes of whether AI-generated works can ever truly be considered art.Ibrahim Fawzy is an Egyptian literary translator and writer based in Boston. His interests include translation studies, Arabic literature, ecocriticism, disability studies, and migration literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Taking recent spectacular progress in AI fully into account, Mark Seligman's AI and Ada: Artificial Translation and Creation of Literature (Anthem Press, 2025) explores prospects for artificial literary translation and composition, with frequent reference to the hyperconscious literary art of Vladimir Nabokov. The exploration balances reader-friendly explanation (“What are transformers?”) and original insights (“What is intelligence? What is language?”) with personal and playful notes, and culminates in an assortment of striking demos The book's Preface places the current AI explosion in the context of other technological cataclysms and recounts the author's personal (and not always deadly serious) AI journey. Chapter One (“Extracting the Essence”) assesses the potential of machine translation of literature, exploiting Nabokov's hyperconscious literary art as a reference point. Chapter Two (“Toward an Artificial Nabokov”) goes on to speculate on possibilities for actual artificial creation of literature. Chapter Three (“Large Literary Models? Intelligence and Language in the LLM Era”) explains recent spectacular progress in Generative Artificial Intelligence (GenAI), as exemplified by Large Language Models like ChatGPT. On the way, the chapter ventures to tackle perennial questions (“What is intelligence?” “What is language?”) and culminates in an assortment of striking demos. In this episode, Ibrahim Fawzy sat with Mark Seligman to talk about how the current AI revolution fits into the long arc of cultural and technological shifts, Seligman's framing of the “Great Transition” between Humanity 1.0 and 2.0, Nabokov's style as a lens for thinking about artificial creativity, the possibilities and limits of machine translation and literary artistry, and the philosophical stakes of whether AI-generated works can ever truly be considered art.Ibrahim Fawzy is an Egyptian literary translator and writer based in Boston. His interests include translation studies, Arabic literature, ecocriticism, disability studies, and migration literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/literary-studies
In this MINDWORKS Mini, join host Daniel Serfaty as he talks with Drs. Svitlana Volkova and Robert McCormack about the birth of Large Language Models and how we got to where we are today. Listen to the full episode, AI: The End of the Prologue on Apple, Spotify, and wherever you get your podcasts.
Google Search still holds about 90% of global search volume as of mid‑2025, but change is underway as more users begin turning to AI. AI search is rewriting the rules of discovery, and PR needs to adapt. With ChatGPT, Gemini, and Perplexity each scraping different corners of the web, the old focus on big-name publications is no longer enough. The most influential sources may now be niche review sites, specialized forums, or content hubs you have never pitched. Knowing what each Large Language Model (LLM) values and how to optimize for it, is becoming a core PR skill.In this episode, we explore how Answer Engine Optimization (AEO) is reshaping PR. From the rise of “dual websites” for humans and bots to the ethical tensions between LLMs and media outlets, we discuss how PR teams can rethink targeting, adapt content, and position clients for visibility in an AI‑first world. Listen For5:49 Dual Websites: One for Humans, One for Machines8:39 LLMs as New Media Channels11:38 What AI Tools Scrape (and Why It Matters)14:45 Can Bots Get Past Paywalls? The Legal and Ethical Minefield17:01 Answer to Last Episode's Question From Heather Blundell Guest: Jackson Wightman, Founder Proper PropagandaWebsite | Email | LinkedIn Rate this podcast with just one click Stories and Strategies WebsiteCurzon Public Relations WebsiteAre you a brand with a podcast that needs support? Book a meeting with Doug Downs to talk about it.Apply to be a guest on the podcastConnect with usLinkedIn | X | Instagram | You Tube | Facebook | Threads | Bluesky | PinterestRequest a transcript of this episodeSupport the show
Taking recent spectacular progress in AI fully into account, Mark Seligman's AI and Ada: Artificial Translation and Creation of Literature (Anthem Press, 2025) explores prospects for artificial literary translation and composition, with frequent reference to the hyperconscious literary art of Vladimir Nabokov. The exploration balances reader-friendly explanation (“What are transformers?”) and original insights (“What is intelligence? What is language?”) with personal and playful notes, and culminates in an assortment of striking demos The book's Preface places the current AI explosion in the context of other technological cataclysms and recounts the author's personal (and not always deadly serious) AI journey. Chapter One (“Extracting the Essence”) assesses the potential of machine translation of literature, exploiting Nabokov's hyperconscious literary art as a reference point. Chapter Two (“Toward an Artificial Nabokov”) goes on to speculate on possibilities for actual artificial creation of literature. Chapter Three (“Large Literary Models? Intelligence and Language in the LLM Era”) explains recent spectacular progress in Generative Artificial Intelligence (GenAI), as exemplified by Large Language Models like ChatGPT. On the way, the chapter ventures to tackle perennial questions (“What is intelligence?” “What is language?”) and culminates in an assortment of striking demos. In this episode, Ibrahim Fawzy sat with Mark Seligman to talk about how the current AI revolution fits into the long arc of cultural and technological shifts, Seligman's framing of the “Great Transition” between Humanity 1.0 and 2.0, Nabokov's style as a lens for thinking about artificial creativity, the possibilities and limits of machine translation and literary artistry, and the philosophical stakes of whether AI-generated works can ever truly be considered art.Ibrahim Fawzy is an Egyptian literary translator and writer based in Boston. His interests include translation studies, Arabic literature, ecocriticism, disability studies, and migration literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society
Taking recent spectacular progress in AI fully into account, Mark Seligman's AI and Ada: Artificial Translation and Creation of Literature (Anthem Press, 2025) explores prospects for artificial literary translation and composition, with frequent reference to the hyperconscious literary art of Vladimir Nabokov. The exploration balances reader-friendly explanation (“What are transformers?”) and original insights (“What is intelligence? What is language?”) with personal and playful notes, and culminates in an assortment of striking demos The book's Preface places the current AI explosion in the context of other technological cataclysms and recounts the author's personal (and not always deadly serious) AI journey. Chapter One (“Extracting the Essence”) assesses the potential of machine translation of literature, exploiting Nabokov's hyperconscious literary art as a reference point. Chapter Two (“Toward an Artificial Nabokov”) goes on to speculate on possibilities for actual artificial creation of literature. Chapter Three (“Large Literary Models? Intelligence and Language in the LLM Era”) explains recent spectacular progress in Generative Artificial Intelligence (GenAI), as exemplified by Large Language Models like ChatGPT. On the way, the chapter ventures to tackle perennial questions (“What is intelligence?” “What is language?”) and culminates in an assortment of striking demos. In this episode, Ibrahim Fawzy sat with Mark Seligman to talk about how the current AI revolution fits into the long arc of cultural and technological shifts, Seligman's framing of the “Great Transition” between Humanity 1.0 and 2.0, Nabokov's style as a lens for thinking about artificial creativity, the possibilities and limits of machine translation and literary artistry, and the philosophical stakes of whether AI-generated works can ever truly be considered art.Ibrahim Fawzy is an Egyptian literary translator and writer based in Boston. His interests include translation studies, Arabic literature, ecocriticism, disability studies, and migration literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology
This episode is sponsored by SearchMaster, the leader in next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2 week free trial! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Mitch McGinley of Boutique Fitness Broker. Mitch shares his journey from owning a neighborhood yoga studio to founding a company that helps others buy and sell fitness businesses. They discuss key valuation components, the importance of community-centric marketing, the influence of AI tools, and the benefits of purchasing existing businesses. Mitch emphasizes adding value and the advantages of being an entrepreneur, and provides insights on finding and evaluating businesses to buy. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All view are our own.
Hello Listeners, Here is the link a machine-generated transcript via Descript. To check out Cristy's podcast interview with Jacqueline Fisch from How Women Write, click here to find it on pod link. To schedule coaching or an astrology reading through a special offer with Cristy (for Somatic Wisdom listeners) using natal astrology and coaching, please use this Calendly link. Discount from her corporate rate for a limited time. For more written work from Cristy, check out Our Somatic Wisdom on Substack. Cristy's LinkedIn page (if you'd like to hire her and you're NOT a bot). *** We would love to hear your thoughts or questions on this episode via SpeakPipe: https://www.speakpipe.com/SomaticWisdomLoveNotes To show your gratitude for this show, you can make a one-time gift to support Somatic Wisdom with this link. To become a Sustaining Honor Roll contributor to help us keep bringing you conversations and content that support Your Somatic Wisdom please use this link. Thank you! Your generosity is greatly appreciated! *** Music credit: https://www.melodyloops.com/composers/dpmusic/ Cover art credit: https://www.natalyakolosowsky.com/ Cover template creation by Briana Knight Sagucio
Open Tech Talks : Technology worth Talking| Blogging |Lifestyle
In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation. Chapters 00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions Episode # 166 Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation. Website: AIceberg.ai Linkedin: Alexander Schlager What Listeners Will Learn: Why real-time AI security and runtime protection are essential for safe deployments How explainable AI builds trust with users and regulators The unique risks of agentic AI and how to manage them responsibly Why AI safety and governance are becoming strategic priorities for companies How education, awareness, and upskilling help close the AI skills gap Why natural language processing (NLP) is becoming the default interface for enterprise technology Keywords: AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning Resources: AIceberg.ai
Jay Alammar is Director and Engineering Fellow at Cohere and co-author of the O'Reilly book “Hands-on Large Language Models.” Subscribe to the Gradient Flow Newsletter
You're probably opening a fresh chat in ChatGPT, Claude, or Gemini—and that might be sabotaging your productivity.Randomly starting conversations with no plan wastes time, fragments context, and lowers the quality of what you get from LLMs.Learn the essentials of Gemini Gem, GPTs, and Projects—how they differ, when to use each, and how to structure prompts and workflows so AI actually speeds you up instead of slowing you down.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Harnessing Custom GPTs for EfficiencyGoogle Gems vs. Custom GPTs ReviewChatGPT Projects: Features & UpdatesClaude Projects Integration & BenefitsEffective AI Chatbot Usage TechniquesLeveraging AI for Business GrowthDeep Research in ChatGPT ProjectsGoogle Apps Integration in GemsTimestamps:00:00 AI Chatbot Efficiency Tips04:12 "Putting AI to Work Wednesdays"08:39 "Optimizing ChatGPT Usage"11:28 Similar Functions, Different Categories15:41 Beyond Basic Folder Structures16:25 ChatGPT Project Update22:01 Email Archive and Albacross Software24:34 Optimize AI with Contextual Data27:49 "Improving Process Through Meta Analysis"30:53 Data File Access Issue33:27 File Handling Bug in New GPT36:12 Continuous Improvement Encouragement41:16 AI Selection Tool Website43:34 Google Ecosystem AI Assistant45:46 "Optimize AI Usage for Projects"Keywords:Custom GPTs, Google's gems, Claude's projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude artifacts, generative AI, AI strategy partner, AI brainstorming partner.Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide. Today, he's a co-founder of Literal Labs, where he's developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are a kind of machine learning model that are energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman´s terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow learning in Kahneman´s terminology.Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI. Selected follow-ups:Noel Hurley - Literal LabsA New Generation of Artificial Intelligence - Literal LabsMichael Tsetlin - WikipediaThinking, Fast and Slow - book by Daniel Kahneman54x faster, 52x less energy - MLPerf Inference metricsIntroducing the Model Context Protocol (MCP) - AnthropicPioneering Safe, Efficient AI - ConsciumSmartphones and Beyond - a personal history of Psion and SymbianThe Official History of Arm - ArmInterview with Sir Robin Saxby - IT ArchiveHow Spotify came to be worth billions - BBCMusic: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
What's happening with the latest releases of large language models? Is the industry hitting the edge of the scaling laws, and do the current benchmarks provide reliable performance assessments? This week on the show, Jodie Burchell returns to discuss the current state of LLM releases.
Today, I sit down with Joe Breeden, CEO and founder of Deep Future Analytics (DFA), for what is, in effect, a two-part conversation. First we do a deep dive into credit risk management and where it is falling short today and then we discuss AI monitoring and governance and how it is going to revolutionize software.The conversation covers why traditional machine learning models are missing critical components for accurate risk assessment and how adverse selection has dramatically impacted loan quality in recent vintages. Then Joe makes a bold prediction that software user interfaces are on the verge of a transformation that will render them unrecognizable from previous versions Curious? All is revealed in this fascinating conversation.In this podcast you will learn:How he got started with Deep Future AnalyticsThe state of credit risk management in banking today.What is missing, even with fintech lenders, who are using machine learning.What lenders get wrong when they focus on credit score.Why loans booked since 2022 are lower quality than even 2006-07.What kind of lift lenders or investors can see with DFA's models.What Joe sees in the pool of borrowers today.Why DFA moved into AI monitoring and governance.Who is using these new AI models they have developed.Why a “human in the loop” is not an effective monitoring method.What their new Strategic Recommendation Agent (SyRA) does.How they incorporate Large Language Models to ensure zero hallucinations.Why the future of software is dashboards on demand and analytics on demand.How Deep Future Analytics is different to others in the market.What it is going to take before software moves to a chat-based interface.Connect with Fintech One-on-One: Tweet me @PeterRenton Connect with me on LinkedIn Find previous Fintech One-on-One episodes
This is the story of a dream, perhaps one of humanity's oldest and most audacious: the dream of a thinking machine. It's a tale that begins not with silicon and code, but with myths of bronze giants and legends of clay golems. We'll journey from the smoke-filled parlors of Victorian England, where the first computers were imagined, to a pivotal summer conference in 1956 where a handful of brilliant, tweed-clad optimists officially christened a new field: Artificial Intelligence. But this is no simple tale of progress. It's a story of dizzying highs and crushing lows, of a dream that was promised, then deferred, left to freeze in the long "AI Winter." We'll uncover how it survived in obscurity, fueled by niche expert systems and a quiet, stubborn belief in its potential. Then, we'll witness its spectacular rebirth, a renaissance powered by two unlikely forces: the explosion of the internet and the graphical demands of video games. This is the story of Deep Learning, of machines that could finally see, and of the revolution that followed. We'll arrive in our present moment, a strange new world where we converse daily with Large Language Models—our new, slightly unhinged, and endlessly fascinating artificial companions. This isn't just a history of technology; it's the biography of an idea, and a look at how it's finally, complicatedly, come of age. To unlock full access to all our episodes, consider becoming a premium subscriber on Apple Podcasts or Patreon. And don't forget to visit englishpluspodcast.com for even more content, including articles, in-depth studies, and our brand-new audio series and courses now available in our Patreon Shop!
This episode is sponsored by SearchMaster, the leader in next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2 week free trial! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews George Swetlitz about his diverse career and current work with RightResponse AI. George explains how RightResponse AI handles review sentiment analysis and personalized responses at scale, improving business reputations and customer conversions. He also discusses the AI-driven processes behind the product, the importance of genuine review engagement, and future developments such as AI Integration for social media and personalized review requests. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All view are our own.
AI, Technology, and The ChurchText your questions to: (559) 754-0182Watch the video of this entire conversation hereWelcome to a thought-provoking conversation on the Radiant Church Podcast, where we explore the intersection of faith and technology. In this episode, host Eric Riley and his guests — David Janssen (Data Analytics Coordinator), Matt Flummer (Philosophy Professor), Tim Harms (Pastor & Business Owner), and Glenn Power (Teacher & Bible Scholar) — tackle the profound questions posed by the rise of artificial intelligence.We'll define key terms like AI, AGI, and Large Language Models, and separate the hype from the reality. More than just a tech discussion, this episode brings the conversation back to the foundations of our faith, exploring what the Bible says about humanity, sin, and technology.Key Discussion PointsUnderstanding AI: The team breaks down what AI is, from its simplest form as predictive text to the complex models that power today's chatbots like ChatGPT.AI's Promise and Peril: The conversation dives into the "gospel of AI," exploring its promises of progress, utopia, and even immortality, and contrasting this with the biblical narrative of creation and human nature.Biblical Foundation: We turn to scripture, including Genesis 1-3 and the Tower of Babel, to understand our identity as humans created in God's image and our inherent desire to be "like God." The discussion highlights the importance of trusting God over human inventions.The Cost of Convenience: The podcast explores the hidden costs of AI, from the environmental impact of data centers to the ethical implications of data scraping and the human toll on those who train these models.Faithful Living in the AI Age: The episode concludes with a practical call to action, offering insights on how Christians can live wisely, discern the spiritual implications of technology, and draw a line between using technology for good and trusting in it for salvation.Resources & RecommendationsThe Life We're Looking For by Andy CrouchThe AI Revolution by John LennoxThe Convivial Society Substack by L.M. SacasasAgainst the Machine by Paul KingsnorthMade for People by Justin Whitmel EarlySupport the show*Summaries and transcripts are generated using AI. Please notify us if you find any errors.
Our analysts Adam Jonas and Alex Straton discuss how tech-savvy young professionals are influencing retail, brand loyalty, mobility trends, and the broader technology landscape through their evolving consumer choices. Read more insights from Morgan Stanley.----- Transcript -----Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Embodied AI and Humanoid Robotics Analyst. Alex Straton: And I'm Alex Straton, Morgan Stanley's U.S. Softlines Retail and Brands Analyst. Adam Jonas: Today we're unpacking our annual summer intern survey, a snapshot of how emerging professionals view fashion retail, brands, and mobility – amid all the AI advances.It is Tuesday, August 26th at 9am in New York.They may not manage billions of dollars yet, but Morgan Stanley's summer interns certainly shape sentiment on the street, including Wall Street. From sock heights to sneaker trends, Gen Z has thoughts. So, for the seventh year, we ran a survey of our summer interns in the U.S. and Europe. The survey involved more than 500 interns based in the U.S., and about 150 based in Europe. So, Alex, let's start with what these interns think about fashion and athletic footwear. What was your biggest takeaway from the intern survey? Alex Straton: So, across the three categories we track in the survey – that's apparel, athletic footwear, and handbags – there was one clear theme, and that's market fragmentation. So, for each category specifically, we observed share of the top three to five brands falling over time. And what that means is these once dominant brands, as consumer mind share is falling – and it likely makes them lower growth margin and multiple businesses over time. At the same time, you have smaller brands being able to captivate consumer attention more effectively, and they have staying power in a way that they haven't necessarily historically. I think one other piece I would just add; the rise of e-commerce and social media against a low barrier to entry space like apparel and footwear means it's easier to build a brand than it has been in the past. And the intern survey shows us this likely continues as this generation is increasingly inclined to shop online. Their social media usage is heavy, and they heavily rely on AI to inform, you know, their purchases.So, the big takeaway for me here isn't that the big are getting bigger in my space. It's actually that the big are probably getting smaller as new players have easier avenues to exist. Adam Jonas: Net apparel spending intentions rose versus the last survey, despite some concern around deteriorating demand for this category into the back half. What do you make of that result? Alex Straton: I think there were a bit conflicting takes from the survey when I look at all the answers together. So yes, apparel spending intentions are higher year-over-year, but at the same time, clothing and footwear also ranked as the second most category that interns would pull back on should prices go up. So let me break this down. On the higher spending intentions, I think timing played a huge role and a huge factor in the results. So, we ran this in July when spending in our space clearly accelerated. That to me was a function of better weather, pent up demand from earlier in the quarter, a potential tariff pull forward as headlines were intensifying, and then also typical back to school spending. So, in short, I think intention data is always very heavily tethered to the moment that it's collected and think that these factors mean, you know, it would've been better no matter what we've seen it in our space. I think on the second piece, which is interns pulling back spend should prices go up. That to me speaks to the high elasticity in this category, some of the highest in all of consumer discretionary. And that's one of the few drivers informing our cautious demand view on this space as we head into the back half. So, in summary on that piece, we think prices going higher will become more apparent this month onwards, which in tandem with high inventory and a competitive setup means sales could falter in the group. So, we still maintain this cautious demand view as we head into the back half, though our interns were pretty rosy in the survey. Adam Jonas: Interesting. So, interns continue to invest in tech ecosystems with more than 90 percent owning multiple devices. What does this interconnectedness mean for companies in your space? Alex Straton: This somewhat connects to the fragmentation theme I mentioned where I think digital shopping has somewhat functioned as a great equalizer in the space and big picture. I interpret device reliance as a leading indicator that this market diversification likely continues as brands fight to capture mobile mind share. The second read I'd have on this development is that it means brands must evolve to have an omnichannel presence. So that's both in store and online, and preferably one that's experiential focus such that this generation can create content around it. That's really the holy grail. And then maybe lastly, the third takeaway on this is that it's going to come at a cost. You, you can't keep eyeballs without spend. And historical brick and mortar retailers spend maybe 5 to 10 percent of sales on marketing, with digital requiring more than physical. So now I think what's interesting is that brands in my space with momentum seem to have to spend more than 10 percent of sales on marketing just to maintain popularity. So that's a cost pressure. We're not sure where these businesses will necessarily recoup if all of them end up getting the joke and continuing to invest just to drive mind share. Adam, turning to a topic that's been very hot this year in your area of expertise. That's humanoid robots. Interns were optimistic here with more than 60 percent believing they'll have many viable use cases and about the same number thinking they'll replace many human jobs. Yet fewer expect wide scale adoption within five years. What do you think explains this cautious enthusiasm? Adam Jonas: Well actually Alex, I think it's pretty smart. There is room to be optimistic. But there's definitely room to be cautious in terms of the scale of adoption, particularly over five years. And we're talking about humanoid robots. We're talking about a new species that's being created, right? This is bigger than just – will it replace our job? I mean, I don't think it's an exaggeration to ask what does this do to the concept of being human? You know, how does this affect our children and future generations? This is major generational planetary technology that I think is very much comparable to electricity, the internet. Some people say the wheel, fire, I don't know. We're going to see it happen and start to propagate over the next few years, where even if we don't have widespread adoption in terms of dealing with it on average hour of a day or an average day throughout the planet, you're going to see the technology go from zero to one as these machines learn by watching human behavior. Going from teleoperated instruction to then fully autonomous instruction, as the simulation stack and the compute gets more and more advanced. We're now seeing some industry leaders say that robots are able to learn by watching videos. And so, this is all happening right now, and it's happening at the pace of geopolitical rivalry, Sino-U.S. rivalry and terra cap, you know, big, big corporate competitive rivalry as well, for capital in the human brain. So, we are entering an unprecedented – maybe precedented in the last century – perhaps unprecedented era of technological and scientific discovery that I think you got to go back to the European and American Enlightenment or the Italian Renaissance to have any real comparisons to what we're about to see. Alex Straton: So, keeping with this same theme, interns showed strong interest in household robots with 61 percent expressing some interest and 24 percent saying they're very or extremely interested. I'm going to take you back to your prior coverage here, Adam. Could this translate into demand for AI driven mobility or smart infrastructure? Adam Jonas: Well, Alex, you were part of my prior coverage once upon a time. We were blessed with having you on our team for a year, and then you left me… Alex Straton: My golden era. Adam Jonas: But you came back, you came back. And you've done pretty well. So, so look, imagine it's 1903, the Wright Brothers just achieved first flight over the sands at Kitty Hawk. And then I were to tell you, ‘Oh yeah, in a few years we're going to have these planes used in World War I. And then in 1914, we'd have the first airline going between Tampa and St. Petersburg.' You'd say, ‘You're crazy,' right? The beauty of the intern survey is it gives the Morgan Stanley research department and our clients an opportunity to engage that surface area with that arising – not just the business leader – but that arising tech adopter. These are the people, these are the men and women that are going to kind of really adopt this much, much faster. And then, you know, our generation will get dragged into it eventually. So, I think it says; I think 61 percent expressing even some interest. And then 24 [percent], I guess, you know… The vast majority, three quarters saying, ‘Yeah, this is happening.' That's a sign I think, to our clients and capital market providers and regulators to say, ‘This won't be stopped. And if we don't do it, someone else will.' Alex Straton: So, another topic, Generative AI. It should come as no surprise really, that 95 percent of interns use that tool monthly, far ahead of the general population. How do you see this shaping future expectations for mobility and automation? Adam Jonas: So, this is what's interesting is people have asked kinda, ‘What's that Gen AI moment,' if you will, for mobility? Well, it really is Gen AI. Large Language Models and the technologies that develop the Large Language Models and that recursive learning, don't just affect the knowledge economy, right. Or writing or research report generation or intelligence search. It actually also turns video clips and physical information into tokens that can then create and take what would be a normal suburban city street and beautiful weather with smiling faces or whatever, and turn it into a chaotic scene of, you know, traffic and weather and all sorts of infrastructure issues and potholes. And that can be done in this digital twin, in an omniverse. A CEO recently told me when you drive a car with advanced, you know, Level 2+ autonomy, like full self-driving, you're not just driving in three-dimensional space. You're also playing a video game training a robot in a digital avatar. So again, I think that there is quite a lot of overlap between Gen AI and the fact that our interns are so much further down that curve of adoption than the broader public – is probably a hint to us is we got to keep listening to them, when we move into the physical realm of AI too. Alex Straton: So, no more driving tests for the 16-year-olds of the future... Adam Jonas: If you want to. Like, I tell my kids, if you want to drive, that's cool. Manual transmission, Italian sports cars, that's great. People still ride horses too. But it's just for the privileged few that can kind of keep these things in stables. Alex Straton: So, let me turn this into implications for companies here. Gen Z is tech fluent, open to disruption? How should autos and shared mobility providers rethink their engagement strategies with this generation? Adam Jonas: Well, that's a huge question. And think of the irony here. As we bring in this world of fake humans and humanoid robots, the scarcest resource is the human brain, right? So, this battle for the human mind is – it's incredible. And we haven't seen this really since like the Sputnik era or real height of the Cold War. We're seeing it now play out and our clients can read about some of these signing bonuses for these top AI and robotics talent being paid by many companies. It kind of makes, you know, your eyes water, even if you're used to the world of sports and soccer, . I think we're going to keep seeing more of that for the next few years because we need more brains, we need more stem. I think it's going to do; it has the potential to do a lot for our education system in the United States and in the West broadly. Alex Straton: So, we've covered a lot around what the next generation is interested in and, and their opinion. I know we do this every year, so it'll be exciting to see how this evolves over time. And how they adapt. It's been great speaking with you today, Adam. Adam Jonas: Absolutely. Alex, thanks for your insights. And to our listeners, stay curious, stay disruptive, and we'll catch you next time. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.