Most practices think they need more—more patients, more visits, more volume. But what if the real revenue opportunity isn't volume at all… it's coding?

In this episode, Dr. Heather sits down with Dr. Anne Hirsch, an internal medicine physician turned coding expert and physician coach, to explore why most practices are coding far below what their clinical work justifies—often doing a level 5 visit, documenting a level 4, and billing a level 3.

You'll learn:
• Why "fear-based coding" is silently draining your revenue
• The most common undercoding patterns physicians don't realize they're doing
• How better documentation reduces burnout and increases clinician confidence
• Real examples of everyday visits that should nearly always be level 4s
• How to implement quarterly audits, templates, and MDM habits that actually stick
• Why physician-to-physician coding education creates better adoption and outcomes
• How improved coding can add $30,000–$35,000+ per physician per year—without adding a single new patient

If your practice hasn't had a coding audit in the last 6–12 months, this episode is your wake-up call.

Want a free coding evaluation for your practice? Email info@natrevmd.com with the subject line "Free Coding Evaluation" and our team will help you get started.
Former CEA Chair Jason Furman argues why the Fed should not cut rates at next week's meeting, despite his expectation that they will do so. Then, Google announces a new deal in the AI coding space; CNBC breaks the news. Plus, are geopolitical risks a buying opportunity? Goldman Sachs argues just that as tensions between the U.S. and China further escalate. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Cisco has finally admitted it's time for real change and is vowing to build "secure by default" gear after decades of criticism. Steve Gibson reacts to a rare moment when a tech giant actually gets security right—and what it means for everyone running critical infrastructure.
• Scattered Lapsus$ Hunters strikes (Salesforce) again.
• Cisco actually (no kidding) sees the light.
• Next week, Australia bans all underage social media.
• The EU Parliament moves to replace US computer tech.
• When to use Passwords, Passkeys, or Yubikeys.
• Do unpowered SSDs lose their data?
• How about a "Joy of Coding" podcast?
• A Bitwarden Passkeys integration glitch.
• XSLT is sneaky. It's where you don't expect it.
• We know where last week's picture came from.
• The long-awaited return of a new Stargate series.
• A simple test to check our networks for any bot infections.

Show Notes - https://www.grc.com/sn/SN-1054-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
joindeleteme.com/twit promo code TWIT
vanta.com/SECURITYNOW
bitwarden.com/twit
threatlocker.com for Security Now
canary.tools/twit - use code: TWIT
Steve Yegge is an industry veteran and the co-author of the recently published book Vibe Coding. Many of you will remember Steve's rant about Google platforms that I linked in one of my ancient 0800-DEVOPS newsletters. That rant is now 14 years old, but people still talk about it.

We talked about vibe coding (the practice!) and Vibe Coding (the book!), whether junior developers are really doomed, the typical arguments people use against AI-assisted development, AI adoption in organizations, and what the future may bring.

✨ Please leave a review on your favorite podcast platform, your feedback is gold. ✨

Did you know there is a 0800-DEVOPS newsletter? Take a look and subscribe here.

Text me what you think.
Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code—a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that, despite the rise of terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just machines to execute. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
They were once called "clinical documentation improvement" specialists, charged with correcting the medical record to identify an overlooked diagnosis that carried the potential to increase revenue. Later, the description was changed to clinical documentation "integrity" (CDI) specialists. But that was then. This is now.

Today, the job description continues to change. CDI professionals are being asked to take on more and more responsibilities.

And that is why the producers of Talk Ten Tuesday have invited Penny Jefferson, a longtime CDI professional, to be the special guest during the next live edition of the weekly Internet broadcast.

The broadcast will also feature these instantly recognizable panelists, who will report more news during their segments:
· Social Determinants of Health: Tiffany Ferguson, CEO for Phoenix Medical Management, Inc., will report on the news that is happening at the intersection of medical record auditing and the SDoH.
· CDI Report: Cheryl Ericson, Senior Director of Clinical Policy and Education for the vaunted Brundage Group, will have the latest CDI updates.
· The Coding Report: Christine Geiger, Assistant Vice President of Acute and Post-Acute Coding Services for First Class Solutions, will report on the latest coding news.
· News Desk: Timothy Powell, ICD10monitor national correspondent, will anchor the Talk Ten Tuesdays News Desk.
· MyTalk: Angela Comfort, veteran healthcare subject-matter expert, will co-host the broadcast. Comfort is the Assistant Vice President of Revenue Integrity for Montefiore Health.
In this episode of How I Met Your Data: The Prompt, Anjali and Karen dig into one of the fastest-emerging patterns in development today: vibe coding - the practice of describing what you want and letting an LLM generate the code. It's new. It's evolving. And right now, it's causing as much frustration as excitement.

Karen breaks down what vibe coding actually looks like in practice: developers prompting AI to produce entire features or files, navigating the wildly different "personalities" of today's LLMs, and learning how to guide systems that might generate brilliant structure… or unintended chaos. Together, they talk through the real friction points - overly eager model behavior, unexpected file changes, incomplete suggestions, and the creeping loss of hands-on debugging skills that used to tie engineers closer to their code.

But underneath the surface is a bigger enterprise theme. The rise of vibe coding speaks to deeper issues: end users who still aren't getting what they need, bottlenecks in IT and data teams, and the rapid expansion of citizen development as people search for faster paths to outcomes. Anjali and Karen unpack the operational and governance implications, from maintainability and handoff challenges to compliance blind spots and the need for standards that can coexist with AI-assisted creation.

They also dive into where AI does shine today - those repetitive, operational workflows that quietly save teams hours - and why focusing on value, ownership, and workflow design matters far more than chasing the next flashy LLM demo. This episode is an honest, grounded look at how AI-assisted development is taking shape: what's promising, what's painful, and what it means for teams trying to build responsibly, collaboratively, and at scale.
Ian and Aaron discuss Claude vs. Gemini, *another* Laravel New idea, drama on Thanksgiving, and so much more.

Sponsored by Bento, Flare, Ittybit, tldraw, OG Kit, Tighten, and Nusii

Interested in sponsoring Mostly Technical? Head to https://mostlytechnical.com/sponsor to learn more.

(00:00) - Happy Cyber Monday!
(01:39) - Follow Up
(07:24) - AI Update: Claude vs. Gemini
(24:43) - Laravel New, Again, Again?
(36:30) - Ian's
In the first episode of the ACRO Podcast: Greetings from the GREC, our hosts Dr. Christopher Jahraus (GREC Committee Member) and Jason McKitrick (ACRO Legislative & Advocacy Liaison) are joined by GREC Co-Chair Dr. Laeton Pang. They discuss the upcoming radiation oncology coding changes, how we got here, what the ACRO GREC is doing behind the scenes on your behalf, and practical implications of the new code set.
In this episode, we talk with Simen, a senior software engineer and creator of Almost Done, a weekly email newsletter designed for neurodivergent developers and anyone who thinks a little differently. Simen shares how he built a format that supports real attention - short, scannable essays, intentional accessibility choices, and four writing "personas" that shape each issue's tone.

We explore his creative workflow, why timing matters for engagement, and the "subscriber-first" philosophy that keeps the newsletter personal. Simen also opens up about career growth, simplicity in engineering, and practical systems that help with ADHD traits like hyperfocus and time blindness.

It's an honest, uplifting conversation about writing, technology, and building a kinder approach to productivity. If the episode resonates, check out Almost Done and share it with someone who'd enjoy it.

Sign up here - https://almostdone.news/
Or view past issues - https://almostdone.news/issues
Reach out to Simen on LinkedIn: https://www.linkedin.com/in/simendaehlin
How can an architect use AI today in a way that saves entire weeks of work? And why does the future lie in combining domain expertise with artificial intelligence?

For this episode of the Budoucnost nepráce podcast, I invited Martin Jan Rosa - an architect who beautifully connects his domain expertise with digital tools and AI. This is another of the very practical conversations I've had recently.

I asked Martin about concrete scenarios for how he uses AI in architecture and when working with data. In the podcast you'll hear:
How AI has changed Martin's work over the past year [03:25]
When it makes sense to use scripts and Python in architecture [07:13]
What BIM / IFC are and why they matter [08:56]
Automating routine work with AI [11:35]
The tools Cursor, Replit, and Cloud Code in practice [13:59]
How AI thinks: reasoning and self-correcting scripts [17:12]
The future of professions: domain knowledge + AI [28:39]
A "second brain" and organizing information [40:54]
Why tidy data is key for working with AI [48:13]

This episode is full of inspiration, concrete use cases, and practical tips you can start using right away. What would happen if you learned to use AI as effectively as Martin - and how much work would it save you?
Event registration: https://event.on24.com/eventRegistration/EventLobbyServlet?eventid=4970954&groupId=6158316&key=6D958B67035A8B4047B2FBD06AE4F38A&sessionid=1&sourcepage=register?partnerref=website&target=reg20.jsp

For our listeners, use the code 'EYECODEMEDIA22' for 10% off at checkout for our Premiere Billing & Coding bundle or our EyeCode Billing & Coding course. Sharpen your billing and coding skills today and leave no money on the table!

questions@eyecode-education.com
https://docs.google.com/forms/d/e/1FAIpQLSdEt3AkIpRrfNhieeImiZBF5lYRIR2aAsl7UqWJ_m2GV6OKEA/viewform?usp=header
https://coopervision.com/our-company/...

Go to MacuHealth.com and use the coupon code PODCAST2024 at checkout for special discounts.

Show Sponsors: CooperVision, MacuHealth
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Dax Raad, co-founder of OpenCode, for a wide-ranging conversation about open-source development, command-line interfaces, the rise of coding agents, how LLMs change software workflows, the tension between centralization and decentralization in tech, and even what it's like to push the limits of the terminal itself. We talk about the future of interfaces, fast-feedback programming, model switching, and why open-source momentum—especially from China—is reshaping the landscape. You can find Dax on Twitter and check an example of what can be done using OpenCode in this tweet.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop and Dax Raad open with the origins of OpenCode, the value of open source, and the long-tail problem in coding agents.
05:00 They explore why command line interfaces keep winning, the universality of the terminal, and early adoption of agentic workflows.
10:00 Dax explains pushing the terminal with TUI frameworks, rich interactions, and constraints that improve UX.
15:00 They contrast CLI vs. chat UIs, discuss voice-driven reviews, and refining prompt-review workflows.
20:00 Dax lays out fast feedback loops, slow vs. fast models, and why autonomy isn't the goal.
25:00 Conversation turns to model switching, open-source competitiveness, and real developer behavior.
30:00 They examine inference economics, Chinese open-source labs, and emerging U.S. efforts.
35:00 Dax breaks down incumbents like Google and Microsoft and why scale advantages endure.
40:00 They debate centralization vs. decentralization, choice, and the email analogy.
45:00 Stewart reflects on building products; Dax argues for healthy creative destruction.
50:00 Hardware talk emerges—Raspberry Pi, robotics, and LLMs as learning accelerators.
55:00 Dax shares insights on terminal internals, text-as-canvas rendering, and the elegance of the medium.

Key Insights
Open source thrives where the long tail matters. Dax explains that OpenCode exists because coding agents must integrate with countless models, environments, and providers. That complexity naturally favors open source, since a small team can't cover every edge case—but a community can. This creates a collaborative ecosystem where users meaningfully shape the tool.

The command line is winning because it's universal, not nostalgic. Many misunderstand the surge of CLI-based AI tools, assuming it's aesthetic or retro. Dax argues it's simply the easiest, most flexible, least opinionated surface that works everywhere—from enterprise laptops to personal dev setups—making adoption frictionless.

Terminal interfaces can be richer than assumed. The team is pushing TUI frameworks far beyond scrolling text, introducing mouse support, dialogs, hover states, and structured interactivity. Despite constraints, the terminal becomes a powerful "text canvas," capable of UI complexity normally reserved for GUIs.

Fast feedback loops beat "autonomous" long-running agents. Dax rejects the trend of hour-long AI tasks, viewing it as optimizing around model slowness rather than user needs. He prefers rapid iteration with faster models, reviewing diffs continuously, and reserving slower models only when necessary.

Open-source LLMs are improving quickly—and economics matter. Many open models now approach the quality of top proprietary systems while being far cheaper and faster to serve. Because inference is capital-intensive, competition pushes prices down, creating real incentives for developers and companies to reconsider model choices.

Centralization isn't the enemy—lack of choice is. Dax frames the landscape like email: centralized providers dominate through convenience and scale, but the open protocols underneath protect users' ability to choose alternatives. The real danger is ecosystems where leaving becomes impossible.

LLMs dramatically expand what individuals can learn and build. Both Stewart and Dax highlight that AI enables people to tackle domains previously too opaque or slow to learn—from terminal internals to hardware tinkering. This accelerates creativity and lowers barriers, shifting agency back to small teams and individuals.
In this episode of CISO Tradecraft, host G Mark Hardy is joined by Neatsun Ziv from Ox Security to discuss the evolving landscape of vibe coding and its security implications. The conversation delves into the risks and opportunities surrounding vibe coding, how it can enhance productivity while maintaining security, and the importance of embedding security into the entire lifecycle. They also explore the concept of VibeSec, why traditional shift-left security approaches might be failing, and what new methodologies can be adopted to ensure robust security in a rapidly changing tech world. Tune in to gain valuable insights into how you can future-proof your code, leverage modern IDEs and MCP, and maintain a strong security posture in the era of AI-driven development.

Ox Security's Website - https://www.ox.security/
Are AI App Builders Secure - https://www.ox.security/resource-category/whitepapers-and-reports/are-ai-app-builders-secure-we-tested-lovable-base44-and-bolt-to-find-out/
The AI Code Security Crisis - https://www.ox.security/resource-category/whitepapers-and-reports/army-of-juniors/
What happens when AI adoption surges inside companies faster than anyone can track, and the data that fuels those systems quietly slips out of sight? That question sat at the front of my mind as I spoke with Cyberhaven CEO Nishant Doshi, fresh from publishing one of the most detailed looks at real-world AI usage I have seen. This wasn't a report built on opinions or surveys. It was built on billions of actual data flows across live enterprise environments, which made our conversation feel urgent from the very first moment.

Nishant explained how AI has moved out of the experimental phase and into everyday workflows at a speed few anticipated. Employees across every department are turning to AI tools not as a novelty but as a core part of how they work. That shift has delivered huge productivity gains, yet it has also created a new breed of hidden risk. Sensitive material isn't just being uploaded through deliberate actions. It is being blended, remixed, and moved in ways that older security models cannot understand. Hearing him describe how this happens in fragments rather than files made me rethink how data exposure works in 2025.

We also dug into one of the most surprising findings in Cyberhaven's research. The biggest AI power users inside companies are not executives or early-career talent. It is mid-level employees. They know where the friction is, and they are under pressure to deliver quickly, so they experiment freely. That experimentation is driving progress, but it is also widening the gap between how AI is used and how data is meant to be protected. Nishant shared how that trend is now pushing sensitive code, R&D material, health information, and customer data into tools that often lack proper controls.

Another moment that stood out was his explanation of how developers are reshaping their work with AI coding assistants. The growth in platforms like Cursor is extraordinary, yet the risks are just as large. Code that forms the heart of an organisation's competitive strength is frequently pasted into external systems without full awareness of where it might end up. It creates a situation where innovation and exposure rise together, and older security frameworks simply cannot keep pace.

Throughout the conversation, Nishant returned to the importance of visibility. Companies cannot set fair rules or safe boundaries if they cannot see what is happening at the point where data leaves the user's screen. Traditional controls were built for a world of predictable patterns. AI has broken those patterns apart. In his view, modern safeguards need to sit closer to employees, understand how fragments are created, and guide people toward safer workflows without slowing them down.

By the time we reached the end of the interview, it was clear that AI governance is no longer a strategic nice-to-have. It is becoming a daily operational requirement. Nishant believes employers must create a clear path forward that balances freedom with control, and give teams the tools to do their best work without unknowingly putting their organisations at risk. His message wasn't alarmist. It was practical, grounded, and shaped by years working at the intersection of data and security.

So here is the question I would love you to reflect on. If AI is quickly becoming the engine of productivity across every department, what would your organisation need to change today to keep its data safe tomorrow? And how much visibility do you honestly have over where your most sensitive information is going right now? I would love to hear your thoughts.

Useful Links
Connect with Cyberhaven CEO Nishant Doshi on LinkedIn
Learn more about Cyberhaven

Tech Talks Daily is Sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
AI Advancements and Security Concerns: From Gemini 3 to Data Trust

In this episode of Hashtag Trending - The Weekend Edition, hosts Marcel Ganger, John Pinard, and Jim Love discuss the latest AI advancements with a focus on Google's Gemini 3 and new releases from Claude. The conversation covers recent improvements in AI capabilities, challenges related to coding with different models, and the integration of AI in everyday tasks. They emphasize the importance of cybersecurity, especially concerning third-party applications and the potential risk of data breaches. The hosts also delve into the ongoing debate about the trustworthiness of AI developments and the need to stay updated with the latest technological advancements.

00:00 Introduction and Sponsor Message
00:23 AI Wishlist and Star Trek Inspiration
00:42 Weekend Edition Introduction
00:52 Weekly News Recap
01:14 Claude 4.5 and Gemini Comparison
04:30 AI Tools and Personal Experiences
06:12 Gemini's Capabilities and Use Cases
16:40 Coding with AI: Tools and Techniques
25:35 AI Studio and App Development
29:43 AI in Society: Trust and Future Implications
35:26 The Persistence of Human Thrills
36:25 Autonomous Racing and Technology Advancements
37:34 Cybersecurity Concerns with AI Integration
40:07 The Inherent Risks in Software and AI
42:46 The Necessity of Constant Updates
45:34 Trust and Security in Modern Technology
50:40 The Reality of AI and AGI
01:02:22 The Future of AI and Final Thoughts
01:11:05 Conclusion and Sponsor Message
AI Assisted Coding: Building Reliable Software with Unreliable AI Tools

In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.

From Skeptic to Pioneer: Lada's AI Coding Journey

"I got a new skill for free!"

Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through Windsurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.

Understanding Vibecoding vs. AI-Assisted Development

"AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."

Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices.
The Answer Injection Anti-Pattern When Working With AI

"You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."

One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability. Answer injection is when we—sometimes unknowingly—ask a leading question that also injects a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"; the LLM might have a possible answer for you.

Never Trust a Single LLM: Multi-Agent Collaboration

"Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."

Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built systems using AppleScript and tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss.
The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review.

Code Quality Matters MORE with AI

"This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!"

Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively.

Managing Complexity: The Open Question

"If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it."

One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever.

Context is Everything: Highway vs. Parking Lot

"You need to be attuned to the environment.
You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt."

Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, not worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down.

The Era of Discovery: We've Only Just Begun

"We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of."

Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real time.
Resources For Further Study

Video of Lada's talk: https://www.youtube.com/watch?v=_LSK2bVf0Lc&t=8654s
Lada's Patterns and Anti-patterns website: https://lexler.github.io/augmented-coding-patterns/
Lada's Substack: https://lexler.substack.com/
AI Assisted Coding episode with Dawid Dahl
AI Assisted Coding episode with Llewellyn Falco
Claude Flow - orchestration platform

About Lada Kesseler

Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering. She is currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools. You can link with Lada Kesseler on LinkedIn.
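Lada's "never trust a single LLM" pattern can be sketched in a few lines of Python. This is not her actual tooling (she used AppleScript and tmux to wire real AI instances together); it is a minimal, hypothetical illustration of the structure, with plain callables standing in for LLM agents so any client library could be plugged in. The function name `cross_review` and the "LGTM" convention are invented for this sketch.

```python
from typing import Callable, Dict

# An "agent" here is just a callable that maps a prompt to a response.
# In practice this would wrap a real LLM API call.
Agent = Callable[[str], str]

def cross_review(task: str, developer: Agent, reviewer: Agent) -> Dict[str, object]:
    """Have one agent implement a task, then have an independent
    agent critique the result — AI reviewing AI."""
    draft = developer(task)
    critique = reviewer(
        f"Review the following solution to the task '{task}'. "
        f"List concrete issues, or reply 'LGTM' if there are none.\n\n{draft}"
    )
    return {
        "draft": draft,
        "critique": critique,
        "approved": critique.strip() == "LGTM",
    }

# Stub agents stand in for real LLM calls in this sketch.
dev = lambda prompt: "def add(a, b):\n    return a + b"
rev = lambda prompt: "LGTM"

result = cross_review("write an add function", dev, rev)
```

The point of the structure is that the reviewer receives the draft cold, with no memory of producing it — which is why, as Lada observed, a second instance often finds issues the first one missed.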
AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration

In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes.

Vibecoding: An Automation Design Instrument

"I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do."

Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation.

Pair Programming with the Machine

"If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step."

One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance.
This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails.

The Transactional Model: Atomic Commits and Rollback

"When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development."

Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then roll back and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration.

Context Management: The Weak Point and the Solution

"Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent."

Context management challenges current AI coding tools—they forget, lose the thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself.
Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.

TDD 2.0: Humans Write Tests, AI Writes Code

"I would never allow AI to write the test. I would do it by myself. Still, it can write the code."

Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on the implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.

The Two Cardinal Rules: Architecture and Verification

"I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."

Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding.
Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework provides the guardrails that prevent AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at the code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.

Prototype vs. Production: Two Different Workflows

"When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, and rapid prototype."

Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.

The Future: AI Literacy as Mandatory

"Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."

Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency.
Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.

Resources for Practitioners

"We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."

Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real time.

About Sergey Sergyenko

Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams.
Their tech stack includes Ruby, Rails, Elixir, and ReactJS. Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk available in your membership area. If you are not a member, don't worry: you can get the 1-month trial, watch the whole conference, and cancel at any time. You can link with Sergey Sergyenko on LinkedIn.
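Sergey doesn't prescribe an exact format for the commit messages in his transactional model, only that each atomic commit should record the prompt sequence that produced the change and how to undo it. A small, hypothetical helper makes the idea concrete; the function name, the "AI-Prompts:" trailer, and the example content below are all invented for illustration.

```python
def build_commit_message(summary: str, prompts: list, rollback: str) -> str:
    """Compose an atomic-commit message that records the AI prompt
    sequence used to generate the change and how to roll it back,
    so the Git history itself becomes the agent's context manager."""
    lines = [summary, "", "AI-Prompts:"]
    # Number each prompt so a later session can replay or reference them.
    lines += [f"  {i}. {p}" for i, p in enumerate(prompts, start=1)]
    lines += ["", f"Rollback: {rollback}"]
    return "\n".join(lines)

# Example usage with invented content:
msg = build_commit_message(
    "Add password-reset endpoint",
    ["Scaffold a Rails controller for password resets",
     "Add rate limiting to the reset action"],
    "git revert this commit; no migration to undo",
)
```

Because each commit is small and self-describing, `git revert` on a single commit is always cheap — which is what keeps the rollback-and-restart strategy more efficient than iterating on a broken implementation.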
Tune in for some hands-on tips on how to use Claude Code to create some amazing and not-so-amazing software. Paul will walk you through what worked and what didn't as he 100% vibe-coded a Python Flask application. The discussion continues with the crew discussing the future of vibe coding and how AI may better help in creating and securing software. Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-902
Vibe coding was, remarkably, named word of the year by the Collins English Dictionary at the start of November 2025 — pretty good going for a term that was only coined in February. We first discussed it on the Technology Podcast back in April, and, given its prominence in the collective lexicon this year, thought we should revisit and reflect on the topic as 2025 draws to a close. Lots has happened in the intervening months: MCP adoption, the evolution of agentic coding tools and practices like context engineering have had a significant impact on the way the world is thinking about and using AI. To talk about it all and reflect on the implications, Thoughtworkers and regular podcast hosts Prem Chandrasekaran, Lilly Ryan and Neal Ford reconvened for a follow-up to our April conversation. Taking in everything from the term's semantic slipperiness, its security risks and the challenges of maintaining AI-generated code, this is a discussion that, despite going deep into vibe coding, also touches on a huge range of issues in the technology industry today. Before we enter 2026, looking back on the good, the bad and the ugly of the last 12 months of experimentation is essential if we're to build better software for the world in the future. This episode aims to be a guide through that process. Listen to our April episode on vibe coding: https://www.thoughtworks.com/insights/podcasts/technology-podcasts/vibe-coding Read Ken Mugrage's blog post exploring the shift from vibe coding to context engineering in 2025: https://www.thoughtworks.com/insights/blog/machine-learning-and-ai/vibe-coding-context-engineering-2025-software-development
"... best model in the world..."
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Today's episode digs into why Anthropic's surprise launch of Claude Opus 4.5 is landing like a true step-function moment for coding, agentic workflows, and the emerging paradigm of vibe-based software creation, with new benchmarks, early user tests, and developer reactions all pointing to a shift in how real work gets done; plus a quick look at the latest headlines, including the White House's Genesis Mission and Amazon's massive new government-focused AI expansion.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
BONUS: Augmented AI Development - Software Engineering First, AI Second

In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.

The AAID Philosophy: Don't Abandon Your Brain

"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."

Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step. In this segment, we refer to Marcus Hammarberg's work and his book The Bungsu Story.

Software Engineering First, AI Second: A Hill to Die On

"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."

What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward.
Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.

Why TDD is Non-Negotiable with AI

"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."

Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.

The Problem with "Prompt and Pray" Autonomous Agents

"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours.
This system will not work."

Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.

Prerequisites: Product Discovery Must Come First

"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."

AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on the shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.
What's Wrong with Other AI Frameworks

"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."

Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later—a backwards approach. They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered and process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.

Getting Started: Learn Fundamentals First

"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."

Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster.
But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler, game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.

About Dawid Dahl

Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency. You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.
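The small-increment steering loop Dawid describes — AI proposes one tiny change at a time, the human-authored tests act as the gate, failing proposals are rolled back rather than iterated on blindly — can be sketched abstractly in Python. This is not AAID's actual tooling; the `steer` function and the stub test gate below are invented for illustration, with proposals modeled as plain strings.

```python
from typing import Callable, List

def steer(proposals: List[str],
          passes_tests: Callable[[str], bool]) -> List[str]:
    """Accept each small AI-proposed change only if the human-authored
    test suite stays green; otherwise discard it (the equivalent of a
    Git rollback), rather than letting failures compound for 30 hours."""
    accepted = []
    for change in proposals:
        if passes_tests(change):
            accepted.append(change)  # commit this atomic step
        # else: the proposal is dropped and the AI must try again
        # from the last known-good state
    return accepted

# Stub gate: pretend the test suite rejects any change touching "schema".
gate = lambda change: "schema" not in change
kept = steer(["add validation", "rewrite schema", "extract helper"], gate)
```

The design point is that verification happens after every step, not after a long autonomous run — the same small-batch, fast-feedback principle the DORA research associates with high-performing teams.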
In this episode, we sit down with Quincy Tennyson, who teaches an impressive four-year computer science pathway at Fern Creek High School. Quincy's background in the Marine Corps and as a network engineer brings a unique perspective to CS education. He discusses his curriculum progression from introductory courses through AP Computer Science Principles (heavily inspired by UC Berkeley's CS61A), AP Computer Science A (Java), and a culminating Project-Based Programming course. We dive deep into his philosophy of being a "warm demander" - setting high expectations while providing intensive coaching and support. The conversation touches on several compelling topics including teaching agile methodology to high school students, the importance of transparency about failure, and how behavioral economics concepts (from thinkers like Daniel Kahneman) inform his approach to helping students understand their own thinking processes. Quincy also shares insights on supporting underserved students, running a successful Girls Who Code chapter, and navigating the integration of AI tools in the classroom. His students' enthusiasm at PyCon 2024 was infectious, and this episode reveals the thoughtful pedagogy behind their success. Key resources mentioned include CS61A from UC Berkeley (https://cs61a.org/), CodeHS (https://codehs.com/), Code.org (https://code.org/), Sandra McGuire's book "Teach Students How to Learn," Eric Matthes' Python Crash Course (https://nostarch.com/python-crash-course-3rd-edition), and Al Sweigart's (https://alsweigart.com/) educational resources including his new Buttonpad library for Tkinter. Special Guest: Quincy Tennyson.
Today, we have an episode from our friends at Booming. In a recent episode, they reported how young people are choosing trade school over college out of fear of white-collar jobs drying up. Companies appear to be making big bets that AI can replace huge chunks of their workforces. It seems like “go to trade school” has become the new “learn to code.” But Dan Grossman, professor and vice director of the UW's Allen School of Computer Science and Engineering, says the outlook isn’t so bleak for students who still want a career in tech. On today's episode: Are reports of AI driving a “white collar bloodbath” greatly exaggerated? Booming is a production of KUOW in Seattle, a proud member of the NPR Network. Our editor is Carol Smith. Our producers are Lucy Soucek and Alec Cowan. Our hosts are Joshua McNichols and Monica Nickelsburg. We can only make Seattle Now because listeners support us. Tap here to make a gift and keep Seattle Now in your feed. Got questions about local news or story ideas to share? We want to hear from you! Email us at seattlenow@kuow.org, leave us a voicemail at (206) 616-6746 or leave us feedback online. See omnystudio.com/listener for privacy information.
Across the country, commercial payers are quietly down-coding E/M services without issuing ADRs and without providing notice. Office visit reimbursements are being arbitrarily reduced based on payer algorithms rather than a proper review of documentation for compliance. In today's CodeCast episode, Terry sheds light on this growing problem and explains how to take proactive steps […] The post Watch for Payer Automatic Down-Coding Without Notice appeared first on Terry Fletcher Consulting, Inc.
AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding

In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality.

Vibecoding vs. AI-Assisted Coding: Reading Code Matters

"I read all the code that it outputs, so I need smaller steps of changes."

Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult.

The Moment of Shift: Staying in the Zone

"It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time."

Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state.
The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design.

Thinking in Commits: The Right Size for AI Work

"I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt."

Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

The Tech Debt Question: Same Code, Same Debt

"Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... I'm faster and can make more code, but I invest some of that savings back into cleaning things up."

As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything.
When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices regardless of how code is generated.

When Vibecoding Creates Debt: AI Resistance as a Symptom

"When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes."

Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices including code review, even when AI makes generation fast.

Can AI Fix Tech Debt? Yes, With Guidance

"You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want."
Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists.

Reinvesting Productivity Gains in Quality

"I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity."

Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. If AI makes you 30% faster at implementation, spending 10 percentage points of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder.

The 100x Code Concern: Accountability Remains Human

"Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it."
When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions.

Practical Workflow: Inline Prompting and Small Changes

"Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control.

Resources and Community

Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources.

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt.
With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum. You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.
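Lou's reinvestment arithmetic is simple enough to sketch in Python; the 30%/10% figures are the episode's illustrative numbers, and the variable names are ours:

```python
# Baseline throughput: 1.0 unit of feature work per day without AI assistance.
baseline = 1.0

# AI assistance makes implementation roughly 30% faster.
with_ai = baseline * 1.30

# Consciously reinvest 10 percentage points of that gain into refactoring.
refactoring_budget = baseline * 0.10

# Net effect: still about 20% ahead, while quality is actively maintained.
net = with_ai - refactoring_budget
print(f"Net feature throughput: {net:.2f}x baseline")
```

The point of the sketch is the budgeting discipline, not the exact numbers: the refactoring time is carved out of the speedup up front rather than deferred to "when there's time."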
Since 80 percent of a person's health is influenced by factors outside of medical care, it is critical that a healthcare system has an understanding and appreciation for the circumstances of patients' daily lives that impact their health outcomes, referred to as the social determinants of health (SDoH). During the next live edition of Talk Ten Tuesday, Lauren Montwill, Vice President of Community Health and Social Impact for the UnitedHealth Group, will report on how her organization is collaborating across the delivery system to collect reliable SDoH data, as well as the effort to build health analytics infrastructure to benchmark, monitor, and track progress toward improving health outcomes and quality measures.

The broadcast will also feature these instantly recognizable panelists, who will report more news during their segments:

Social Determinants of Health: Tiffany Ferguson, CEO for Phoenix Medical Management, Inc., will report on the news that is happening at the intersection of medical record auditing and the SDoH.
CDI Report: Cheryl Ericson, Senior Director of Clinical Policy and Education for the vaunted Brundage Group, will have the latest clinical documentation integrity (CDI) updates.
The Coding Report: Christine Geiger, Assistant Vice President of Acute and Post-Acute Coding Services for First Class Solutions, will report on the latest coding news.
News Desk: Timothy Powell, ICD10monitor national correspondent, will anchor the Talk Ten Tuesdays News Desk.
MyTalk: Angela Comfort, veteran healthcare subject-matter expert, will co-host the broadcast. Comfort is the Assistant Vice President of Revenue Integrity for Montefiore Health.
Ian and Aaron talk about the launch of Database School - the branding, everything he did the morning of the launch, building the site with Gemini, and so much more. Plus the world championship of… bagels?

Sponsored by Bento, Flare, No Compromises, and Ittybit. Interested in sponsoring Mostly Technical? Head to https://mostlytechnical.com/sponsor to learn more.

(00:00) - Every Second Mattered
(08:40) - The Morning Of
(15:30) - The Last Launch?
(18:07) - Walking With Adam
(20:01) - Pricing
(28:28) - Doing It Live
(36:21) - We're Talking Logos
(44:14) - Closing Thoughts On The Launch
(49:21) - Built With Gemini
(01:06:23) - World Championship of Bagels

Links: Nightwatch, OG Kit, Forge, Laravel Cashier, Jamey Gannon on Twitter, Aaron's Blooper Reel, Adam's Morning Walk, Laravel VPS, Nano Banana Pro, Gemini 3, Filament, Starship Bagel
Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.

Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Wildest week in AI since December 2024.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Maor Shlomo is the Founder and CEO of Base44, the AI building platform that Maor built from idea to an $80M acquisition by Wix in just 8 months. Today the company serves millions of users and will hit $50M ARR by the end of the year. Before Base44, Maor was the Co-Founder and CTO of Explorium.

AGENDA:
00:05 – 00:10: How Vibe Coding is Going to Kill Salesforce and SaaS
00:13 – 00:15: Do Vibe Coding platforms have any defensibility?
00:22 – 00:24: I am not worried about Replit and Lovable, I am worried about Google…
00:28 – 00:29: Margins do not matter, the price of the models will go to zero
00:31 – 00:32: Speed to copy has never been lower; has the technical moat been eroded?
00:47 – 00:48: How does Base44 beat Cursor?
00:56 – 00:57: Do not pay attention to competition: focus on your business
00:57 – 00:58: How Base44 is helped, not hurt, by not being in Silicon Valley
00:58 – 00:59: What percent of code will be written by AI in 12 months?
01:01 – 01:02: OpenAI or Anthropic: Why Maor is Long Anthropic
01:03 – 01:04: If I could have any board member in the world it would be Jack Dorsey
AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI

In this special episode, Elina Patjas shares her remarkable journey from designer to solo developer, building LexieLearn—an AI-powered study tool with 1,500+ users and paying customers—entirely through AI-assisted coding. She reveals the practical workflow, anti-patterns to avoid, and why the future of software might not need permanent apps at all.

The Two-Week Transformation: From Idea to App Store

"I did that, and I launched it to App Store, and I was like, okay, so… If I can do THIS! So, what else can I do? And this all happened within 2 weeks."

Elina's transformation happened fast. As a designer frustrated with traditional software development where maybe 10% of your original vision gets executed, she discovered Cursor and everything changed. Within two weeks, she went from her first AI-assisted experiment to launching a complete app in the App Store. The moment that shifted everything was realizing that AI had fundamentally changed the paradigm from "writing code" to "building the product." This wasn't about learning to code—it was about finally being able to execute her vision 100% the way she wanted it, with immediate feedback through testing.

Building LexieLearn: Solving Real Problems for Real Users

"I got this request from a girl who was studying, and she said she would really appreciate to be able to iterate the study set... and I thought: 'That's a brilliant idea! And I can execute that!' And the next morning, it was 9:15, I sent her a screen capture."

Lexie emerged from Elina's frustration with ineffective study routines and gamified edtech that didn't actually help kids learn. She built an AI-powered study tool for kids aged 10-15 that turns handwritten notes into adaptive quizzes revealing knowledge gaps—private, ad-free, and subscription-based. What makes Lexie remarkable isn't just the technology, but the speed of iteration.
When a user requested a feature, Elina designed and implemented it overnight, sending a screen capture by 9:15 AM the next morning. This kind of responsiveness—from customer feedback to working feature in hours—represents a fundamental shift in how software can be built. Today, Lexie has over 1,500 users with paying customers, proving that AI-assisted development isn't just for prototypes anymore.

The Workflow: It's Not Just "Vibing"

"I spend 30 minutes designing the whole workflow inside my head... all the UX interactions, the data flow, and the overall architectural decisions... so I spent a lot of time writing a really, really good spec. And then I gave that to Claude Code."

Elina has mixed feelings about the term "vibecoding" because it suggests carelessness. Her actual workflow is highly disciplined. She spends significant time designing the complete workflow mentally—all UX interactions, data flow, and architectural decisions—then writes detailed specifications. She often collaborates with Claude to write these specs, treating the AI as a thinking partner. Once the spec is clear, she gives it to Claude Code and enters a dialogue mode: splitting work into smaller tasks, maintaining constant checkpoints, and validating every suggestion. She reads all the code Claude generates (32,000 lines client-side, 8,000 server-side) but doesn't write code herself anymore. This isn't lazy—it's a new kind of discipline focused on design, architecture, and clear communication rather than syntax.

Reading Code vs. Writing Code: A New Skill Set

"AI is able to write really good code, if you just know how to read it... But I do not write any code. I haven't written a single line of code in a long time."

Elina's approach reveals an important insight: the skill shifts from writing code to reading and validating it. She treats Claude Code as a highly skilled companion that she needs to communicate with extremely well.
This requires knowing "what good looks like"—her 15 years of experience as a designer gives her the judgment to evaluate what the AI produces. She maintains dialogue throughout development, using checkpoints to verify direction and clarify requirements. The fast feedback loop means when she fails to explain something clearly, she gets immediate feedback and can course-correct instantly. This is fundamentally different from traditional development where miscommunication might not surface until weeks later.

The Anti-Pattern: Letting AI Run Rampant

"You need to be really specific about what you want to do, and how you want to do it, and treat the AI as this highly skilled companion that you need to be able to communicate with."

The biggest mistake Elina sees is treating AI like magic—giving vague instructions and expecting it to "just figure it out." This leads to chaos. Instead, developers need to be incredibly specific about requirements and approach, treating AI as a skilled partner who needs clear communication. The advantage is that the iteration loop is so fast that when you fail to explain something properly, you get feedback immediately and can clarify. This makes the learning curve steep but short. The key is understanding that AI amplifies your skills—if you don't know what good architecture looks like, AI won't magically create it for you.

Breaking the Gatekeeping: One Person, Ten Jobs

"I think that I can say that I am a walking example of what you can do, if you have the proper background, and you know what good looks like. You can do several things at a time. What used to require 10 people, at least, to build before."

Elina sees herself as living proof that the gatekeeping around software development is breaking down. Someone with the right background and judgment can now do what previously required a team of ten people.
She's passionate about others experiencing this same freedom—the ability to execute their vision without compromise, to respond to user feedback overnight, to build production-quality software solo. This isn't about replacing developers; it's about expanding who can build software and what's possible for small teams. For Elina, working with a traditional team would actually slow her down now—she'd spend more time explaining her vision than the team would save through parallel work.

The Future: Intent-Based Software That Emerges and Disappears

"The software gets built in an instance... it's going to this intent-based mode when we actually don't even need apps or software as we know them."

Elina's vision for the future is radical: software that emerges when you need it and disappears when you don't. Instead of permanent apps, you'd have intent-based systems that generate solutions in the moment. This shifts software from a product you download and learn to a service that materializes around your needs. We're not there yet, but Elina sees the trajectory clearly. The speed at which she can now build and modify Lexie—overnight feature implementations, instant bug fixes, continuous evolution—hints at a future where software becomes fluid rather than fixed.

Getting Started: Just Do It

"I think that the best resource is just your own frustration with some existing tools... Just open whatever tool you're using, is it Claude or ChatGPT and start interacting and discussing, getting into this mindset that you're exploring what you can do, and then just start doing."

When asked about resources, Elina's advice is refreshingly direct: don't look for tutorials, just start. Let your frustration with existing tools drive you. Open Claude or ChatGPT and start exploring, treating it as a dialogue partner. Start building something you actually need. The learning happens through doing, not through courses.
Her own journey proves this—she went from experimenting with Cursor to shipping Lexie to the App Store in two weeks, not because she found the perfect tutorial, but because she just started building. The tools are good enough now that the biggest barrier isn't technical knowledge—it's having the courage to start and the judgment to evaluate what you're building.

About Elina Patjas

Elina is building Lexie, an AI-powered study tool for kids aged 10–15. Frustrated by ineffective "read for exams" routines and gamified edtech fluff, she designed Lexie to turn handwritten notes into adaptive quizzes that reveal knowledge gaps—private, ad-free, and subscription-based. Lexie is learning, simplified. You can link with Elina Patjas on LinkedIn.
-------------------
For our listeners, use the code 'EYECODEMEDIA22' for 10% off at checkout for our Premiere Billing & Coding bundle or our EyeCode Billing & Coding course. Sharpen your billing and coding skills today and leave no money on the table!

questions@eyecode-education.com
https://docs.google.com/forms/d/e/1FAIpQLSdEt3AkIpRrfNhieeImiZBF5lYRIR2aAsl7UqWJ_m2GV6OKEA/viewform?usp=header
https://coopervision.com/our-company/...

Go to MacuHealth.com and use the coupon code PODCAST2024 at checkout for special discounts

Show Sponsors:
CooperVision
MacuHealth
Pete Syme talks with Drew Falkman about vibe coding, a way for tour operators to build custom software tools using plain English prompts instead of traditional programming. Drew explains how AI tools like ChatGPT and Claude have been trained on code repositories, allowing them to generate working applications from simple descriptions. The conversation covers why this matters for small operators, what you can build, the learning curve, costs, security considerations, and how this technology could shift the relationship between tour operators and the software they depend on. Pete emphasizes that operators already have the same AI access as hundred-million-dollar companies and encourages spending at least an hour daily experimenting with these tools.

Top 10 Takeaways

You can build tools without coding knowledge. AI tools trained on code repositories can generate working applications from plain English descriptions, making app building accessible to anyone.

Most SaaS tools don't fit your exact workflow. You end up paying for applications where you don't use 80% of the features because they're designed for other industries, while the features you do use aren't quite refined enough.

Start with internal workflows, not customer-facing apps. Build tools for internal processes first. Don't go public with what you build until you have experience, as you can get 80 to 90% correct quickly, but that last bit is more challenging.

Map your processes before building. Write down all your processes on paper, rank what's most important, and list what you really don't like doing. This helps identify where custom tools can have the biggest impact.

The learning curve has three main steps. First, learn to plan what you want to build (20 to 30 hours). Second, design the workflow and user interface (a few hours). Third, understand data and databases (a couple of days). Total time to get comfortable is roughly a few weeks of focused learning.

Tools like Lovable cost around $20 per month.
There are small monthly fees for vibe coding platforms, plus hosting costs if your tool is public-facing. Tools like Lovable, Bolt, Replit, Magic Patterns, and n8n each serve different purposes.

Keep data storage minimal for security. Don't store sensitive information like credit card numbers or social security numbers. Use third-party authentication (Google, Microsoft, Apple) and payment processors like Stripe to handle sensitive data.

You can build custom booking flows and optimize conversions. Create your own booking engine where you control every step, then use analytics tools to see where people drop off and experiment with improvements to increase completion rates.

This threatens the traditional SaaS industry. Large companies spending millions monthly on SaaS are already exploring vibe coding to reduce costs. What happens at that level will cascade down through the industry to the tools small operators use today.

Just try it to understand the possibilities. Go to lovable.dev, run a prompt, and build something. You won't fully understand what you can do until you experiment. You have nothing to lose with free versions, and no one else will see your experiments.

Want to learn vibe coding yourself? Drew teaches courses on building apps without code. Visit drewfalkman.com to explore free resources and paid courses that walk you through the process step by step.
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor and Logan break down the “vibe coding” renaissance enabled by Gemini 3. We explore what this shift means for developers and why the model's fluid coding experience is reshaping AI-assisted programming.

Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle
This conversation delves into the recent outage of the Cardano blockchain, exploring the causes, implications, and community responses. Peter breaks down the technical aspects of the incident, clarifying misconceptions about the nature of the outage, the role of stake pool operators, and the recovery process. The discussion also highlights the importance of community collaboration and the challenges posed by media coverage and misinformation surrounding the event.

Takeaways
✅ Cardano experienced a temporary chain partition due to a malformed transaction.
✅ The incident was not a hack; funds were not compromised.
✅ Stake pool operators played a crucial role in the recovery process.
✅ The network's self-healing capabilities were demonstrated during the incident.
✅ Media coverage often misrepresents the situation, leading to misinformation.
✅ Community collaboration was key to addressing the outage quickly.
✅ The incident highlighted the importance of robust governance in blockchain ecosystems.
✅ Lessons learned will strengthen the Cardano network moving forward.
✅ The response from the Cardano community was prompt and effective.
✅ Future steps include a thorough retrospective of the incident.

Chapters
00:00 Cardano Blockchain Outage Overview
02:48 Understanding Chain Partitions and Forks
06:07 The Role of Stake Pool Operators
08:57 Technical Breakdown of the Incident
12:08 Community Response and Recovery
14:58 Implications for the Cardano Ecosystem
18:10 Media Coverage and Misinformation
20:55 Lessons Learned and Future Steps
23:49 Final Thoughts and Community Support

DISCLAIMER: This content is for informational and educational purposes only and is not financial, investment, or legal advice. I am not affiliated with, nor compensated by, the project discussed; no tokens, payments, or incentives were received. I do not hold a stake in the project, including private or future allocations. All views are my own, based on public information. Always do your own research and consult a licensed advisor before investing. Crypto investments carry high risk, and past performance is no guarantee of future results. I am not responsible for any decisions you make based on this content.
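As background to the chain-partition discussion, here is a deliberately simplified sketch of how nodes converge after a fork: each node discards branches containing invalid blocks and adopts the best remaining chain. This toy uses a longest-valid-chain rule purely for illustration; Cardano's actual Ouroboros protocol uses different, density-based fork-choice rules, so none of this reflects Cardano's implementation:

```python
# Conceptual fork-choice sketch: after a partition, nodes converge on one
# branch. This longest-valid-chain toy only illustrates the "self-healing"
# idea; real Cardano (Ouroboros) works differently.

def is_valid(block):
    # Stand-in validity check; a malformed transaction invalidates a block.
    return not block.get("malformed", False)

def best_chain(chains):
    """Pick the longest chain in which every block is valid."""
    valid = [c for c in chains if all(is_valid(b) for b in c)]
    return max(valid, key=len) if valid else []

# Two branches produced during a partition; branch_b contains the bad block.
branch_a = [{"id": 1}, {"id": 2}, {"id": 3}]
branch_b = [{"id": 1}, {"id": 2}, {"id": "bad", "malformed": True}, {"id": 4}]

chosen = best_chain([branch_a, branch_b])
print([b["id"] for b in chosen])  # branch_a wins: [1, 2, 3]
```

The point of the sketch is the shape of the recovery: once the offending branch is rejected, all honest nodes independently select the same surviving chain, with no coordinated intervention needed.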
(And Why It Still Matters) Welcome to the Holistic Healing Hour, hosted by Grandpa Bill!
Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/
Learn more about AI Business Transformation Course: https://multiplai.ai/ai-course/

Are you prepared for the moment when your AI tools fail and take 20% of the internet with them?

This week was one of the most explosive in recent AI history. From Google's jaw-dropping Gemini 3 release to a stealth drop of Grok 4.1, plus the Cloudflare crash that wiped out access to ChatGPT for hours, the implications for business leaders are massive.

In this episode of the Leveraging AI Podcast, Isar Meitis unpacks the seismic shifts that happened across the AI landscape this week and what they mean for your business. If you're leading a team, scaling a company, or just trying to stay ahead of disruption, this is your AI cheat sheet.

Bottom line: Ignore this week's AI developments, and you risk falling behind. Fast.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
AGENDA:
04:47 Cursor Raises $2.3BN at $29BN Valuation
11:36 What Gemini 3 Means for Lovable, Cursor and Replit
30:54 Peter Thiel and SoftBank Sell NVIDIA: The Bubble Bursting?
48:54 Oracle Credit Default Swaps: The Risk Is Increasing
01:07:22 Stripe Does Tender at All-Time High: Why the Best Companies Will Never IPO
01:19:18 Why Retail Will Cause a Surge of Capital into VC Funds