"Silver Bells" by Grails from Miracle Music; "Sailing Past Midnight" by Peter Baumann from Nightfall; "Between" by Kara-Lis Coverdale from From Where You Came; "Flashes from Everywhere" by Stereolab from Instant Holograms on Metal; "Nauka O Planetah" by Shine Grooves from Sequences for Fluttering; "Final Generations" by Raisa K from Affectionately; "Mustafa (feat. Iko Niche)" by The Vernon Spring from Under A Familiar Sun; "Cruising with Mr. Scratch" by Cody Carpenter, John Carpenter, and Daniel Davies from Lost Themes 10th Anniversary Expanded Edition; "Lucent" by emptyset from Dissever; "Flutter" by Loscil from Lake Fire; "October" by Eiko Ishibashi from October; "A Symmetry of Faith" by Mamuthones from From Word to Flesh.
In this episode we sit down with 2BAB, host of 《二分电台》, to discuss technical trends in mobile app development. AB explains the difference between native and non-native development and the characteristics of cross-platform frameworks such as Flutter, React Native, and Kotlin Multiplatform (KMP). The guests also weigh the trade-offs of each technology choice, for example React Native's hot-update advantage and Flutter's UI consistency, as well as the rise of Kotlin as Android's official language. Finally, the show explores the prospects for on-device models on mobile, such as image semantic search and offline inference, and looks ahead to AI's potential impact on mobile development.
Guest: 2BAB (AB)
Hosts: laike9m, Manjusaka
Chapters:
00:14 Introduction to mobile development frameworks and defining native vs. non-native
07:03 The rise of React Native, its problems, and the challenge from Flutter
14:19 The development of Kotlin Multiplatform (KMP) and Jetpack Compose
23:22 KMP's popularity, React Native's value, and future directions
30:05 Electron's compromises and the importance of hot updates
37:43 Advice for getting started with mobile development and Flutter's future
42:57 Flutter's risks and competition from Kotlin
48:45 Applications and development of on-device models
55:10 Power consumption and use cases of on-device models
1:03:08 Privacy and security of on-device models
1:10:03 Summary and recommendations
Links:
React Native
Flutter
Kotlin Programming Language
Jetpack Compose
Kotlin Multiplatform (KMP)
Compose Multiplatform (CMP)
Skia - an open source 2D graphics library which provides common APIs that work across a variety of hardware and software platforms. It serves as the graphics engine for Google Chrome and ChromeOS, Android, Flutter, and many other products.
The Truth About React Native - YouTube
google/XNNPACK: High-efficiency floating-point neural network inference operators for mobile, server, and Web
React Native Panel hosted by Jamon Holmgren - Chiara Mooney, Eli White, Keith Kurak, Chris Traganos - YouTube
Gemini Nano
litert-community/Gemma3-1B-IT · Hugging Face
OpenAIDoc | a developer-friendly, one-stop documentation hub
《mono 女孩》
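The on-device image semantic search mentioned in the episode reduces to nearest-neighbour ranking in a shared embedding space: an on-device model maps both photos and text queries to vectors, and search sorts photos by cosine similarity to the query. A minimal sketch of that ranking step, with made-up toy vectors standing in for the output of a real on-device encoder:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, photo_index):
    """Rank photo names by similarity to the query embedding."""
    ranked = sorted(photo_index.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Toy 3-dimensional "embeddings"; a real app would use hundreds of
# dimensions produced by an on-device encoder.
photo_index = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "dog.jpg":    [0.1, 0.8, 0.2],
    "sunset.jpg": [0.7, 0.0, 0.4],
}

# A query embedding close to the "beach"-like photos.
print(search([1.0, 0.0, 0.1], photo_index))
# -> ['beach.jpg', 'sunset.jpg', 'dog.jpg']
```

Everything runs locally, which is exactly the offline-inference and privacy angle the episode discusses; only the vector index needs to live on the device.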
App Masters - App Marketing & App Store Optimization with Steve P. Young
In this episode, we're breaking down how you can build a revenue-generating app — without writing a single line of code. Our guest is Alim Charaniya, 4x founder and the brains behind Ambitious Labs. He's helped generate over $100M in app revenue, scaled apps from 10K to 2M+ users, and was previously Head of Mobile at PrizePicks, the #1 sports app on the App Store. Alim shares the exact blueprint for building with Flutter, AI tools, and no-code platforms — proving that launching a successful app business is more accessible than ever. You'll learn the secrets behind building fast, validating your idea early, launching MVPs and making real money — all without hiring expensive developers.
You will discover:
✅ Why most apps aren't actually hard to build — and how to get started
✅ How to validate and launch an MVP quickly (without burning cash)
✅ Monetization strategies that work for indie developers
✅ Why understanding user behavior is more important than code
Learn More:
Build your app without code (use this code and enjoy $500 off on their plans): https://start.ambitiouslabs.io/appmasters?utm_campaign=appmasters
Work with us: https://www.appmasters.com
Indie App Santa: https://www.indieappsanta.com
Get training, coaching, and community: https://appmastersacademy.com/
*********************************************
SPONSORS
You ever work with a tool where the support feels like... a ghost town? Yeah, not AppsFlyer. Their support team is massive — like 5x bigger than the industry average. They're global, 24/7, and actually helpful! If you're scaling your app and don't want to be left hanging, check out AppsFlyer dot com or book a demo by clicking this: https://tinyurl.com/AppsFlyerAM
*********************************************
Follow us:
YouTube: AppMasters.com/YouTube
Instagram: @App Masters
Twitter: @App Masters
TikTok: @stevepyoung
Facebook: App Masters
*********************************************
This week, we're bringing you an episode of Bold Names, which presents conversations with the leaders of the bold-named companies featured in the pages of The Wall Street Journal. On this episode, hosts Tim Higgins and Christopher Mims speak to Peter Jackson, the CEO of Flutter Entertainment, who leads a global sports betting empire. With the U.S.-based FanDuel as its crown jewel, he has a prime view of one of the fastest-growing and most profitable entertainment industries in the world. How is Flutter using technology to supercharge sports betting, while grappling with its potential harms? Learn more about your ad choices. Visit megaphone.fm/adchoices
Chad Benyon and Zach Warring review gaming company earnings. Chad likes high-end companies in the space, and notes that many in the lodging industry said April was a busy month. Zach points to weakness in China and Macau and says he's “pretty neutral” on the industry. However, he has a Buy rating on DraftKings (DKNG). Chad compares DKNG's potential report to Flutter's (FLUT) report yesterday.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV app - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – https://twitter.com/schwabnetwork
Follow us on Facebook – https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about
Calcium is one of the ions that move across the cellular membrane during cardiac contraction and relaxation. The primary use of calcium channel blockers in ACLS is for the treatment of stable, narrow complex tachycardias refractory to Adenosine and to lower the blood pressure of ischemic stroke patients with severe hypertension. Use of calcium channel blockers for SVT refractory to Adenosine and A-Fib or A-Flutter with RVR. Contraindications of calcium channel blockers. Nicardipine use during the treatment of ischemic strokes. For more information on ACLS medications, tachycardia, or stroke, check out the pod resource page at passacls.com. Good luck with your ACLS class!
Links:
Buy Me a Coffee at https://buymeacoffee.com/paultaylor
Practice ECG rhythms at Dialed Medics - https://dialedmedics.com/
Free Prescription Discount Card - Download your free drug discount card to save money on prescription medications for you and your pets: https://safemeds.vip
Pass ACLS Web Site - Episode archives & other ACLS-related podcasts: https://passacls.com
@Pass-ACLS-Podcast on LinkedIn
In this episode, I'm joined by Jeevan Jeyaratnam, Chief Betting Officer at Abelson Info, an industry leader in player market pricing and home to the settlement feeds behind many of today's major betting markets. Jeev shares insights from two decades in the betting world, including the evolution of player-focused markets like anytime goal scorers, shots, cards, and more. With Abelson pricing over 17,000 football matches annually, Jeev offers an expert deep dive into the nuts and bolts of compiling odds, managing data accuracy, and servicing leading firms like Flutter, Entain, William Hill, and Betway.
You can also support the SBC Podcast by visiting our podcast sponsor, Matchbook. You can get 150 days commission free with them via this link.
Key topics include:
How Abelson provides player market odds for 150+ global football competitions
The rise of markets like shots on target, fouls committed, and goalkeeper saves
The art of pricing in-play player markets, and why team news timing is critical
Examples of edge cases bettors can exploit (e.g. players in unusual positions, suspension tactics)
What happened when Henrikh Mkhitaryan was mispriced... and cost a bookmaker big
The role of feedback loops and how Abelson collaborates with major operators
What's next: Jeev reveals details about Abelson's upcoming Bet Builder product launch
(00:00) Intro & Jeevan's Role
(02:00) Career Beginnings & Super Soccer Origins
(05:30) Rise of Goal Scorer Market
(08:30) Abelson's Reach & Bookmaker Clients
(11:30) Custom Pricing & Bookmaker Strategies
(13:30) Data Collection and the Human Edge
(15:30) Adding New Markets & Role of Modelling
(18:30) Player Expectancies & Real-World Examples
(21:30) In-Play Pricing Adjustments
(24:30) Value in Pre-Match Windows
(27:30) The Penalty Taker Dilemma
(30:30) Goal Scorer Profiles & Outliers
(34:30) Pricing Successes and Mistakes
(38:30) Player Markets vs. Match Odds
(43:00) Punters & Perceptions
(47:00) Bet Builders & Market Evolution
(51:00) Brazilian Booking Loophole
(54:30) What's Next at Abelson
It's a fascinating episode for bettors interested in the mechanics behind player props and the value spots to target in today's dynamic markets.
In atrial fibrillation (A-Fib) and atrial flutter (A-Flutter), the electrical impulse for cardiac contraction arises in the atria but not from the heart's normal pacemaker, the SA node. The ECG characteristics of A-Fib and A-Flutter. Recognition and treatment of unstable patients in A-Fib/Flutter with rapid ventricular response (RVR). Suggested energy settings for synchronized cardioversion of unstable patients with a narrow complex tachycardia. Team safety when cardioverting an unstable patient in A-Fib/Flutter. Adenosine's role for stable SVT patients with underlying atrial rhythms. Treatment of stable patients in A-Fib/Flutter with RVR. For other medical podcasts that cover narrow complex tachycardias, visit the pod resource page at passacls.com. Good luck with your ACLS class!
Links:
Buy Me a Coffee at https://buymeacoffee.com/paultaylor
Practice ECG rhythms at Dialed Medics - https://dialedmedics.com/
Free Prescription Discount Card - Download your free drug discount card to save money on prescription medications for you and your pets: https://safemeds.vip/save
Pass ACLS Web Site - Episode archives & other ACLS-related podcasts: https://passacls.com
@Pass-ACLS-Podcast on LinkedIn
Welcome to another episode of Flying High with Flutter! In this episode, we have Eric Seidel, the co-founder of the Flutter project and former lead of Flutter and Dart at Google, as our guest. Eric shares his journey from leading the Flutter and Dart teams at Google to starting his company, Shorebird, which aims to solve real challenges for Flutter developers.We dive into the details of CodePush, Shorebird's flagship product, and how it enables seamless updates for Flutter apps. Eric also discusses the evolution of Flutter, the Dart language, and how Shorebird is building tools to enhance the Flutter ecosystem.
Peter Jackson, the CEO of Flutter Entertainment, leads a global sports betting empire. With the U.S.-based FanDuel as its crown jewel, he has a prime view of one of the fastest-growing and most profitable entertainment industries in the world. How is Flutter using technology to supercharge sports betting, while grappling with its potential harms? Jackson speaks to WSJ's Christopher Mims and Tim Higgins on the latest episode of the Bold Names podcast. Check Out Past Episodes: What This Former USAID Head Had to Say About Elon Musk and DOGE Why Bilt's CEO Wants You To Pay Your Mortgage With a Credit Card The CEO Who Says Cheaper AI Could Actually Mean More Jobs Let us know what you think of the show. Email us at BoldNames@wsj.com Sign up for the WSJ's free Technology newsletter. Read Christopher Mims's Keywords column. Read Tim Higgins's column. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Flying High with Flutter, we're joined by Dominik Tornow, principal engineer and author of Thinking in Distributed Systems. Dominik shares his journey into distributed systems, breaks down complex concepts like the CAP theorem, liveness vs. safety, and idempotency, and offers practical tips for building reliable and scalable systems. On the show:
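Of the concepts covered in the episode, idempotency is the easiest to show in a few lines: an operation is idempotent if retrying it has no additional effect. A common pattern in distributed systems is to deduplicate requests by a client-supplied idempotency key, so a retry after a timeout cannot apply a write twice. The names below are invented purely for this sketch:

```python
class PaymentService:
    """Toy service whose credit() call is made idempotent via request keys."""

    def __init__(self):
        self.balance = 0
        self._seen = {}  # idempotency key -> cached result of the first call

    def credit(self, idempotency_key, amount):
        if idempotency_key in self._seen:
            # A retry of a request we already applied: replay the old result
            # instead of mutating state again.
            return self._seen[idempotency_key]
        self.balance += amount
        result = {"balance": self.balance}
        self._seen[idempotency_key] = result
        return result

svc = PaymentService()
svc.credit("req-1", 100)
svc.credit("req-1", 100)  # client retry after a timeout; must not double-apply
print(svc.balance)  # -> 100
```

Without the key check, the retry would leave the balance at 200, which is exactly the class of bug idempotent design rules out.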
Join us as we chat with Mariano Mattei, visionary CIO, CISO, AI strategist, and author of Data-Driven Cybersecurity (formerly titled Security Metrics). In this episode, Mariano shares his expert insights on cybersecurity metrics, integrating AI into threat detection, and building secure apps from the ground up.
Join Allen Wyma and Arnaud Lauret, author of The Design of Web APIs, Second Edition, as they discuss creating user-friendly, secure, and efficient APIs.
I read from flutter kick to fly. The word of the episode is "fly". Use my special link https://zen.ai/thedictionary to save 30% off your first month of any Zencastr paid plan. Create your podcast today! #madeonzencastr Theme music from Tom Maslowski https://zestysol.com/ Merchandising! https://www.teepublic.com/user/spejampar "The Dictionary - Letter A" on YouTube "The Dictionary - Letter B" on YouTube "The Dictionary - Letter C" on YouTube "The Dictionary - Letter D" on YouTube "The Dictionary - Letter E" on YouTube "The Dictionary - Letter F" on YouTube Featured in a Top 10 Dictionary Podcasts list! https://blog.feedspot.com/dictionary_podcasts/ Backwards Talking on YouTube: https://www.youtube.com/playlist?list=PLmIujMwEDbgZUexyR90jaTEEVmAYcCzuq https://linktr.ee/spejampar dictionarypod@gmail.com https://www.facebook.com/thedictionarypod/ https://www.threads.net/@dictionarypod https://twitter.com/dictionarypod https://www.instagram.com/dictionarypod/ https://www.patreon.com/spejampar https://www.tiktok.com/@spejampar 917-727-5757
I read from flush to flutter. The word of the episode is "flutter". Use my special link https://zen.ai/thedictionary to save 30% off your first month of any Zencastr paid plan. Create your podcast today! #madeonzencastr Theme music from Jonah Kraut https://jonahkraut.bandcamp.com/ Merchandising! https://www.teepublic.com/user/spejampar "The Dictionary - Letter A" on YouTube "The Dictionary - Letter B" on YouTube "The Dictionary - Letter C" on YouTube "The Dictionary - Letter D" on YouTube "The Dictionary - Letter E" on YouTube "The Dictionary - Letter F" on YouTube Featured in a Top 10 Dictionary Podcasts list! https://blog.feedspot.com/dictionary_podcasts/ Backwards Talking on YouTube: https://www.youtube.com/playlist?list=PLmIujMwEDbgZUexyR90jaTEEVmAYcCzuq https://linktr.ee/spejampar dictionarypod@gmail.com https://www.facebook.com/thedictionarypod/ https://www.threads.net/@dictionarypod https://twitter.com/dictionarypod https://www.instagram.com/dictionarypod/ https://www.patreon.com/spejampar https://www.tiktok.com/@spejampar 917-727-5757
Send us a text
We are on a short spring break at AI for Kids. We look forward to seeing you all in May. In the meantime, check out this replay with Archi Marrapu, a remarkable young inventor.
• Explaining artificial intelligence as a "fake brain" that can mimic human intelligence and sometimes exceed human capabilities
• Creating Project Pill Tracker, a 3D-printed medication management system with AI features that prevent medication errors
• Working with tools like Arduino Uno kits, 3D printers, Flutter, and coding languages including Java and Python
• Starting with curiosity and coding as entry points to learning about AI
• Building confidence to overcome challenges and persist through failures
Links to Resources: Voyce, Project Pill Tracker, Onchi 3d printing, Autodesk Inventor, Tinkercad, Arduino Uno, Flutter app development, Android Studio, Java, NIH, StemifyGirls
Contact Archi: Archi Marrapu LinkedIn; Email: stemifygirls@gmail.com or founder.stemifygirls@gmail.com
Support the show
Help us become the #1 podcast for AI for Kids.
Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"
Social Media & Contact: Website: www.aidigitales.com Email: contact@aidigitales.com Follow Us: Instagram, YouTube
Gift or get our books on Amazon or Free AI Worksheets
Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform: Apple Podcasts, Amazon Music, Spotify, YouTube, Other
Like our content? Subscribe, or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
The Swim Set that Almost Killed Me: 4800m
Warm Up
200 Loosen w/ Fins and small paddles
800 as 2x(100 swim, 100 drill, 100 kick, 100 IM)
8x50 as (15 Fast Kick, 10 Sprint Free, 25 ez) on 15-20” Rest
100 Easy
Main Set
100 Time Trial
5x200 Pull Smooth w/ Medium Paddles
100 Time Trial
5x200 Pull Smooth with Big Paddles
Kick Set: with Fins
4x(2x100)
2 as Fly kick on Back
2 as Flutter kick on Back
2 as Fly kick w/ Board
2 as Flutter kick w/ Board
Warm Down
6x50
Youtube: http://www.youtube.com/@SwimSetsbythePool Patreon: patreon.com/SwimSetsbythePool Instagram: @swimsetsbythepool @gharpzorz @natefdot
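Read as written (taking the kick set as 8x100: two each of fly and flutter kick, on the back and with a board), the distances do sum to the 4800m in the title; a quick check of the arithmetic:

```python
# Totting up the set as written.
warm_up   = 200 + 800 + 8 * 50 + 100       # loosen + 800 + 8x50 + easy = 1500
main_set  = 100 + 5 * 200 + 100 + 5 * 200  # two time trials + 2x(5x200 pull) = 2200
kick_set  = 8 * 100                        # 4x(2x100) kick variants = 800
warm_down = 6 * 50                         # 300

total = warm_up + main_set + kick_set + warm_down
print(total)  # -> 4800
```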
In today's episode of iGaming Daily, brought to you by Optimove, iGaming Expert's Editor, Joe Streeter, is joined by SBC News' Editor, Ted Orme-Claye, to unpack the serious allegations leveled against Entain by Australia's financial watchdog, Austrac, over possible failures in its anti-money laundering practices. The two explore the findings of the report, Entain's regulatory track record, and what this means for the company's future. The duo also zooms out to examine Australia's broader regulatory environment, before shifting gears to discuss Flutter's latest acquisition move in Italy — a sign of the fast-changing global iGaming landscape.
To read more about the topics discussed in today's episode, click on the following links:
- https://igamingexpert.com/regions/asia/entain-austrac-aml-allegations/
- https://igamingexpert.com/regions/europe/flutter-snaitech-confirmation/
Host: Joe Streeter
Guest: Ted Orme-Claye
Producer: Anaya McDonald
Editor: James Ross
iGaming Daily is also now on TikTok. Make sure to follow us at iGaming Daily Podcast (@igaming_daily_podcast) | TikTok for bite-size clips from your favourite podcast. Finally, remember to check out Optimove at https://hubs.la/Q02gLC5L0 or go to Optimove.com/sbc to get your first month free when buying the industry's leading customer-loyalty service.
Jeff and Reuben share what they are most proud of doing in 2024. Nicholas didn't have anything.
Patients with a narrow complex tachycardia with a rate over 150 BPM are in SVT. Unstable patients in SVT, or V-Tach with a pulse, should be cardioverted with a synchronized shock. Assessment & treatment of stable tachycardic patients. Commonly used vagal techniques. A less common technique to stimulate the vagus nerve is the dive reflex. Indications and use of Adenosine for stable patients in SVT refractory to vagal maneuvers. Possible treatments for patients found to be in A-Fib or A-Flutter with RVR after administration of Adenosine. Carotid sinus massage. Additional medical podcasts that have episodes on tachycardia can be found on the pod resources page at passacls.com. Good luck with your ACLS class!
Links:
Buy Me a Coffee at https://buymeacoffee.com/paultaylor
Practice ECG rhythms at Dialed Medics - https://dialedmedics.com/
Safe Meds VIP - Learn about medication safety and download a free drug discount card to save money on prescription medications for you and your pets: https://safemeds.vip
Pass ACLS Web Site - Episode archives & other ACLS-related podcasts: https://passacls.com
@Pass-ACLS-Podcast on LinkedIn
A week out from the Aintree Festival, Nick was joined in the studio by sports broadcasting legend John Inverdale, broadcaster Cornelius Lysaght, trainer Nicky Henderson, author William Morgan & Flutter's Seb Butterworth. Lots of Aintree chat, plus hearty discussions on the industry as a whole.
Adenosine is the first IV medication given to stable patients with sustained supraventricular tachycardia (SVT) refractory to vagal maneuvers. Symptoms indicating a stable vs unstable patient. Common causes of tachycardia. Cardiac effects of Adenosine. Indications for use in the ACLS Tachycardia algorithm. Considerations and contraindications. Adenosine as a diagnostic for patients in A-Fib or A-Flutter with RVR. Dosing and administration. Other podcasts that cover common ACLS antiarrhythmics in more detail, and another covering the Brugada Criteria used to differentiate V-Tach from SVT with an aberrancy, can be found on the Pod Resources page at passacls.com.
Connect with me:
Website: https://passacls.com
@Pass-ACLS-Podcast on LinkedIn
Other Links:
Buy Me a Coffee at https://buymeacoffee.com/paultaylor
Practice ECG rhythms at Dialed Medics - https://dialedmedics.com/
Save money on prescription medications for you and your pets: https://nationaldrugcard.com/ndc3506
*Commissions may be earned from the above links.
Good luck with your ACLS class!
Microsoft and the Analytics Institute have awarded 150 people from 13 organisations a DataSMART Foundation Credential in Data Skills following an interactive Datathon event hosted by Microsoft in Dublin on Friday (March 21). The event enabled the participants to apply newly acquired data skills in a collaborative environment, demonstrating the practical benefits of this innovative programme. DataSMART, developed by Technology Ireland DIGITAL Skillnet in partnership with Microsoft and the Analytics Institute, is a trailblazing data literacy initiative designed for non-IT professionals who want to harness data strategically in their roles. Research shows that employees with core data skills advance faster in their careers, while organisations making data-driven decisions achieve accelerated market progress and revenue growth. Major companies including Bord Gais, Pfizer and Flutter have already enrolled employees in the programme, recognising the competitive advantage of building organisation-wide data literacy. The 12-week DataSMART programme equips professionals with foundational skills in data collection, cleaning, transformation, visualisation, and storytelling, while exploring cutting-edge topics like AI, data governance and emerging technologies. It is ideal for non-IT professionals seeking to build confidence in using data. DataSMART is particularly beneficial for large groups from the same organisation, as peer learning interactions deepen understanding and application of data concepts across departments. The programme's flexible format combines online tutorials with interactive workshops, allowing working professionals to acquire valuable data literacy competence in a cost-effective way that accommodates busy schedules.
Commenting, Maire Hunt, Network Director of Technology Ireland DIGITAL Skillnet, said: "The DataSMART initiative has been a resounding success, helping individuals across all sectors build the confidence and skills needed to navigate today's data-driven world. This programme has proven to be an ideal solution for organisations looking to upskill large groups of employees efficiently, ensuring their workforce can interpret and apply data to make better business decisions. "With its flexible, accessible learning approach and practical applications, DataSMART has opened new opportunities for both individuals and businesses, reinforcing the vital role of data skills in future-proofing Ireland's workforce." Russell Kane, Datathon Facilitator, of Microsoft, said: "It was fantastic to deliver a number of the online modules and subsequently to facilitate the in-person mid-review workshop as well as the full-day in-person datathon. The blended delivery method of the course allowed the participants to work through the contents of the modules at their own pace, while the in-person sessions provided invaluable opportunities for hands-on learning and real-time interactions. "The importance of this course cannot be overstated when it comes to upskilling all staff in understanding the significance and impact of data, particularly in our rapidly evolving era of AI. DataSMART empowers individuals to navigate the complexities of data with confidence, making informed decisions and driving innovation within their organisations. As AI continues to transform many sectors, equipping employees with robust data skills ensures that they remain agile and competitive in a data-driven world." Lorcan Malone, CEO, Analytics Institute, added: "The completion of the DataSMART pilot programme marks an important milestone in advancing data literacy at scale. 
The positive response from participating organisations highlights the growing recognition that data skills are essential across all roles and industries. By providing a structured yet flexible approach to learning, DataSMART has helped integrate data-driven thinking into everyday decision-making. We are excited to see how this initiative continues to support businesses in building...
We are working with Amplify on the 2025 State of AI Engineering Survey, to be presented at the AIE World's Fair in SF! Join the survey to shape the future of AI Eng!
We first met Snipd over a year ago and were immediately impressed by the design, but were doubtful about snipping as the app's headline behavior: podcast apps are enormously sticky - Spotify spent almost $1b in podcast acquisitions and exclusive content just to get an 8% bump in market share among normies. However, after a disappointing Overcast 2.0 rewrite with no AI features in the last 3 years, I finally bit the bullet and switched to Snipd. It's 2025; your podcast app should be able to let you search transcripts of your podcasts. Snipd is the best implementation of this so far. And yet they keep shipping: what impressed us wasn't just how this tiny team of 4 was able to bootstrap a consumer AI app against massive titans and do so well, but also how seriously they think about learning through podcasts and improving retention of knowledge over time, aka "Duolingo for podcasts". As an educational AI podcast, that's a mission we can get behind.
Full Video Pod
Find us on YouTube!
This was the first pod we've ever shot outdoors!
Show Notes
* How does Shazam work?
* Flutter/FlutterFlow
* wav2vec paper
* Perplexity Online LLM
* Google Search Grounding
* Comparing Snipd transcription with our Bee episode
* NIPS 2017 Flo Rida
* Gustav Söderström - Background Audio
Timestamps
* [00:00:03] Takeaways from AI Engineer NYC
* [00:00:17] Weather in New York.
* [00:00:26] Swyx and Snipd.
* [00:01:01] Kevin's AI summit experience.
* [00:01:31] Zurich and AI.
* [00:03:25] SigLIP authors join OpenAI.
* [00:03:39] Zurich is very costly.
* [00:04:06] The Snipd origin story.
* [00:05:24] Introduction to machine learning.
* [00:09:28] Snipd and user knowledge extraction.
* [00:13:48] App's tech stack, Flutter, Python.
* [00:15:11] How speakers are identified.
* [00:18:29] The concept of "backgroundable" video.
* [00:29:05] Voice cloning technology.
* [00:31:03] Using AI agents.
* [00:34:32] Snipd's future is multi-modal AI.
* [00:36:37] Snipd and existing user behaviour.
* [00:42:10] The app, summary, and timestamps.
* [00:55:25] The future of AI and podcasting.
* [1:14:55] Voice AI
Transcript
swyx [00:00:03]: Hey, I'm here in New York with Kevin Ben-Smith of Snipd. Welcome.
Kevin [00:00:07]: Hi. Hi. Amazing to be here.
swyx [00:00:09]: Yeah. This is our first ever, I think, outdoors podcast recording.
Kevin [00:00:14]: It's quite a location for the first time, I have to say.
swyx [00:00:18]: I was actually unsure because, you know, it's cold. It's like, I checked the temperature. It's like kind of one degree Celsius, but it's not that bad with the sun. No, it's quite nice. Yeah. Especially with our beautiful tea. With the tea. Yeah. Perfect. We're going to talk about Snips. I'm a Snips user. I'm a Snips user. I had to basically, you know, apart from Twitter, it's like the number one use app on my phone. Nice. When I wake up in the morning, I open Snips and I, you know, see what's new. And I think in terms of time spent or usage on my phone, I think it's number one or number two. Nice. Nice.
So I really had to talk about it also because I think people interested in AI want to think about like, how can we, we're an AI podcast, we have to talk about the AI podcast app. But before we get there, we just finished. We just finished the AI Engineer Summit and you came for the two days. How was it?Kevin [00:01:07]: It was quite incredible. I mean, for me, the most valuable was just being in the same room with like-minded people who are building the future and who are seeing the future. You know, especially when it comes to AI agents, it's so often I have conversations with friends who are not in the AI world. And it's like so quickly it happens that you, it sounds like you're talking in science fiction. And it's just crazy talk. It was, you know, it's so refreshing to talk with so many other people who already see these things and yeah, be inspired then by them and not always feel like, like, okay, I think I'm just crazy. And like, this will never happen. It really is happening. And for me, it was very valuable. So day two, more relevant, more relevant for you than day one. Yeah. Day two. So day two was the engineering track. Yeah. That was definitely the most valuable for me. Like also as a producer. Practitioner myself, especially there were one or two talks that had to do with voice AI and AI agents with voice. Okay. So that was quite fascinating. Also spoke with the speakers afterwards. Yeah. And yeah, they were also very open and, and, you know, this, this sharing attitudes that's, I think in general, quite prevalent in the AI community. I also learned a lot, like really practical things that I can now take away with me. Yeah.swyx [00:02:25]: I mean, on my side, I, I think I watched only like half of the talks. Cause I was running around and I think people saw me like towards the end, I was kind of collapsing. 
I was on the floor, like, uh, towards the end because I, I needed to get, to get a rest, but yeah, I'm excited to watch the voice AI talks myself.Kevin [00:02:43]: Yeah. Yeah. Do that. And I mean, from my side, thanks a lot for organizing this conference for bringing everyone together. Do you have anything like this in Switzerland? The short answer is no. Um, I mean, I have to say the AI community in, especially Zurich, where. Yeah. Where we're, where we're based. Yeah. It is quite good. And it's growing, uh, especially driven by ETH, the, the technical university there and all of the big companies, they have AI teams there. Google, like Google has the biggest tech hub outside of the U S in Zurich. Yeah. Facebook is doing a lot in reality labs. Uh, Apple has a secret AI team, open AI and then SwapBit just announced that they're coming to Zurich. Yeah. Um, so there's a lot happening. Yeah.swyx [00:03:23]: So, yeah, uh, I think the most recent notable move, I think the entire vision team from Google. Uh, Lucas buyer, um, and, and all the other authors of Siglip left Google to join open AI, which I thought was like, it's like a big move for a whole team to move all at once at the same time. So I've been to Zurich and it just feels expensive. Like it's a great city. Yeah. It's great university, but I don't see it as like a business hub. Is it a business hub? I guess it is. Right.Kevin [00:03:51]: Like it's kind of, well, historically it's, uh, it's a finance hub, finance hub. Yeah. I mean, there are some, some large banks there, right? Especially UBS, uh, the, the largest wealth manager in the world, but it's really becoming more of a tech hub now with all of the big, uh, tech companies there.swyx [00:04:08]: I guess. Yeah. Yeah. And, but we, and research wise, it's all ETH. Yeah. There's some other things. Yeah. Yeah. Yeah.Kevin [00:04:13]: It's all driven by ETH. And then, uh, it's sister university EPFL, which is in Lausanne. Okay. 
Um, which they're also doing a lot, but, uh, it's, it's, it's really ETH. Uh, and otherwise, no, I mean, it's a beautiful, really beautiful city. I can recommend. To anyone. To come, uh, visit Zurich, uh, uh, let me know, happy to show you around and of course, you know, you, you have the nature so close, you have the mountains so close, you have so, so beautiful lakes. Yeah. Um, I think that's what makes it such a livable city. Yeah.swyx [00:04:42]: Um, and the cost is not, it's not cheap, but I mean, we're in New York City right now and, uh, I don't know, I paid $8 for a coffee this morning, so, uh, the coffee is cheaper in Zurich than in New York City. Okay. Okay. Let's talk about Snipd. What is Snipd and, you know, then we'll talk about your origin story, but I just, let's, let's get a crisp, what is Snipd? Yeah.Kevin [00:05:03]: I always see two definitions of Snipd, so I'll give you one really simple, straightforward one, and then a second more nuanced, um, which I think will be valuable for the rest of our conversation. So the most simple one is just to say, look, we're an AI powered podcast app. So if you listen to podcasts, we're now providing this AI enhanced experience. But if you look at the more nuanced, uh, perspective, it's actually, we have a very big focus on people who, like your audience, listen to podcasts to learn something new. Like your audience, they want to learn about AI, what's happening, what's, what's the latest research, what's going on. And we want to provide a, a spoken audio platform where you can do that most effectively. And AI is basically the way that we can achieve that. Yeah.swyx [00:05:53]: Means to an end. Yeah, exactly. When you started. Was it always meant to be AI or is it, was it more about the social sharing?Kevin [00:05:59]: So the first version that we ever released was like three and a half years ago. Okay. Yeah. So this was before ChatGPT. Before Whisper. Yeah.
Before Whisper. Yeah. So I think a lot of the features that we now have in the app, they weren't really possible yet back then. But we already from the beginning, we always had the focus on knowledge. That's the reason why, you know, we in our team, why we listen to podcasts, but we did have a bit of a different approach. Like the idea in the very beginning was, so the name is Snipd and you can create these, what we call Snips, which is basically a small snippet, like a clip from a, from a podcast. And we did envision sort of like a, like a social TikTok platform where some people would listen to full episodes and they would snip certain, like the best parts of it. And they would post that in a feed and other users would consume this feed of Snips. And use that as a discovery tool or just as a means to an end. And yeah, so you would have both people who create Snips and people who listen to Snips. So our big hypothesis in the beginning was, you know, it will be easy to get people to listen to these Snips, but super difficult to actually get them to create them. So we focused a lot of, a lot of our effort on making it as seamless and easy as possible to create a Snip. Yeah.swyx [00:07:17]: It's similar to TikTok. You need CapCut for there to be videos on TikTok. Exactly.Kevin [00:07:23]: And so for, for Snips, basically whenever you hear an amazing insight, a great moment, you can just triple tap your headphones. And our AI actually then saves the moment that you just listened to and summarizes it to create a note. And this is then basically a Snip. So yeah, we built, we built all of this, launched it. And what we found out was basically the exact opposite. So we saw that people use the Snips to discover podcasts, but they didn't really love just listening to Snips. They love listening to long form podcasts, but they were creating Snips like crazy.
And this was, this was definitely one of these aha moments when we realized like, hey, we should be really doubling down on the knowledge of learning of, yeah, helping you learn most effectively and helping you capture the knowledge that you listen to and actually do something with it. Because this is in general, you know, we, we live in this world where there's so much content and we consume and consume and consume. And it's so easy to just at the end of the podcast. You just start listening to the next podcast. And five minutes later, you've forgotten everything. 90%, 99% of what you've actually just learned. Yeah.swyx [00:08:31]: You don't know this, but, and most people don't know this, but this is my fourth podcast. My third podcast was a personal mixtape podcast where I Snipped manually sections of podcasts that I liked and added my own commentary on top of them and published them as small episodes. Nice. So those would be maybe five to 10 minute Snips. Yeah. And then I added something that I thought was a good story or like a good insight. And then I added my own commentary and published it as a separate podcast. It's cool. Is that still live? It's still live, but it's not active, but you can go back and find it. If you're, if, if you're curious enough, you'll see it. Nice. Yeah. You have to show me later. It was so manual because basically what my process would be, I hear something interesting. I note down the timestamp and I note down the URL of the podcast. I used to use Overcast. So it would just link to the Overcast page. And then. Put in my note taking app, go home. Whenever I feel like publishing, I will take one of those things and then download the MP3, clip out the MP3 and record my intro, outro and then publish it as a, as a podcast. But now Snips, I mean, I can just kind of double click or triple tap.Kevin [00:09:39]: I mean, those are very similar stories to what we hear from our users. 
You know, it's, it's normal that you're doing, you're doing something else while you're listening to a podcast. Yeah. A lot of our users, they're driving, they're working out, walking their dog. So in those moments when you hear something amazing, it's difficult to just write them down or, you know, you have to take out your phone. Some people take a screenshot, write down a timestamp, and then later on you have to go back and try to find it again. Of course you can't find it anymore because there's no search. There's no command F. And, um, these, these were all of the issues that, that, that we encountered also ourselves as users. And given that our background was in AI, we realized like, wait, hey, this is. This should not be the case. Like podcast apps today, they're still, they're basically repurposed music players, but we actually look at podcasts as one of the largest sources of knowledge in the world. And once you have that different angle of looking at it together with everything that AI is now enabling, you realize like, hey, this is not the way that we, that podcast apps should be. Yeah.swyx [00:10:41]: Yeah. I agree. You mentioned something that you said your background is in AI. Well, first of all, who's the team and what do you mean your background is in AI?Kevin [00:10:48]: Those are two very different things. I'm going to ask some questions. Yeah. Um, maybe starting with, with my backstory. Yeah. My backstory actually goes back, like, let's say 12 years ago or something like that. I moved to Zurich to study at ETH and actually I studied something completely different. I studied mathematics and economics basically with this specialization for quant finance. Same. Okay. Wow. All right. So yeah. And then as you know, all of these mathematical models for, um, asset pricing, derivative pricing, quantitative trading. And for me, the thing that, that fascinates me the most was the mathematical modeling behind it. 
Uh, mathematics, uh, statistics, but I was never really that passionate about the finance side of things.swyx [00:11:32]: Oh really? Oh, okay. Yeah. I mean, we're different there.Kevin [00:11:36]: I mean, one just, let's say symptom that I noticed now, like, like looking back during that time. Yeah. I think I never read an academic paper about the subject in my free time. And then it was towards the end of my studies. I was already working for a big bank. One of my best friends, he comes to me and says, Hey, I just took this course. You have to, you have to do this. You have to take this lecture. Okay. And I'm like, what, what, what is it about? It's called machine learning and I'm like, what, what, what kind of stupid name is that? Uh, so he sent me the slides and like over a weekend I went through all of the slides and I just, I just knew like freaking hell. Like this is it. I'm, I'm in love. Wow. Yeah. Okay. And that was then over the course of the next, I think like 12 months, I just really got into it. Started reading all about it, like reading blog posts, starting building my own models.swyx [00:12:26]: Was this course by a famous person, famous university? Was it like the Andrew Ng Coursera thing? No.Kevin [00:12:31]: So this was an ETH course. So a professor at ETH. Did he teach in English by the way? Yeah. Okay.swyx [00:12:37]: So these slides are somewhere available. Yeah. Definitely. I mean, now they're quite outdated. Yeah. Sure. Well, I think, you know, reflecting on the finance thing for a bit. So I, I was, used to be a trader, uh, sell side and buy side. I was options trader first and then I was more like a quantitative hedge fund analyst. We never really used machine learning. It was more like a little bit of statistical modeling, but really like you, you fit, you know, your regression.Kevin [00:13:03]: No, I mean, that's, that's what it is.
And, uh, or you, you solve partial differential equations and have then numerical methods to, to, to solve these. That's, that's for you. That's your degree. And that's, that's not really what you do at work. Right. Unless, well, I don't know what you do at work. In my job. No, no, we weren't solving the partial differential. Yeah.swyx [00:13:18]: You learn all this in school and then you don't use it.Kevin [00:13:20]: I mean, we, we, well, let's put it like that. Um, in some things, yeah, I mean, I did code algorithms that would do it, but it was basically like, it was the most basic algorithms and then you just like slightly improve them a little bit. Like you just tweak them here and there. Yeah. It wasn't like starting from scratch, like, Oh, here's this new partial differential equation. How do we know?swyx [00:13:43]: Yeah. Yeah. I mean, that's, that's real life, right? Most, most of it's kind of boring or you're, you're using established things because they're established because, uh, they tackle the most important topics. Um, yeah. Portfolio management was more interesting for me. Um, and, uh, we, we were sort of the first to combine like social data with, with quantitative trading. And I think, uh, I think now it's very common, but, um, yeah. Anyway, then you, you went, you went deep on machine learning and then what? You quit your job? Yeah. Yeah. Wow.Kevin [00:14:12]: I quit my job because, uh, um, I mean, I started using it at the bank as well. Like try, like, you know, I like desperately tried to find any kind of excuse to like use it here or there, but it just was clear to me, like, no, if I want to do this, um, like I just have to like make a real cut. So I quit my job and joined an early stage, uh, tech startup in Zurich where then built up the AI team over five years. Wow. Yeah. 
So yeah, we built various machine learning, uh, things for, for banks from like models for, for sales teams to identify which clients like which product to sell to them and with what reasons all the way to, we did a lot, a lot with bank transactions. One of the actually most fun projects for me was we had an, an NLP model that would take the booking text of a transaction, like a credit card transaction, and prettify it. Yeah. Because it had all of these, you know, like numbers in there and abbreviations and whatnot. And sometimes you look at it like, what, what is this? And it was just, you know, it would just change it to, I don't know, CVS. Yeah.swyx [00:15:15]: Yeah. But I mean, would you have hallucinations?Kevin [00:15:17]: No, no, no. The way that everything was set up, it wasn't like, it wasn't yet fully end to end generative, uh, neural network as what you would use today. Okay.swyx [00:15:30]: Awesome. And then when did you go like full time on Snipd? Yeah.Kevin [00:15:33]: So basically that was, that was afterwards. I mean, how that started was the friend of mine who got me into machine learning, uh, him and I, uh, like he also got me interested into startups. He's had a big impact on my life. And the two of us would just jam on, on like ideas for startups every now and then. And his background was also in AI data science. And we had a couple of ideas, but given that we were working full time, we were thinking about, uh, so we participated in Hack Zurich. That's, uh, Europe's biggest hackathon, um, or at least was at the time. And we said, Hey, this is just a weekend. Let's just try out an idea, like hack something together and see how it works. And the idea was that we'd be able to search through podcast episodes, like within a podcast. Yeah. So we did that. Long story short, uh, we managed to do it like to build something that we realized, Hey, this actually works. You can, you can find things again in podcasts.
We had like a natural language search and we pitched it on stage. And we actually won the hackathon, which was cool. I mean, we, we also, I think we had a good, um, like a good, good pitch or a good example. So we, we used the famous Joe Rogan episode with Elon Musk where Elon Musk smokes a joint. Okay. Um, it's like a two and a half hour episode. So we were on stage and then we just searched for like smoking weed and it would find that exact moment. It would play it, and it just, like, comes on with Elon Musk just, like, smoking. Oh, so it was video as well? No, it was actually completely based on audio. But we did have the video for the presentation. Yeah. Which had a, had of course an amazing effect. Yeah. Like this gave us a lot of activation energy, but it wasn't actually about winning the hackathon. Yeah. But the interesting thing that happened was after we pitched on stage, several of the other participants, like a lot of them came up to us and started saying like, Hey, can I use this? Like I have this issue. And like some also came up and told us about other problems that they have, like very adjacent to this with a podcast, asking like, could, could I use this for that as well? And that was basically the, the moment where I realized, Hey, it's actually not just us who are having these issues with, with podcasts and getting to the, making the most out of this knowledge. Yeah. It's other people too. Yeah. That was now, I guess like four years ago or something like that. And then, yeah, we decided to quit our jobs and start, start this whole Snipd thing. Yeah. How big is the team now? We're just four people. Yeah. Just four people. Yeah. Like four. We're all technical. Yeah. Basically two on the, the backend side. So one of my co-founders is this person who got me into machine learning and startups. And we won the hackathon together. So we have two people for the backend side with the AI and all of the other backend things.
And two for the front end side, building the app.swyx [00:18:18]: Which is mostly Android and iOS. Yeah.Kevin [00:18:21]: It's iOS and Android. We also have a watch app for, for Apple, but yeah, it's mostly iOS. Yeah.swyx [00:18:27]: The watch thing, it was very funny because in the, in the Latent Space discord, you know, most of us have been slowly adopting snips. You came to me like a year ago and you introduced snip to me. I was like, I don't know. I'm, you know, I'm very sticky to overcast and then slowly we switch. Why watch?Kevin [00:18:43]: So it goes back to a lot of our users, they do something else while, while listening to a podcast, right? Yeah. And one of the, us giving them the ability to then capture this knowledge, even though they're doing something else at the same time is one of the killer features. Yeah. Maybe I can actually, maybe at some point I should maybe give a bit more of an overview of what the, all of the features that we have. Sure. So this is one of the killer features and for one big use case that people use this for is for running. Yeah. So if you're a big runner, a big jogger or cycling, like really, really cycling competitively and a lot of the people, they don't want to take their phone with them when they go running. So you load everything onto the watch. So you can download episodes. I mean, if you, if you have an Apple watch that has internet access, like with a SIM card, you can also directly stream. That's also possible. Yeah. So of course it's a, it's basically very limited to just listening and snipping. And then you can see all of your snips later on your phone. Let me tell you this error I just got.swyx [00:19:47]: Error playing episode. Substack, the host of this podcast, does not allow this podcast to be played on an Apple watch. Yeah.Kevin [00:19:52]: That's a very beautiful thing. So we found out that all of the podcasts hosted on Substack, you cannot play them on an Apple watch. Why is this restriction? What? 
Like, don't ask me. We try to reach out to Substack. We try to reach out to some of the bigger podcasters who are hosting the podcast on Substack to also let them know. Substack doesn't seem to care. This is not specific to our app. You can also check out the Apple podcast app. Yeah. It's the same problem. It's just that we actually have identified it. And we tell the user what's going on.swyx [00:20:25]: I would say we host our podcast on Substack, but they're not very serious about their podcasting tools. I've told them before, I've been very upfront with them. So I don't feel like I'm shitting on them in any way. And it's kind of sad because otherwise it's a perfect creative platform. But the way that they treat podcasting as an afterthought, I think it's really disappointing.Kevin [00:20:45]: Maybe given that you mentioned all these features, maybe I can give a bit of a better overview of the features that we have. Let's do that. Let's do that. So I think we're mostly in our minds. Maybe for some of the listeners.swyx [00:20:55]: I mean, I'll tell you my version. Yeah. They can correct me, right? So first of all, I think the main job is for it to be a podcast listening app. It should be basically a complete superset of what you normally get on Overcast or Apple Podcasts or anything like that. You pull your show list from ListenNotes. How do you find shows? You've got to type in anything and you find them, right?Kevin [00:21:18]: Yeah. We have a search engine that is powered by ListenNotes. Yeah. But I mean, in the meantime, we have a huge database of like 99% of all podcasts out there ourselves. Yeah.swyx [00:21:27]: What I noticed, the default experience is you do not auto-download shows. And that's one very big difference for you guys versus other apps, where like, you know, if I'm subscribed to a thing, it auto-downloads and I already have the MP3 downloaded overnight. For me, I have to actively put it onto my queue, then it auto-downloads. 
And actually, I initially didn't like that. I think I maybe told you that I was like, oh, it's like a feature that I don't like. Like, because it means that I have to choose to listen to it in order to download and not to... It's like opt-in. There's a difference between opt-in and opt-out. So I opt-in to every episode that I listen to. And then, like, you know, you open it and it depends on whether or not you have the AI stuff enabled. But the default experience is no AI stuff enabled. You can listen to it. You can see the snips, the number of snips and where people snip during the episode, which roughly correlates to interest level. And obviously, you can snip there. I think that's the default experience. I think snipping is really cool. Like, I use it to share a lot on Discord. I think we have tons and tons of just people sharing snips and stuff. Tweeting stuff is also like a nice, pleasant experience. But like the real features come when you actually turn on the AI stuff. And so the reason I got Snipd, because I got fed up with Overcast not implementing any AI features at all. Instead, they spent two years rewriting their app to be a little bit faster. And I'm like, it's 2025. I should have a podcast app that has transcripts that I can search. Very, very basic thing. Overcast will basically never have it.Kevin [00:22:49]: Yeah, I think that was a good, like, basic overview. Maybe I can add a bit to it with the AI features that we have. So one thing that we do every time a new podcast comes out, we transcribe the episode. We do speaker diarization. We identify the speaker names. Each guest, we extract a mini bio of the guest, try to find a picture of the guest online, add it. We break the podcast down into chapters, as in AI generated chapters. That one. That one's very handy. With a quick description per title and quick description per each chapter. We identify all books that get mentioned on a podcast. You can tell I don't use that one.
It depends on the podcast. There are some podcasts where the guests often recommend like an amazing book. So later on, you can you can find that again.swyx [00:23:42]: So you literally search for the word book or I just read blah, blah, blah.Kevin [00:23:46]: No, I mean, it's all LLM based. Yeah. So basically, we have we have an LLM that goes through the entire transcript and identifies if a speaker mentions a book, then we use the Perplexity API together with various other LLM orchestration to go out there on the Internet, find everything that there is to know about the book, find the cover, find who the author is, get a quick description of it for the author. We then check on which other episodes the author appeared on.swyx [00:24:15]: Yeah, that is killer.Kevin [00:24:17]: Because for me, if there's an interesting book, the first thing I do is I actually listen to a podcast episode with the writer, because he usually gives a really great overview already on a podcast.swyx [00:24:28]: Sometimes the podcast is with the person as a guest. Sometimes his podcast is about the person without him there. Do you pick up both?Kevin [00:24:37]: So, yes, we pick up both in like our latest models. But actually what we show you in the app, the goal is to currently only show you the guest to separate that. In the future, we want to show the other things more.swyx [00:24:47]: For what it's worth, I don't mind. Yeah, I don't think like if I like if I like somebody, I'll just learn about them regardless of whether they're there or not.Kevin [00:24:55]: Yeah, I mean, yes and no. We we we have seen there are some personalities where this can break down. So, for example, the first version that we released with this feature, it picked up much more often a person, even if it was not a guest. Yeah. For example, the best examples for me is Sam Altman and Elon Musk. Like they're just mentioned on every second podcast, even though they're not on there.
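The flow Kevin describes — an LLM pass over the transcript to spot book mentions, which then feeds a web-search step (Perplexity, in their case) for enrichment — could be sketched roughly like this. This is a minimal illustration, not Snipd's actual pipeline: the prompt text, the chunk budget, and the `llm` callable are all hypothetical stand-ins (in production the callable would wrap an OpenAI or Gemini request, with the search enrichment downstream):

```python
import json
from typing import Callable

# Hypothetical prompt; the real system prompt is not public.
EXTRACT_PROMPT = (
    "List every book explicitly mentioned or recommended in this podcast "
    "transcript chunk. Reply with a JSON array of {title, author} objects; "
    "reply [] if none."
)

def chunk_transcript(segments: list[str], max_chars: int = 4000) -> list[str]:
    """Greedily pack transcript segments into chunks that fit a prompt budget."""
    chunks, current = [], ""
    for seg in segments:
        if current and len(current) + len(seg) > max_chars:
            chunks.append(current)
            current = ""
        current += seg + " "
    if current:
        chunks.append(current)
    return chunks

def extract_books(segments: list[str], llm: Callable[[str], str]) -> list[dict]:
    """Run the extraction prompt over each chunk, de-duplicating by title.
    `llm` maps a prompt string to the model's raw JSON reply."""
    seen, books = set(), []
    for chunk in chunk_transcript(segments):
        for book in json.loads(llm(EXTRACT_PROMPT + "\n\n" + chunk)):
            key = book["title"].lower()
            if key not in seen:      # same book mentioned in several chunks
                seen.add(key)
                books.append(book)
    return books
```

Each de-duplicated hit would then be passed to the web-search step to fetch the cover, author bio, and other episodes featuring that author.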
And if you're interested in it, you can go to Elon Musk and actually, like, learn from them. Yeah, I see. And yeah, we updated our our algorithms, improved that a lot. And now it's gotten much better to only pick it up if they're a guest. And yeah, so this this is maybe to come back to the features, two more important features like we have the ability to chat with an episode. Yes. Of course, you can do the old style of searching through a transcript with a keyword search. But I think for me, this is this is how you used to do search and extracting knowledge in the in the past. Old school. And the AI way is basically an LLM. So you can ask the LLM, hey, when do they talk about topic X? If you're interested in only a certain part of the episode, you can ask it to give a quick overview of the episode, key takeaways, afterwards also to create a note for you. So this is really like very open, open ended. And yeah. And then finally, the snipping feature that we mentioned just to reiterate. Yeah. I mean, here the the feature is that whenever you hear an amazing idea, you can triple tap your headphones or click a button in the app and the AI summarizes the insight you just heard and saves that together with the original transcript and audio in your knowledge library. I also noticed that you you skip dynamic content. So dynamic content, we do not skip it automatically. Oh, sorry. You detect. But we detect it. Yeah. I mean, that's one of the thing that most people don't don't actually know that like the way that ads get inserted into podcasts or into most podcasts is actually that every time you listen. To a podcast, you actually get access to a different audio file and on the server, a different ad is inserted into the MP3 file automatically. Yeah. Based on IP. Exactly.
And that's what that means is if we transcribe an episode and have a transcript with timestamps, like word specific timestamps, if you suddenly get a different audio file, the whole timestamps are messed up and that's like a huge issue. And for that, we actually had to build another algorithm that would dynamically, on the fly, re-sync the audio that you're listening to with the transcript that we have. Yeah. Which is a fascinating problem in and of itself.swyx [00:27:24]: You sync by matching up the sound waves? Or like, or do you sync by matching up words like you basically do partial transcription?Kevin [00:27:33]: We are not matching up words. It's happening basically on a bytes level matching. Yeah. Okay.swyx [00:27:40]: It relies on this. It relies on the exact match at some point.Kevin [00:27:46]: So it's actually. We're actually not doing exact matches, but we're doing fuzzy matches to identify the moment. It's basically, we basically built Shazam for podcasts. Just as a little side project to solve this issue.swyx [00:28:02]: Actually, fun fact, apparently the Shazam algorithm is open. They published the paper, they talked about it. I haven't really dived into the paper. I thought it was kind of interesting that basically no one else has built Shazam.Kevin [00:28:16]: Yeah, I mean, well, the one thing is the algorithm. If you now talk about Shazam, the other thing is also having the database behind it and having the user mindset that if they have this problem, they come to you, right?swyx [00:28:29]: Yeah, I'm very interested in the tech stack. There's a big data pipeline. Could you share what is the tech stack?Kevin [00:28:35]: What are the most interesting or challenging pieces of it? So the general tech stack is our entire backend is, or 90% of our backend is written in Python. Okay. Hosting everything on Google Cloud Platform. And our front end is written with, well, we're using the Flutter framework.
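Because server-side ad insertion means every listener can receive a different MP3, the word-level timestamps of a stored transcript can drift out of sync with the audio actually being played. Kevin's fix is fuzzy, bytes-level matching; the sketch below shows a much-simplified version of the same idea, assuming we align coarse per-frame energy fingerprints by cross-correlation. Snipd's real algorithm is not public, and the frame size and fingerprint choice here are illustrative only:

```python
import numpy as np

def energy_fingerprint(samples: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Coarse per-frame log-energy signature of an audio stream."""
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    return np.log1p((frames.astype(np.float64) ** 2).mean(axis=1))

def estimate_offset(reference: np.ndarray, live: np.ndarray, frame: int = 1024) -> int:
    """Estimate, in frames, how far the live audio is shifted relative to the
    reference audio (the rendition the transcript was made from) by
    cross-correlating their fingerprints. A positive result means the shared
    content starts later in the live stream, e.g. because of a longer ad."""
    a = energy_fingerprint(reference, frame)
    b = energy_fingerprint(live, frame)
    a = a - a.mean()           # demean so silence-heavy audio doesn't dominate
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")
    return int(corr.argmax() - (len(a) - 1))
```

The estimated offset (converted back to seconds via the frame size and sample rate) can then be added to every word timestamp. A production system would re-estimate the offset per segment, since different ads can be spliced into different slots within one episode.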
So it's written in Dart and then compiled natively. So we have one code base that handles both Android and iOS. You think that was a good decision? It's something that a lot of people are exploring. So up until now, yes. Okay. Look, it has its pros and cons. Some of the, you know, for example, earlier, I mentioned we have an Apple Watch app. Yeah. I mean, there's no Flutter for that, right? So that you build native. And then of course you have to sort of like sync these things together. I mean, I'm not the front end engineer, so I'm just relaying this information, but our front end engineers are very happy with it. It's enabled us to be quite fast and be on both platforms from the very beginning. And when I talk with people and they hear that we are using Flutter, usually they think like, ah, it's not performant. It's super janky and everything. And then they use it. They use our app and they're always super surprised. Or if they've already used our app before I could tell them, they're like, what? Yeah. Um, so there is actually a lot that you can do with it.swyx [00:29:51]: The danger, the concern, there's a few concerns, right? One, it's Google. So when were they, when are they going to abandon it? Two, you know, they're optimized for Android first. So iOS is like a second, second thought, or like you can feel that it is not a native iOS app. Uh, but you guys put a lot of care into it. And then maybe three, from my point of view, JavaScript, as a JavaScript guy, React Native was supposed to be there. And I think that it hasn't really fulfilled that dream. Um, maybe Expo is trying to do that, but, um, again, it does not feel as productive as Flutter. And I've, I spent a week on Flutter and Dart, and I'm an investor in FlutterFlow, which is the low-code, uh, Flutter startup. That's doing very, very well. I think a lot of people are still Flutter skeptics. Yeah. Wait. So are you moving away from Flutter?Kevin [00:30:41]: I don't know.
We don't have plans to do that. Yeah.swyx [00:30:43]: You're just saying about that. What? Yeah. Watch out. Okay. Let's go back to the stack.Kevin [00:30:47]: You know, that was just to give you a bit of an overview. I think the more interesting things are, of course, on the AI side. So we, like, as I mentioned earlier, when we started out, it was before ChatGPT, before the ChatGPT moment, before there was the GPT-3.5 Turbo, uh, API. So in the beginning, we actually were running everything ourselves, open source models, tried to fine tune them. They worked, but let's, let's be honest, they weren't great. What was the sort of, before Whisper, the transcription? Yeah, we were using wav2vec. Like, um, there was a Google one, right? No, it was a Facebook, Facebook one. That was actually one of the papers. Like when that came out for me, that was one of the reasons why I said we, we should try something to start a startup in the audio space. For me, it was a bit like before that I had been following the NLP space, uh, quite closely. And as, as I mentioned earlier, we, we did some stuff at the startup as well, that I was working at. But before, and wav2vec was the first paper that I had at least seen where the whole transformer architecture moved over to audio and bit more general way of saying it is like, it was the first time that I saw the transformer architecture being applied to continuous data instead of discrete tokens. Okay. And it worked amazingly. Ah, and like the transformer architecture plus self-supervised learning, like these two things moved over. And then for me, it was like, Hey, this is now going to take off similarly, as the text space has taken off. And with these two things in place, even if some features that we want to build are not possible yet, they will be possible in the near term, uh, with this, uh, trajectory. So that was a little side, side note. Now, in the meantime, yeah, we're using Whisper.
We're still hosting some of the models ourselves. So for example, the whole transcription speaker diarization pipeline, uh,swyx [00:32:38]: You need it to be as cheap as possible.Kevin [00:32:40]: Yeah, exactly. I mean, we're doing this at scale where we have a lot of audio.swyx [00:32:44]: We're what numbers can you disclose? Like what, what are just to give people an idea because it's a lot. So we have more than a million podcasts that we've already processed when you say a million. So processing is basically, you have some kind of list of podcasts that you will auto process and others where a paying pay member can choose to press the button and transcribe it. Right. Is that the rough idea? Yeah, exactly.Kevin [00:33:08]: Yeah. And if, when you press that button or we also transcribe it. Yeah. So first we do the, we do the transcription. We do the. The, the speaker diarization. So basically you identify speech blocks that belong to the same speaker. This is then all orchestrated within, within LLM to identify which speech speech block belongs to which speaker together with, you know, we identify, as I mentioned earlier, we identify the guest name and the bio. So all of that comes together with an LLM to actually then assign assigned speaker names to, to each block. Yeah. And then most of the rest of the, the pipeline we've now used, we've now migrated to LLM. So we use mainly open AI, Google models, so the Gemini models and the open AI models, and we use some perplexity basically for those things where we need, where we need web search. Yeah. That's something I'm still hoping, especially open AI will also provide us an API. Oh, why? Well, basically for us as a consumer, the more providers there are.swyx [00:34:07]: The more downtime.Kevin [00:34:08]: The more competition and it will lead to better, better results. And, um, lower costs over time. I don't, I don't see perplexity as expensive. If you use the web search, the price is like $5 per a thousand queries. 
Okay, which is affordable. But if you compare that to just a normal LLM call, it's much more expensive. Have you tried Exa? We've looked into it, but we haven't really tried it. We started with Perplexity and it works well. And if I remember correctly, Exa is also a bit more expensive.swyx [00:34:45]: I don't know. They seem to focus on search as a search API, whereas Perplexity is maybe more of a consumer-y business with higher margins. I'll put it like this: Perplexity is trying to be a product, Exa is trying to be infrastructure. That would be my distinction there. The other thing I'll mention is that Google has a search grounding feature, which you might want.Kevin [00:35:07]: Yeah, we've also tried that out. Not as good. We didn't go into too much detail in really comparing it quality-wise, because we already had the Perplexity one and it's working. I think the price there is actually higher than Perplexity too.swyx [00:35:26]: Really? Google should cut their prices.Kevin [00:35:29]: Maybe it was the same price. I don't want to say something incorrect, but it wasn't cheaper, it wasn't compelling, so there was no reason to switch. In general, given that we work with a lot of content, price is something we do look at. For us it's not just about taking the best model for every task; it's really about identifying what intelligence level you need and then getting the best price for that, to be able to scale this and let our users use these features with as many podcasts as possible.swyx [00:36:03]: I wanted to double-click on diarization.
It's something that I don't think people do very well. You know, I'm a Bee user, and they were supposed to speak but dropped out last minute. We've had them on the podcast before, and it's not great yet. Do you use just pyannote, the default stuff, or do you find any tricks for diarization?Kevin [00:36:27]: So we do use the open source packages, but we have tweaked it a bit here and there. For example, since you mentioned the Bee guys: I actually listened to that podcast episode, it was super nice. Thank you. And when you started talking about speaker diarization, I just had to think about it. [crosstalk]Kevin [00:37:12]: So yeah, that of course helps us. Another thing that helps us is that we know certain structural aspects of the podcast. For example, how often does someone speak? If there's a one-hour episode and someone speaks for 30 seconds, that person is most probably not the guest and not the host; it's probably a speaker from an ad. So we have certain heuristics like that which we can leverage to improve things. And in the past we've also changed the clustering algorithm. A lot of speaker diarization works by creating an embedding for the speech that's happening, and then trying to cluster these embeddings to find out: this is all one speaker, this is all another speaker. There we've also tweaked a couple of things, where we again used heuristics we could apply from knowing how podcasts function.
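The "30 seconds in a one-hour episode is probably an ad" heuristic can be sketched as a simple post-processing pass over diarized segments. This is a hedged illustration of the idea, not Snipd's pipeline; the 2% threshold and the segment tuple shape are assumed values.

```python
def flag_ad_speakers(segments, min_share=0.02):
    """Flag diarized speaker clusters unlikely to be host or guest.

    Heuristic from the episode: in a long show, a cluster that only
    speaks for a tiny fraction of the runtime is probably an ad read,
    not a real participant.

    segments: list of (speaker_label, start_sec, end_sec) tuples
    returns:  set of speaker labels below the talk-time threshold
    """
    talk_time = {}
    total = 0.0
    for speaker, start, end in segments:
        dur = end - start
        talk_time[speaker] = talk_time.get(speaker, 0.0) + dur
        total += dur
    return {s for s, t in talk_time.items() if t / total < min_share}
```

As Kevin notes, this kind of prior only works because podcasts have predictable structure; a general-purpose recorder like Bee can't assume it.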
And that's also actually why I was feeling for the Bee guys so much, because for them it's probably almost impossible to use any heuristics like these; it can just be any situation, anything.Kevin [00:38:34]: So that's one thing that we do. Another thing is that we actually combine it with LLMs. So the transcript, LLMs, and the speaker diarization, bringing all of these together to recalibrate some of the switching points: when does one speaker stop, when does the next one start?swyx [00:38:51]: LLMs can add errors as well. I wouldn't feel safe using them to be so precise.Kevin [00:38:58]: At the end of the day, just to not give a wrong impression, the speaker diarization we're doing is also not perfect, right?swyx [00:39:08]: I basically don't really notice it. I use it for search.Kevin [00:39:09]: Yeah, it's not perfect yet, but it's gotten quite good. Especially if you take a latest episode and compare it to an episode that came out a year ago, we've improved it quite a bit.swyx [00:39:23]: Well, it's beautifully presented. I love that I can click on the transcript and it goes to the timestamp. So simple, but it should exist. Yeah, I agree. So I'm loading a two-hour episode of Detect Me Right Home, where there are a lot of different guests calling in, and you've identified the guest names. These are all LLM-based. It's really nice.Kevin [00:39:49]: Yeah, the speaker names.swyx [00:39:50]: I would say, and obviously I'm a power user of all these tools, you have done a better job than Descript. Descript has so much funding, OpenAI invested in them, and they still suck. So keep going. You're doing great. Yeah, thanks.
Kevin [00:40:12]: I would say, especially for anyone listening who's interested in building a consumer app with AI: if your background is in AI and you love working with AI, I think the most important thing is to keep reminding yourself of what's actually the job to be done here. What does the consumer actually want? For example, you were just delighted by the ability to click on a word and have it jump there. This is not rocket science. You don't have to be Andrej Karpathy to come up with that and build it, right? And I think that's something that's super important to keep in mind.swyx [00:40:52]: Yeah. Amazing. I mean, there are so many features; it's so packed. There are quotes that you pick up, there's summarization. Oh, by the way, I'm going to use this as my official feature request: I want to customize how it's summarized. I want a custom prompt, because your summarization is good, but I have different preferences, right?Kevin [00:41:14]: There's one thing you can already do today. And I completely get your feature request.swyx [00:41:18]: I'm sure people have asked for it.Kevin [00:41:19]: Maybe just in general, as for how I see the future: in the future, I think everything will be personalized. This is not specific to us. And today we're still in a phase where the cost of LLMs matters, at least if you're working with context windows as long as ours — there are a lot of tokens in an entire podcast — so you still have to take that cost into consideration. If we regenerated everything for every single user, it would get expensive.
But in the future this cost will continue to go down, and then it will just be personalized. That being said, you can already do it today: if you go to the player screen and open up the chat, you can just ask for a summary in your style.swyx [00:42:13]: Okay. I mean, I listen to consume, you know? I've never really used this feature. I think that's me being a slow adopter. When does the conversation start?Kevin [00:42:26]: You can just type anything. And I think what you're describing, maybe that's an interesting topic to talk about. I told you: look, we have this chat, you can just ask for it. And this is how ChatGPT works today. But if you're building a consumer app, you have to move beyond the chat box. People do not want to always type out what they want. So your feature request, even though it's theoretically already possible, what you're actually asking for is: I just want to open up the app and it should just be there, in a nicely formatted, beautiful way, such that I can read or consume it without any issues. Interesting. And I think that's in general where a lot of the opportunities currently lie in the market, if you want to build a consumer app: taking the capability and the intelligence, but finding the user interface that lets a user engage with this intelligence in a natural way.swyx [00:43:24]: This is something I've been thinking about as kind of AI that's not in your face. Because right now, we like to say, oh, Notion has Notion AI, and we have the little thing there. Or some other platform has the sparkle magic-wand emoji: that's our AI feature, use this. And it's really in your face. A lot of people don't like it.
It should just kind of become invisible, like an invisible AI.Kevin [00:43:49]: 100%. The way I see it, AI is the electricity of the future. We don't say this microphone uses electricity, or this phone; you don't think about it that way. It's just in there, right? It's not an electricity-enabled product. No, it's just a product. It will be the same with AI. Right now it's still something you use to market your product — we do the same — because people still realize, ah, they're doing something new. But at some point it'll just be a podcast app, and it will be normal that it has all of this AI in there.swyx [00:44:24]: I noticed you do something interesting in your chat where you source the timestamps. Is that part of the prompt? Is there a separate pipeline that adds the sources?Kevin [00:44:33]: This is actually part of the prompt. It's all prompt engineering. You should be able to click on it. Yeah, I clicked on it. It's all prompt engineering: how to provide the context — because we provide the whole transcript — and then getting the LLM to respond in a certain format, and rendering that on the front end. This is one of the examples where I would say it's so easy to create a quick demo of this. You can just go to ChatGPT, paste the transcript in and say, do this. Fifteen minutes and you're done. But getting this to production level, where it actually works 99% of the time, that's where the difference lies.
So for this specific feature, we actually also have countless regexes that are just there to correct certain things the LLM is doing, because it doesn't always adhere to the format correctly, and then it looks super ugly on the front end. So we have certain regexes that correct that. And maybe you'd ask: why don't you use an LLM for that? Because that's, again, the AI-native way — who uses regexes anymore? But for the chat user experience it's very important that you have streaming, because otherwise you wait so long until your message arrives. So we're streaming live, just like ChatGPT: you get the answer and it's streaming the text. And if you're streaming the text and something is incorrect, it's currently not easy to pipe this into another stream and get a stream back that corrects it. That would be amazing. I don't know, maybe you can answer that. Do you know of any?swyx [00:46:19]: There's no API that does this. You cannot stream in. If you own the models, you can start loading whatever token sequence has been emitted into the next one. If you fully own the models — otherwise it's probably not worth it. Most engineers who are new to AI research and benchmarking actually don't know how much regexing there is that goes on in normal benchmarks. It's just this ugly list of a hundred different matches for some criteria you're looking for. No, it's very cool. I think it's an example of real-world engineering. Do you have tooling that you're proud of that you've developed for yourselves? Is it just a test script, or...?Kevin [00:47:02]:
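The format-correcting regexes Kevin mentions might look something like the following: a cleanup pass that normalizes whatever timestamp citation format the LLM happened to emit into one canonical form the front end can render as a clickable link. The patterns here are illustrative assumptions, not Snipd's actual rules.

```python
import re

# Match citations like "(12:34)", "[1:02:03]", "(00:05:09)" — hours optional,
# square or round brackets — so they can all be rewritten as "[hh:mm:ss]".
_TS = re.compile(r"[\[\(](?:(\d{1,2}):)?(\d{1,2}):(\d{2})[\]\)]")

def normalize_citations(text):
    """Rewrite every timestamp citation in `text` to canonical [hh:mm:ss]."""
    def fix(m):
        hours = int(m.group(1) or 0)
        return f"[{hours:02d}:{int(m.group(2)):02d}:{m.group(3)}]"
    return _TS.sub(fix, text)
```

Because the pass is a pure function on a string, it can be applied to each flushed chunk of a streamed response — which is exactly why a second correcting LLM stream is awkward and a regex is not.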
I think it's a bit more — I guess the term that has come up is vibe coding. No, sorry, that's actually something else in this case. Vibe evals was a term someone brought up in one of the talks, I think on the first day of the conference, because a lot of the talks were about evals, which is so important. And yeah, for us it's a bit more vibe evals. That's also part of being a startup: we can take risks, we can take the cost of it maybe sometimes failing a little bit or being a little bit off, and our users know that, and in return they appreciate that we're moving fast, iterating, and building amazing things. At a Spotify or something like that, half of our features would probably be in a six-month review through legal or I don't know what before they could roll them out.swyx [00:48:04]: Let's just say Spotify is not very good at podcasting. I have a documented dislike for their podcast features overall. Any other LLM-focused engineering challenges or problems that you want to highlight?Kevin [00:48:20]: I think it's not unique to us, but it goes again in the direction of handling the uncertainty of LLMs. For example, at the end of last year we did sort of a Snipd Wrapped, and we thought it would be fun to do something with an LLM and the snips that a user has. Three of the, let's say, unique LLM features: one was that we assigned a personality to you based on the snips that you have. It was all, I guess, a bit of a fun, playful way. I'm going to look up mine.
I forgot mine already.swyx [00:48:57]: I don't know whether it's actually still in the app; we all took screenshots of it.Kevin [00:49:01]: Ah, we posted it in the Discord. The second one was a learning scorecard, where we identified the topics you snipped on the most, and you got a little score for that. And the third one was a quote that stood out, and the quote is actually a very good example of where that uncertainty shows up. We would run it for a user, and most of the time it was an interesting quote, but every now and then it was a super boring quote where you'd think: why did you select that? Come on. The solution there was actually just to say: hey, give me five. So it extracted five quotes as candidates, and then we piped them into a different model as a judge — LLM-as-a-judge — and there we used a much better model, because with the initial model, as I mentioned earlier, we do have to look at the costs, since so much text goes into it. So there we use a cheaper model, but then the judge can be a really good model that just chooses one out of five. That's a practical example.swyx [00:50:03]: I can't find it. Bad search in Discord. So you do recommend having a much smarter model as a judge, and that works for you. Interesting. I think this year I'm very interested in LLM-as-a-judge being more developed as a concept. For things like Snipd Wrapped it's fine; it's entertaining, there's no right answer.Kevin [00:50:29]: We also use the same concept for our books feature, where we identify the books mentioned.
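The extract-five-then-judge pattern Kevin describes can be sketched as a two-stage function. Both LLM callables are stubs here, and the prompts and return shapes are assumptions for illustration; the structure — cheap extractor, strong judge — is the point.

```python
def pick_best_quote(transcript, extract_llm, judge_llm, n=5):
    """Two-stage LLM-as-a-judge selection, as described in the episode.

    extract_llm: cheap model, callable(prompt) -> list of n candidate quotes
                 (cheap because it has to read the whole transcript)
    judge_llm:   stronger model, callable(prompt) -> index of best candidate
                 (affordable because it only ever sees n short quotes)
    """
    candidates = extract_llm(
        f"Extract the {n} most interesting quotes from this transcript:\n{transcript}"
    )
    best = judge_llm(
        "Pick the single most interesting quote (reply with its number):\n"
        + "\n".join(f"{i}. {q}" for i, q in enumerate(candidates))
    )
    return candidates[best]
```

The cost asymmetry is what makes this work: the expensive judge's input is five short strings, while the full transcript's tokens only ever hit the cheap model.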
Because there it's the same thing: 90% of the time it works perfectly out of the box, one-shot, and every now and then it just starts identifying books that were not really mentioned, or that are not books, or it starts making up books. So there we basically have the same thing: another LLM challenging it. And actually with the speakers, now that I think about it, we do the same. So I think it's a great technique.swyx [00:51:05]: Interesting. You run a lot of calls.Kevin [00:51:07]: Yeah.swyx [00:51:08]: Okay. You mentioned costs. You moved from self-hosting a lot of models to the big lab models, OpenAI and Google. No Anthropic?Kevin [00:51:18]: We love Claude. In my opinion, Claude is the best one when it comes to the way it formulates things. The personality. I actually really love it. But yeah, the cost is still high.swyx [00:51:36]: So you tried Haiku, but you're like, you have to have Sonnet?Kevin [00:51:40]: With Haiku we haven't experimented too much. We obviously work a lot with 3.5 Sonnet — for coding, in Cursor, and in general for brainstorming; I think it's a great brainstorm partner. But for a lot of the things we've done, we opted for different models.swyx [00:52:00]: What I'm trying to drive at is how much cheaper can you get if you go from closed models to open models? Maybe it's 0% cheaper, maybe 5%, maybe 50%. Do you have a sense?Kevin [00:52:13]: It's very difficult to judge that.
I don't really have a sense, but I can give you a couple of thoughts that have gone through our minds over time. We do realize that, given we have a couple of tasks where so many tokens go in, at some point it will make sense to offload some of that to an open source model. But going back to: we're a startup, right? We're not an AI lab. For us, the most important thing is to iterate fast, because we need to learn from our users and improve. And for that velocity of iteration, the closed models hosted by OpenAI and Google are just unbeatable, because it's just an API call. You don't need to worry about so much complexity behind it. That's the biggest reason why we're not doing more in this space. But there are other thoughts for the future. I see basically two different usage patterns of LLMs. One is the pre-processing of a podcast episode, the initial processing: transcription, speaker diarization, chapterization. We do that once, and that usage pattern is quite predictable, because we know how many podcasts get released and when, so we can provision a certain capacity and run it 24/7 — one big queue running 24/7.swyx [00:53:44]: What's the queue job runner? Is it Django, the Python one?Kevin [00:53:49]: No, that's just our own: our database and the backend talking to the database, picking up jobs, writing results back.swyx: I'm just curious about orchestration and queues.Kevin: We of course have a lot of other orchestration too, where we use the Google Pub/Sub thing.
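The "database as the job queue" pattern Kevin describes — the backend polls a jobs table, claims a pending episode, processes it, and writes the result back — can be sketched with SQLite. This is a minimal illustration under assumed schema and status names, not Snipd's backend, and a production version would also need retries and worker-crash handling.

```python
import sqlite3

def claim_job(conn):
    """Atomically claim one pending job; return its id, or None if idle."""
    with conn:  # transaction: the claim commits before we start working
        row = conn.execute(
            "SELECT id FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        return row[0]

def complete_job(conn, job_id, result):
    """Write the processing result back and mark the job done."""
    with conn:
        conn.execute(
            "UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
            (result, job_id),
        )
```

A worker loop then just alternates `claim_job` and `complete_job`; because episode release volume is predictable, the queue depth stays roughly constant and capacity can be provisioned once, which is the point Kevin is making.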
So we have this usage pattern of very predictable usage, where we can max out capacity. And then there's this other pattern where, for example with the snips, a user action triggers an LLM call and it has to be real time. There can be moments of high usage and moments with very little usage. That's where the LLM API calls are just perfect, because you don't need to worry about scaling up and scaling down and handling those issues. Serverless versus serverful.swyx [00:54:44]: Yeah, exactly. Okay.Kevin [00:54:45]: I see OpenAI and all of these other providers a bit as the AWS of AI. It's a bit similar to how, before AWS, you would have to have your own servers and buy new servers or get rid of servers, and then with AWS it became so much easier to just ramp things up and down. This is taking that to the next level for AI.swyx [00:55:18]: I am a big believer in this. Basically it's intelligence on demand. We're probably not using it enough in our daily lives. We should be able to spin up a hundred things at once, go through things, and then stop. And I feel like we're still trying to figure out how to use LLMs in our lives effectively.Kevin [00:55:38]: 100%. I think that goes back to where the big opportunity is, if you want to do a startup: you can let the big labs handleswyx [00:55:48]: the challenge of more intelligence, but it's the existing intelligence. How do you integrate it? How do you actually incorporate it into your life? AI engineering. Okay, cool. The one other thing I wanted to touch on was multimodality in frontier models.
Dwarkesh had an interesting application of Gemini recently, where he just fed raw audio in and got diarized transcription with timestamps out. And I think that will come. So basically what we're saying here is another wave of transformers eating things, because right now it's pretty much single-modality pipelines: you have Whisper, you have a pipeline, and everything. You can't just say, no, no, we only feed in the raw files. Do you think that will be realistic for you?Kevin [00:56:38]: I 100% agree. Basically everything we talked about earlier, the speaker diarization and heuristics and everything — I completely agree that in the future you would just put everything into a big multimodal LLM and it will output everything you want. I've also experimented with that, just for fun.swyx: With Gemini 2?Kevin: With Gemini 2.0 Flash. The big difference right now is still the cost: doing speaker diarization or transcription this way is a huge difference compared to the pipeline we've built up.swyx [00:57:15]: I need to figure out what that cost is, because in my mind 2.0 Flash is so cheap. But maybe not cheap enough for you.Kevin [00:57:23]: No, I mean, if you compare it to Whisper and speaker diarization, and especially self-hosting it... But we will get there, right? This is just a question of time. And as soon as that happens, we'll be the first ones to switch.swyx [00:57:33]: Awesome. Anything else you're eyeing on the horizon? Like, we're thinking about this feature, we're thinking about incorporating this new AI functionality into our app?
Yeah.Kevin [00:57:50]: I mean, there are so many areas that we're thinking about; our challenge is a bit more about choosing. Looking at the next couple of years, there are basically four big areas that interest us a lot. One is content. Right now it's podcasts — I think you mentioned you can also upload audiobooks and YouTube videos. YouTube, I actually use a fair amount. But in the future we want to have audiobooks natively in the app, and we want to enable AI-generated content. Just think of deep research and NotebookLM: put these together, and that should be in our app. The second area is discovery. I think in general. Yeah.swyx [00:58:38]: I noticed that you don't have, so you
Whilst not technically a St Patrick's Day special, this week's episode was recorded in Dublin, at the Royal Irish Academy of Music, on St Paddy's weekend, with Irish flute royalty. Professor William Dowdall, former principal flute of the National Symphony Orchestra of Ireland, joins me to chat about vibrato, flutter tongue, The Cleveland Orchestra, blending, orchestral excerpts and Guinness. Sláinte lads, Éirinn go Brách x
Inline G Merch
In this episode, Simon and Beto discuss the latest findings from the State of React Native survey, highlighting trends in developer backgrounds, platform usage, income levels, and the evolving landscape of libraries and tools in the React Native ecosystem. They delve into the increasing popularity of local storage solutions, deep linking, and the rise of Zustand in state management, while also addressing the challenges and opportunities for solo developers in the mobile app space. They then turn to the current state and future of React Native, covering Expo Router usage, styling trends, graphics and animations, component libraries, debugging tools, architecture adoption, build processes, AI in code generation, and community sentiment, and highlight the improvements in developer experience and the shift towards a more native approach in React Native development.
Learn React Native - https://galaxies.dev
Alberto Moedano
Beto X: https://twitter.com/betomoedano
Beto YouTube: https://www.youtube.com/@codewithbeto
Code with Beto Courses: https://codewithbeto.dev/
Links
State of React Native Survey: https://results.stateofreactnative.com/en-US/
Takeaways
The State of React Native survey had over 3,000 participants, indicating growing interest.
A significant number of React Native developers come from backend backgrounds.
Solo developers can effectively use Expo and React Native to build apps.
The trend towards local-first applications is gaining traction in the developer community.
Deep linking is becoming increasingly important for app navigation.
Zustand is rising in popularity as a state management solution.
Inline styling remains a popular choice among developers.
Expo is working on a new UI component library to enhance native app development.
The future of data syncing and local storage solutions looks promising with new technologies.
Expo Router is seeing increased usage and feedback is being actively incorporated.
Styling in React Native is evolving, with inline styles gaining popularity due to AI tools.
Graphics and animations are best handled with libraries like Reanimated and Skia.
Component libraries are declining, indicating a shift towards more flexible styling solutions.
Debugging tools are improving, with new options like Radon IDE and Atlas for Expo.
The adoption of the new React Native architecture is growing, with many developers migrating successfully.
EAS Build is the preferred method for building applications, offering automation and a free tier.
AI is becoming a significant part of the coding process, with many developers relying on it for code generation.
Cross-platform frameworks are consolidating, with React Native and Flutter leading the way.
The community sentiment around React Native is positive, with excitement for future developments.
In the latest episode of iGaming Daily, brought to you by Optimove, the conversation turns to the US in the wake of Flutter's financial report and DraftKings' NFT class action lawsuit. SBC Media's Managing Editor, Jessica Welman, is joined by Ted Orme-Claye, Editor of Payment Expert, fresh from the sunny beaches of Rio, to delve further into Flutter's report, highlighting the company's significant profit and the transformation of the US market. In the second half, Jessica leans on Ted's payments expertise to look into the implications of DraftKings' $10m settlement regarding its NFT marketplace, exploring regulatory scrutiny and the potential for future event contracts in the betting landscape.
To read more of the topics discussed in today's podcast, click on the following links:
- https://sbcamericas.com/2025/02/27/draftkings-class-action-settlement-nft/
- https://sbcnews.co.uk/sportsbook/2025/03/04/flutter-fy2024-profits/
Host: Jessica Welman
Guest: Ted Orme-Claye
Producer: Anaya McDonald
Editor: James Ross
iGaming Daily is also now on TikTok. Make sure to follow us at iGaming Daily Podcast (@igaming_daily_podcast) | TikTok for bite-size clips from your favourite podcast. Finally, remember to check out Optimove at https://hubs.la/Q02gLC5L0 or go to Optimove.com/sbc to get your first month free when buying the industry's leading customer-loyalty service.
Jon and Morgan lead a packed hour, covering today's roller-coaster ride in the market from every angle. We talk tariff concerns with BCA Research's Marko Papic and former Walmart US CEO Bill Simon, analyzing the geopolitical and retail implications. Bespoke's Paul Hickey and JPMorgan's Phil Camporeale assess broader market moves. Earnings coverage includes Ross Stores, Nordstrom, CrowdStrike, and Flutter, plus Cantor Fitzgerald's Eric Johnston on the macro picture and Joel Fishbein on cybersecurity giant CrowdStrike.
Charlie Deutsch reflects on his remarkable season, sharing insights into his journey, horsemanship, and the importance of patience in racing. He discusses the relationships he has built within the industry, his upcoming races, and the joys of fatherhood, providing a comprehensive look at both his professional and personal life.
In this episode of Flying High with Flutter, we're joined by Arek Borucki, author of MongoDB in Action, Third Edition and a seasoned Principal Database Engineer. Arek shares his journey with MongoDB, discusses running databases on Kubernetes, and compares MongoDB to other databases. We also explore MongoDB 8's latest features, ACID compliance, and when MongoDB might not be the right choice. Plus, Arek dives into MongoDB Atlas, Atlas CLI, and how to get started with these powerful tools.
Calcium is one of the ions that move across the cellular membrane during cardiac contraction and relaxation. The primary use of calcium channel blockers in ACLS is for the treatment of stable, narrow complex tachycardias refractory to Adenosine and to lower the blood pressure of ischemic stroke patients with severe hypertension.
Use of calcium channel blockers for SVT refractory to Adenosine and A-Fib or A-Flutter with RVR.
Contraindications of calcium channel blockers.
Nicardipine use during the treatment of ischemic strokes.
For more information on ACLS medications, tachycardia, or stroke check out the pod resource page at passacls.com.
Connect with me:
Website: https://passacls.com
@Pass-ACLS-Podcast on LinkedIn
Give Back & Help Others: Your support helps cover the monthly cost of software and podcast & website hosting. Donations at Buy Me a Coffee at https://buymeacoffee.com/paultaylor are appreciated and will help ensure others can benefit from these tips as well.
Good luck with your ACLS class!
Helpful Listener Links:
Practice ECG rhythms at Dialed Medics - https://dialedmedics.com/ (free to anyone in the U.S.)
Save $$ on prescription medications for you and your pets with National Drug Card - https://nationaldrugcard.com/ndc3506*
*Indicates affiliate links. I may get paid a small commission if you purchase products or memberships using my link. It doesn't affect the price you pay.
In this episode I want us to talk about the state of Flutter and React Native in 2025. The truth is that it's easy to choose between them based on the needs you have. Both options are incredible, with success stories on both sides.
Source: https://www.nomtek.com/blog/flutter-vs-react-native
Talk Python To Me - Python conversations for passionate developers
As Python developers, we're incredibly lucky to have over half a million packages over at PyPI that we can use to build our applications. However, when it comes to choosing a UI framework, the options narrow down very quickly. Intersect those choices with the ones that work on mobile, and you have a very short list. Flutter is a UI framework for building desktop and mobile applications, and is in fact the one we used to build the Talk Python courses app you'd find at talkpython.fm/apps. That's why I'm so excited about Flet. Flet is a Python UI framework that is distributed and executed on the Flutter framework, making it possible to build mobile apps and desktop apps with Python. We have Feodor Fitsner back on the show, a couple of years after he launched the project, to give us an update on how close they are to a full-featured mobile app framework in Python.
Episode sponsors: Posit, Podcast Later, Talk Python Courses
Links from the show:
Flet: flet.dev
Flet on GitHub: github.com
Packaging apps with Flet: flet.dev/docs/publish
Flutter: flutter.dev
React vs. Flutter: trends.stackoverflow.co
Kivy: kivy.org
Beeware: beeware.org
Mobile forge from Beeware: github.com
The list of built-in binary wheels: flet.dev/docs/publish/android#binary-python-packages
Difference between dynamic and static Flet web apps: flet.dev/docs/publish/web
Integrating Flutter packages: flet.dev/docs/extend/integrating-existing-flutter-packages
serious_python: pub.dev/packages/serious_python
Watch this episode on YouTube: youtube.com
Episode transcripts: talkpython.fm
Stay in touch with us:
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
In atrial fibrillation (A-Fib) and atrial flutter (A-Flutter), the electrical impulse for cardiac contraction originates in the atria but not from the heart's normal pacemaker, the SA node.
The ECG characteristics of A-Fib and A-Flutter.
Recognition and treatment of unstable patients in A-Fib/Flutter with rapid ventricular response (RVR).
Suggested energy settings for synchronized cardioversion of unstable patients with a narrow-complex tachycardia.
Team safety when cardioverting an unstable patient in A-Fib/Flutter.
Adenosine's role for stable SVT patients with underlying atrial rhythms.
Treatment of stable patients in A-Fib/Flutter with RVR.
For other medical podcasts that cover narrow-complex tachycardias, visit the pod resource page at passacls.com.
In this episode of Flying High with Flutter, we're joined by Renaldi Gondosubroto, the author of Amazon Bedrock in Action. Renaldi shares his expertise on Amazon Bedrock, AI/GenAI, and how these tools integrate seamlessly with Flutter. Whether you're curious about implementing AI in your app or want to understand Bedrock's place within the AWS ecosystem, this episode is packed with insights for developers of all levels.
In this episode of Flying High with Flutter, we're joined by Vadym Grin, author of Emotional Digital Design and Head of Product Design at Atolls. Vadym shares insights from his book, discusses the design process, explores the role of developers in crafting great user experiences, and more! Whether you're a designer, developer, or simply curious about UI/UX, this episode is packed with valuable insights!
Join us for an exciting episode of Flying High with Flutter as we sit down with Christopher Trudeau, the author of Django in Action. Christopher shares his journey into Django, explores how Django and Flutter can work together, and offers invaluable advice for developers building APIs for their Django apps.
Timestamps:
00:00 Meet Christopher Trudeau
03:55 How Christopher got into Django
21:48 Django and Flutter
35:08 Advice for building APIs with Django
41:57 Closing discussion
On the show:
In this episode of Flying High with Flutter, we're joined by Immanuel Trummer, the author of LLMs in Action and an associate professor at Cornell University specializing in large-scale data analysis. Immanuel shares his insights on large language models (LLMs), how they work, their potential future, and the challenges of privacy and AI. Whether you're curious about how GPT prompting works or intrigued by the ethics and implications of AI in real-world applications, this episode is packed with valuable knowledge for developers and AI enthusiasts alike!
In this episode of Rocket Ship, Simon Grimm interviews Jack Herrington, a prominent figure in the React Native and Next.js communities. They discuss the challenges and changes in the React Native ecosystem, and the exciting developments around Module Federation and React Server Components (RSCs). Jack shares his experiences with React Native, the benefits of using Expo, and the performance gains associated with RSCs. The conversation also touches on the skepticism surrounding new technologies and the gradual adoption within the industry. In this conversation, Simon and Jack discuss the evolving landscape of React Server Components (RSCs), the impact of AI on app customization, and the rise of AI-driven development tools. They explore the integration of ShadCN, the future of universal apps, and compare RSCs with other frameworks like Svelte and Solid. The discussion highlights the challenges and innovations in the development community, particularly in relation to state management and the potential for AI to transform user experiences. 
They also delve into the ongoing debate between React Native and Flutter, highlight new features in React 19, and explore the potential of building custom Chrome extensions.
Learn React Native - https://galaxies.dev
Jack Herrington
X: https://x.com/jherr
YouTube: https://www.youtube.com/@jherr
GitHub: https://github.com/jherr
Links:
Frontend Fire Podcast: https://front-end-fire.com/
Pro Next.js Course: https://www.pronextjs.dev/
Zephyr: https://www.zephyr-cloud.io/
Takeaways:
Module Federation allows for remote module updates without app store submissions.
RSCs can improve performance by reducing client-side rendering time.
The adoption of RSCs in the industry is slow due to existing codebases and frameworks.
Jack's journey with React Native has been cyclical, returning to it multiple times.
Performance gains with RSCs can be significant, especially on slower devices.
Skepticism exists around new technologies like RSCs, impacting their adoption.
Incremental adoption paths for frameworks can ease transitions for large companies.
RSCs are still in development and face challenges.
AI can significantly enhance app customization for users.
Cursor is a popular AI-driven development tool that many developers prefer.
ShadCN offers exciting possibilities for UI infrastructure.
The concept of universal apps is becoming more feasible.
The development landscape is shifting towards AI integration.
Frameworks like Qwik handle hydration differently than React.
Solid and Svelte have similar functionalities to RSCs.
AI models require extensive code examples for effective training.
Zustand is gaining popularity in state management.
Zustand has gained popularity as a state management library.
Atomic state management allows for automatic updates based on dependencies.
Choosing the right state management tool depends on the application's needs.
React 19 introduces significant changes, especially with RSCs.
Building custom Chrome extensions can enhance productivity and provide unique solutions.
The debate between React Native and Flutter continues with no clear winner.
Using the simplest state management solution is often the best approach.
Understanding the context of your application is crucial for state management decisions.
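The takeaways above mention Zustand and subscription-based state management. As a rough, dependency-free illustration of the underlying pattern (a store that notifies subscribers when state changes), here is a minimal Python sketch; the names `Store`, `subscribe`, and `set_state` are illustrative and are not Zustand's actual API.

```python
# A minimal subscribe/notify store, loosely in the spirit of the
# Zustand-style state management discussed in the episode.
# All names here are illustrative; this is not Zustand's real API.

class Store:
    def __init__(self, initial_state):
        self._state = dict(initial_state)
        self._listeners = []

    def get_state(self):
        # Return a copy so callers can't mutate internal state directly.
        return dict(self._state)

    def set_state(self, **updates):
        # Merge updates, then notify every subscriber with the new state.
        self._state.update(updates)
        for listener in list(self._listeners):
            listener(self.get_state())

    def subscribe(self, listener):
        # Returns an unsubscribe function, mirroring the common JS pattern.
        self._listeners.append(listener)
        return lambda: self._listeners.remove(listener)


store = Store({"count": 0})
seen = []
unsubscribe = store.subscribe(lambda state: seen.append(state["count"]))
store.set_state(count=1)
store.set_state(count=2)
unsubscribe()
store.set_state(count=3)  # no longer observed by the listener
```

The point of the pattern, echoed in the takeaways, is that consumers react to updates automatically rather than polling, and that the simplest store that fits the application is usually the right one.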
Flutter Entertainment CEO Peter Jackson also explained the origin of the company's name, and how betting is different in the United States.
The Drunk Guys drink a Lancelot of beer this week when they read The Once and Future King by TH White. They have beer once and will have more in the future, including: Triple Broccoli by Other Half and Flutter by Finback Brewery. Join the Drunk Guys next Tuesday for
In this episode of The Hockey Journey Podcast, Rem is joined by our podcast producer, Mike Schwartz. Mike founded MusicFit Records to provide artists with the tools to build a sustainable music career. He's also an artist-producer, multi-instrumentalist, and the Trusted Authority of Musician Wellness. Their conversation explores the profound connection between music, mindset, and athletic performance, emphasizing how the power of sound can influence our emotional and physical states. Mike shares his unique journey from being a farm kid in Alberta to becoming a creative force in the world of music and athletics. He discusses the importance of understanding the frequencies of music and how they can impact performance, recovery, and overall well-being. As they delve into the science behind sound, Mike highlights many facts about music and how we can play with it to become the best version of ourselves.
Musical Frequencies and Athletic Performance: Discover how different frequencies can enhance training and recovery, and learn about the innovative methods Mike uses to help athletes tap into their full potential. They discuss the significance of mindset in sports, emphasizing that the stories we tell ourselves shape our beliefs and actions.
Connection to Heritage: Mike also discusses reconnecting with his Indigenous roots and how these cultural narratives inform his understanding of creativity and community. The conversation is a heartfelt reminder of the power of love, connection, and the creative spirit in the journey to become the best version of yourself.
Chapters:
(00:00) Intro
(01:20) Meet Mike Schwartz
(05:00) The intersection of music and athletics
(12:45) Understanding musical frequencies
(20:30) The importance of mindset
(30:15) Music as a tool for healing
(40:00) Cultural narratives and creativity
(50:10) Closing thoughts and future aspirations
Connect with Mike Schwartz:
Patients with a narrow-complex tachycardia at a rate over 150 BPM are in SVT. Unstable patients in SVT, or V-Tach with a pulse, should be cardioverted with a synchronized shock.
Assessment & treatment of stable tachycardic patients.
Commonly used vagal techniques.
A less common technique to stimulate the vagus nerve is the dive reflex.
Indications and use of Adenosine for stable patients in SVT refractory to vagal maneuvers.
Possible treatments for patients found to be in A-Fib or A-Flutter with RVR after administration of Adenosine.
Carotid sinus massage.
Additional medical podcasts that have episodes on tachycardia can be found on the pod resources page at passacls.com.
Today's conversation is about QuintoAndar. In this episode, we talk about how QuintoAndar recognized that it was no longer a small startup, and adjusted its processes and its architecture to handle the challenges faced by the top tier of Brazilian tech companies. Here's who joined the conversation:
Paulo Silveira, the host who nails it without rehearsing
Paulo Golgher, CTO at QuintoAndar
Rafael Castro, VP of Engineering at QuintoAndar
Happy Monday, everyone! I hope you had a fantastic weekend hauling in multiple large and unruly gamefish. If gamefish aren't your bag, hopefully you wrangled some respectable roughfish. We like 'em all here at AFP and salute you for whatever you pursue. What do you say we dive into today's podcast topics? Here's what we're covering:
- Fishing for smallies in a frozen wind tunnel
- Catching prehistoric beasts with my best friend
- Hunting mid-winter walleyes and crappies
- Humminbird finally stepped into the forward-facing sonar age
- The FishLab Flutter Nymph has got to be seen to be believed
And SO MUCH MORE!
Sit back, crack open a cold one, and relax. You're among friends. This isn't another fishing podcast. This is... Another Fishing Podcast!
Check out Angling Uploaded on these platforms:
Facebook: https://www.facebook.com/anglinguploaded
Instagram: https://www.instagram.com/anglinguploaded
Rumble: https://www.rumble.com/anglinguploaded
YouTube: https://www.youtube.com/anglinguploaded
Today's conversation is about GraphQL on mobile. In this episode, we talk through the history of GraphQL, from the problems it was created to solve to its ecosystem, what is (and isn't) GraphQL's responsibility, the advantages and disadvantages of GraphQL versus REST, and much more. Here's who joined the conversation:
André David, the host who is by now the traditional co-host
Vinny Neves, Front-End Lead at Alura
Yago Oliveira, Technical Content Coordinator at Alura
William Bezerra, Instructor at Alura and Senior Engineer at QuintoAndar
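One of the GraphQL-versus-REST trade-offs the episode touches on is that a single GraphQL request can let a mobile client name exactly the fields it needs, where a REST client might make several round trips and receive full resources. As a minimal sketch of that idea, here is how such a GraphQL request payload is typically assembled; the endpoint, schema, and field names below are hypothetical, for illustration only.

```python
import json

# Hypothetical REST approach: two requests, each returning full resources,
# even if the mobile screen only renders a couple of fields from each.
rest_requests = [
    "GET /users/42",           # returns the whole user object
    "GET /users/42/listings",  # returns whole listing objects
]

# Equivalent GraphQL approach: one POST whose query document selects only
# the fields the client actually renders. GraphQL requests are usually sent
# as a JSON body with "query" and "variables" keys.
query = """
query UserWithListings($id: ID!) {
  user(id: $id) {
    name
    listings(first: 5) { title price }
  }
}
"""
payload = json.dumps({"query": query, "variables": {"id": "42"}})

# This string is what a client would POST to a /graphql endpoint.
decoded = json.loads(payload)
```

The design choice under discussion: the REST version makes the server define the response shape per endpoint, while the GraphQL version moves that decision to the client, at the cost of a more complex server and caching story.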
Adenosine is the first IV medication given to stable patients with sustained supraventricular tachycardia (SVT) refractory to vagal maneuvers.
Symptoms indicating a stable vs. unstable patient.
Common causes of tachycardia.
Cardiac effects of Adenosine.
Indications for use in the ACLS Tachycardia algorithm.
Considerations and contraindications.
Adenosine as a diagnostic for patients in A-Fib or A-Flutter with RVR.
Dosing and administration.
Other podcasts covering common ACLS antiarrhythmics in more detail, and one covering the Brugada criteria used to differentiate V-Tach from SVT with an aberrancy, can be found on the Pod Resources page at passacls.com.
We're chomping down on our first Paradox Pokémon of the season: Flutter Mane. It's time to cook up this ghostly, dinosaur, time-traveling alien, and to wash it all down we also dive deep into the new Switch 2 developments, a hot dog experience unlike any other, and a sandwich rivalry that's unearthing regional tensions.
Palestinian Children's Relief Fund: https://www.pcrf.net/
Follow Us: @ichewspod
Email Us: ichewspod@gmail.com
Join our Discord: https://discord.gg/K27CHpz3Fx
Visit our website and merch store: https://ichewsyou.menu/
Welcome to another episode of the Hockey Journey podcast, where we delve into the power of language and mindset with your host Rem Pitlick from Online Hockey Training and our guest today, Sonic StoryWork Artist and our show's Producer, Mike Schwartz. Listen in as Mike and Rem explore the profound impact of words on our lives, drawing inspiration from the book "The Four Agreements" and the concept of conflict versus architect language. Discover how music, language, and personal stories intertwine to shape our experiences and relationships. Plus, hear about practical techniques for improving communication and building stronger connections. Tune in for an 'enlifted' conversation! Find Mike in the wild as "bravebear." at https://heytherebravebear.com And be sure to check out https://www.MusicFitRecords.com as well. Music Credits Intro Music: "Flutter" - bravebear. Outro Music: "KEEPGOING" - mike. (Mike Stud) ✍️ Episode References The Four Agreements: Enlifted Method: