Podcasts about Debugging

  • 520 PODCASTS
  • 906 EPISODES
  • 46m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 19, 2025 LATEST
Latest podcast episodes about Debugging

The Biotech Startups Podcast

"My mind is analytical – so I wondered ‘what is the most exciting, impactful thing I could do?' I felt we're all growing older, so where is the technology of aging? How well do we understand the biochemistry of us as humans?" In this episode, Michael Antonov, Co-Founder of Oculus and Founder & CEO of Deep Origin, shares his bold leap from pioneering virtual reality to tackling the complexities of human aging and biotech innovation. Driven by a lifelong curiosity about biology and the mysteries of longevity, Michael reveals how his software background inspired him to "debug" biology, build better research tools, and launch Formic Ventures to back high-risk, high-reward startups. He discusses his vision for integrating computational and biological approaches, the promise of cellular reprogramming, and his mission to create a new ecosystem that empowers researchers and accelerates breakthroughs in extending human healthspan.

Search Off the Record
Debugging the Internet: HTTP, TCP, and You

Search Off the Record

Play Episode Listen Later May 15, 2025 33:25


In this episode of Search Off the Record, Gary Illyes and Martin Splitt from the Google Search team dive deep into the foundations of how the web works, specifically HTTP, TCP, UDP, and newer technologies like QUIC and HTTP/3. The two reflect on how even experienced web professionals often overlook or forget the mechanics behind these core protocols, sharing insights through technical discussion, playful banter, and analogies ranging from messenger pigeons to teapots. The conversation spans key concepts like packet transmission, connection handshakes, and the importance of status codes such as 404, 204, and even 418 ("I'm a teapot"). Throughout the conversation, they connect these protocols back to real-world implications for site owners, developers, and SEOs, like why Search Console might report network errors, and how browser or server behavior is influenced by low-level transport decisions. With a mix of humor and expertise, Gary and Martin aim to demystify a crucial part of the internet's infrastructure and remind listeners of the layered complexity that makes modern web experiences possible.

Resources:
Episode transcript → https://goo.gle/sotr091-transcript
Listen to more Search Off the Record → https://goo.gle/sotr-yt
Subscribe to Google Search Channel → https://goo.gle/SearchCentral

Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team. #SOTRpodcast #SEO #Http
Speakers: Lizzi Sassman, John Mueller, Martin Splitt, Gary Illyes
Products Mentioned: Search Console - General
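The status codes the episode covers, including 418, are all in Python's standard-library registry, so you can poke at them without touching the network; a small sketch (stdlib only; 418 requires Python 3.9+):

```python
from http import HTTPStatus

# Print the reason phrase for each status code mentioned in the episode.
for code in (204, 404, 418):
    status = HTTPStatus(code)
    print(code, status.phrase)
```

`HTTPStatus` also exposes `is_success`, `is_client_error`, and similar predicates, which is handy when classifying responses the way Search Console does.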

DevZen Podcast
Vinyl Distortions - Episode 499

DevZen Podcast

Play Episode Listen Later May 8, 2025 79:00


In this episode: TPU and mind expansion for 3D-printing folks. Sick bastards spin test records. Stack-trace-based test differentiation instead of color-based.
[00:01:55] What we learned this week: https://makerworld.com/models/1031969 https://makerworld.com/models/729562
[00:17:54] Valera's gramophone, part 2: Hana SH MK II (HANA PHONO CARTRIDGES), Ortofon Test Record
[00:54:07] Differential Coverage for Debugging
[01:11:02] #темы499
Chat log in Telegram. Voices of this episode: Alex, Vanya, Valera,… Read more →
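The "Differential Coverage for Debugging" segment is about comparing what code a failing test executes against what passing tests execute; a minimal sketch of the idea in Python (the function names are made up for illustration):

```python
# Differential coverage: code touched only by the failing test is the
# most suspicious place to start debugging.

passing_coverage = {"parse_input", "validate", "render_output"}  # hit by a passing test
failing_coverage = {"parse_input", "validate", "encode_legacy"}  # hit by the failing test

# Suspects: executed by the failing run but never by the passing one.
suspects = failing_coverage - passing_coverage
print(sorted(suspects))  # ['encode_legacy']
```

In practice the sets would come from a coverage tool recording per-test execution data rather than being written out by hand.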

core.py
Beta Frenzy

core.py

Play Episode Listen Later May 6, 2025 79:11


Python 3.14 Beta 1 is coming! And that means we reach feature freeze. BUT QUICK, there's still time to squeeze in one last thing!

Timestamps:
(00:00:00) INTRO
(00:01:58) PART 1: Template strings
(00:07:10) PART 2: Asyncio Introspection
(00:29:07) PART 3: Syntax highlighting
(00:43:00) PART 4: Color themes
(00:50:56) PART 5: Debugging a remote process with pdb
(01:01:35) PART 6: Python Installation Manager for Windows
(01:05:29) PART 7: Worship
(01:08:53) PART 8: What else is happening?
(01:16:03) OUTRO

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 515: Job Security Is Dead in the Age of AI. What You Should Do Instead.

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Apr 30, 2025 28:49


AI didn't just rewrite the rules, it burned the old playbook. The idea of long-term "job security"? Gone. But here's the twist: this isn't a threat. It's a wake-up call. Because the people who adapt aren't just surviving, they're winning. The rest? Still playing by rules that no longer exist. Let's talk about what to do next.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI's Impact on Job Security
Future of Full-Time Employment
Clarity AI CEO's Personal Experiences
Coping with AI Layoff Anxiety
Shopify's AI Workforce Strategy
Upskilling to Future-Proof Careers
Personal Branding & Networking Importance
Emergency Plans for AI Layoffs

Timestamps:
00:00 "Job Security's End in AI Era"
03:59 Navigating AI Layoff Anxiety
08:00 Future-Proofing Company Leadership
11:34 "Why Companies Struggle with Training"
14:35 "AI Predictions Impacting Job Security"
20:17 "Financial Preparedness & Career Resilience"
22:30 "Future-Ready Skills for Developers"
24:53 Work Trends and Future Job Security

Keywords: Job security in the age of AI, Future of work, Upskill, Reskill, Full-time employment, AI tools, Clarity AI, AI layoffs, Layoff anxiety, Personal branding, Networking, Gig economy, Upwork, Video editing, Debugging generated code, AI systems, Senior positions, Entry-level jobs, Skill set changes, Emergency plan, Financial stability, AI prototyping, Data solutions, Freelance, Income diversification, Automation, Digital twins, Roadmap to careers, AI influencer, Modern careers, AI-driven marketplaces, Team restructuring.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

Gradient Dissent - A Machine Learning Podcast by W&B
Inside Cursor: The future of AI coding with Co-founder Sualeh Asif

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Apr 29, 2025 49:36


In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor's creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow. Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.

Timestamps:
00:00 How Cursor got started and why it took off
04:50 Switching from Vim to VS Code and the rise of CoPilot
08:10 Why Cursor won among competitors: product philosophy and execution
10:30 How user data and feedback loops drive Cursor's improvements
12:20 Iterating on AI agents: what made Cursor hold back and wait
13:30 Competitive coding background: advantage or challenge?
16:30 Making coding fun again: latency, flow, and model choices
19:10 Building Cursor's infrastructure: from GPUs to indexing billions of files
26:00 How Cursor prioritizes compute allocation for indexing
30:00 Running massive ML infrastructure: surprises and scaling lessons
34:50 Why Cursor chose DeepSeek models early
36:00 Where AI agents are heading next
40:07 Debugging and evaluating complex AI agents
42:00 How coding workflows will change over the next 2–3 years
46:20 Dream future projects: AI for reading codebases and papers

Adafruit Industries
Deep Dive w/Scott: CircuitPython debugging

Adafruit Industries

Play Episode Listen Later Apr 26, 2025 125:52


Join Scott for his last deep dive before more paternity leave. He'll be debugging CircuitPython code. Not sure what. He'll also answer any questions people ask and provide an update on the garbage collection improvements. Visit the Adafruit shop online - http://www.adafruit.com

0:00 Getting started
1:14 Hello everyone - welcome to Deep Dive w/Scott
1:37 CircuitPython - runs on Adafruit Metro RP2350 and many other microprocessors
2:43 Scott's last Deep Dive for the year - then Dad mode and part time
4:00 Question about databases on CP - BTree and SQLite
6:43 New 16 inch Framework Laptop (upgraded from 13 inch)
9:00 Garbage collection last week
10:30 Met with MicroPython developers on Tuesday
11:35 Adjusting GC stack speeds things up
13:24 Working on USB PR commit #10821
16:57 CI scheduler overview
19:07 Shrinking USB static buffers to get the CI to run again
23:10 Merge the "fix" into gc_selective_collect
25:05 Test mimxrt10xx port
28:59 Looking at linker scripts for sizes
38:00 Checking the esp-idf template memory.ld.in
42:00 Found "Internal SRAM0 and SRAM1" in datasheet
45:49 Back to PR 10293
51:00 Investigate second stage boot loader
54:30 Linker scripts "digression"
1:09:00 Switch to the Fruit Jam
1:13:30 Fruit Jam booted
1:17:39 Debug the startup crash
1:25:30 Audio playback buffers
1:33:00 Examine RP2350 datasheet DMA and continue to debug DMA
1:50:22 Maybe console UART running background task triggers issue
1:52:43 audio_dma_stop called twice?
1:58:27 Any final questions? adfru.it/discord
2:04:00 Wrap up - thanks to Tim for future deep dives

LIVE CHAT IS HERE! http://adafru.it/discord
Subscribe to Adafruit on YouTube: http://adafru.it/subscribe
New tutorials on the Adafruit Learning System: http://learn.adafruit.com/

Python Podcast
Live from DjangoCon Europe 2025 in Dublin - Day 2

Python Podcast

Play Episode Listen Later Apr 24, 2025 66:15 Transcription Available


Live from DjangoCon Europe 2025 in Dublin - Day 2 (click here to comment)
24 April 2025, Jochen

We check in once more from DjangoCon Europe and talk about the highlights of the second conference day, with plenty of technical insights, exciting talks, and personal impressions. Joining us this time: Ronny as a guest!

BSD Now
607: Sign those commits

BSD Now

Play Episode Listen Later Apr 17, 2025 56:27


We should improve libzfs somewhat, Accurate Effective Storage Performance Benchmark, Debugging aids for pf firewall rules on FreeBSD, OpenBSD and Thunderbolt issue on ThinkPad T480s, Signing Git Commits with an SSH key, Pgrep, LibreOffice downloads on the rise, and more.

NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)

Headlines
We should improve libzfs somewhat (https://despairlabs.com/blog/posts/2025-03-12-we-should-improve-libzfs-somewhat/)
Accurate Effective Storage Performance Benchmark (https://klarasystems.com/articles/accurate-effective-storage-performance-benchmark/?utm_source=BSD%20Now&utm_medium=Podcast)

News Roundup
Debugging aids for pf firewall rules on FreeBSD (https://dan.langille.org/2025/02/24/debugging-aids-for-pf-firewall-rules-on-freebsd/)
OpenBSD and Thunderbolt issue on ThinkPad T480s (https://www.tumfatig.net/2025/openbsd-and-thunderbolt-issue-on-thinkpad-t480s/)
Signing Git Commits with an SSH key (https://jpmens.net/2025/02/26/signing-git-commits-with-an-ssh-key/)
Pgrep (https://www.c0t0d0s0.org/blog/pgrep-z-r.html)
LibreOffice downloads on the rise as users look to avoid subscription costs (https://www.computerworld.com/article/3840480/libreoffice-downloads-on-the-rise-as-users-look-to-avoid-subscription-costs.html)

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Felix - Bhyve and NVME (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/607/feedback/Felix%20-%20bhyve%20and%20nvme.md)

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)
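The SSH-key commit-signing item above comes down to a few Git settings; a minimal sketch (requires Git 2.34+; the key path is an example, not from the episode):

```shell
# Use an SSH key instead of GPG for commit signing (Git 2.34+).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Sign one commit explicitly...
git commit -S -m "a signed commit"

# ...or sign every commit by default.
git config --global commit.gpgsign true
```

Verifying signatures locally additionally needs `gpg.ssh.allowedSignersFile`; see the linked post for the full setup.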

High Agency: The Podcast for AI Builders
From 0 to $40M in 5 Months: Bolt.new Story with Eric Simons

High Agency: The Podcast for AI Builders

Play Episode Listen Later Apr 3, 2025 41:33


Eric Simons discusses the meteoric rise of Bolt.new, an AI-powered web app builder that went from zero to $40 million ARR in just five months. He shares insights on how they built an AI agent capable of creating full-stack web applications from simple prompts, the challenges of rapid growth, and the future of AI in software development. From nearly shutting down the company to becoming one of the fastest-growing AI products in history, Eric offers valuable lessons for anyone building in the AI space.

Chapters:
00:00 - Introduction and Bolt.new overview
06:05 - The journey from near-shutdown to rapid growth
13:28 - Challenges of explosive growth and scaling
18:50 - Technical deep dive: Building Bolt.new
26:37 - Debugging and improving AI-generated code
32:09 - Future directions and enterprise adoption
34:11 - Advice for building AI applications
37:03 - The concept of "vibe revenue" in AI startups
39:39 - Is AI over or under-hyped?

Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com

Code RED
#22 - Debugging the Future: How Lumigo Modernizes Developer Observability with Erez Berkner

Code RED

Play Episode Listen Later Apr 3, 2025 34:28


Lumigo CEO Erez Berkner joins Dash0's Mirko Novakovic to discuss the evolution of cloud observability and the critical role of AI in debugging modern architectures. They dive into the challenges of tracing serverless and hybrid environments, why traditional observability tools weren't built for developers, and how AI-driven insights are shaping the future of automated troubleshooting.

PodRocket - A web development podcast from LogRocket
Debugging apps with Deno and OpenTelemetry with Luca Casonato

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Mar 27, 2025 24:55


Luca Casonato, member of the Deno core team, delves into the intricacies of debugging applications using Deno and OpenTelemetry. Discover how Deno's native integration with OpenTelemetry enhances application performance monitoring, simplifies instrumentation compared to Node.js, and unlocks new insights for developers!

Links
https://lcas.dev
https://x.com/lcasdev
https://github.com/lucacasonato
https://mastodon.social/@lcasdev
https://www.linkedin.com/in/luca-casonato-15946b156

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com (https://logrocket.com/signup/?pdr).

Special Guest: Luca Casonato.

The Mob Mentality Show
Garrick West on 'Building' Great Developers with XP & Agile plus the Best Debugging

The Mob Mentality Show

Play Episode Listen Later Mar 24, 2025 48:03


Today I Learned
152. Debugging - 9 Rules for Problem Solving

Today I Learned

Play Episode Listen Later Mar 23, 2025 40:48


Taking the book "Debugging - 9 Rules for Problem Solving" as a starting point, we talked about a framework for analyzing and fixing problems:
Understand the System
Make It Fail (reproduce it)
Quit Thinking and Look
Divide and Conquer (narrow it down)
Change One Thing at a Time
Keep an Audit Trail
Check the Plug
Get a Fresh View
If You Didn't Fix It, It Ain't Fixed (prevent recurrence)

Share your thoughts with the hashtags #todayILearnedFM #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://twitter.com/tomoaki_imai
Ryoichi Kato, Software Engineer https://twitter.com/ryo1kato
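The "Divide and Conquer" rule above is the same idea behind tools like `git bisect`: binary-search an ordered search space (commits, config changes, input sizes) for the first point where things go bad. A minimal sketch, assuming versions are ordered and every version after the first bad one is also bad:

```python
def first_bad(versions, is_bad):
    """Binary-search for the first version where is_bad() is True,
    assuming failures are monotonic (good... good, bad... bad)."""
    lo, hi = 0, len(versions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid          # the first bad version is at mid or earlier
        else:
            lo = mid + 1      # the bug appeared after mid
    return versions[lo]

# Example: the regression landed in version 7.
print(first_bad(list(range(1, 11)), lambda v: v >= 7))  # 7
```

Each probe halves the search space, so ten versions need at most four checks instead of ten, which is exactly why the rule pays off when reproducing a failure is expensive.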

Remote Ruby
Mac Upgrades to Debugging Dilemmas

Remote Ruby

Play Episode Listen Later Mar 21, 2025 47:41


In this episode, Andrew and Chris discuss Chris's new hardware upgrade to a Mac Studio, diving into its benefits for video processing and development work. They share stories about troubleshooting a perplexing bug related to WebSockets and Cable Ready, and discuss the conference proposal process, offering insights into writing effective CFPs based on their experiences with RailsConf and Rails World. Additionally, Andrew shares a game update about Cyberpunk and Chris shares the inspiring success story of the game ‘Balatro,' highlighting the developer's journey from side project to commercial triumph. Hit the download button now!

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Chris Oliver X/Twitter
Andrew Mason X/Twitter
Jason Charnes X/Twitter

Karma Comment Chameleon
r/MaliciousCompliance - Ignored Warnings And MASSIVE FAILURE Lead To An "I TOLD YOU SO!" Moment!

Karma Comment Chameleon

Play Episode Listen Later Mar 19, 2025 16:22


In this episode of the Karma Stories Podcast, host Rob narrates a story from the Malicious Compliance subreddit about a manufacturing engineer who was ordered to disable critical testing on a new product due to unfounded blame from the design team. Despite warnings, the order led to increased failure rates and months of troubleshooting before the real issue was identified. This tale highlights the repercussions of ignoring expert advice and the valuable lessons learned by all involved.

Submit your own stories to KarmaStoriesPod@gmail.com.
Karma Stories is available on all major Podcasting Platforms and on YouTube under the @KarmaStoriesPodcast handle. We cover stories from popular Reddit Subreddits like Entitled Parents, Tales From Tech Support, Pro Revenge and Malicious Compliance. You can find new uploads here every single day of the week!
Rob's 3D Printing Site: https://Dangly3D.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/karma-stories--5098578/support.

That Tech Pod
From Tech to Love: Debugging Dating for Men in Tech with Kavita Ajwani

That Tech Pod

Play Episode Listen Later Mar 11, 2025 42:27


404 Connection Not Found? Today Laura and Kevin sit down with Kavita Ajwani, one of the world's top dating psychology coaches for men and the founder of Dashing Date, to discuss how men in tech can win at dating. With over 13 years of experience, Kavita has helped thousands of men in tech navigate modern dating with confidence and authenticity.

Kavita shares her journey from running a tech startup to building North America's largest speed dating company. She opens up about the unique challenges men in tech face when it comes to dating, why confidence is often the missing link, and how digital distractions are reshaping modern relationships. We break down the growing crisis of loneliness and emotional isolation, the rise of online dating scams, and the increasing role of AI and deepfake technology in the dating world. Plus, Kavita offers a special gift for our single male listeners: free access to her Coding for Connections masterclass.

Whether you're a tech professional struggling to make real connections or simply curious about how technology intersects with modern romance, this episode is packed with insights, humor, and expert advice. Tune in now to learn how to debug your dating life and build meaningful relationships!

Rocket Ship
#062 - Navigation, Styling & The State of React Native with Beto

Rocket Ship

Play Episode Listen Later Mar 11, 2025 67:27


In this episode, Simon and Beto discuss the latest findings from the State of React Native survey, highlighting trends in developer backgrounds, platform usage, income levels, and the evolving landscape of libraries and tools in the React Native ecosystem. They delve into the increasing popularity of local storage solutions, deep linking, and the rise of Zustand in state management, while also addressing the challenges and opportunities for solo developers in the mobile app space. They then focus on the current state and future of React Native across Expo Router usage, styling trends, graphics and animations, component libraries, debugging tools, architecture adoption, build processes, AI in code generation, and community sentiment, highlighting the improvements in developer experience and the shift towards a more native approach in React Native development.

Learn React Native - https://galaxies.dev

Alberto Moedano
Beto X: https://twitter.com/betomoedano
Beto YouTube: https://www.youtube.com/@codewithbeto
Code with Beto Courses: https://codewithbeto.dev/

Links
State of React Native Survey: https://results.stateofreactnative.com/en-US/

Takeaways
The State of React Native survey had over 3,000 participants, indicating growing interest.
A significant number of React Native developers come from backend backgrounds.
Solo developers can effectively use Expo and React Native to build apps.
The trend towards local-first applications is gaining traction in the developer community.
Deep linking is becoming increasingly important for app navigation.
Zustand is rising in popularity as a state management solution.
Inline styling remains a popular choice among developers.
Expo is working on a new UI component library to enhance native app development.
The future of data syncing and local storage solutions looks promising with new technologies.
Expo Router is seeing increased usage and feedback is being actively incorporated.
Styling in React Native is evolving, with inline styles gaining popularity due to AI tools.
Graphics and animations are best handled with libraries like Reanimated and Skia.
Component libraries are declining, indicating a shift towards more flexible styling solutions.
Debugging tools are improving, with new options like Radon IDE and Atlas for Expo.
The adoption of the new React Native architecture is growing, with many developers migrating successfully.
EAS Build is the preferred method for building applications, offering automation and a free tier.
AI is becoming a significant part of the coding process, with many developers relying on it for code generation.
Cross-platform frameworks are consolidating, with React Native and Flutter leading the way.
The community sentiment around React Native is positive, with excitement for future developments.

Demystifying Science
A Case for Scientific Anarchy - Jim Keller, DemystifySci #326

Demystifying Science

Play Episode Listen Later Mar 9, 2025 141:03


MAKE HISTORY WITH US THIS SUMMER: https://demystifysci.com/demysticon-2025
PATREON: https://www.patreon.com/c/demystifysci
PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show
PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB
MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/all
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw @demystifysci

Jim Keller is one of the all-time great computer architects, having shaped the tech revolution during his tenures at Digital Equipment Corporation, AMD, Apple, Tesla, and Intel. In addition to being a crack engineer, Keller is also deeply curious about the inner workings of nature on all scales, and is the patron for our upcoming Beyond the Big Bang meeting in Portugal. Keller is a big proponent of what philosopher Paul Feyerabend called "scientific anarchy," where progress requires us to abandon a prescriptive approach to what counts as "science," because many contradictory theories can be "true" at once. We build on our previous conversation about the nature of institutions to unpack why the Big Bang is about to fall apart, what it takes to build a new theory, and how to create a lasting impact without striving explicitly for immortality.

(00:00) Go!
(00:04:07) Skepticism in Scientific Models and the Role of Physical Intuition
(00:11:26) The Evolution of Physics and Cognitive Sciences
(00:18:07) Challenges in Conventional Cosmic and Solar Models
(00:22:10) Evolution of Computer Architecture
(00:24:02) The Utilitarian Approach of Engineers
(00:28:12) Debugging in Engineering
(00:30:32) Reevaluating Scientific Paradigms
(00:34:00) Collaborative Dynamics in Science and Engineering
(00:37:13) The Influence of Paradigms on Scientific Discourse
(00:44:42) Musician's Journey and Archetypes
(00:45:37) Belief Systems and Shared Realities
(00:49:59) The Bible and Mythology
(01:00:17) Sci-Fi, Technology, and Societal Impact
(01:02:31) Religious Beliefs and Cultural Diversity
(01:07:53) Cultural Beliefs and Alignment
(01:10:50) Alignment vs. Experimentation
(01:13:23) Decentralization and Experimentation
(01:15:00) Individual Experience in Religion
(01:18:10) Mythology and Technology
(01:21:14) Scarcity and Human Behavior
(01:23:04) Complexity in Human Systems
(01:33:47) Human Collaboration and Motivation
(01:36:16) Public Speaking and Leadership Dynamics
(01:39:51) Stability and Change in Systems
(01:43:52) Paradigm Drifts
(01:49:02) Geopolitical Structures and Exploration
(01:53:37) Political Power Dynamics
(01:57:05) Critique of Power Structures
(02:08:45) Paradigm Shifts
(02:14:44) Mission-Driven Success
(02:18:04) Authenticity in Art

#TechVisionary, #AIInnovation, #TechnologyLeader, #TechGuru, #astrophysics, #engineeringlife, #cosmos, #innovations, #technews, #codinglife, #programminglife, #techindustry, #philosophypodcast, #sciencepodcast, #longformpodcast

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics
Join our mailing list https://bit.ly/3v3kz2S

PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC: Shilo Delay: https://g.co/kgs/oty671

Where It Happens
My honest review of AI Product Designer backed by Y-Combinator (v0 Users Need to See This)

Where It Happens

Play Episode Listen Later Mar 5, 2025 30:43


In this episode, I test Polymet AI, an AI product designer tool, by creating a YouTube analytics prediction SaaS concept inspired by a viral tweet about predicting tweet performance. I compare Polymet AI with v0, noting that while Polymet required multiple prompts and offered less feedback during the design process, both tools ultimately produced usable designs. I conclude that different AI tools have different strengths, and using multiple tools in combination might yield the best results.

Timestamps:
00:00 - Intro
02:33 - First Impressions of Polymet
03:57 - Startup Idea: Predicting YouTube Engagement
05:03 - Initial Design Prompt
10:11 - Polymet's First Design Output: TubePredict
11:04 - UI and Functionality Issues and Debugging
16:08 - Polymet's Second Design Output
17:44 - Debugging pt 2
20:48 - Prompting v0
22:30 - v0's Design Output
23:41 - Polymet's Third Design Output
24:49 - Comparing v0 and Polymet and Final Thoughts on Design Outputs
27:36 - Conclusion and Recommendations for AI Design Tools

Key Points:
• I test Polymet AI, a new AI product designer tool that claims to help non-designers create production-ready designs
• I compare Polymet AI with v0, a similar tool I've used before, by designing a YouTube analytics prediction tool
• Both tools produced functional designs, but with different user experiences and output quality

1) First impressions of Polymet AI:
• Clean interface similar to ChatGPT
• Includes voice input (huge plus!)
• Image upload capability for reference designs
• Credit-based system (250 free credits to start)
• 50 credits per page generation

2) The design process with Polymet was... interesting.
PROS:
• Named the product "TubePredict" automatically
• Created decent landing page copy
• Saved version history
CONS:
• Slow generation (2+ minutes)
• No progress indicator (frustrating!)
• Initial designs missed the mark completely

3) After 3 attempts with increasingly specific prompts, Polymet finally delivered:
• Clean, modern interface
• Detailed A/B testing dashboard
• Statistical confidence indicators
• AI suggestions for optimization
But the communication was ONE-WAY. No feedback loop!

4) Meanwhile, v0 showed its strengths:
• Real-time reasoning as it designed
• Conversational approach ("I'll create a SaaS that...")
• Faster-feeling process (transparency helps!)
• Ability to ask clarifying questions
The difference in experience was NIGHT and DAY.

5) The final designs were surprisingly similar in quality!
Polymet's strengths:
• More detailed product features
• Hover states built in
• Actual code generation
v0's advantages:
• Slightly more polished visually
• More "glassy" as requested
• Better feedback loop

6) MAJOR INSIGHT: The future isn't about finding ONE perfect AI design tool. It's about using MULTIPLE tools strategically:
• Generate initial concepts in one
• Refine in another
• Mix and match their strengths
Just like we do with ChatGPT, Claude, Gemini, etc.

7) The REAL lesson? Knowing design terminology matters! When I specifically requested "glassmorphism" instead of just saying "glassy," both tools performed MUCH better. The more precise your design vocabulary, the better your AI design results.

8) Would I recommend these tools? YES - but with expectations in check. Two years ago this would have been MIND-BLOWING. Today, our standards are higher. But for quick prototyping or inspiration, both tools deliver value in different ways.

Notable Quotes:
"Two years ago, if I would have seen this, I would have been like, 'Oh my God, everything has changed.' And now our bar for all these AI products is so high that the output needs to be incredible for you to really use it in your workflow."
"How do you get the most out of them is just you use all of them, you realize what's best for each individual product... There's these nuances that make these products better."

Want more free ideas? I collect the best ideas from the pod and give them to you for free in a database. Most of them cost $0 to start (my fav). Get access: https://www.gregisenberg.com/30startupideas

LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/
BoringAds - ads agency that will build you profitable ad campaigns http://boringads.com/
BoringMarketing - SEO agency and tools to get your organic customers http://boringmarketing.com/
Startup Empire - a membership for builders who want to build cash-flowing businesses https://www.startupempire.co

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/

More Train, Less Pain; Engineering the Adaptable Athlete
S5E15: Lance Goyke on Gorillas, Kangaroos, and Debugging Humans

More Train, Less Pain; Engineering the Adaptable Athlete

Play Episode Listen Later Mar 3, 2025 62:43


Back almost 3 years after his first appearance on More Train, Less Pain, Lance Goyke rejoins Tim to talk simplifying velocity-based training, designing workouts for busy people, leaving the (full-time) fitness industry, and *most importantly* the best pop punk albums of the 2000s. Listen, subscribe, and tell your friends… Reframe Performance: https://www.reframeperformance.com/ Reframe Performance Free Guide- 6 Biggest Weight Training Mistakes Runners Make: https://www.reframeperformance.com/weight-training-runners-mistakes-guide Contact Michelle: https://www.michelleboland-training.com/ or @dr.michelleboland Contact Tim: https://www.timrichardt.com/ or @tim_richardt_dpt

Software Engineering Daily
Troubleshooting Microservices with Julia Blase

Software Engineering Daily

Play Episode Listen Later Feb 25, 2025 43:00


A distributed system is a network of independent services that work together to achieve a common goal. Unlike a monolithic system, a distributed system has no central point of control, meaning it must handle challenges like data consistency, network latency, and system failures. Debugging distributed systems is conventionally considered challenging because modern architectures consist of many independent, interacting services. The post Troubleshooting Microservices with Julia Blase appeared first on Software Engineering Daily.


Thinking Elixir Podcast
241: A LiveView Debugger and Gigalixir

Thinking Elixir Podcast

Play Episode Listen Later Feb 18, 2025 44:59


News includes the release of LiveDebugger, an exciting new browser-based debugging tool for Phoenix LiveView applications and the announcement of Artifix for creating private Hex registries on S3 and CloudFront. We are also joined by Tim Knight, the CTO at Gigalixir, to get a peek inside the machine that is Gigalixir and learn more about how the platform specializes in providing an excellent Elixir deployment experience, and more! Show Notes online - http://podcast.thinkingelixir.com/241 (http://podcast.thinkingelixir.com/241) Elixir Community News https://github.com/software-mansion-labs/live-debugger (https://github.com/software-mansion-labs/live-debugger?utm_source=thinkingelixir&utm_medium=shownotes) – New Phoenix LiveView debugging tool released providing browser-based debugging capabilities similar to React DevTools. https://bsky.app/profile/bcardarella.bsky.social/post/3lhn3y7vw4k2v (https://bsky.app/profile/bcardarella.bsky.social/post/3lhn3y7vw4k2v?utm_source=thinkingelixir&utm_medium=shownotes) – Confirmation that LiveDebugger works with LiveView Native. https://github.com/probably-not/artifix (https://github.com/probably-not/artifix?utm_source=thinkingelixir&utm_medium=shownotes) – New project Artifix announced, allowing creation of private Hex Registry on S3 and Cloudfront with customizable deployment patterns. https://gleam.run/news/gleam-gets-rename-variable/ (https://gleam.run/news/gleam-gets-rename-variable/?utm_source=thinkingelixir&utm_medium=shownotes) – Gleam v1.8.0 released with significant Language Server enhancements and compiler improvements. https://github.com/Wilfred/difftastic (https://github.com/Wilfred/difftastic?utm_source=thinkingelixir&utm_medium=shownotes) – Difftastic, a structural diff tool, now supports HEEx syntax highlighting. 
https://bsky.app/profile/crbelaus.com/post/3lhtpkkn4vc2l (https://bsky.app/profile/crbelaus.com/post/3lhtpkkn4vc2l?utm_source=thinkingelixir&utm_medium=shownotes) – Additional announcement about Difftastic's HEEx support. https://github.com/Wilfred/difftastic/pull/785 (https://github.com/Wilfred/difftastic/pull/785?utm_source=thinkingelixir&utm_medium=shownotes) – Pull request adding HEEx support to Difftastic. https://x.com/chris_mccord/status/1887957394149310502 (https://x.com/chris_mccord/status/1887957394149310502?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord shares a preview of integrated AI work at Fly.io, demonstrating web search capabilities. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources https://www.gigalixir.com/thinking (https://www.gigalixir.com/thinking?utm_source=thinkingelixir&utm_medium=shownotes) – Thinking Elixir Podcast listeners get 20% off the standard tier for the first YEAR with the promo code "Thinking" https://www.gigalixir.com (https://www.gigalixir.com?utm_source=thinkingelixir&utm_medium=shownotes) https://www.gigalixir.com/docs/ (https://www.gigalixir.com/docs/?utm_source=thinkingelixir&utm_medium=shownotes) https://www.gigalixir.com/pricing/ (https://www.gigalixir.com/pricing/?utm_source=thinkingelixir&utm_medium=shownotes) https://journey.gigalixir.com/ (https://journey.gigalixir.com/?utm_source=thinkingelixir&utm_medium=shownotes) Guest Information https://twitter.com/gigalixir (https://twitter.com/gigalixir?utm_source=thinkingelixir&utm_medium=shownotes) – on Twitter https://github.com/gigalixir/ (https://github.com/gigalixir/?utm_source=thinkingelixir&utm_medium=shownotes) – on Github https://bsky.app/profile/gigalixir.com (https://bsky.app/profile/gigalixir.com?utm_source=thinkingelixir&utm_medium=shownotes) – on BlueSky https://elixir-lang.slack.com/archives/C5AJLMATG 
(https://elixir-lang.slack.com/archives/C5AJLMATG?utm_source=thinkingelixir&utm_medium=shownotes) – gigalixir on Elixir Slack https://gigalixir.com/ (https://gigalixir.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Site Find us online Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) Message the show - X (https://x.com/ThinkingElixir) Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) Mark Ericksen on X - @brainlid (https://x.com/brainlid) Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

The Lunar Society
Jeff Dean & Noam Shazeer – 25 years at Google: from PageRank to AGI

The Lunar Society

Play Episode Listen Later Feb 12, 2025 134:43


This week I welcome on the show two of the most important technologists ever, in any field.

Jeff Dean is Google's Chief Scientist, and through 25 years at the company, has worked on basically the most transformative systems in modern computing: from MapReduce, BigTable, Tensorflow, AlphaChip, to Gemini.

Noam Shazeer invented or co-invented all the main architectures and techniques that are used for modern LLMs: from the Transformer itself, to Mixture of Experts, to Mesh Tensorflow, to Gemini and many other things.

We talk about their 25 years at Google, going from PageRank to MapReduce to the Transformer to MoEs to AlphaChip – and maybe soon to ASI.

My favorite part was Jeff's vision for Pathways, Google's grand plan for a mutually-reinforcing loop of hardware and algorithmic design and for going past autoregression. That culminates in us imagining *all* of Google-the-company, going through one huge MoE model.

And Noam just bites every bullet: 100x world GDP soon; let's get a million automated researchers running in the Google datacenter; living to see the year 3000.

Sponsors
Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale's Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

Curious how Jane Street teaches their new traders? They use Figgie, a rapid-fire card game that simulates the most exciting parts of markets and trading. It's become so popular that Jane Street hosts an inter-office Figgie championship every year. Download from the app store or play on your desktop at figgie.com.

Meter wants to radically improve the digital world we take for granted. They're developing a foundation model that automates network management end-to-end. To do this, they just announced a long-term partnership with Microsoft for tens of thousands of GPUs, and they're recruiting a world class AI research team. To learn more, go to meter.com/dwarkesh.

Advertisers: To sponsor a future episode, visit: dwarkeshpatel.com/p/advertise.

Timestamps
00:00:00 - Intro
00:02:44 - Joining Google in 1999
00:05:36 - Future of Moore's Law
00:10:21 - Future TPUs
00:13:13 - Jeff's undergrad thesis: parallel backprop
00:15:10 - LLMs in 2007
00:23:07 - “Holy s**t” moments
00:29:46 - AI fulfills Google's original mission
00:34:19 - Doing Search in-context
00:38:32 - The internal coding model
00:39:49 - What will 2027 models do?
00:46:00 - A new architecture every day?
00:49:21 - Automated chip design and intelligence explosion
00:57:31 - Future of inference scaling
01:03:56 - Already doing multi-datacenter runs
01:22:33 - Debugging at scale
01:26:05 - Fast takeoff and superalignment
01:34:40 - A million evil Jeff Deans
01:38:16 - Fun times at Google
01:41:50 - World compute demand in 2030
01:48:21 - Getting back to modularity
01:59:13 - Keeping a giga-MoE in-memory
02:04:09 - All of Google in one model
02:12:43 - What's missing from distillation
02:18:03 - Open research, pros and cons
02:24:54 - Going the distance

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Internet Stormcast Feb 7th 2025: Unbreakable Anti-Debugging;

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Feb 7, 2025 6:22


The Unbreakable Multi-Layer Anti-Debugging System
Xavier found a nice Python script that included what it calls the "Unbreakable Multi-Layer Anti-Debugging System". Leave it up to Xavier to tear it apart for you. https://isc.sans.edu/diary/The%20Unbreakable%20Multi-Layer%20Anti-Debugging%20System/31658

Take my money: OCR crypto stealers in Google Play and App Store
Malware using OCR on screenshots was available not just via Google Play, but also the Apple App Store. https://securelist.com/sparkcat-stealer-in-app-store-and-google-play-2/115385/

Threat Actors Still Leveraging Legit RMM Tool ScreenConnect
Unsurprisingly, threat actors still like to use legit remote admin tools, like ScreenConnect, as a command and control channel. Silent Push outlines the latest trends and IoCs they found. https://www.silentpush.com/blog/screenconnect/

Cisco Identity Services Engine Insecure Java Deserialization and Authorization Bypass Vulnerabilities
Java deserialization strikes again, allowing arbitrary code execution. Cisco fixed this vulnerability and an authorization bypass issue in its Identity Services Engine. https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-ise-multivuls-FTW9AOXF

F5 Update
F5 fixes an interesting authentication bypass problem affecting TLS client certificates. https://my.f5.com/manage/s/article/K000149173

PodRocket - A web development podcast from LogRocket
You're Probably Doing Web Performance Wrong  with Sia Karamalegos

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Feb 6, 2025 27:11


Sia Karamalegos, performance engineer, web developer, and Google Developer Expert in Web Technologies & Web Performance, discusses the pitfalls of relying solely on Lighthouse scores and the importance of Real User Monitoring (RUM), and dives deep into metrics like Core Web Vitals and various other performance indicators, giving practical insights to enhance user experience on your website! Links https://sia.codes http://front-end.social/@sia https://bsky.app/profile/sia.codes https://github.com/siakaramalegos https://www.linkedin.com/in/karamalegos https://stackoverflow.com/users/5049215/sia https://conf.11ty.dev/2024/you-re-probably-doing-web-performance-wrong https://themevitals.com We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Sia Karamalegos.

The GeekNarrator
Power of #Duckdb with Postgres: pg_duckdb

The GeekNarrator

Play Episode Listen Later Jan 22, 2025 60:19


The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member-only videos, exclusive notes, and a monthly 1:1 with me. Here you can see all the member-only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Hey folks - In this episode we have Jelte with us, the main contributor to the pg_duckdb project, a Postgres extension that adds #duckdb power to our beloved #postgresql. We try to understand how it works, why it's needed, and what the future of pg_duckdb looks like. If you love #Postgres or #Duckdb, or just enjoy understanding #database internals, this episode will give you pretty solid insights into Postgres query processing, DuckDB analytics, the Postgres extension ecosystem, and so on. Basics: pg_duckdb is a Postgres extension that embeds DuckDB's columnar-vectorized analytics engine and features into Postgres. We recommend using pg_duckdb to build high performance analytics and data-intensive applications. Chapters: 00:00 Introduction to PG-DuckDB 03:40 Understanding the Integration of DuckDB with Postgres 06:23 Architecture of PG-DuckDB: Query Processing Explained 10:02 Configuring DuckDB for Analytics Queries 15:37 Managing Workloads: Transactional vs.
Analytical 21:02 Observability and Debugging in DuckDB 25:58 Data Deletion and GDPR Compliance 30:46 Schema Management and Migration Challenges 33:14 Managing Schema Changes in Databases 35:21 Upgrading Database Extensions 36:33 Enhancing Data Reading Methods 38:33 Future Features and Improvements 45:54 Use Cases for PGDuckDB 50:03 Challenges in Building the Extension 55:25 Getting Involved with PGDuckDB Important links: The duckdb discord server, which has a pg_duckdb channel inside it: https://discord.duckdb.org/ repo: https://github.com/duckdb/pg_duckdb good-first-issue issues: https://github.com/duckdb/pg_duckdb/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. 
Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #sql #postgres #databasesystems

Engineering Kiosk
#179 MLOps: Machine Learning in die Produktion bringen mit Michelle Golchert und Sebastian Warnholz

Engineering Kiosk

Play Episode Listen Later Jan 21, 2025 76:51


Machine Learning Operations (MLOps) with Data Science Deep Dive. Machine learning, and the predictions such models produce, has become indispensable in modern IT and even in our everyday lives. These models are probably in use far more often than you realize. Programming, building, and training them is one thing; deploying and operating them is another. The latter is called Machine Learning Operations, or "MLOps" for short, and it is the topic of this episode. We cover what MLOps actually is and how it differs from classic DevOps, how you get your own machine learning model into production and which stages it has to pass through, the difference between model training and model serving, the job of a model registry, how machine learning models in production are actually monitored and debugged, what model drift and drift detection are, whether the feedback cycle can be kept short with methods like continuous delivery, and which skills matter for an MLOps engineer. Answering all of these questions are Michelle Golchert and Sebastian Warnholz from the Data Science Deep Dive podcast. You can find our current advertising partners at https://engineeringkiosk.dev/partners Quick feedback on the episode:

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
(Part 2) Exploring No Code Development Platform: Bubble (feat. Xan Hong)

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

Play Episode Listen Later Jan 15, 2025 79:42


In this episode, ⁠Xan Hong⁠, a no-code development expert, guides ⁠Krish Palaniappan⁠ through the process of building a simple to-do application using Bubble. They explore the user interface, create workflows, and discuss the transition from web-based to native app development. The conversation emphasizes practical application, collaboration, and the importance of getting a functional product up and running before focusing on aesthetics. They explore the Bubble platform, discussing its no-code capabilities, data management, plugin integrations, and design systems. They reflect on the learning curve associated with Bubble, the potential for AI integration, and the future of no-code platforms in the context of evolving technology. The discussion highlights practical applications, user experiences, and the importance of adapting to new tools in software development. Takeaways • No-code development can significantly reduce costs for clients. • Understanding the user interface is crucial for effective app development. • Starting with a functional prototype is more important than aesthetics. • Bubble allows for both web and native app development. • Dynamic elements enhance user interaction in applications. • Workflows are essential for managing data and user actions. • Testing and debugging are critical steps in the development process. • Collaborative development can enhance the learning experience. • Data management is key to maintaining application functionality. • User feedback is vital for refining app features. • Bubble allows users to create applications without writing code. • Understanding the platform's structure is crucial for effective use. • Data management in Bubble includes both data and metadata. • Plugins enhance functionality but vary in quality. • Design systems can be created and reused within Bubble applications. • APIs can be integrated to extend application capabilities. • The learning curve for Bubble can be steep for new users. 
• AI integration presents new challenges for no-code platforms. • Building meaningful projects can enhance the learning experience. • No-code platforms are evolving with the introduction of AI technologies. Chapters 00:00 Introduction to No Code Development 02:55 Exploring Bubble's User Interface 05:52 Creating a To-Do Application 09:10 Understanding Native App Development with Bubble 11:58 Dynamic Elements and User Interaction 15:07 Workflow Creation and Data Management 17:52 Testing and Debugging the Application 20:47 Collaborative Development and Sharing 24:01 Data Management and Deletion Functionality 38:49 Navigating the Bubble Platform 40:46 Understanding Data Management and Metadata 42:49 Exploring Plugins and Third-Party Integrations 45:35 Design Systems and Reusability in Bubble 48:30 Connecting APIs and Backend Functionality 53:44 Recap of Building a To-Do App 55:11 Reflections on No-Code Development 01:01:43 The Learning Curve of Bubble 01:08:14 The Future of AI in No-Code Platforms

Day[0] - Zero Days for Day Zero
WhatsApp vs. NSO and CCC Talks

Day[0] - Zero Days for Day Zero

Play Episode Listen Later Jan 14, 2025 82:50


Specter and zi discuss their winter break, cover some interesting CCC talks, and discuss the summary judgement in the WhatsApp vs. NSO Group case. Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/268.html [00:00:00] Introduction [00:09:53] 38C3: Illegal Instructions [00:35:38] WhatsApp v. NSO Group [01:04:06] Vulnerability Research Highlights 2024 [01:08:45] Debugging memory corruption: Who wrote ‘2' into my stack?! [01:16:46] HardBreak [01:20:14] Announcing CodeQL Community Packs Podcast episodes are available on the usual podcast platforms: -- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063 -- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt -- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz -- Other audio platforms can be found at https://anchor.fm/dayzerosec You can also join our discord: https://discord.gg/daTxTK9

Engineering Kiosk
#178 Code der bewegt: Infotainmentsysteme auf Kreuzfahrtschiffen mit Sebastian Hammerl

Engineering Kiosk

Play Episode Listen Later Jan 14, 2025 68:44


Software development in practice: infotainment systems for cruise ships. Every industry and domain has its own quirks and challenges, and these carry over into the software written for its use cases. We often ask ourselves, "What is it actually like to write software for breweries, washing machines, or combine harvesters?" In this episode we answer that question for cruise ships - the really big vessels. What software does a cruise ship need? What hardware does it run on, and where does that hardware even live - in the cloud, or is the ship a floating data center? How do internet connectivity and debugging work when you are out on the world's oceans? Which problems does the software have to solve when the ship crosses national borders, with respect to time zones, data protection, and accessibility? Our guest Sebastian Hammerl answers all our questions. Bonus: riding a speedboat with sysadmins - why not? You can find our current advertising partners at https://engineeringkiosk.dev/partners Quick feedback on the episode:

Rocket Ship
#057 - The Time for React Native is Now

Rocket Ship

Play Episode Listen Later Jan 1, 2025 10:24


In this solo episode, Simon shares the new Galaxies.dev Zero to Hero Mission and why now is the best time to learn React Native. From New Architecture to Debugging, Expo, and companies in the ecosystem, this podcast will reinforce your drive to build and ship awesome React Native apps in 2025! Join React Native Zero to Hero - https://galaxies.dev/missions/zero-to-hero

Foundations of Amateur Radio
Bald Yak, scene 5, debugging

Foundations of Amateur Radio

Play Episode Listen Later Dec 28, 2024 5:02


Foundations of Amateur Radio As you might know, a little while ago I started a new project. "The Bald Yak project aims to create a modular, bidirectional and distributed signal processing and control system that leverages GNU Radio." In embarking on this adventure I've been absorbing information as I go whilst explaining what I've learnt to anyone who will sit still long enough. Credit to Glynn VK6PAW and Charles NK8O for their patience. For most people, me included, the introduction to GNU Radio happens via a graphical user interface where you build so-called flowgraphs. These are made up of little blocks that you wire together to get from a Source, where a signal originates, to a Sink, where it terminates. Each of these blocks does something to the signal, it might be a filter, an amplifier, it might encode or decode a signal like FM, AM, Wideband FM, or some other modulation like Phase Modulation or OFDM, Orthogonal Frequency Division Multiplexing, a way of transmitting digital information using multiple channels. It's used in places like WiFi, ADSL and DSL, Digital Television as well as modern cellular systems. Those blocks generally expect a specific type of input and generate some particular output. After you save your design you can run the flowgraph and behind the scenes some magic happens. Your visual representation of signal flow is translated into either Python or C++ and the resulting application is what is actually run, which is why the user interface that you design your flowgraph in is cunningly named, GNU Radio Companion. So, what if you want to do something that doesn't yet exist? As it happens, that's where I came across a YouTube video by John VE6EY called "GNURadio Embedded Python Block" which neatly describes a fundamental aspect of how the GNU Radio framework actually operates. One of the blocks available to you is one called "Python Block", which you can add to your flowgraph just like any other block. 
What sets it apart from the others is that you can open it up and write some Python code to process the signal. When you first insert such a block, it's already populated with some skeleton code, so it already does something from the get-go and that's helpful because if you break the code, you get to keep both parts. Seriously, it allows you to figure out what you broke, rather than having to worry immediately about how specifically the code is wired to the outside world, which let's face it, is not trivial. If you're a programmer, think of it as the "Hello World" of GNU Radio. If not much of that means anything, think of it as a variable electronic component. If you need it to be a capacitor, it can be that, or a transistor, a whole circuit, or just a filter, all in software, right there at your fingertips and no soldering required. Now I'm under no illusion that everybody is going to want to get down and dirty with Python at this point, and truth be told, I have a, let's call it "special" relationship with the language, but that is something I'm just going to have to get over if this project is going to go anywhere. For my sins this week I attempted to recreate the intent of John's video on my own keyboard and discovered that debugging code in this environment might be tricky. It turns out that you can actually print out Python variables within your code and in the GNU Radio environment they'll show up in the console inside the companion window, which is handy if you committed one of many Python sins, like say attempting to compare an integer against a list. Don't ask me how I know. One thing I'm planning to attempt is to get the same thing going for C++ output. By default GNU Radio Companion uses Python, but you can change it so instead of generating Python, it can generate C++. Whilst I have no immediate need for that, I do know that at some point it's likely that I will, like say when I want to run something on an embedded processor, or some other contraption. 
So, whilst I have nothing to lose, I want to try out the boundaries of my new toy, besides, I have form, in testing boundaries that is. I'm Onno VK6FLAB
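
The embedded Python block the episode describes boils down to a class with a work() method. Below is a stand-alone sketch of the skeleton GNU Radio Companion pre-populates; as an assumption for runnability, the gr.sync_block base class and numpy sample buffers that the real skeleton uses are replaced by plain Python lists, so the shape of the code - not the GNU Radio plumbing - is what's shown.

```python
# Stand-alone sketch of an embedded "Python Block" as described in the episode.
# Assumption: the real GRC skeleton subclasses gr.sync_block and works on
# numpy arrays; both are omitted here so this runs without GNU Radio installed.
class MultiplyConstBlock:
    """The 'Hello World' of GNU Radio: multiply each sample by a constant."""

    def __init__(self, constant=2.0):
        # Parameters set in the block's properties dialog arrive here.
        self.constant = constant

    def work(self, input_items, output_items):
        # input_items/output_items are lists of per-port sample buffers.
        # A print() placed here is the debugging trick from the episode:
        # its output shows up in GNU Radio Companion's console.
        output_items[0][:] = [sample * self.constant for sample in input_items[0]]
        return len(output_items[0])


if __name__ == "__main__":
    block = MultiplyConstBlock(constant=3.0)
    out = [[0.0, 0.0, 0.0]]
    produced = block.work([[1.0, 2.0, 4.0]], out)
    print(produced, out[0])  # 3 [3.0, 6.0, 12.0]
```

In a real flowgraph the framework calls work() for you with streaming buffers; here it is called once by hand, which is exactly the kind of isolated testing that keeps Python sins (like comparing an integer against a list) out of the running flowgraph.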

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
[Paid Course] Snowpal Education: (Weaviate) Open Source Vector Database

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

Play Episode Listen Later Dec 17, 2024 1:31


In this conversation, Krish Palaniappan introduces Weaviate, an open-source vector database, and explores its functionalities compared to traditional databases. The discussion covers the setup and configuration of Weaviate, hands-on coding examples, and the importance of vectorization and embeddings in AI. The conversation also addresses debugging challenges faced during implementation and concludes with a recap of the key points discussed. Takeaways Weaviate is an open-source vector database designed for AI applications. Vector databases differ fundamentally from traditional databases in data retrieval methods. Understanding vector embeddings is crucial for leveraging vector databases effectively. Hands-on coding examples help illustrate the practical use of Weaviate. Python is often preferred for AI-related programming due to its extensive support. Debugging is an essential part of working with new technologies like Weaviate. Vectorization optimizes database operations for modern CPU architectures. Embedding models can encode various types of unstructured data. The conversation emphasizes co-learning and exploration of new technologies. Future discussions may delve deeper into the capabilities of vector databases. Chapters 00:00 Introduction to Weaviate and Vector Databases 06:58 Understanding Vector Databases vs Traditional Databases 12:05 Exploring Weaviate: Setup and Configuration 20:32 Hands-On with Weaviate: Coding and Implementation 34:50 Deep Dive into Vectorization and Embeddings 42:15 Debugging and Troubleshooting Weaviate Code 01:20:40 Recap and Future Directions Purchase course in one of 2 ways: 1. Go to https://getsnowpal.com, and purchase it on the Web 2. On your phone:     (i) If you are an iPhone user, go to http://ios.snowpal.com, and watch the course on the go.     (ii). If you are an Android user, go to http://android.snowpal.com.
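
The retrieval difference the episode highlights - similarity search over embeddings rather than exact key or column matches - can be sketched in a few lines. This is not Weaviate's API; the three-dimensional "embeddings" and the linear scan are toy assumptions (a real vector database uses high-dimensional model outputs and an approximate nearest-neighbor index).

```python
import math

# Toy sketch of the core idea behind a vector database: store embeddings,
# retrieve by cosine similarity. Names and vectors here are hypothetical.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, store):
    """Return the stored key whose embedding is most similar to the query."""
    return max(store, key=lambda k: cosine_similarity(query, store[k]))

# Pretend embeddings; a real embedding model emits hundreds of dimensions.
store = {
    "cat photo": [0.9, 0.1, 0.0],
    "dog photo": [0.8, 0.2, 0.1],
    "car photo": [0.0, 0.1, 0.9],
}
print(nearest([0.8, 0.2, 0.1], store))  # dog photo
```

A traditional database answers "give me the row whose id equals X"; the sketch answers "give me the item whose meaning is closest to X", which is why vectorization and embedding quality dominate the discussion in the episode.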

React Native Radio
RNR 316 - Debugging the Debugger with Cedric van Putten and Alex Hunt

React Native Radio

Play Episode Listen Later Dec 9, 2024 44:34


Expo-vember's epic finale! React Native's debugging tools are getting a major upgrade! Host Jamon Holmgren talks with Cedric van Putten (Expo) and Alex Hunt (Meta) about the new React Native DevTools. They discuss why the teams joined forces to build a more reliable debugging experience, the power of the Chrome DevTools Protocol, and what's coming next for React Native debugging. Plus, learn why debuggers beat console.log and how the new tools make debugging more accessible for developers on all platforms.

Show Notes:
Alex's talk at React Universe Conf 2024 - The New React Native Debugger
Cedric's talk at AppJS Conf 2023 - Debugging Should Be Easier
Alex's talk at AppJS Conf 2023 - Metro and React Native DevX in 2023
[0.76] React Native DevTools — Issues and Feedback #819
Chrome DevTools Protocol (CDP)
0.76 Release Notes
Reactotron
Radon IDE (previously React Native IDE)
RNR 309 - React Native IDE with Krzysztof Magiera

Connect With Us!
React Native Radio: @ReactNativeRdio
Jamon Holmgren: @jamonholmgren
Alex Hunt: Alex on LinkedIn
Cedric van Putten: @cedric_dev
Infinite Red Slack Community: https://community.infinite.red

This episode is brought to you by Infinite Red! Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.
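At the wire level, the Chrome DevTools Protocol mentioned above is just JSON messages with an id, a domain-qualified method, and params, exchanged over a WebSocket. A rough sketch of the message shape (Debugger.setBreakpointByUrl is a real CDP method; the file URL and line number here are made up, and the WebSocket plumbing is omitted):

```python
import json

# One CDP request as a debugger frontend like React Native DevTools
# would send it: an integer id used to match the eventual reply, a
# domain-qualified method name, and a params object.
request = {
    "id": 1,
    "method": "Debugger.setBreakpointByUrl",
    "params": {"url": "file:///app/index.js", "lineNumber": 41},
}
wire_format = json.dumps(request)

# The reply comes back as JSON carrying the same id, which is how the
# frontend pairs responses with the requests it sent.
decoded = json.loads(wire_format)
```

Building on one shared protocol is what lets the same tooling serve Chrome, Node, and now React Native targets.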

The Bike Shed
449: Evergreen skills for new-ish developers

The Bike Shed

Play Episode Listen Later Dec 3, 2024 37:41


One of the most challenging things about starting out as a developer is how much you need to master all at once. And with so much to learn, it can be difficult for experts to guide fresh developers and advise them on where to focus first. Luckily, some skills will always be useful, no matter what language you're coding in. In today's episode, Stephanie and Joël tackle this topic by unpacking several key evergreen skills, from reading error messages to deciphering a stack trace. They break down how new-ish developers can start acquiring these skills, key obstacles they're likely to encounter, and how to ask for help when you hit a block. Their conversation covers the ins and outs of debugging, how to feel comfortable in your editor as a new developer, the art of asking for help, and much more. They also share plenty of valuable tips to help you on your journey – including one that will help you commit more frequently. Tune in now to hear it all!

Key Points From This Episode:
Stephanie's time at the Ruby Conference in Chicago.
The challenges of advising new-ish developers as an expert.
Broad evergreen skills that are always valuable to learn.
Tips on mastering debugging as a core skill.
How to improve your ability to read error messages.
Our approach to resolving errors and isolating what is wrong.
Advice for learning to read a stack trace (even though it's intimidating).
Strategies for fixing different types of bugs.
The value of editor mastery and version history.
Tips on how to commit more frequently as a new developer.
Learning to ask for help when you hit a block.
The art of structuring your questions when asking for help.
Breaking down large tasks into smaller sections.
Learning to find focus as a new developer.

Links Mentioned in Today's Episode:
What technologies should I learn? (https://thoughtbot.com/blog/what-technologies-should-i-learn)
Debugging blog post series (https://thoughtbot.com/blog/tags/debugging-series-2021)
Asking about solutions rather than problems (https://xyproblem.info/)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Joël Quenneville on X (https://x.com/joelquen)
Support The Bike Shed (https://github.com/sponsors/thoughtbot)
Mailtrap (https://l.rw.rw/the_bike_shed)
WorkOS (workos.com/)
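The stack-trace habit the hosts describe boils down to reading from the innermost frame outward: the last frame is where the error was raised, and the frames above it show the call path that got you there. A small Python illustration (the function names are invented for the demo):

```python
import sys
import traceback

def load_config():
    raise KeyError("database_url")   # the actual failure

def start_app():
    load_config()                    # an intermediate frame in the call path

try:
    start_app()
except KeyError:
    # extract_tb lists frames outermost-first; the *last* entry is where
    # the exception was raised, which is usually the first place to look.
    frames = traceback.extract_tb(sys.exc_info()[2])
    for frame in frames:
        print(f"{frame.name} (line {frame.lineno})")
```

Once this shape is familiar, a long traceback stops being intimidating: it is just the call path, printed top to bottom.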

Slow & Steady
Debugging horror stories

Slow & Steady

Play Episode Listen Later Nov 28, 2024 50:05


Benedicte and Benedikt share life and work updates, and some debugging horror stories.

Benedicte and her Pirate family have been low-key sick all month. Despite feeling under the weather, they were able to bake and pack more than 100 cookies for the Christmas market at school. She finished her Xano + Outseta (XO) Vanilla Demo for Outseta, and is doing price adjustments for Whee.

Benedikt feels recharged from his birthday vacation last weekend. On the work front, they recently shipped improvements to how conditions work in Userlist, making it possible to filter users by companies that are within a segment.

They also talk about some debugging horror stories, token exchanges, and more.

The Bootstrapped Founder
356: James Phoenix — Mastering Code & AI for the Modern Developer

The Bootstrapped Founder

Play Episode Listen Later Nov 20, 2024 53:01 Transcription Available


Coding with AI changes everything. It changes how we design, test, and improve our software projects. Today, I'm talking to generative AI expert James Phoenix. He's written the book on prompt engineering and shares his hard-earned AI insights freely on the show — including a crash course in developing effectively with the Cursor IDE. Are developers just AI wranglers now? Technical managers? Will we ever code again? You'll find out today.

This episode is sponsored by Paddle.com — if you're looking for a payment platform that works for you so you can focus on what matters, check them out.

The blog post: https://thebootstrappedfounder.com/james-phoenix-mastering-code-ai-for-the-modern-developer/
The podcast episode: https://tbf.fm/episodes/356-james-phoenix-mastering-code-ai-for-the-modern-developer
Check out Podscan to get alerts when you're mentioned on podcasts: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid
You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw

Empower Apps
Debugging Your Job Search with Jaim Zuber

Empower Apps

Play Episode Listen Later Nov 1, 2024 38:09


Jaim Zuber returns after over 5 years to give us an overview of looking for a job in iOS development in 2024: how it has changed, what you can do now, and when becoming a manager is the right call.

Guest:
Jaim Zuber
@sharpfive (GitHub)
Jaim Zuber (LinkedIn)
@jaimzuber@mspsocial.net (Mastodon)

Announcements:
Need help with your projects this year? BrightDigit has openings.
Join Bushel Beta
Join our Brand New Patreon Page!
BrightDigit #100 - Top 10 Emails Ever!

Unbusy Your Life
The Invisible Edge: Debugging Mental Blocks in Business and Life with Marci Owen

Unbusy Your Life

Play Episode Listen Later Oct 30, 2024 31:13


What role do subconscious patterns play in shaping our lives?

We all have goals, routines, and habits. But have you ever wondered why certain goals seem harder to reach, or why some habits feel impossible to change? What if the answer lies beneath the surface?

In this conversation with Marci Owen, a trauma-informed coach specializing in subconscious patterns and nervous system resilience, we dig into how subconscious beliefs shape our actions and reactions. Imagine if every recurring frustration, hesitation, or fear had roots deep within your own past experiences and belief systems. Marci explains how these “hidden scripts” impact us and offers steps to uncover, understand, and ultimately transform them.

Join us as we explore the power of self-awareness, compassionate self-reflection, and reverse-engineering behavior patterns. This episode is an invitation to break free from the limitations of unexamined beliefs and create a life where every choice aligns with who you truly want to be.

Tap into the freedom that self-understanding can bring—because when you make sense of your patterns, you unlock the potential for true growth.

Topics covered in this episode include:
The role of subconscious patterns in shaping our behaviors
How belief systems formed in childhood can influence adult decision-making
Strategies to identify and break free from self-limiting subconscious narratives
The importance of compassion in the healing process

Curious about the stories running in the background that might be holding you back? Discover how bringing awareness and self-compassion into the mix can spark real transformation. Tune in to start your journey toward lasting change!

Spoiler: Doors will open for the next round of the Self Love Lab on Black Friday!
Resources Mentioned:
How to Identify Subconscious Stories and Patterns that Hold You Back: https://marciowen.com/identify-subconscious-patterns/
Marci Owen's Self Love Lab: https://marciowen.com/selflovelab/
Marci Owen's Website: https://marciowen.com/
Get The Book: https://book.neillwilliams.com/book
Learn More About TEAM90: https://neillwilliams.com/team90
Book A Team Turnaround Call: https://neillwilliams.com/team-turnaround-call
Contact Us: support@neillwilliams.com

If you're loving what you're learning on this podcast every week - the simple high-impact habits, tools and mindsets to optimize your whole body focus, motivation and energy - please follow, rate and review by heading to Apple Podcasts or wherever you listen to podcasts.

core.py
Episode 16: Memory Allocation

core.py

Play Episode Listen Later Oct 29, 2024 105:52


How does Python handle memory? Why does it need to perform custom forms of memory allocation? We talk about all that in this episode. We don't talk about Easter eggs, and we never mention Brandt by name, as promised last time!

## Timestamps
(00:00:00) INTRO
(00:00:22) PART 0: SPORTS NEWS
(00:01:57) PART 1: MEMORY ALLOCATION
(00:03:46) If you write C correctly, it manages memory for you
(00:05:38) malloc and the heap
(00:09:31) High-level allocators
(00:10:48) pymalloc
(00:11:15) 512 is a good number
(00:12:43) Memory domains in pymalloc
(00:15:09) Why is pymalloc faster than just using malloc?
(00:20:16) To free or not to free
(00:26:35) User-facing features of pymalloc
(00:30:40) Łukasz replaced by an LLM Sim... almost
(00:32:58) Debugging help with marked memory
(00:39:24) 8-byte alignment
(00:42:27) cymem
(00:45:08) tracemalloc
(00:49:12) memray
(00:50:47) Coding like an ultravillain
(00:55:22) Complaining about Apple
(01:01:45) PART 2: PR OF THE WEEK
(01:13:46) Pablo's buildbot release status page
(01:18:26) PART 3: WHAT'S GOING ON IN CPYTHON
(01:18:52) PEP 762
(01:22:30) PEP 758
(01:26:10) Performance updates
(01:29:14) Argparse suggests correct spelling
(01:30:13) Fraction.from_number() and Decimal.from_number()
(01:30:39) switched in contextvars
(01:31:42) Better type annotations in pydoc signatures
(01:33:33) Fixes in TaskGroup and _PyFuture ref cycles
(01:34:03) Data race in ZoneInfo
(01:35:53) Testing certificates set to expire far in the future
(01:36:32) InternalDocs
(01:40:22) There are no Easter eggs
(01:40:42) OUTRO
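Of the tools in the timestamps, tracemalloc is the one that ships in the standard library and is usable directly from user code (pymalloc is the interpreter's internal allocator). A minimal session looks like this; the exact byte counts depend on the platform and Python version:

```python
import tracemalloc

tracemalloc.start()

# Allocate something noticeable: 1,000 lists of 100 integers each.
data = [list(range(100)) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
snapshot = tracemalloc.take_snapshot()
top_stat = snapshot.statistics("lineno")[0]  # biggest allocation site

print(f"current: {current} B, peak: {peak} B")
print(top_stat)  # points at the source line that allocated `data`

tracemalloc.stop()
```

Because tracemalloc records the Python-level line of each allocation, it answers "where did this memory come from?" rather than just "how much is in use?" — the question memray then extends to native allocations.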

The Technically Human Podcast
Debugging Division: The Architecture of Bridge-Building Social Media

The Technically Human Podcast

Play Episode Listen Later Oct 25, 2024 53:01


Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate. Vardon Hamdiu is the co-founder and head of Sparkable, a young nonprofit organization that builds a social media platform aimed at bridging divides. Growing up immersed in diverse cultures, Vardon has always been a bridge-builder who navigates between worlds. His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion. Over the past decade, Vardon has worked on the communications team of a Swiss President, studied to become a teacher, spent an exchange semester in South Africa, and engaged with refugees facing often traumatic circumstances. These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe. He became deeply passionate about exploring why today's social media platforms are often dysfunctional and how these powerful systems, which govern our collective attention, could be constructed differently. Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch Sparkable, aiming to foster a healthier online environment.

Smart Software with SmartLogic
Creating the Igniter Code Generation Framework with Zach Daniel

Smart Software with SmartLogic

Play Episode Listen Later Oct 17, 2024 52:55


To kick off Elixir Wizards Season 13, The Creator's Lab, we're joined by Zach Daniel, the creator of Igniter and the Ash framework. Zach joins hosts Owen Bickford and Charles Suggs to discuss the mechanics and aspirations of his latest brainchild, Igniter—a code generation and project patching framework designed to revolutionize the Elixir development experience. Igniter isn't just about generating code; it's about generating smarter code. By leveraging tools like Sourceror and Rewrite, Igniter allows developers to modify source code and batch updates by interacting directly with Elixir's AST instead of regex patching. This approach streamlines new project setup and package installation, and enhances the overall workflow. They also discuss the strategic implications of Igniter for the broader Elixir community. Zach hopes Igniter will foster a more interconnected and efficient ecosystem that attracts new developers to Elixir and caters to the evolving needs of seasoned Elixir engineers.

Topics discussed in this episode:
Advanced package installation and code generation improve the developer experience
Scripting and staging techniques streamline project updates
Innovative methods for smoother installation processes in Elixir packages
High-level tools apply direct patches to source code
Progressive feature additions simplify the mix phx.new experience
Chaining installers and composing tasks for more efficient project setup
Continuous improvement in developer experiences to boost Elixir adoption
Encouraging listeners to collaborate by sharing code generation patterns
Introduction of a new mix task aimed at removing the "unless" keyword in preparation for Elixir 1.18

You can learn more in the upcoming book "Building Web Applications with Ash Framework" by Zach and Rebecca.

Links mentioned:
https://smartlogic.io/
https://alembic.com.au/blog/igniter-rethinking-code-generation-with-project-patching
https://hexdocs.pm/igniter/readme.html
https://github.com/ash-project/igniter
https://www.zachdaniel.dev/p/serialization-is-the-secret
https://www.zachdaniel.dev/p/welcome-to-my-substack
https://ash-hq.org/
https://hexdocs.pm/sourceror/readme.html
https://smartlogic.io/podcast/elixir-wizards/s10-e09-hugo-lucas-future-of-elixir-community/
https://github.com/hrzndhrn/rewrite
https://github.com/zachdaniel
https://github.com/liveshowy/webauthn_components
https://hexdocs.pm/elixir/Regex.html
https://github.com/msaraiva/vscode-surface
https://github.com/swoosh/swoosh
https://github.com/erlef/oidcc
https://alembic.com.au/
https://www.zachdaniel.dev/
Special Guest: Zach Daniel.
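Igniter does its patching on Elixir's AST rather than with regexes over source text. The same idea can be sketched with Python's standard library: parse, transform the tree, and print the code back out. The old_api/new_api rename here is hypothetical, standing in for a real migration (like the "unless" removal task mentioned in the episode):

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Rewrite every call to a hypothetical `old_api` into `new_api`."""
    def visit_Call(self, node):
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_api":
            node.func = ast.Name(id="new_api", ctx=ast.Load())
        return node

source = "result = old_api(1, 2)\n"
tree = RenameCall().visit(ast.parse(source))
ast.fix_missing_locations(tree)
patched = ast.unparse(tree)
print(patched)  # result = new_api(1, 2)
```

The advantage over regex patching is that the transform only matches actual call sites: an `old_api` inside a string literal or comment is untouched, which is exactly the kind of false positive text-based rewrites trip over.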

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Lex Fridman Podcast

Play Episode Listen Later Oct 6, 2024 157:38


Aman Sanger, Arvid Lunnemark, Michael Truell, and Sualeh Asif are creators of Cursor, a popular code editor that specializes in AI-assisted programming. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep447-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript: https://lexfridman.com/cursor-team-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Cursor Website: https://cursor.com
Cursor on X: https://x.com/cursor_ai
Anysphere Website: https://anysphere.inc/
Aman's X: https://x.com/amanrsanger
Aman's Website: https://amansanger.com/
Arvid's X: https://x.com/ArVID220u
Arvid's Website: https://arvid.xyz/
Michael's Website: https://mntruell.com/
Michael's LinkedIn: https://bit.ly/3zIDkPN
Sualeh's X: https://x.com/sualehasif996
Sualeh's Website: https://sualehasif.me/

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Encord: AI tooling for annotation & data management. Go to https://encord.com/lex
MasterClass: Online classes from world-class experts. Go to https://masterclass.com/lexpod
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex
AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(09:25) - Code editor basics
(11:35) - GitHub Copilot
(18:53) - Cursor
(25:20) - Cursor Tab
(31:35) - Code diff
(39:46) - ML details
(45:20) - GPT vs Claude
(51:54) - Prompt engineering
(59:20) - AI agents
(1:13:18) - Running code in background
(1:17:57) - Debugging
(1:23:25) - Dangerous code
(1:34:35) - Branching file systems
(1:37:47) - Scaling challenges
(1:51:58) - Context
(1:57:05) - OpenAI o1
(2:08:27) - Synthetic data
(2:12:14) - RLHF vs RLAIF
(2:14:01) - Fields Medal for AI
(2:16:43) - Scaling laws
(2:25:32) - The future of programming

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023, and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation. However, the larger venue and more spread-out schedule did allow a lot more hallway conversations with attendees, as well as more community presentations, including our recent guest Alistair Pullen of Cosine, and deeper dives from OpenAI, including our recent guest Michelle Pokrass of the API Team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full, lightly edited Q&A with Sam Altman.

Show notes and related resources (some of these are used in the final audio episode below):
* Simon Willison Live Blog
* swyx live tweets and videos
* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session
* Fireside Chat Q&A with Sam Altman

Timestamps:
* [00:00:00] Intro by Suno.ai
* [00:01:23] NotebookLM Recap of DevDay
* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling
* [00:19:16] Olivier Godement, Head of Product, OpenAI
* [00:36:57] Romain Huet, Head of DX, OpenAI
* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison
* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)
* [01:18:31] Sam Altman + Kevin Weill Q&A
* [02:03:07] Notebook LM Recap of Podcast

Transcript

[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT, 4. 0 in flight. Fine tune the future, data in sight.
Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral notebook LM Deep Dive crew, my new AI podcast nemesis, Give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about open ais. Dev day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM 2: Seems[00:01:34] NotebookLM: like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI. 
Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. 
And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? 
And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic. 
Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different approach.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that? We[00:07:41] NotebookLM 2: did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you. 
It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, And Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just Making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it. 
At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Biggio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered[00:10:14] swyx: strawberries.[00:10:15] Ilan: Wait,[00:10:16] swyx: how much[00:10:16] Ilan: would that be? I think that'll be around, like, $1,415.92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for 400 chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want 400 strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Latent Space. Oh, thank you. 
I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was building the demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. 
Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. 
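The function-call event flow Ilan describes earlier (a call starting, argument deltas streaming in, then a final event that conveniently carries the full function) can be sketched roughly like this. The event names below are illustrative stand-ins, not the Realtime API's exact names:

```python
# Sketch of turning a stream of function-call events into one complete call.
# Event names are hypothetical, modeled on the started / argument-delta /
# done flow described in the conversation.

def accumulate_function_call(events):
    """Collect argument deltas until a 'done' event, which carries the
    full arguments, arrives."""
    name = None
    parts = []
    for event in events:
        if event["type"] == "function_call.started":
            name = event["name"]
        elif event["type"] == "function_call.arguments.delta":
            parts.append(event["delta"])
        elif event["type"] == "function_call.done":
            # The final event carries the complete call, so the client can
            # use it directly instead of joining the deltas itself.
            return {"name": event["name"], "arguments": event["arguments"]}
    # Fallback: reconstruct from the deltas if the stream ended early.
    return {"name": name, "arguments": "".join(parts)}
```

As Ilan notes, handling this is not far from assistant or chat-completion streaming: accumulate deltas, or just take the final event with the full payload.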
Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then every function call also invokes, like, a generation. And so that is another difference here. It's like it's inherently almost like in a loop, just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played with that too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome. 
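The "routines" pattern described here, where the steps live in the system prompt and the prompt can be swapped mid-session, might look roughly like this sketch. The `session.update` payload shape is a hypothetical stand-in for illustration, not the exact wire format:

```python
# Sketch of the routines idea: steps rendered into the system prompt,
# plus a hypothetical event for changing the prompt mid-conversation.

def routine_prompt(steps):
    """Render a list of steps into a prompt the model follows softly."""
    lines = ["Follow these steps in order:"]
    lines += [f"Step {i}: {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

def update_instructions(prompt):
    """Event a client could send over the socket to swap the system
    prompt halfway through a session."""
    return {"type": "session.update", "session": {"instructions": prompt}}
```

Because the session persists, no explicit loop or state machine is needed on the client: each user turn triggers a generation against whatever instructions are currently set.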
I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on ChatGPT. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well. 
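The "command architecture" Ilan describes, where audio responses are cut out and every response is forced to be a function call, might be configured along these lines. The field names are assumptions modeled on the conversation, not a guaranteed API surface:

```python
# Sketch of a session configured for reliable speech-to-command use:
# no audio out, every turn forced to produce a function call.
# Field names are hypothetical.

def command_mode_session(tools):
    return {
        "type": "session.update",
        "session": {
            "modalities": ["text"],     # suppress audio responses; speech in still works
            "tool_choice": "required",  # every response must be a function call
            "tools": tools,
        },
    }
```

Removing the voice persona from the output side is what makes this reliable enough for UI control: the model can only act, not improvise an answer.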
Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: Prompt caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, Anthropic's caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in the day like from GPT 4 to GPT 3.5.[00:18:19] swyx: And got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed. 
I'm sure that was part of you and Romain, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire API team for like Debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The Latent Space crew then talked to Olivier Godement, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godement. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London. 
To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the hole in 2022, when like, you know, like, people were trying to build on ChatGPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case that can be enabled. 
And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had prompt caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. They're all fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigations work well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency? 
We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Stripe, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? Because when you do evals and fine tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that pit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily. 
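Olivier's prompt-caching design above, automated with zero code change, works because the provider can only reuse computation for the part of a request that is identical to a previous one. A toy illustration of why stable content (system prompt, tool definitions) should precede volatile content (the latest user message):

```python
# Toy illustration of prefix-based prompt caching. Only the common leading
# segments of two requests are reusable, so ordering matters. The request
# segments below are made-up examples.

def shared_prefix_len(a, b):
    """Length of the common leading run of two segment lists."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

req1 = ["SYSTEM: You are a support bot.", "TOOLS: lookup_order", "USER: hi"]
req2 = ["SYSTEM: You are a support bot.", "TOOLS: lookup_order", "USER: where is my order?"]
# The first two segments match, so that prefix is cacheable across requests.
```

This is only a sketch of the principle; the real cache operates on tokens and has its own matching and expiry rules.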
For vision let's say I'm like trying to, one of your examples was Grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know Testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right? 
Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like, off the shelf, like GPT-4o essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have an[00:26:15] Olivier Godement: automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. But I think, like, that evaluation and distillation products are essentially a first good step in that direction. It's like, hey, it's you. I set it by that direction, and you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples Like, your bill will get increased, like, you know, pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one. 
Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of a two birds one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLMs, building with LLMs, is like, Yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually be able to eval, understand, like, all the corner cases, And you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust which I think is one of your partners. Mm-Hmm. . They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: YC is kind of like a low code experience, right, for evals. 
Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like eval. But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, Caching or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whenever I talk to developers, I feel like good models are just half of the story to build a good app. There's a ton more you need to do. Evaluation is the perfect example. Like, you know, you can have the best model in the world If you're in the dark, like, you know, it's really hard to gain the confidence and so Our philosophy is[00:30:11] Olivier Godement: The whole like software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that open AI can build everything. Like there is just too much to build, frankly. 
And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, our evaluation, because we think that it actually makes sense to have like in one spot, Like, you know, all of that. Like, there is some sort of virtual circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked well Well, I mean, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense. 
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly your first and make it feel, like, you know, an actual good, like, you know, product experience. 
So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one people, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The model spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script I want to say, like, will go on, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal or against our terms of service. But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the direction. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now is you, it is whatever you decide. And then it's, that's it. 
That, that, that would be the main reason I don't use OpenAI voice is because of[00:34:42] Olivier Godement: it's over police. Over refuse over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the OpenAI direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, Investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, Where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10, 000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so: is O1 smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback.
Essentially what they would love to do is for developers I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that Usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate appreciate you guys putting this on.
I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a ChatGPT app to get the plan with O1, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for, you know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah.
I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like ChatGPT side by side. But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up Xcode, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development have become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right?
And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is mini or preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If, for your kind of brainstorming or your kind of science part, you need some broader knowledge, then reaching for O1 Preview is better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice?
And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service? Like center, that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. 
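The function calling Romain describes rides on JSON events exchanged over the WebSocket connection. As a rough sketch, not the official client: the event type and JSON-Schema tool shape below follow the beta Realtime API announcement (`session.update`, function tools with a `parameters` schema), but treat the exact field names as assumptions and check the current documentation before relying on them.

```python
import json

# Sketch of the JSON events a Realtime-API-style WebSocket client might
# send to register a callable tool. Field names are assumptions based on
# the beta docs at announcement time, not a verified wire format.

def tool_definition(name, description, parameters):
    # Function tools are declared with a JSON-Schema parameter object,
    # much like tools in the regular Chat Completions API.
    return {"type": "function", "name": name,
            "description": description, "parameters": parameters}

def session_update_event(tools):
    # Configure the session, including the tools the model may call.
    return {"type": "session.update", "session": {"tools": tools}}

# Hypothetical tool mirroring the demo's Twilio phone-call integration:
place_call = tool_definition(
    "place_phone_call",
    "Place an outbound phone call via a telephony provider (e.g. Twilio).",
    {"type": "object",
     "properties": {"to": {"type": "string"}},
     "required": ["to"]},
)

event = session_update_event([place_call])
wire_bytes = json.dumps(event)  # what would go over the socket
print(json.loads(wire_bytes)["session"]["tools"][0]["name"])  # place_phone_call
```

When the model decides to call the tool mid-conversation, the server streams back a function-call event with arguments, and the client executes the call (here, the phone dial) and feeds the result back in; that round trip is what made the on-stage demo work.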
But we think that for instance, if you want to have something very robust, On your client side, maybe you want to have WebRTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like LiveKit and Agora if you want to, if you want to use them. And I'm sure we'll have many mores in the, in many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the ChatGPT app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and reference clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LiveKit and Agora, so they also have their own, like ways to help you get started that plugs natively with the real time API. So depending on the use case, people can, can can decide what to use. If you're working on something that's completely client or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API endpoint or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think Dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is.
Thank you very much for joining us on the Spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback, from high-level feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us to ship the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps that have you know, you, you want high, like like low latency, low cost, but high performance, high quality on the use case distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from 4o to 4o mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case, inspired by an email client. Yeah, this was really good. Cool man! Thanks so much for having me.
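The distillation workflow swyx sat in on reduces to: capture a large model's answers on your real prompts, then fine-tune a smaller model on those pairs. A minimal data-shaping sketch, assuming OpenAI's chat fine-tuning JSONL format (the helper names are ours, and the actual platform also offers stored completions and evals around this step):

```python
import json

# Shape (prompt, teacher_output) pairs into chat fine-tuning JSONL.
# The {"messages": [...]} record format mirrors OpenAI's fine-tuning
# docs; treat details beyond that as assumptions of this sketch.

def to_finetune_record(prompt, teacher_output):
    # One supervised example: the student model learns to reproduce
    # the larger teacher model's answer to the same prompt.
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": teacher_output},
    ]}

def build_distillation_file(pairs):
    # One JSON object per line, as fine-tuning endpoints expect.
    return "\n".join(json.dumps(to_finetune_record(p, o)) for p, o in pairs)

# Tiny invented example pairs standing in for logged production traffic:
pairs = [("Summarize: the cat sat.", "A cat sat."),
         ("Translate 'hi' to French.", "Salut.")]
jsonl = build_distillation_file(pairs)
print(len(jsonl.splitlines()))  # 2
```

The file would then be uploaded and used to fine-tune the smaller model; the "2% hit" swyx mentions is the kind of quality delta you would measure with evals on a held-out slice of the same traffic.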
Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced. She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day returning guest Michelle and then returning guest co host Fork.[00:48:03] Alessio: Fork, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using like GPT 4, it wrote me the JavaScript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a Cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into Cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Claude and ChatGPT. Yeah.
It's interesting.[00:48:39] swyx: Yeah. I've converted to Cursor and O1 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code Copilot, so Yep, same here. Team Copilot. Copilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and may the best fork win.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hackathon for the API team.
And some folks built like a little hack that you could use to, like, drive vim with voice mode, so like, control vim, and you would tell it, like, write a file and it would, you know, know all the vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to. That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably

Beyond UX Design
97. Debugging Your Brain: A Software Engineer's Framework for Design Career Success with Dagna Bieda

Beyond UX Design

Play Episode Listen Later Sep 24, 2024 65:29


Dagna Bieda discusses her book “Brain Refactor” and how tech professionals can optimize their mindset for success. She explores common career obstacles and strategies for overcoming them. The post 97. A Software Engineer's Framework for Design Career Success appeared first on Beyond UX Design.


Syntax - Tasty Web Development Treats
825: Syntax Assistant Desktop App

Syntax - Tasty Web Development Treats

Play Episode Listen Later Sep 23, 2024 28:34


Scott walks Wes through the new Syntax Production Assistant Desktop App, designed to streamline and automate their complex publishing process. From tech stack choices like Svelte5 and Rust to AI-driven features, they dive into how this tool keeps everything consistent. Show Notes 00:00 Welcome to Syntax! 00:44 Brought to you by Sentry.io. 01:37 What was the idea? 05:42 The tech. Svelte5, Tauri, Rust, FFMPEG. 08:32 Markdown editor. ink-mde, Dillinger. 09:32 Epoch timestamps. Epoch.vercel. 10:01 Updating front-matter. 10:10 Dexie.js function. 11:25 Backing up data. 11:58 Rust functions. 12:58 Why a desktop app and not a website? 14:38 Some small AI features. 16:26 Challenges with OAuth. 20:03 Publishing challenges. 23:29 Could this work on Windows? 23:54 Debugging. 26:23 Deciphering Apple logs. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads