Podcasts about Xcode

An IDE containing tools for developing apps for iOS, iPadOS, macOS, watchOS, and tvOS

  • 341 PODCASTS
  • 1,485 EPISODES
  • 48m AVG DURATION
  • 1 WEEKLY EPISODE
  • Apr 8, 2025 LATEST

Latest podcast episodes about Xcode

App Masters - App Marketing & App Store Optimization with Steve P. Young

App Icon A/B Testing in Xcode (Step-by-Step Tutorial) — Increase App Downloads with Better Icons!

Wondering how to A/B test your iOS app icons to see which one drives more downloads? In this video, we break down everything you need to know — from Xcode setup to App Store Connect to running a real A/B test for your app icon.
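The App Store Connect side of such a test typically needs no code, but the binary does have to ship the candidate icons. Here is a minimal Swift sketch of the in-app piece, assuming the alternate icon is declared in the asset catalog and exposed to the system (for example via the "Include All App Icon Assets" build setting); the icon name "AppIcon-Blue" is purely illustrative.

```swift
import UIKit

// Sketch: switch to an alternate icon that was bundled with the app.
// The icon must be registered with the system (asset catalog plus build
// setting, or CFBundleAlternateIcons in Info.plist) before this works.
func applyCandidateIcon() {
    guard UIApplication.shared.supportsAlternateIcons else { return }
    UIApplication.shared.setAlternateIconName("AppIcon-Blue") { error in
        if let error {
            print("Could not switch icon: \(error)")
        } else {
            print("Alternate icon is now active")
        }
    }
}
```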

Point-Free Videos
SQL Builders: Joins

Apr 7, 2025 • 49:21


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- We dive into the "relational" part of relational databases by learning how tables can reference one another, the various ways queries can join these relations together, and even how to aggregate nuanced data across these relations, all without ever hopping over to Xcode.
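For readers who want to see the relational piece concretely, here is a minimal sketch of the kind of join-plus-aggregation query the episode describes, written against SQLite's C API directly rather than the episode's Swift query builder; the players/scores schema and values are made up for illustration.

```swift
import SQLite3  // available on Apple platforms

// Two related tables: scores.playerID references players.id.
var db: OpaquePointer?
sqlite3_open(":memory:", &db)
sqlite3_exec(db, """
  CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE scores  (id INTEGER PRIMARY KEY,
                        playerID INTEGER REFERENCES players(id),
                        points INTEGER);
  INSERT INTO players (id, name) VALUES (1, 'Blob'), (2, 'Blob Jr');
  INSERT INTO scores (playerID, points) VALUES (1, 10), (1, 25), (2, 5);
  """, nil, nil, nil)

// LEFT JOIN keeps players that have no scores; SUM aggregates per player.
let query = """
  SELECT players.name, COALESCE(SUM(scores.points), 0) AS total
  FROM players
  LEFT JOIN scores ON scores.playerID = players.id
  GROUP BY players.id
  ORDER BY total DESC;
  """
var statement: OpaquePointer?
if sqlite3_prepare_v2(db, query, -1, &statement, nil) == SQLITE_OK {
    while sqlite3_step(statement) == SQLITE_ROW {
        let name = String(cString: sqlite3_column_text(statement, 0))
        print("\(name): \(sqlite3_column_int(statement, 1))")
    }
}
sqlite3_finalize(statement)
sqlite3_close(db)
```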

Side Project Spotlight
#85: Bento Fit - Vibe Coding Our Way to an MVP

Mar 31, 2025 • 71:53


In this extra long episode, we review progress made in domain modeling, Swift Charts, and a Tron-esque UI! Plus, Steve discusses his experiences using the ChatGPT macOS app and Cursor with Xcode before we try to do some "vibe coding" ourselves to add a small UI feature. There are a lot of quotable moments in this episode. It's a fun one!

## Show Notes
- BentoFit: The Road So Far…
  - Kotaro: UI Updates including Dark Mode!
  - Aaron: Swift Chart!
  - Steve: More metrics, refactored HealthKit, domain modeling begun!
- How to Use Cursor and Xcode Together
- Vibe Coding Demo with Cursor and Xcode
- Next Time
  - Use up Cursor tokens and get more domain modeling done and caching of HealthKit data
  - Add/remove bento items in the UI
  - Bento zoom transition to detail screen
  - Basic settings we can configure
  - More charts, maybe even configurable!
- Wrap-Up - http://phillycocoa.org
- Side Project Shout Out - https://stylebookapp.com

## Chapters
00:00:00 Introductions
00:02:08 BentoFit: The Road So Far...
00:10:28 Kotaro Fixed Dark Mode!
00:18:25 Aaron Made a Chart!
00:22:00 Steve Started Domain Modeling!
00:24:11 Using ChatGPT macOS App with Xcode
00:26:10 Using Cursor with Xcode
00:35:32 Reviewing the Cursor Chat Log
00:40:50 The Trio Vibe Code with Cursor and Xcode
00:50:06 How to Avoid Shooting Yourself in the Foot
00:58:16 Next Time
01:07:26 Wrap-Up
01:07:44 Side Project Shout Out: Stylebook App
01:11:48 Tag

Intro music: "When I Hit the Floor", © 2021 Lorne Behrman. Used with permission of the artist.

Swift over Coffee
S4E5: Could you just…?

Mar 23, 2025 • 33:05


'Networking' is a word to strike fear into the heart of any developer, and upsettingly we're dealing with both types this episode: talking to other humans at conferences, but mostly trying to coax computers to talk to each other too.

Plus, our Open Ballot this episode is the trifling little matter of the major changes you're hoping to see in Xcode, SwiftUI, SwiftData and more as WWDC25 rolls around.

Essential links from the episode:
- Apple Intelligence delay, original announcement: https://daringfireball.net/2025/03/apple_is_delaying_the_more_personalized_siri_apple_intelligence_features
- Gruber's rant: https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino
- Apple introduces age-checking systems: https://techcrunch.com/2025/02/27/apple-introduces-new-child-safety-initiatives-including-an-age-checking-system-for-apps/
- Testing Workgroup: https://www.swift.org/testing-workgroup/

Conferences:
- Swift Heroes (8–9 April): https://swiftheroes.com/2025/
- try! Swift Tokyo (9–11 April): https://tryswift.jp/

Conference organizers: we'd love to feature more events here on a regular basis. Get in touch with us when early bird tickets go on sale, or when you announce speakers or something else, and we'll do our best to feature you!

CacaoCast
Épisode 290 - Nouveaux Macs, Swift-build, Cmd-Maj-O, Jellyfin

Mar 21, 2025 • 46:44


Welcome to the two hundred and ninetieth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics:
- New Macs - Mac Studio M4 Max & M3 Ultra + MacBook Air M4
- Swift-build - now open source
- Cmd-Shift-O - works on DocC sites
- Jellyfin - now on iOS
Listen to this episode

Intego Mac Podcast
Episode 387: Defense in Depth

Mar 13, 2025 • 30:01


Apple's latest OS updates patch a serious vulnerability, one that was already supposed to be patched. The tide may be turning in how governments view backdoor access to encrypted data. And Apple has announced it's delaying new personalized Siri features by another year. We have views.

Show Notes:
- Apple and Google patch zero-day vulnerability used to hack iPhones
- France rejects controversial encryption backdoor provision
- Apple challenging legality of UK's secret demand to globally compromise iCloud encryption
- 10 Ways End-to-End Encryption Protects Your Data, Your Privacy, and Your Bank Balance
- Don't expect cheaper iCloud storage as Apple wins another monopoly lawsuit
- New XCSSET malware adds new obfuscation, persistence techniques to infect Xcode projects
- Mac malware exposed: XCSSET, an advanced new threat
- US cities warn of wave of unpaid parking phishing texts
- Apple Is Delaying the 'More Personalized Siri' Apple Intelligence Features
- Apple Adds Disclosure About Delayed Siri Features to iPhone 16 Pages

Intego Mac Premium Bundle X9 is the ultimate protection and utility suite for your Mac. Download a free trial now at intego.com, and use this link for a special discount when you're ready to buy.

iWeek (la semaine Apple)
Le vrai nouveau Siri boosté à l'IA : dans un an seulement, vraiment ?

Mar 11, 2025 • 96:59


Support us at patreon.com/iweek! This is episode 223 of iWeek (la semaine Apple), the podcast.

The real new AI-boosted Siri: only a year away, really?

Recorded Tuesday, March 11, 2025 at 5:30 p.m., with the recording available live on X, YouTube, Twitch and LinkedIn Live. Hosted by Benjamin Vincent, with Elie Abitbol (formerly of MCS and former president of the Apple Premium Resellers in France) and Gilles Dounès. Guest: Frédéric Crassat, training engineer at Orange and expert in components and processors.

On the agenda for episode 223: the real new Siri will not arrive in May or June but "within the coming year." The official word came last Friday from Apple's US spokesperson, which means Siri 2.0 may not arrive until March 2026 with iOS 19.4... At the same time, as if to paper over the problem, we learn that iOS 19 could coincide with a completely new design for the iPhone's operating system.

30% faster than the M2 Ultra and 51% faster than the M1 Ultra, it's confirmed: the new M3 Ultra processor arriving this Wednesday in the new Mac Studio is indeed Apple's most powerful processor to date. Probably more powerful than any M4 will ever be, which is still a problem. Frédéric Crassat will help us make sense of it.

Then, the ChatGPT revolution for developers in the Apple ecosystem: version 4.5 can now write directly in Xcode. Florent Morin, our developer friend, is back to tell us what this new OpenAI feature changes.

Finally, don't miss the update on the delayed switch to iWeek's new Patreon page: Patreon had promised us it would happen on March 3. And on March 3... nothing happened. Patreon didn't warn us, so we asked for an explanation. Benjamin shares it with you, and he is frankly not happy! As a result, no bonus this week. Hopefully next week!

See you next Tuesday, March 18, 2025, for episode 224, recorded live from 5:30 p.m. on X, YouTube, Twitch and LinkedIn Live!

Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

In Touch with iOS
348 - Spatial Galleries & Apple's Shiny New Gear

Mar 8, 2025 • 87:03


The latest In Touch With iOS with Dave, who is joined by guests Chuck Joiner, Marty Jencius, Jeff Gamet, and Ben Roethig. Dave and the crew kick off this episode with a lively discussion of Apple's latest iPad and Mac updates, covering Vision Pro and visionOS 2.4 beta 2, which introduces Apple Intelligence, writing tools, and Genmoji. They compare ChatGPT to Apple Intelligence and explore the new Spatial Gallery app. Marty shares his experience with the iPhone 16e, and the team dives into iOS. The show notes are at InTouchwithiOS.com

Direct Link to Audio

Links to our Show
- Give us a review on Apple Podcasts! CLICK HERE, we would really appreciate it!
- Another way to support the show is to become a Patreon member: patreon.com/intouchwithios
- Website: In Touch With iOS
- Click this link Buy me a Coffee to support the show, we would really appreciate it: intouchwithios.com/coffee
- YouTube Channel
- In Touch with iOS Magazine on Flipboard
- Facebook Page
- BlueSky, Mastodon, X, Instagram, Threads, Spoutible

Summary, Topics and Links
This episode features an engaging team, including returning guest Chuck Joiner, the insightful Ben Roethig, and the always entertaining Marty Jencius. Together, they dive deep into the whirlwind of Apple announcements from the past week, while bringing their own unique perspectives to the conversation. The episode kicks off with an enthusiastic greeting, where Dave expresses his excitement for the crew's participation this week, especially welcoming Chuck back and expressing joy over Ben's return after a busy work stint. Marty humorously sets the tone with a relatable anecdote about tedious meetings, before the group transitions to discussing Apple's significant announcements and updates, particularly focusing on Vision Pro and its latest features. A highlight this week is the second beta release of visionOS 2.4, which introduces Apple Intelligence, writing tools, Genmoji, and more. Dave and the crew delve into firsthand experiences using these emerging features. The conversation flows through various aspects, comparing ChatGPT to Apple Intelligence and weighing the pros and cons of both regarding user experience. The group mentions the excitement surrounding the new dedicated Spatial Gallery app while also highlighting the need for further improvements. As they navigate the nitty-gritty of Apple's updates, the discussion transitions to the iPhone 16e, where Marty shares his delightful experience with this new, more affordable model. He praises its solid performance, camera capabilities, and ergonomics. The group expresses collective thoughts on how the 16e, while aimed at budget-conscious consumers, meets many users' everyday needs, ultimately making it a worthy contender in the smartphone market. The crew also engages in thoughtful dialogue about the iOS 18.4 beta 2 updates. Interesting features such as priority notifications and enhancements to the Shortcuts app steal the show. Yet they also critique the absence of Apple Intelligence in lower-end models, prompting a look at how Apple positions its products within the market. Staying true to the audience's needs, the crew explores the recent Mac announcements, including a new iPad Air with an M3 chip, new MacBook Air models in delightful colors, and the powerhouse Mac Studio featuring the M4 Max.
They debate the implications of these upgrades, with Marty humorously revealing his impulse decision to acquire the Mac Studio, much to the amusement of his friends. Through collaborative discussion, the group reflects on Apple's evolving strategies, assessing where the company stands regarding cutting-edge technology and consumer needs. Last but not least, the episode dives into news bites that capture attention, such as Tapbots' forthcoming release of the Phoenix app for Bluesky and police in Australia utilizing CarPlay for vehicle recognition. These stories not only provide a fun wrap-up to the episode but also highlight tech's growing intersection with everyday life and law enforcement practices.

In Touch With Vision Pro this week:
- Apple Seeds Second Betas of visionOS 2.4, tvOS 18.4, and watchOS 11.4
- Hands on with Apple Intelligence on Apple Vision Pro
- Vision Pro Spatial Gallery Launches in visionOS 2.4 Beta 2
- Vision Pro App for iPhone Available in iOS 18.4 Beta 2

Marty has the new iPhone 16e. He gives his review of this new iPhone.
- iFixit Takes Apart iPhone 16e for Closer Look at C1 Modem

Beta this week:
- Everything New in iOS 18.4 Beta 2
- Apple Seeds Second Public Betas of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4
- iOS 18.4 Brings RCS Support to Google Fi, Mint Mobile and Other T-Mobile MVNOs
- iOS 18.4 Adds Apple Intelligence Features to Control Center
- iOS 18.4 upgrades the App Store with these two new features

New iPads were released this week:
- Apple Announces New iPad Air With M3 Chip, Updated Magic Keyboard
- Apple Announces Redesigned Magic Keyboard for iPad Air
- M2 iPad Air vs. M3 iPad Air Buyer's Guide
- Apple Unveils 11th-Gen iPad With A16 Chip and More Storage
- Here Are 8 Things to Know About Apple's New iPad With the A16 Chip
- New Entry-Level iPad With A16 Chip Has More RAM Than iPad 10
- Apple's 64GB Era is Over

In Touch With Mac this week:
- Apple Seeds Second Beta of macOS Sequoia 15.4 With Mail Categorization
- Apple Announces New MacBook Air With M4 and 'Sky Blue' Color Option
- M2 vs. M3 vs. M4 MacBook Air Buyer's Guide: 25+ Differences Compared
- Apple Announces New Mac Studio With M4 Max and M3 Ultra Chips, Thunderbolt 5, and More
- A Maxed Out M3 Ultra Mac Studio Will Cost You $14,099
- New MacBook Air and Mac Studio Offer Easy Setup With a Nearby iPhone
- M3 vs. M4 Chip Buyer's Guide: How Much Better Really Is M4?
- Apple Discontinues M2 and M3 MacBook Air
- The Mac's Space Gray era is officially over
- Everything Apple Announced This Week - MacRumors News
- Tapbots Making 'Phoenix' App for Bluesky
- ChatGPT can now directly edit code in Xcode, VS Code, & more on macOS
- Police in Australia are using CarPlay in an interesting way

Announcements
Macstock 9 is here for 3 days on July 11, 12, and 13, 2025. Super Early Bird tickets here save $100. Register | Macstock Conference & Expo. Book your room with a Macstock discount here: Location | Macstock Conference & Expo

Our Host
Dave Ginsburg is an IT professional supporting Mac, iOS and Windows users who shares his wealth of knowledge of iPhone, iPad, Apple Watch, Apple TV and related technologies. Visit the YouTube channel https://youtube.com/intouchwithios and follow him on Mastodon @daveg65, and the show @intouchwithios

Our Regular Contributors
Jeff Gamet is a podcaster, technology blogger, artist, and author. Previously, he was The Mac Observer's managing editor, and Smile's TextExpander Evangelist.
You can find him on Mastodon @jgamet as well as Twitter and Instagram as @jgamet. His YouTube channel: https://youtube.com/jgamet

Marty Jencius, Ph.D., is a professor of counselor education at Kent State University, where he researches, writes, and trains about using technology in teaching and mental health practice. His podcasts include Vision Pro Files, The Tech Savvy Professor and Circular Firing Squad Podcast. Find him at jencius@mastodon.social and at https://thepodtalk.net

Ben Roethig is a former Associate Editor of GeekBeat.TV and host of the Tech Hangout and Deconstruct with Patrice. Mac user since the mid 90s. Tech support specialist. Twitter @benroethig. Website: https://roethigtech.blogspot.com

About our Guest
Chuck Joiner is the host of MacVoices and hosts video podcasts with influential members of the Apple community. Make sure to visit macvoices.com and subscribe to his podcast. You can follow him on Twitter @chuckjoiner and join his MacVoices Facebook group.

Mac Folklore Radio
Jonathan Schwartz - Good Artists Copy, Great Artists Steal (2010)

Mar 5, 2025 • 11:59


What to say when Steve Jobs threatens to sue you. Original text by Jonathan Schwartz. More about Lighthouse Design's Concurrence courtesy of the Apple Wikia instance. Sun famously sued Microsoft over their incompatible Java implementation in 1997. Microsoft settled by paying Sun a bunch of money. Please enjoy this Flash animation shown at JavaOne 2004 retelling the story. Steve Jobs quotes from Triumph of the Nerds, WWDC 1997 Q&A, and Macworld San Francisco 2003. In 1999, Sun Microsystems acquired StarDivision and its StarOffice product, which Sun open sourced and renamed OpenOffice.org. After some entirely predictable grief from Oracle, the community forked the project and delivered what we know today as LibreOffice. Apple adopted Sun's dynamic system-wide tracing and performance profiling framework DTrace, known as Instruments in Xcode's collection of tools. Apple announced Snow Leopard Server would ship with Sun's ZFS, but that ultimately never happened for licensing and patent reasons. Whether Sun's soon-to-be acquisition by Oracle and the Steve Jobs/Larry Ellison relationship would have helped or hindered this, we'll never know. Either way, Apple, I know you're reading this and I'd like APFS to checksum my data blocks too, not just the metadata. Thank you. Jonathan Schwartz and Scott McNealy quotes from Sun's NC03-Q3 (2003) keynote and JavaOne 2004. See Project Looking Glass in action.

TestGuild Performance Testing and Site Reliability Podcast
Unlocking AI in DevOps with IBM's Watson xCode with Keri Olson

Mar 5, 2025 • 31:43


In this episode of the DevOps Toolchain podcast, host Joe Colantonio welcomes Keri Olson, Vice President of Product Management, AI for Code at IBM. Try SmartBear Insight Hub: https://testguild.me/insighthub Keri, an expert in AI-powered software development solutions, shares insights on the transformative impact of generative AI in various business functions, particularly in software development and IT processes. Discover how IBM's watsonx platform, designed to build and deploy generative AI across enterprises, is revolutionizing application modernization, especially for legacy systems like mainframes. Listen in as Keri explains the unique features of watsonx Code Assistant and how it's enabling businesses to streamline their software development lifecycle. Whether you're curious about deploying AI for code or understanding the future of AI in DevOps, this episode is packed with actionable insights and expert advice.

Merge Conflict
452: Building a 2MB iOS app in Swift & Xcode

Mar 3, 2025 • 40:45


Frank is off building amazing things for iOS and they are tiny, but feature rich. How did Frank build his latest app to be only 2MB in Swift?

Follow Us
- Frank: Twitter, Blog, GitHub
- James: Twitter, Blog, GitHub
- Merge Conflict: Twitter, Facebook, Website, Chat on Discord

Music: Amethyst Seer - Citrine by Adventureface

⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐

Machine transcription available on http://mergeconflict.fm

CacaoCast
Épisode 289 - Fireside Cocoa, iPhone 16e, MapleScan, Alternatives UE, Spices, Arm64-to-sim, Finder, BusySimulator, IA

Feb 28, 2025 • 70:22


Welcome to the two hundred and eighty-ninth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics:
- Fireside Cocoa - Philippe's impressions
- iPhone 16e - the cheapest of the iPhones?
- EU alternatives - looking for an alternative to American hegemony?
- MapleScan - made in Canada
- Spices - create debugging views in SwiftUI
- Arm64-to-sim - for stubborn frameworks
- Finder tip - rename several files at once
- BusySimulator - pretend to be busy
- AI tip - the F-word is still useful
Listen to this episode

CacaoCast
Épisode 288 - Bambu Lab, CotEditor, AppIconKit, AutoDock, Swift Concurrency

Jan 31, 2025 • 47:53


Welcome to the two hundred and eighty-eighth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics:
- Bambu Lab - the controversy
- CotEditor - an open-source editor
- AppIconKit - let your users change your application's icon
- AutoDock - hide your Dock based on the size of your screens
- Swift concurrency - a glossary
Listen to this episode

CacaoCast
Épisode 287 - Advent of Code, Bambu Lab, SwiftUI, Finder, Kodex, AquaUI

Jan 11, 2025 • 59:55


Welcome to the two hundred and eighty-seventh episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics:
- Advent of Code - Philippe's experience
- Bambu Lab - Philippe's Christmas present
- SwiftUI - an online session by Apple
- SwiftUI previews - they can take up a lot of space
- Application picker - keyboard shortcuts
- Kodex - a full-featured text editor for iOS with iCloud Drive
- AquaUI - a retro look for your Mac application
Listen to this episode

Front-End Fire
OpenAI Goes Pro, React 19 is Stable, and Figma's New Competitor Onlook

Dec 16, 2024 • 50:17


In our last news episode of the year, we share that React 19 is declared stable, just in time for the holidays. It's been a long road from release candidate in April to stability now, but it was well worth the wait. React 19 is packing a lot of features, including: Actions, hooks, form actions, the new use API, and of course, React Server Components and Server Actions.

OpenAI's been busy as well, introducing ChatGPT Pro, its $200 a month subscription for unlimited access to OpenAI o1 (the "reasoning" model), GPT-4o, and Advanced Voice mode. Additionally, the startup announced the ChatGPT desktop app for macOS can now read code in a handful of developer-focused coding apps, like VS Code, Xcode, Terminal, and iTerm2.

There's a new challenger to Figma for styling React-based code bases called Onlook. Onlook is a browser-based product studio that lets you design React code with Tailwind CSS using an easy-to-use interface just like you would in Figma.

News:
- Paige - Onlook, the power of Figma in your React app
- Jack - React 19 is stable
- TJ - OpenAI announcements: ChatGPT Pro and Work With Apps

Bonus News:
- Quantum Computing Inches Closer to Reality
- iOS 18.2 and Apple Intelligence

Fire Starters:
- Customizable Selects (Wes Bos video)

What Makes Us Happy this Week:
- Paige - Black Doves TV series and The Midnight Feast book
- Jack - Seestar-S50 All-in-one smart telescope
- TJ - Christmas Village in Grand Rapids, MI

Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or Tweet us on X @front_end_fire and BlueSky.
- Front-end Fire website
- Blue Collar Coder on YouTube
- Blue Collar Coder on Discord
- Reach out via email
- Tweet at us on X @front_end_fire
- Follow us on Bluesky @front-end-fire.com

Point-Free Videos
Tour of Sharing: App Storage, Part 2

Dec 9, 2024 • 26:34


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- We show how the `@Shared` property wrapper, unlike `@AppStorage`, can be used _anywhere_, not just SwiftUI views. And we show how `@Shared` has some extra bells and whistles that make it easier to write maintainable Xcode previews and avoid potential bugs around "string-ly" typed keys and default values.
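A rough sketch of that distinction, based on the Sharing library's public documentation (the model and key names here are illustrative, and the exact mutation API may vary by library version):

```swift
import Sharing    // Point-Free's Sharing package, added via SwiftPM
import SwiftUI

// @AppStorage is tied to SwiftUI views:
struct CounterView: View {
  @AppStorage("count") var count = 0
  var body: some View {
    Button("Count: \(count)") { count += 1 }
  }
}

// @Shared(.appStorage(...)) also works in a plain observable model, which
// keeps the logic testable and easy to drive from Xcode previews.
@Observable
final class CounterModel {
  @ObservationIgnored
  @Shared(.appStorage("count")) var count = 0

  func incrementTapped() {
    $count.withLock { $0 += 1 }  // mutations go through the shared reference
  }
}
```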

CacaoCast
Épisode 286 - Advent of code, MonBoard, Raspberry Pi, Mac mini, Festivitas

Dec 6, 2024 • 38:12


Welcome to the two hundred and eighty-sixth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics:
- Advent of Code - back for 2024 (follow Philippe!)
- MomBoard - for a loved one with cognitive decline
- Raspberry Pi - a Pico 2 with WiFi for $10?
- Mac Mini 3D - if you want one but don't need one
- Festivitas - happy holidays on your Mac
Listen to this episode

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Dec 2, 2024 • 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to Navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my bolt launch story and now if I say all that stuff.swyx: And I just wanted to come back to, like, the Webcontainers things, right? Like, I think you put a lot of weight on the technical modes. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on products. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has Bax E2B, which we'll have on at some point, talking about like the sort of the server full side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully gentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in bold, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in web container, you know, there's a lot of kind of stuff you go Google like, you know, turn docker container into wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker 2 awesome things will give you an image that's like out 60 to 100 megabits, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically the task involved is I understand that it's. Mapping every single, single Linux call to some kind of web, web assembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know audio drivers or you like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes . Yeah. You can just kind, you can, you can kind of tos them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
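The "fix it" loop Eric sketches above — capture the error plus project state, hand it to the model, apply the result, re-run — can be outlined in a few dozen lines. This is a minimal sketch, not Bolt's implementation (its real system prompts live in the open-source repo); the `CapturedError` telemetry shape is hypothetical, and the OpenAI Node SDK is used simply as a stand-in for "our agent".

```typescript
import OpenAI from "openai";

// Hypothetical telemetry captured from the sandboxed runtime.
interface CapturedError {
  source: "build" | "runtime" | "browser";
  message: string;
  stack?: string;
  fileTree: string; // snapshot of the project files
}

const client = new OpenAI();

// One turn of the loop: error in, proposed fix out.
async function proposeFix(err: CapturedError): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You fix errors in a small web project. Reply with the corrected file contents only." },
      { role: "user", content: `Error (${err.source}): ${err.message}\n${err.stack ?? ""}\n\nProject files:\n${err.fileTree}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Pseudo-driver: keep looping until the project runs cleanly or we give up.
async function fixItLoop(
  run: () => Promise<CapturedError | null>,
  apply: (patch: string) => Promise<void>
): Promise<boolean> {
  for (let attempt = 0; attempt < 3; attempt++) {
    const err = await run();
    if (!err) return true; // clean run: stop
    await apply(await proposeFix(err)); // apply the model's proposed fix and retry
  }
  return false;
}
```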
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
So we gotSwyx [00:33:02]: I'm interested in these learnings, like things that you change your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about if it's not a problem, because when I think about Bolt, I do think about like a couple of companies that are already called this way.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt, one of our investors, you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated for more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like dedicated for code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like the hide for code. And then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of type of models that are dedicated for embedding and for different spaces, different fields, etc. And I'm not aware. And I know that if you go to the bedrock, try to find like there's a few code embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 option of deployment. So I'm closing the brackets for us. So one is like dimensional, like what Git deployment you have, like what models do you agree to use? Dotter could be like if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or do not? Because we want to exploit that. There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with one of all four enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, it's just a deployment part. And then now to the software, we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get to if it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look at, like where to find things. So just doing a good indexing for that is like a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, Mark with different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. 
This is a repo we want to grow, etc. And let that be part of your indexing. And only then things actually work for enterprise and they don't get to a fatigue of, oh, this is awesome. Oh, but I'm starting, it's annoying me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot and enterprise. Ooh, spicy. Yeah. I saw snapshots of people and we have customers that are Copilot users as well. And also I saw research, some of them is public by the way, between 38 to 50% retention for users using Copilot and enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason because, yeah, it helps you auto-complete, but then, and especially if you're working on your repo alone, but if it's need that context of remote repos that you're code-based, that's hard. So to make things work, there's a lot of work on that, like giving the controllability for the tech leads, for the developer platform or developer experience department in the organization to influence how things are working. A short example, because if you have like really old legacy code, probably some of it is not so good anymore. If you just fine tune on these code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software. I just want to say what you're doing is extremelyEric [00:38:32]: impressive because it's very difficult. I mean, the business of Stackplus, kind of before bulk came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, Cloud stuff? Or are they just like running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge. Everything you said and more, like,Itamar [00:39:15]: for example, like someone could be, and I don't think any of these is bad. Like, they made their decision. Like, for example, some people, they're, I want only AWS and VPC on AWS, no matter what. And then they, some of them, like there is a subset, I will say, I'm willing to take models only for from Bedrock and not ours. And we have a problem because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your run models on GPUs or inferentia, like the new version of the more coming out, then our models can run on that. But everything you said is right. Like, we see like on-prem deployment where they have their own GPUs. We see Azure where you're using OpenAI Azure. 
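Itamar's point above about letting tech leads "color" repositories — high quality, low quality, deprecated — and having that steer what the assistant retrieves can be pictured as a re-ranking pass over embedding hits. This is an illustrative sketch only; the field names and weights are invented and are not Qodo's indexing schema.

```typescript
// Illustrative re-ranking: tech-lead metadata steers which code gets retrieved.
interface RepoMetadata {
  name: string;
  quality: "high" | "normal" | "low";
  deprecated: boolean;
}

interface Chunk {
  repo: string;
  text: string;
  similarity: number; // cosine similarity from the embedding index
}

const repoMeta = new Map<string, RepoMetadata>([
  ["payments-service", { name: "payments-service", quality: "high", deprecated: false }],
  ["legacy-billing", { name: "legacy-billing", quality: "low", deprecated: true }],
]);

const qualityBoost: Record<RepoMetadata["quality"], number> = { high: 1.2, normal: 1.0, low: 0.7 };

function rank(chunks: Chunk[]): Chunk[] {
  return chunks
    .filter((c) => !(repoMeta.get(c.repo)?.deprecated ?? false)) // drop deprecated repos entirely
    .map((c) => ({
      ...c,
      similarity: c.similarity * qualityBoost[repoMeta.get(c.repo)?.quality ?? "normal"],
    }))
    .sort((a, b) => b.similarity - a.similarity);
}
```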
We see cases where you're running on GCP and they want OpenAI. Like this cross, like a case, although there is Gemini or even Sonnet, I think is available on GCP, just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that metrics that I mentioned, to start clicking each one of the blocks there. A few months is impressive. I mean,Eric [00:40:35]: honestly, just that's okay. Every one of these enterprises is, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That it is, that's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, like, for example, oh, we're only AWS, but our GitHub enterprise is on-prem. Oh, we forgot. So we need like a private link or whatever, like every time like that. It's not, and you do need to think about it if you want to work with an enterprise. And it's important. Like I understand like their, I respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes us hard for a startup because it means that we want, we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our Alpha Codium, which is an open source.Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So, so just like shortly, and then we can double click on Alpha Codium. But Alpha Codium is a open source tool. You can go and try it and lets you compete on CodeForce. This is a website and a competition and actually reach a master level level, like 95% with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it to different, like smaller blocks. And then the models are doing a much better job. Like we all know it by now that taking small tasks and solving them, by the way, even O1, which is supposed to be able to do system two thinking like Greg from OpenAI like hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented like just a month ago, OpenAI released that now they are doing 93 percentile with O1 IOI left and International Olympiad of Formation. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with Alpha Codium and did better. Like it just shows like, and there is a big difference between the preview and the IOI. It shows like that these models are not still system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think like we didn't see it even close to a system two thinking. I can elaborate later. But closing the brackets, like we take Alpha Codium and as our principle of thinking, we take tasks and break them down to smaller tasks. 
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and SONET and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have all air gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down to two different blocks is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk how we do that, then the prompt matters less. What I want to say, like all this, like as a startup trying to do different deployment, getting all the juice that you can get from models, etc. is a big problem. And one need to think about it. And one of our mitigation is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in is, because for us too with Bolt, we've started thinking because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper on a blog or like whatever.Swyx [00:45:22]: The Alphacodeum, GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of O1, we published it.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that are just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy where a lot of parts of open source, but then you have the deployment and the environment, which is not right if I get it correctly. And then there's a clear, almost hugging face model. Yeah, you can do that, but why should you try to deploy it yourself, deploy it with us? But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are. 
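The flow-engineering idea Itamar lays out above — break the problem into small, ordered steps so each prompt stays simple and any permitted model can handle it — might look roughly like the sketch below. Step names, prompts, and the per-step model choices are invented for illustration (see the AlphaCodium repository for the real flow), and the OpenAI client is just one possible backend per step.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Illustrative flow: small, ordered steps, each with its own prompt and model.
interface Step {
  name: string;
  model: string; // could differ per deployment (air-gapped, Bedrock, etc.)
  prompt: (ctx: Record<string, string>) => string;
}

const flow: Step[] = [
  { name: "analyze", model: "gpt-4o-mini", prompt: (c) => `Restate the problem in your own words:\n${c.problem}` },
  { name: "plan", model: "gpt-4o", prompt: (c) => `Problem:\n${c.problem}\nAnalysis:\n${c.analyze}\nList solution steps.` },
  { name: "implement", model: "gpt-4o", prompt: (c) => `Write code following this plan:\n${c.plan}` },
  { name: "test", model: "gpt-4o-mini", prompt: (c) => `Write tests for:\n${c.implement}` },
];

async function runFlow(problem: string): Promise<Record<string, string>> {
  const ctx: Record<string, string> = { problem };
  for (const step of flow) {
    const res = await client.chat.completions.create({
      model: step.model,
      messages: [{ role: "user", content: step.prompt(ctx) }],
    });
    ctx[step.name] = res.choices[0].message.content ?? "";
  }
  return ctx;
}
```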
I wanted to ask you, for example, on some of them. In our case, one day we looked on one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread over the ID to get. And in each agent, we have a few startups or a big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI of our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you, Bald is doing so well, and then you open source it. So I think I know what my answer was. I gave it before, but still interestingEric [00:47:29]: to hear what you think. GeoHot said back, I don't know who he was up to at this exact moment, but I think on comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there's kind of strategic things you have to kind of take in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, it can really make a lot of sense. Because that is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy because it keeps, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality soon, right? And actually feel that incrementally so you can kind of adjust course. And that's for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at and the code is open source to this stuff. What's great about these, what's not. So anyways, NetNet, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer going. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. 
You have to make cores bypass requests, like connected databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think that there's going to be a lot more of these AI, in-your-browser AI co-gen experiences, kind of like what Anthropic did with Artifacts and Clod. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced to folks kind of adopting it. But yeah, I think effectively...Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...Swyx [00:51:16]: What's your advice as a founder?Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind blown if we added a couple hundred K of error or something. And we were like, but we think there's probably going to be an immediate huge business. Because there was some early poll on folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen poll on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as like OpenAI or Anthropic, where we have our ChatGPT to OpenAI's APIs is Bolt to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now, the revenue side is extremely lopsided to Bolt.Itamar [00:52:16]: I think if you ask me what's my advice, I think you have three options. One is to focus on Bolt. The other is to focus on the WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money, and I think it's big bucks, because you're going to be chased by competitors. And I think it will be challenging to do both. And maybe you can. I don't know. We do see these numbers right now, raising above $100 million, even without havingEric [00:52:49]: a product. You can see these. It's excellent advice. And I think what's been amazing, but also kind of challenging is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think us and all the investors in the company that we're sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now. 
Most things, you have this kind of TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se

Syntax - Tasty Web Development Treats
851: The Future of VS Code and Copilot

Syntax - Tasty Web Development Treats

Play Episode Listen Later Nov 22, 2024 42:12


Wes and Scott talk with Cassidy Williams and Harald Kirschner about exciting new features in VS Code and GitHub Copilot, including custom instructions, UI/UX improvements, and the future of AI and Copilot within different editors. Show Notes 00:00 Welcome to Syntax! 00:32 Cassidy's keynote at GitHub Universe 03:23 New Copilot features 04:55 Use cases for prompt engineering 09:20 UI and UX enhancements 19:18 Copilot Extensions 20:38 Brought to you by Sentry.io 21:26 Multi-line suggestions? 27:00 How do you develop new ideas in this space? GitHub Next 35:42 Copilot in Xcode GitHub Copilot code completion in Xcode is now available in public preview 39:16 VS Code experimental features @code Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads

IT Privacy and Security Weekly update.
Episode 217.5 Deep Dive The IT Privacy and Security Weekly Update puts it on a map for the Week Ending November 19th., 2024

IT Privacy and Security Weekly update.

Play Episode Listen Later Nov 21, 2024 13:35


Privacy & Security FAQ: Week Ending November 19th, 2024

1. What happened with T-Mobile and Chinese hackers? Chinese hackers, suspected of ties to Chinese intelligence, infiltrated T-Mobile as part of a larger cyberespionage operation. This attack targeted telecom companies to gather intelligence on high-value targets. While T-Mobile claims no significant impact on their systems or customer data, the breach raises concerns about the security of telecommunications networks and the potential for surveillance.

2. Google is rolling out an AI-powered scam call detection feature for Android phones, starting with Pixel 6 and newer models. This feature analyzes real-time conversation patterns to detect potential scams and alerts users through audio, haptic, and visual warnings. The system operates entirely on the device, ensuring privacy by not storing or transmitting call data externally.

3. India's competition watchdog fined Meta $25.4 million and ordered WhatsApp to stop sharing user data with other Meta units for advertising for five years. This action stems from WhatsApp's 2021 privacy policy update, which mandated data sharing with Meta companies without an opt-out option. The watchdog deemed this practice an abuse of Meta's dominant position and coercive towards users.

4. Legal documents from a US lawsuit between NSO Group and WhatsApp revealed that NSO Group, not its government clients, directly installs and extracts information from phones targeted by its Pegasus spyware. This contradicts NSO's claims that clients solely operate the spyware. The revelation raises concerns about the control and accountability of NSO Group's powerful surveillance technology.

5. ChatGPT's desktop app for macOS can now read code from developer-focused apps like VS Code, Xcode, and TextEdit. This integration allows developers to directly send code snippets to ChatGPT for analysis and assistance without manual copy-pasting. While it currently lacks the ability to write code directly into apps, this feature marks a step towards streamlined AI assistance in coding workflows.

6. DeFlock is an open-source project utilizing OpenStreetMap to map the locations of automated license plate readers (ALPRs) worldwide. Concerned about the proliferation of these surveillance devices, the project encourages crowdsourced reporting of ALPR locations, including details like camera direction. You can contribute to this initiative by reporting ALPRs in your area on the DeFlock website: https://deflock.me/report.

7. Internal emails revealed that the US Secret Service debated the need for warrants when using location data from smartphone apps. Some officials argued that users' acceptance of app terms of service implied consent for data sharing, even if those terms didn't explicitly mention sharing with law enforcement. This raised concerns about government agencies accessing private location data without proper legal authorization.

How can you enhance your privacy and security? For secure communication: consider using encrypted messaging apps like Signal or Session. Protect against phone fraud: be wary of suspicious calls and consider enabling Google's AI-powered scam call detection. Control data sharing: scrutinize app permissions and privacy policies before granting access to personal information. Support privacy initiatives: contribute to projects like DeFlock and advocate for stronger data protection laws. Stay informed: follow reputable sources for news on privacy and security issues to make informed decisions about your digital life.

DOU Podcast
New rules for employee deferment | Layoffs at MacPaw | A Ukrainian analogue of Upwork — DOU News #173

DOU Podcast

Play Episode Listen Later Nov 18, 2024 33:56


Growth Factory Academy is a private club for owners and CEOs of outsourcing companies. Leave a request to learn more — https://bit.ly/gfa-dou

A Fonte
126: A Treat for the Eyes

A Fonte

Play Episode Listen Later Nov 18, 2024 102:32


ChatGPT can now see Xcode, Filipe now sees everything much bigger, and Tim Cook still has a lot of work ahead of him.

Point-Free Videos
SQLite: SwiftUI

Point-Free Videos

Play Episode Listen Later Nov 18, 2024 36:13


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- Let's see how to integrate a SQLite database into a SwiftUI view. We will explore the tools GRDB provides to query the database so that we can display its data in our UI, as well as build and enforce table relations to protect the integrity of our app's state. And we will show how everything can be exercised in Xcode previews.

CacaoCast
Episode 285 - New Macs, Swift-format, Swift Testing, DeckUI, Met

CacaoCast

Play Episode Listen Later Nov 16, 2024 69:09


Welcome to the two hundred and eighty-fifth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics: iPad mini - No 5K Mac mini - Small but mighty! MacBook Pro - With Apple Intelligence Swift-format - A Swift formatter included in Xcode 16 Swift Testing - To convert your tests to the new testing model DeckUI - Make presentations the complicated way Digitized works of art - The Met gives you an eyeful Listen to this episode

FileMaker DevCast: Everything Claris FileMaker
Ep19: Claris FileMaker 21.1 with Guest Lucy Chen, VP of FileMaker Engineering

FileMaker DevCast: Everything Claris FileMaker

Play Episode Listen Later Nov 14, 2024 52:47


Join us for a very special in-depth interview with Lucy Chen, VP of Claris Engineering. Kate Waldhauser hosts this special edition of the FileMaker DevCast, as Lucy gives us a glimpse behind the scenes of the engineering world at Claris, an Apple Company. Lucy shares insights as to how new features get into the product and how the latest version improves performance, reliability and security. She takes us under the hood, diving into the new enhancements of FileMaker 21.1, including the move to newer technologies like Java 17, Xcode 16, and OpenSSL 3.3. She also covers new features in FileMaker 21.1, such as HTTPS tunneling, improvements to the Admin Console, and the integration of AI-powered semantic search, for both natural language and image content searches. These changes have come about through Claris's ongoing commitment to understanding customer feedback. Join Kate & Lucy as they explore how the FileMaker platform continues to empower businesses with innovative capabilities.   Portage Bay Solutions is a custom software development firm based in Seattle, WA, with additional offices in the Austin, Chicago, Dallas, Omaha, Orange County, and Vancouver areas. For more than thirty years, we have been helping businesses of all sizes get the most out of their FileMaker investments. As a full-service Claris FileMaker Platinum Partner, Portage Bay is committed to helping you optimize your software investments and improve your business processes. #claris #filemaker #devtools #devs #devcast #openSSL #https #tunneling #AI #semanticsearch #innovation #portagebaysolutions #portagebay  #FileMaker21.1

Swift over Coffee
S4E1: You have to sit down and do it

Swift over Coffee

Play Episode Listen Later Nov 10, 2024 48:49


We're back! Many things have happened over that swift summer break, so pour a coffee and let's jump right back in. In this episode, we're talking Apple Intelligence and other code-complete co-pilots, the redesign of swift.org, shiny M4 Macs, good UI (and UX) design, and the Swift Foundation's move to empower the community to fix tiny annoyances — but the big topic, of course, is how your adoption of Swift 6 has been going. If at all…

Compile Swift
AI Tools for app makers

Compile Swift

Play Episode Listen Later Nov 10, 2024 57:11 Transcription Available


This week's episode discusses AI tools and their applications for developers. The hosts share their experiences using AI chatbots, highlighting their usefulness for code generation, problem-solving guidance, and code explanation. They also discuss the benefits of AI tools for finding code snippets and remembering API names. AI tools are useful for developers, especially when dealing with outdated or obscure technologies, as they can provide accurate and relevant information. While Apple's predictive code completion in Xcode has its limitations, it is a step towards integrating AI into developer tools. However, the lack of progress on Apple's promised chat-based tool, Swift Assist, raises concerns about Apple's ability to compete in the rapidly evolving AI landscape. Mentioned in this episode: Developer Duck, Cursor, GitHub Copilot for Xcode, Clean My Mac. Follow Peter: https://peterwitham.com Follow Geoff: https://cocoatype.com Become a Patreon member and help this podcast survive: https://www.patreon.com/compileswift Thanks to our monthly supporters: Emerson Warwick, Marko Wiese, Adam Wulf, bitSpectre, Arclite ★ Support this podcast on Patreon ★

Side Project Spotlight
#78: Design Systems For Indies

Side Project Spotlight

Play Episode Listen Later Nov 4, 2024 37:19


In this penultimate episode of 2024, the Trio return to the topic of Design Systems from episode 72 with a discussion about how indie developers can apply the concept to their apps along with specific tips and techniques for implementation using the built-in tools available in SwiftUI and Xcode. Plus, our first thoughts on 2/3 of the M4 announcements this week. ## Topics Discussed - Apple Announcements - iMac M4 - Mac mini M4/M4 Pro - Design Systems for Indies - Previous episode: https://podcast.phillycocoa.org/episodes/72-what-is-a-design-system - Why? - Design for Hackers - https://designforhackers.com - View elements (spacing, shadows, corner radius, etc.) - View Styles - https://developer.apple.com/documentation/swiftui/view-styles - https://movingparts.io/styling-components-in-swiftui - https://movingparts.io/composable-styles-in-swiftui - SF Symbols / Custom SF Symbols - Asset Catalogs - Colors - App Icons - Sounds - Videos - Keep It Simple - Wrap-Up & One More Thing… - http://phillycocoa.org - https://happyscale.com Intro music: "When I Hit the Floor", © 2021 Lorne Behrman. Used with permission of the artist.

CacaoCast
Episode 284 - iPad mini, Croissant, ReactiveCollectionKit, TinyStorage, Toucan, Fireside Cocoa

CacaoCast

Play Episode Listen Later Oct 23, 2024 47:20


Welcome to the two hundred and eighty-fourth episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics: iPad mini - With Apple Intelligence Croissant - To publish to Mastodon, Threads and Bluesky ReactiveCollectionKit - A modern framework to replace IGListKit TinyStorage - To avoid subtle problems with UserDefaults Toucan - Another website generator in Swift Fireside Cocoa - Ski the Rockies in Colorado Listen to this episode

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023:and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation.However the larger venue and more spread out schedule did allow a lot more hallway conversations with attendees as well as more community presentations including our recent guest Alistair Pullen of Cosine as well as deeper dives from OpenAI including our recent guest Michelle Pokrass of the API Team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full lightly edited Q&A with Sam Altman.Show notes and related resourcesSome of these used in the final audio episode below* Simon Willison Live Blog* swyx live tweets and videos* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session* Fireside Chat Q&A with Sam AltmanTimestamps* [00:00:00] Intro by Suno.ai* [00:01:23] NotebookLM Recap of DevDay* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling* [00:19:16] Olivier Godement, Head of Product, OpenAI* [00:36:57] Romain Huet, Head of DX, OpenAI* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)* [01:18:31] Sam Altman + Kevin Weill Q&A* [02:03:07] Notebook LM Recap of PodcastTranscript[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT, 4. 0 in flight. Fine tune the future, data in sight. Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral notebook LM Deep Dive crew, my new AI podcast nemesis, Give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about open ais. Dev day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM 2: Seems[00:01:34] NotebookLM: like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. 
And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI. Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. 
And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic. Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different porch.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. 
Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that? We[00:07:41] NotebookLM 2: did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you. It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, And Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just Making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it. At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Biggio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. 
We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered[00:10:14] swyx: strawberries.[00:10:15] Ilan: Wait,[00:10:16] swyx: how much[00:10:16] Ilan: would that be? I think that'll be around, like, $1,415.92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Latent Space. Oh, thank you. I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was building the demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. 
What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then after every function call was also invoked invoking like a generation. And so that is another difference here. It's like it's inherently almost like in a loop, be just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step one, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played without two too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome. I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. 
Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on ChatGPT. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well. Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: From caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, Anthropic caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other stuff. Launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.[00:18:19] swyx: 5 And got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. 
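For anyone who wants to try the "cut out the audio outputs and force a function call" pattern Ilan describes, here is a minimal sketch in Python. It is not Ilan's code: the endpoint URL, model name, and event names follow OpenAI's published Realtime API reference as of Dev Day and may have changed since, and `set_ui_state` is a hypothetical app-side function used purely for illustration.

```python
# Minimal sketch, assuming the Realtime API's documented WebSocket events.
# The model listens to audio but may only answer by calling a function,
# which is what makes it usable as a voice "command architecture" for UI control.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main():
    # websockets >= 14 uses additional_headers; older releases call it extra_headers.
    async with websockets.connect(URL, additional_headers=HEADERS) as ws:
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "modalities": ["text"],       # no spoken responses
                "tool_choice": "required",    # every response must be a tool call
                "tools": [{
                    "type": "function",
                    "name": "set_ui_state",   # hypothetical app-side function
                    "description": "Switch the visible screen of the app.",
                    "parameters": {
                        "type": "object",
                        "properties": {"screen": {"type": "string"}},
                        "required": ["screen"],
                    },
                }],
            },
        }))
        # ...stream microphone audio here via input_audio_buffer.append events...
        await ws.send(json.dumps({"type": "response.create"}))
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.function_call_arguments.done":
                print("UI command:", event["arguments"])  # hand the arguments to the app
                break

asyncio.run(main())
```

Because audio output is disabled and a tool call is required, the model's only way to answer is structured arguments the app can act on, which is the reliability Ilan is pointing at.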
Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed. I'm sure that was part of you and Roman, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire PI team for like Debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The latent space crew then talked to Olivier Godmont, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godmont. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London. To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the whole in 2022, when like, you know, like, people were trying to build a chat GPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. 
It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case that can be enabled. And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had from caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. The old Fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigation that worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency? We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Skype, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? 
Because when you do evals on fine tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that bit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily. For vision let's say I'm like trying to, one of your examples was Grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know Testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right? Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like be off the shelf, like GPT-4o essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have a.[00:26:15] Olivier Godement: An automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. 
But I think, like, that evaluation and distillation product are essentially a first good step in that direction. It's like, hey, it's you. I set it by that direction, and you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples Like, your bill will get increased, like, you know, pretty pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one. Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of a two birds one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLM, teaching with LLM, is like, Yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually be able to eval, understand, like, all the corner cases, And you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust which I think is one of your partners. Mm-Hmm. . They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: YC is kind of like a low code experience, right, for evals. Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like eval. 
But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, Caching or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whatever Whatever I took to developers. I feel like Good models are just half of the story to build a good app There's a third model you need to do Evaluation is the perfect example. Like, you know, you can have the best model in the world If you're in the dark, like, you know, it's really hard to gain the confidence and so Our philosophy is[00:30:11] Olivier Godement: The whole like software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that open AI can build everything. Like there is just too much to build, frankly. And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, our evaluation, because we think that it actually makes sense to have like in one spot, Like, you know, all of that. Like, there is some sort of virtual circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked well Well, I mean, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense. 
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly your first and make it feel, like, you know, an actual good, like, you know, product experience. So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one people, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The mobile spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script I want to say, like, will go on, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal against our terms of services. 
But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the Dijkstra. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now is you, it is whatever you decide. And then it's, that's it. That, that, that would be the main reason I don't use opening a voice is because of[00:34:42] Olivier Godement: it's over police. Over refuse over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the open end direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, Investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, Where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10, 000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so let's take O1 smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback. Essentially what they would love to do is for developers I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that Usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. 
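Olivier's "zero code change" framing for prompt caching is easy to check in practice. The sketch below, written against the OpenAI Python SDK, calls Chat Completions exactly as before and simply inspects the usage block; the field names follow OpenAI's documentation at the time of Dev Day (caching only kicks in once repeated prompts share a prefix of roughly 1,024 tokens or more), and the prompt text here is purely illustrative.

```python
# Minimal sketch: prompt caching needs no request changes; you only read the
# usage details back to see how much of the prompt was served from cache.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The long, stable prefix (system prompt, few-shot examples, tool definitions)
# is what the cache can reuse across calls.
shared_prefix = "You are the Dev Day concierge. " + "Long, unchanging instructions... " * 100

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": shared_prefix},
        {"role": "user", "content": "Summarize the keynote in one sentence."},
    ],
)

details = resp.usage.prompt_tokens_details
print("prompt tokens:", resp.usage.prompt_tokens)
print("served from cache:", getattr(details, "cached_tokens", 0))
```

On the second and subsequent calls that reuse the same prefix, the cached token count should climb, which is where the billing discount discussed above comes from.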
We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate appreciate you guys putting this on. I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a GPT app to get the plan with one, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for You know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah. I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like chat GPT side by side. 
But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up SQL, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right? And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is mini or preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If you need for your kind of brainstorming or your kind of science part, you need some broader knowledge than reaching for O1 previews better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. 
And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice? And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service? Like center, that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. But we think that for instance, if you want to have something very robust, On your client side, maybe you want to have web RTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like Life Kit and Agora if you want to, if you want to use them. And I'm sure we'll have many mores in the, in many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the Chachapiti app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and referenced clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LifeKit and Agora, so they also have their own, like ways to help you get started that plugs natively with the real time API. So depending on the use case, people can, can can decide what to use. If you're working on something that's completely client or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. 
Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API end point or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think Dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is. Thank you very much for joining us on the Spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback From high feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us. To shape the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps that have you know, you, you want high, like like low latency, low cost, but high performance, high quality on the use case distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from four oh to four mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case inspired for an email client. Yeah, this was really good. Cool, man! Thanks so much for having me. Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced. 
She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day. 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day returning guest Michelle and then returning guest co host Fork.[00:48:03] Alessio: Fork, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using like GPT 4, I wrote me the Javascript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Cloud and chat GPT. Yeah. It's interesting.[00:48:39] swyx: Yeah. I've converted to cursor and 01 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code co pilot, so Yep, same here. Team co pilot. Co pilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the thing to dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and bet me the best fork win.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hack a thon for the API team. And some folks built like a little hack that you could use to, like VIM with voice mode, so like, control vim, and you would tell them on like, nice, write a file and it would, you know, know all the vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to. 
That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably
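The transcript excerpt cuts off there, but since Michelle's session covered the structured output mode mentioned above, a short sketch of what that looks like from the OpenAI Python SDK may help. It uses the SDK's beta parse helper; the schema, model snapshot name, and example text are illustrative assumptions rather than code from the talk.

```python
# Minimal sketch of structured outputs: the response is constrained to a JSON
# Schema derived from a Pydantic model, so it parses straight into typed data.
from openai import OpenAI
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # a snapshot that supports structured outputs
    messages=[
        {"role": "system", "content": "Extract the event details."},
        {"role": "user", "content": "Alice and Bob are going to Dev Day on October 1st."},
    ],
    response_format=CalendarEvent,  # strict schema enforced by the API
)

event = completion.choices[0].message.parsed
print(event.name, event.date, event.participants)
```

The practical point from the session is that the output either matches the schema or the request fails, so there is no post-hoc JSON repair step in application code.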

CacaoCast
Épisode 283 - NotebookML, Hummingbird 2.0, SimpleCalendar, OCXML, Mocking URLSession, Goshdarnappleversionnumbers

CacaoCast

Play Episode Listen Later Oct 2, 2024 56:05


Welcome to the two hundred and eighty-third episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics: NotebookML - A podcast generated from text? Hummingbird 2.0 - Now with a real website SimpleCalendar - For presenting your calendars OCXML - Generate XML in a few lines of Swift Mocking URLSession - For your unit tests Goshdarnappleversionnumbers - All these version numbers are getting complicated Listen to this episode

Farklı Düşün
Offline Translate, Xcode Code Completion, The Intelligence Age, MKBHD

Farklı Düşün

Play Episode Listen Later Sep 29, 2024 152:46


In this episode we chatted about Mert's new macOS app Offline Translate, Xcode's new Code Completion feature, Sam Altman's essay The Intelligence Age, and MKBHD's controversial app. If you enjoy listening to us, you can support us by buying us a coffee and join our Telegram group. :) You can send your comments, questions, or sponsorship offers to info@farklidusun.net. You can follow us on Twitter. Timestamps: 00:00 - Intro 01:15 - Offline Translate 15:30 - Problems introduced with iOS 18 21:25 - Xcode Code Completion 30:44 - The Intelligence Age 1:21:50 - Attention deficit hyperactivity disorder 1:28:19 - MKBHD's app 1:46:16 - Space Marine 2 and game reviews 1:54:44 - The Plucky Squire 2:01:40 - PlayStation's 30th anniversary edition 2:14:20 - What we've been watching. Episode links: Offline Translate, Meet the Translation API, VisionKit, DevCleaner for Xcode, The Intelligence Age, OpenAI asked US to approve energy-guzzling 5GW data centers, report says, Microsoft wants Three Mile Island to fuel its AI power needs, Canva hikes prices by 300pc as it readies for IPO, Enterprise Philosophy and The First Wave of AI, Comedian John Mulaney brutally roasts SF techies at Dreamforce, Hollywood is coming out in force for California's AI safety bill, California's new law requires schools to limit phone use, Amusing Ourselves to Death: Public Discourse in the Age of Show Business, Technopoly: The Surrender of Culture to Technology, Butlerian Jihad, Marques Brownlee says ‘I hear you' after fans criticize his new wallpaper app, Gordon Ramsay's perfect scrambled eggs tutorial, PlayStation's 30th anniversary PS5 and PS5 Pro are delightfully retro, Space Marine 2, videogamedunkey, Astro Bot, The Plucky Squire, Experience, big-project management skills, culture, and entrepreneurship in career development, Steel Swarm, La Maison, The Penguin, Omnivore, Tom yum soup, The very spicy Laotian dish Mert mentioned, Hot Ones

Thinking Elixir Podcast
221: From Keynotes to Job Listings

Thinking Elixir Podcast

Play Episode Listen Later Sep 24, 2024 27:53


News includes ElixirConf keynotes appearing on YouTube, updates on ErrorTracker's latest release, José Valim's deep dive on ChatGPT UX issues with Phoenix LiveView, Dockyard's announcement of LVN Go to streamline LiveView Native workshops, and Livebook's newest notebook navigation features. Plus, Nvidia's job opening that explicitly mentions Elixir, Alchemy Conf 2025 details, NASA's development of a Lunar timezone, and more! Show Notes online - http://podcast.thinkingelixir.com/221 (http://podcast.thinkingelixir.com/221) Elixir Community News - https://www.youtube.com/playlist?list=PLqj39LCvnOWbW2Zli4LurDGc6lL5ij-9Y (https://www.youtube.com/playlist?list=PLqj39LCvnOWbW2Zli4LurDGc6lL5ij-9Y?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf keynotes are appearing on YouTube, currently featuring Justin Schneck's and Chris McCord and Chris Grainger's keynotes. - https://github.com/josevalim/sync (https://github.com/josevalim/sync?utm_source=thinkingelixir&utm_medium=shownotes) – Phoenix Sync archival status clarified - José doesn't have plans to take it forward personally, inviting others to explore and develop the idea further. - https://elixirstatus.com/p/1u4Hf-errortracker-v030-has-been-released (https://elixirstatus.com/p/1u4Hf-errortracker-v030-has-been-released?utm_source=thinkingelixir&utm_medium=shownotes) – ErrorTracker v0.3.0 has been released with new features including support for MySQL and MariaDB, improved error grouping in Oban, and enhanced documentation and typespecs. - https://www.elixirstreams.com/tips/test-breakpoints (https://www.elixirstreams.com/tips/test-breakpoints?utm_source=thinkingelixir&utm_medium=shownotes) – German Velasco shared a new Elixir Stream video on step-through debugging an ExUnit test in Elixir v1.17. - https://www.youtube.com/watch?v=fCdi7SEPrTs (https://www.youtube.com/watch?v=fCdi7SEPrTs?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim shared his video on solving ChatGPT UX issues with Phoenix LiveView, originally posted to Twitter and now available on YouTube. - https://x.com/josevalim/status/1833536127267144101 (https://x.com/josevalim/status/1833536127267144101?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's video on tackling ChatGPT's UX woes with Phoenix LiveView on Twitter. - https://github.com/tailwindlabs/tailwindcss/pull/8394 (https://github.com/tailwindlabs/tailwindcss/pull/8394?utm_source=thinkingelixir&utm_medium=shownotes) – Merged PR in Tailwind project describing hover issue fix. - https://github.com/phoenixframework/phoenixliveview/issues/3421 (https://github.com/phoenixframework/phoenix_live_view/issues/3421?utm_source=thinkingelixir&utm_medium=shownotes) – Issue regarding phx-click-loading affecting modals. - https://dashbit.co/blog/remix-concurrent-submissions-flawed (https://dashbit.co/blog/remix-concurrent-submissions-flawed?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim detailed how Remix's concurrency feature is flawed in a new blog post. - https://dockyard.com/blog/2024/09/10/introducing-lvn-go (https://dockyard.com/blog/2024/09/10/introducing-lvn-go?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post introducing LVN Go, an app to ease starting with LiveView Native without needing XCode. - https://podcast.thinkingelixir.com/200 (https://podcast.thinkingelixir.com/200?utm_source=thinkingelixir&utm_medium=shownotes) – Episode 200 of Thinking Elixir podcast featuring Brian Carderella discussing LiveView Native. 
- https://x.com/livebookdev/status/1834222475820839077 (https://x.com/livebookdev/status/1834222475820839077?utm_source=thinkingelixir&utm_medium=shownotes) – Livebook v0.14 released with new notebook navigation features. - https://news.livebook.dev/code-navigation-with-go-to-definition-of-modules-and-functions-kuYrS (https://news.livebook.dev/code-navigation-with-go-to-definition-of-modules-and-functions-kuYrS?utm_source=thinkingelixir&utm_medium=shownotes) – Detailed blog post about Livebook v0.14's new features. - https://artifacthub.io/packages/helm/livebook/livebook (https://artifacthub.io/packages/helm/livebook/livebook?utm_source=thinkingelixir&utm_medium=shownotes) – kinoflame 0.1.3 released with Kubernetes support. - https://x.com/miruoss/status/1834690518472966524 (https://x.com/miruoss/status/1834690518472966524?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of kinoflame 0.1.3's Kubernetes support. - https://x.com/hugobarauna/status/1834040830249562299 (https://x.com/hugobarauna/status/1834040830249562299?utm_source=thinkingelixir&utm_medium=shownotes) – Job opening at Nvidia specifically mentioning Elixir. - https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-Software-Engineer---HPC_JR1979406-1?q=Hpc (https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-Software-Engineer---HPC_JR1979406-1?q=Hpc?utm_source=thinkingelixir&utm_medium=shownotes) – Specific job listing at Nvidia mentioning Elixir. - https://x.com/Alchemy_Conf/status/1835597103076094150 (https://x.com/Alchemy_Conf/status/1835597103076094150?utm_source=thinkingelixir&utm_medium=shownotes) – Alchemy Conf 2025 announced, with call for talk proposals open. - https://dev.events/conferences/alchemy-conf-2025-hjp5oo7o (https://dev.events/conferences/alchemy-conf-2025-hjp5oo7o?utm_source=thinkingelixir&utm_medium=shownotes) – Alchemy Conf 2025 event details. - https://ti.to/subvisual/alchemy-conf-2025 (https://ti.to/subvisual/alchemy-conf-2025?utm_source=thinkingelixir&utm_medium=shownotes) – Early bird tickets for Alchemy Conf 2025 are €200. - https://www.papercall.io/alchemy-conf-2025 (https://www.papercall.io/alchemy-conf-2025?utm_source=thinkingelixir&utm_medium=shownotes) – Call for talk proposals for Alchemy Conf 2025 open until Sept 30th. - https://www.engadget.com/science/space/nasa-confirms-its-developing-the-moons-new-time-zone-165345568.html (https://www.engadget.com/science/space/nasa-confirms-its-developing-the-moons-new-time-zone-165345568.html?utm_source=thinkingelixir&utm_medium=shownotes) – NASA confirms developing a Lunar timezone. - https://www.prnewswire.com/news-releases/k1-acquires-mariadb-a-leading-database-software-company-and-appoints-new-ceo-302243508.html (https://www.prnewswire.com/news-releases/k1-acquires-mariadb-a-leading-database-software-company-and-appoints-new-ceo-302243508.html?utm_source=thinkingelixir&utm_medium=shownotes) – MariaDB acquired by K1, strategic investment to expand enterprise solutions. - https://www.aboutamazon.com/news/company-news/ceo-andy-jassy-latest-update-on-amazon-return-to-office-manager-team-ratio (https://www.aboutamazon.com/news/company-news/ceo-andy-jassy-latest-update-on-amazon-return-to-office-manager-team-ratio?utm_source=thinkingelixir&utm_medium=shownotes) – Amazon requiring employees to return to office for work. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

CacaoCast
Episode 282 - cacaocast.com, Apple's September 9, 2024 announcements

CacaoCast

Play Episode Listen Later Sep 11, 2024 87:24


Welcome to the two hundred and eighty-second episode of CacaoCast! In this episode, Philippe Casgrain and Philippe Guitard discuss the following topics: cacaocast.com, written in Swift with Ignite; Apple's announcements of September 9, 2024. Listen to this episode

Super Feed
Olá, Mundo - 057: Verifying Module Algumacoisa

Super Feed

Play Episode Listen Later Sep 3, 2024 78:07


Rambo complains to Bunn about yet another Xcode bug. There are also reproducible setups, virtualization alternatives, and an unexpected dive into Core Animation.

Compile Swift
Alternative App Stores, Trader Status, Are your apps ready for the new OS versions?

Compile Swift

Play Episode Listen Later Aug 25, 2024 13:38 Transcription Available


This week, Peter Witham discusses the emergence of alternative app stores and what they mean for developers. He asks for listeners' experiences and thoughts on managing multiple app store requirements, including code signing and security. He also touches on Apple's reminder about trader status for the European market. He wraps up by emphasizing the importance of testing apps against the latest beta versions of Apple's operating systems and Xcode. (00:00) - Introduction (00:11) - Thank you new Patreon members (00:46) - Alternative app stores go live (05:17) - Get some Coffee (07:29) - Have you done the trader status? (09:54) - Are you ready for the new OS and Xcode versions? (12:31) - Support the podcast (12:57) - Rate and review Become a Patreon member and help this Podcast survive https://www.patreon.com/compileswift Please leave a review and show your support https://lovethepodcast.com/compileswift You can also show your support by buying me a coffee https://peterwitham.com/bmc Follow me on Mastodon https://iosdev.space/@Compileswift Thanks to our monthly supporters Adam Wulf bitSpectre Arclite ★ Support this podcast on Patreon ★

Rocket Ship
#047 - Challenges of Building an On-call App with Rory Bain

Rocket Ship

Play Episode Listen Later Aug 13, 2024 55:34


In this conversation, Simon interviews Rory Bain, a product engineer at Incident.io, about his experience building a multi-platform on-call mobile app using React Native. Rory shares his background in native mobile app development and his transition to React Native. They discuss the reasons for choosing React Native over frameworks like Flutter or Kotlin Multiplatform. Rory also explains the process of developing the on-call app, including the use of Expo and the challenges of implementing push notifications and critical alerts on Android. They also dive into the differences between iOS and Android development, the use of libraries like Tailwind and SWR, the challenges of CI/CD integration, and debugging issues with Expo's EAS. Learn React Native - https://galaxies.dev Rory Bain Rory X: https://x.com/rorybain Rory GitHub: https://github.com/rorydbain Links Building a multi-platform on-call mobile app: https://incident.io/hubs/building-on-call/building-a-multi-platform-on-call-mobile-app Behind the Flame: Rory: https://incident.io/blog/behind-the-flame-rory incident.io On-call: https://incident.io/on-call Vercel SWR: https://github.com/vercel/swr Takeaways The on-call mobile app at Incident.io was developed using React Native and Expo, which allowed for quick prototyping and hot reloading. Choosing React Native over other frameworks like Flutter or Kotlin Multiplatform was influenced by the familiarity with JavaScript and web-based tooling, as well as the desire for a native feel on each platform. Implementing push notifications and critical alerts on Android required writing custom native modules and using data-only notifications to wake up the app and display the notifications. The use of Expo and managed projects simplified the development process and eliminated the need for developers to install Android Studio or Xcode. Building a multi-platform on-call mobile app requires considering the differences between iOS and Android development. Libraries like Tailwind and SWR can enhance the development experience and provide consistent styling and API handling across platforms. Integrating CI/CD for mobile apps can be challenging, especially when dealing with versioning and remote updates. Debugging issues with Expo's EAS may require trial and error and using local build processes to identify and resolve problems.

Giant Robots Smashing Into Other Giant Robots
537: Navigating the Startup Ecosystem with Marc Gauthier

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Aug 8, 2024 45:49


In the latest episode of the "Giant Robots On Tour" podcast, hosts Rémy Hannequin and Sami Birnbaum welcome Marc G. Gauthier, a solopreneur and startup coach, who shares his journey from software development to becoming the founder and developer of The Shadow Boxing App. Marc describes how his interest in software engineering began at a young age with QBasic and evolved through various leadership roles at companies like Drivy (now Getaround) and Back Market. His early passion for gaming led him to learn coding, and over time, he naturally transitioned into management roles, finding excitement in organizing and leading teams while maintaining his love for building products. During the episode, Marc discusses the challenges and intricacies of scaling startups, emphasizing the importance of balancing speed and reliability in software development. He recounts his experiences in leadership positions, where he faced the dual task of managing rapid team growth and maintaining software efficiency. Marc also shares insights into the startup ecosystem, noting that most startups struggle to achieve success due to a combination of market timing, team dynamics, and resource management. His own venture, The Shadow Boxing App, represents his attempt to return to hands-on coding while leveraging his extensive experience in startup coaching and advising. Marc also touches on the role of AI in the future of software development, expressing cautious optimism about its potential to augment human workflows and automate repetitive tasks. He advises current and aspiring developers to embrace AI as a tool to enhance their capabilities rather than a replacement for human ingenuity. Marc concludes by highlighting the importance of realistic expectations in the startup world and the need for continuous learning and adaptation in the ever-evolving tech landscape. Getaround (https://getaround.com/) Follow Getaround on LinkedIn (https://www.linkedin.com/company/getaround/), Facebook (https://www.facebook.com/getaround), X (https://twitter.com/getaround), YouTube (https://www.youtube.com/getaround), or Instagram (https://www.instagram.com/getaround/). Back Market (https://www.backmarket.com/en-us) Follow Back Market on LinkedIn (https://www.linkedin.com/company/back-market/), Facebook (https://www.facebook.com/BackMarketCom), X (https://x.com/backmarket), or Instagram (https://www.instagram.com/backmarket). The Shadow Boxing App (https://shadowboxingapp.com/) Follow Marc Gauthier on LinkedIn (https://www.linkedin.com/in/marcggauthier/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Transcript: RÉMY:  This is the Giant Robots Smashing Into Other Giant Robots podcast, the Giant Robots on Tour series coming to you from Europe, West Asia, and Africa, where we explore the design, development, and business of great products. I'm your host, Rémy Hannequin. SAMI: And I'm your other host, Sami Birnbaum. RÉMY: If you are wondering who we are, make sure you find the previous podcast where we introduced the Giant Robots on Tour series by throwing random icebreakers at each other. And find out that Jared likes it when someone takes the time to understand someone else's point of view. Joining us today is Marc G Gauthier, a Solopreneur and Startup Coach. Marc, you used to be VP of Engineering at Drivy, now known as Getaround, and also Director of Engineering at Back Market. You also have been a coach and advisor to a startup for over a decade. 
Currently, your current adventure is being the Founder and Developer of The Shadow Boxing App available on the Apple App Store. We always like to go back to the start with our guests. Everyone has a story, and we are interested in your journey. So, Marc, what led you into the world of software engineering in the first place? MARC: Hello. Well, happy to be here. And, yeah, I started getting into software development quite a long time ago. I actually learned software development with QBasic when I was something like seven. And, from there, I just kept on learning, learning, and learning and got into school for it, then worked in different startups, and then moved into more leadership position management. And I'm now, like, coaching people and building my own product. What do you want to get? Because it's broad. I've been doing it for quite a while. Like, I don't think the QBasic days are that insightful. The only thing I remember from that time is being confused by the print comment that I would expect it to print on my printer or something, but it didn't; it just printed on the screen. That's the only thing I have from back then. SAMI: Why at seven years old? And I'm taking you back too far, but at seven years old, I was probably collecting Pokémon cards and possibly like, you know, those football stickers. I don't know if you had the Panini stickers. MARC: Oh yeah, I was doing that as well. SAMI: But you were doing that as well. But then what drove you at that age? What do you think it was that made you think, I want to start learning to code, or play around with the computer, or get into tech? MARC: [laughs] Yeah. Well, I remember, back then, I really wanted a computer to play games. Like, I had a friend who had a computer. He was playing games, and I wanted to do that. So, I was asking my mom to have a computer, and she told me, "Yeah, you can have one." And she found a really old computer she bought from a neighbor, I think. But she told me like, "I don't know anything about it. So, you have to figure it out and set it up." And she just found someone to kind of help me. And this person told me to, like, take the computer apart. She taught me a bit of software development, and I kind of liked it. And I was always trying to change the games. Back then, it was way easier. You could just edit a sound file, and you would just edit the sound file in the game, so yeah, just learning like this. It wasn't really my intent to learn programming. It just kind of happened because I wanted to play video games really. SAMI: That's really cool. It's really interesting. Rémy, do you remember how...how did you first get...do you remember your first computer, Rémy? RÉMY: My first computer, I think I remember, but the first one I used it was, first, a very long time ago. I discovered that it was an Apple computer way, way later when I discovered what Apple was and what computers were actually. And I just remember playing SimCity 2000 on it, and it was amazing. And we had to, you know, cancel people from making phone calls while we were on the computer because of the internet and all the way we had to connect to the internet back then. And after that, just, I think, Windows 95 at home. Yeah, that's the only thing I can remember actually. Because I think I was lucky, so I got one quite early. And I don't really remember not having one, so I was quite lucky with that. And so, I was always kind of in the computer game without being too much [inaudible 05:02] [laughs]. 
SAMI: Yeah, I think that's similar to me as well. Like, it's interesting because my initial introduction to computers would have been watching my older brothers kind of play computer games and actually being told to get out the room, or like, you know, "We're busy now. Don't bother us." And then, what actually happened is when they left the room, I managed to play what they were playing, which was the first ever GTA. I don't know if anyone ever played this, but it is so cool if you look back on it. You could probably find emulators online, but it was, like, a bird's eye view, like, way of operating. And it was probably also that drive where you get frustrated on a computer because you want to do something, so, like you were saying, Marc, where you went to edit the sound files because you want to change something. You want to do something. I definitely think that is something which I felt as well is that frustration of I want to change this thing. And then, that kind of gets into well, how does it work? And if I know how it works, then I can probably change it. MARC: Yeah. And once you figure out how things work, it's also really exciting. Like, once you figure out the initialization file on Windows, like, you can edit, like, what level is unlocked right away. It's kind of cheat codes but not really. And there are some really fun ones. Like, I would edit sound files for racing games. And, usually, it's just a base sound file, and then they would pitch shift the sound to make it sound like an engine. So, if you record your voice, it's just really funny. RÉMY: So, Marc, you mentioned moving to management positions quite early. Do you remember what made you do this move? Was it for, like, a natural path in your career, or was it something you really wanted from the first part of your career as a developer? What happened at this moment? MARC: Yeah, that was not completely planned. Like, I don't think I really plan my career precisely. It's just something that happens. So, I joined Drivy after, like, I was already a software engineer for, like, five years at that point. I joined as a lead backend engineer. I did that for three years. And after three years, the company went from...I think there was, like, three software engineers to a dozen. There was a need for more structure, and the CTO, at the time so, Nicolas, wanted to focus more on products. And it was hard to do both, like do the product side, the design, the data, and do the engineering, the software, and so on. So, he wanted to get a bit away from software engineering and more into product. So, there was a gap in the organization. I was there. I was interested to try, and I was already doing some more things on the human side, so talking to people, organizing, internal communication. I kind of liked it. So, I was excited to try, give it a try. It was really interesting. I found that it was a different way to have an impact on the team. I just kept doing it. And my plan was to keep doing it until I'm bored with it. And I'm still not bored with it, even though you kind of miss just actually building the software yourselves, actually coding. So, that's also why I'm trying something different right now with my mobile app adventure. SAMI: Right. So, on the side, you've got this Shadow Boxing App, which, in my dedicated research, I downloaded and had a go with it. MARC: Did you actually try it, or did you just click around? SAMI: I did a proper workout, mate. I did. I put myself as, like, the absolute beginner. I did it on my MacBook Pro. 
I know it's built for iPad or iPhone, but it still worked amazingly well. And it kind of reminded me why I stopped doing boxing because it's hard work. MARC: [laughs] Yeah, it is. SAMI: It's not a gimmick this thing, right? So, it's like, the best way to describe it is it's essentially replacing if I was to go to the gym and have a trainer who's telling me kind of the moves to make or how to do it, then this kind of replaces that trainer. So, it's something you can do at home. It was really cool. I was surprised, actually. I thought, at the beginning, it's not going to be that interactive, or it won't actually be as hard or difficult as a workout, and it really was. So, it's, yeah, it was really cool, really interesting to try it. And going into that, you say you wanted to get back more into coding, and that's why you are doing this kind of, like, app on the side, or it allowed you to kind of do a bit more coding away from the people management. You've been involved in a lot of startups, and I actually often get...as consultants, when we work at thoughtbot, we get a lot of people who come with different startup ideas. When you look back at all the startups you've been involved with, do you think more startups are successful than those that fail? Or have you seen a lot of startups...actually, people come with these great ideas; they want to build this amazing product, but it's actually really hard to be a successful product? MARC: I think it's [inaudible 10:22] how to have the right idea, be at the right spot at the right time, build the right team, get enough momentum. I think most startups fail, and even startups that are successful often can be the result of a pivot. Like, I know companies that pivoted a bunch of times before finding any success. So, it's really hard actually...if I take my past four companies, only two are still alive. Like, the first two went under. Actually, there's even more companies that went under after I left. Yeah, it's just really hard to get anything off the ground. So, yeah, it's complicated, and I have a lot of respect for all the founders that go through it. For The Shadow Boxing App, I worked on it for the past three years, but I'm only working on it almost full-time for the past two months. And it was way safer. I could check the product-market fit. I could check if I enjoyed working on it. So, I guess it was easier. I had the luxury of having a full-time job. Building the app didn't take that much time. But to answer your question, I think, from my experience, most startups fail. And the ones that succeed it's kind of lightning in a bottle, or, like, there's a lot of factors that get into it. It's hard to replicate. A lot of people try to replicate some science, some ideas. They go, oh, we'll do this, and we'll do that. And we use this technique that Google uses and so on, but it's never that straightforward. SAMI: Yeah, I'm so happy you said that because I think it's a real brutal truth that I'd also say most of the startup projects that I've worked on probably have failed. Like, there's very few that actually make it. It's such a saturated market. And I think, I guess, in your role as advising startups, it's really good to come in with that honesty at the beginning and to say, "It's a big investment if you want to build something. Most people probably aren't successful." And then, when you work from that perspective, you can have, like, way more transparent and open discussions from the get-go. 
Because when you're outside of tech...and a lot of people have this idea of if I could just get an app to do my idea, I'm going to be the next Facebook. I'm going to be the next, you know, Amazon Marketplace. And it just kind of isn't like that. You've got these massive leaders in Facebook, Amazon, Google, Netflix. But below that, there's a lot of failures and a massively saturated market. So, yeah, just, it's so interesting that you also see it in a similar way. MARC: What I saw evolve in the past 10 years is the fact that people got more realistic with it. So, maybe 10 years ago, I would have people coming to me with just the most ridiculous idea, like, you know, I'll do Airbnb for cats. And really think, yeah, I just need a good idea, and that's it. But now I feel like people kind of understand that it's more complicated. There's way more resources online. People are more educated. They also see way more successes. Failures are also a bit more advertised. We saw a bunch of startups just go under. It feels like every month I get an email from a tool I used in the past saying, "Oh, we're shutting down," and so on. So, I think it's not as bad as 10 years ago where weekly I would have just people asking me, "I want to build this app," and the app would be just the most ridiculous thing or something that would be really smart, but it's really like, "Oh, I want to do, like, food delivery but better than what exists." It's like, yeah, that's a really good idea, but then you need...it's not only software. There's logistics. There's so much behind it that you don't seem to understand just yet. But, as a coach, so, what I'm doing is I'm helping startups that are usually before or after series A but not too large of startups just go to the next stage. And people are really aware of that and really worried. Like, they see money going down, market fit not necessarily being there. And they know, like, their company is at risk. And especially when you talk to founders, they're really aware that, you know, everything could be collapsing really quickly. If they make, like, three really bad decisions in a row, you're basically done. Obviously, it depends on the company, but yeah, people are more aware than before, especially nowadays where money is a bit harder to get. Let's say two years ago, there was infinite money, it felt like. Now it's more tight. People are more looking at the unit economics precisely. So, people need to be more realistic to succeed. RÉMY: What's the kind of recurrent struggle the startups you coach usually face? Apparently, it quite changed in the past decade, but maybe what are the current struggles they face? MARC: It really depends. It's kind of broad. But, usually, it would be, let's say, a startup after their first round of funding, let's say, if you take startups that are looking for funding. So, you usually have a group of founders, two to four, usually two or three, that are really entrepreneurs that want to bootstrap some things. They're builders. They're hacking things together, and they're really excited about the product. And, suddenly, fast forward a few years, they're starting to be successful, and they have to lead a team of, you know, like, 50 people, 100 people, and they weren't prepared for that. They were really prepared to, like, build software. Like, especially the CTOs, they are usually really great hackers. They can, like, create a product really quickly. 
But, suddenly, they need to manage 30 engineers, and it's completely different, and they're struggling with that. So, that's a common problem for CTOs. And then, it creates a bunch of problems. Like, you would have CEOs and CTOs not agreeing on how to approach the strategy, how to approach building a thing. What should be the methodology? Something that worked with 3 engineers around the table doesn't work with 50 engineers distributed in 5 countries. And if it's your first time being a CTO, and often founders of early-stage startups are first-time CTOs, it can be really hard to figure out. MID-ROLL AD: Are your engineers spending too much time on DevOps and maintenance issues when you need them on new features? We know maintaining your own servers can be costly and that it's easy for spending creep to sneak in when your team isn't looking. By delegating server management, maintenance, and security to thoughtbot and our network of service partners, you can get 24x7 support from our team of experts, all for less than the cost of one in-house engineer. Save time and money with our DevOps and Maintenance service. Find out more at: tbot.io/devops. RÉMY: In your past companies, so you've been VP and CTO. So, in your opinion, what's the best a VP or a CTO can bring to a scaling startup? What are your best tips to share? MARC: I guess it depends [laughs], obviously, like, depending on the stage of the company, the size of the company. For instance, when I was at Drivy, at some point, the most important thing was scaling the team hiring, and so on. But, at some point, we got acquired by Getaround, and the priorities got shifted. It was more like, okay, how do you figure out this new setup for the company and the team? Like, what is good? What is bad? How do you communicate with the team? How do you get people to stay motivated when everything is changing? How do you make sure you make the right decisions? And then, when I joined Back Market, Back Market when I joined, I had a team of a bit less than 12 engineers reporting directly to me. And after a bit more than a year, I had 60, and I hired most of them. So, here the challenge was just scaling insanely fast. Like, the company is really successful. Like, Back Market is selling refurbished electronics in a mission to, you know, provide a viable alternative to buying new electronics. So, it's basically, do you want a smartphone that is both cheaper and more ecologically viable? And most people would say yes to that. So, a company is insanely successful, but it's really hard to scale. So, at that point, the role was, okay, how do you make sure you scale as well as possible with a lot of pressure while still leaving the team in a state that they're able to still build software? Because it's just really chaotic. Like, you can't, like, 5X your team without chaos. But how do you minimize that but still go really fast? SAMI: Yeah. So, not only did I try that Shadow App. I actually went on that Backup website. What's it called? It's not called Backup. What's it called again? MARC: Back Market. SAMI: Back Market. Thank you. Yeah, it was really cool. I checked my old iPhone SE from 2020, which I've kept for about...over three years, I've had this iPhone. And they said they would give me $72 for it, which was really cool. So, it sounds like a really cool idea. MARC: That's something we worked on, which is, basically, if you have any old phones in your drawer, it's a really bad spot for them. And so, there's a service. You go on the website. 
You say, "I have this, I have that; I have this, I have that." And either we buy it from you, or we just take it away from you, and we recycle them, which is much better than just having them collect dust. SAMI: Yeah, no, it's a great idea. What interested me when you were speaking about kind of these different positions that you've been in, I was almost expecting you to talk about maybe, like, a technical challenge or code complexity difficulty. But, actually, what you've described is more people problems. And how do we scale with regards to people, and how do we keep people motivated? So, I guess using that experience, and this might be counterintuitive to what a lot of people think, but what do you think is the hardest thing about software development? I know there could be many things. But if you had to pick something that is the most difficult, and maybe we can all have an answer to what we think this is, but starting with you, Marc, what do you think is the hardest thing about software development then? MARC: What I saw is how do you build something that works for enough time to bring value to the customers? So, it's easy to hack something together pretty quickly and get it in front of people, but then it might not be reliable. It might break down. Or you could decide to build something perfect and spend, like, two years on it and then ship it, and then it's really stable, but maybe it's not what people want. And finding this balance between shipping something fast, but shipping something that is reliable enough for what you're building. Obviously, if you're building a health care system, you will have more, like, the bar will be higher than if you build, like, Airbnb for cats. Finding this balance and adjusting as you go is really hard. So, for instance, when do you introduce caching? Because, obviously, caching is hard to do right. If you don't do it, your site will be slow, which can be okay for a time. But then if you introduce it too late, then it's really hard to just retrofit into whatever you already have. So, finding the right moment to introduce a new practice, introduce a new technology is tricky. And then, like, I talked a lot about the people, and it's also because I spent quite a bit of time in leadership position. But, at the end of the day, it will be the people writing the code that gets the software to exist and run. So, having people aligned and agreeing on the vision is also key because unless I'm the only developer on the project, I can't really make all decisions on things that are going to get built. So, figuring out how to get people motivated, interested in just building in the same direction is really important. It's really easy. Like, one thing with Drivy, when I was there, that was really fun to see, like, many people have this reaction, especially the more senior people joining the company. They would see the engineering team, and they were really, really surprised by how small it was because we were being really, really efficient. Like, we were paying really close attention to what we would work on. So, kind of technology we would introduce would be quite conservative on both to really be able to deliver what is the most important. So, we were able to do a lot with, honestly, not a lot of people. And I think this is a great mark for success. You don't need a thousand people to build your software if you ask the right question, like, "Do I need to build X or Y?" and always having these discussions. RÉMY: What's your opinion on that, Sami? 
SAMI: Yeah, I guess it changes. Like, for example, today, the hardest thing about software development was just getting Jira to work. That has literally ruined my whole day. But I've found, for me, what I find is the most difficult thing to do is making code resilient to change. What I mean by that is writing code that's easy to change. And a lot of that, I guess, we try to work on at thoughtbot, as consultants, is following kind of design principles and best practices and certain design patterns that really make the code easy to change. Because that, I think, when I'm writing code is the biggest challenge. And where I feel when I'm working with our clients one of the biggest things they can invest in, which is difficult because there's not a lot of visibility around it or metrics, is ensuring that code that's written is easy to change because, at some point, it will. And I've also worked on systems which are bigger, and when you can't change them, conversations start happening about the cost of change. Do we rewrite it from the ground up again? And that opens a whole different can of worms. So, that, for me, I think, is definitely one of the hardest things. How about yourself, Rémy? RÉMY: I don't know about the most difficult. I mean, there are many things difficult. But I remember something that I had to put extra effort, so maybe it was one of the most difficult for me. When I started being a consultant, when I joined thoughtbot was to understand what's the boundary between executing and giving an advice? So, basically, I discovered that when you're a consultant, but it works also when you're a developer in a team, you know, you're not just only the one who is going to write the code. You're supposed to be also someone with expertise, experience to share it and to make the project and the team benefit from it. So, at some point, I discovered that I should not just listen to what the client would say they want. Obviously, that's what they want, but it's more interesting and more difficult to understand why they want it and why they actually need, which could be different from what they want. So, it's a whole different conversation to discover together what is actually the necessary thing to build, and with your expertise and experience, try to find the thing that is going to be the most efficient, reliable, and making both the client and the customers happy. MARC: Yeah. And as software engineers, it's really easy to get excited about a problem and just go, "Oh, I could solve it this way." But then you need to step back and go, "Well, maybe it doesn't need fixing, or we should do something completely different." At some point, I was working with a customer service organization. In their workflows, they had to go on, let's say, five different pages and click on the button to get something to do one action. And so, what they asked for is to have those five buttons on one single page, and so, they could go, click, click, click, click, click. But after looking at it, what they needed is just automation of that, not five buttons on the page. But it's really easy to go, oh, and we could make those buttons, like, kind of generic and have a button creator thing and make it really fancy. When you step back, you go, oh, they shouldn't be clicking that many buttons. SAMI: Yeah, that makes so much sense because just in that example...I can't remember where I read this, but every line of code you write has to be maintained. 
So, in that example where you've got five buttons, you're kind of maintaining probably a lot more code than when you've got the single button, which goes to, I don't know, a single action or a method that will handle kind of all the automation for you. And that's also, you know, driving at simplicity. So, sometimes, like, you see this really cool problem, and there's a really cool way to solve it. But if you can solve it, you mentioned, like, being conservative with the type of frameworks maybe you used in a previous company, like, solve it in the most simple way, and you'll thank yourself later. Because, at some point, you have to come back to it, and maintain it, change it. Yeah, so it makes a lot of sense. And, Marc, you said you started when you were 7, which is really young. Through that amount of time, you've probably seen massive changes in the way websites look, feel, and how they work. In that time, what's the biggest change you actually think you've seen? MARC: The biggest thing I saw is, when I started, internet didn't exist or at least wasn't available. Like, I remember being at school and the teacher would ask like, "How many people have a computer at home?" And we'd be like, two or three people. So, people didn't have internet until I was like 14, 15, I'd say. So, that's the biggest one. But, let's say, after it started, they just got more complicated. Like, so, the complexity is getting crazy. Like, I remember, at some point, where I saw I think it was called Aviary. It was basically Photoshop in the browser, and I was just insanely impressed by just the fact that you could do this in the browser. And, nowadays, like, you've got Figma, and you've got so many tools that are insanely impressive. Back then, it was just text, images, and that's it. I actually wrote a blog post a few years ago about how I used to build websites just using frames. So, I don't know if you're familiar with just frames, but I didn't really know how to do divs. So, I would just do frames because that's what I understood back then, again, little kid. But it was kind of working. You were dealing with IE 5 or, like, I remember, like, professionally fixing bugs for IE 5.5 or, like, AOL, like, 9, something ridiculous like this. So, building a website just got way easier but also way more complicated, if that makes sense. Like, it's way easier to do most things. For instance, I don't know, like, 20 years ago, you wanted a rounded corner; you would have to create images and kind of overlay them in a weird way. It would break in many cases. Nowadays, you want rounded corners? That's a non-topic. But now you need, like, offline capabilities of your website. And, in a lot of cases, there's really complex features that are expected from users. So, the bar is getting raised to crazy levels. SAMI: Yeah, I always wonder about this. Like, when you look at how the internet used to be and how people develop for the internet, and, like you're saying, now it's more complex but easier to do some things. I don't know if as developers we're making things harder or easier for ourselves. Like, if you look at the amount of technology someone needs to know to get started, it grows constantly. To do this, you have to add this framework, and you need to have this library, and maybe even a different language, and then, to even host something now, the amount of technologies you need to know. Do you think we're making things harder for ourselves, or do you think easier? 
MARC: Well, I guess there's always back and forth, like, regarding complexity. So, things will get really, really complex, and then someone will go, "Well, let's stop that and simplify." That's why, like, I'm seeing some people not rejecting React and so on, but going a simpler route like Rails has options like this. There's people using HTMX, which is really simple. So, just going back to something simpler. I think a lot of the really complex solutions also come from the fact that now we have massive teams building websites, and you need that complexity to be able to handle the team size. But it's kind of, then you need more people to handle the complexity, and it's just getting crazy. Yeah, honestly, I don't know. I'm seeing a lot of things that feel too complex for...like, the technology feels really complicated to accomplish some things that should be simple or at least feel simple. But, at the same time, there are things that got so simple that it's ridiculous like just accepting payment. I remember, like, if you wanted to accept payment on a site, it would be months of work, and now it takes a minute. You just plug in Stripe, and it works. And it's often cheaper than what it used to be. So, it's kind of...or deploying. You mentioned deploying can be really hard. Well, you don't need to have a physical server in your room just eating your place up to have your website, your personal website running. You just push it to Vercel, or Heroku, or whatever, or just a static page on S3. So, this got simpler, but then, yeah, you can get it to be so much more crazy. So, if you host your static website on S3, fairly simple. But then if you try to understand permissions on S3, then, you know, it's over. RÉMY: I don't know if it's really in the path of our discussion. I just wanted to ask you, so this is the on tour series, where we...so, usually, the Giant Robots podcast used to be a little bit more American-centric, and this on tour is moving back to the other side of the Atlantic with, again, Europe, West Asia, and Africa. You've been part of a company, Drivy, which expanded from France to neighboring countries in Europe. What could you tell our listeners about how to expand a business internationally? MARC: That's a tough question, especially in Europe. Because I know looking from the outside, like, if you're from the U.S. and you look at Europe, it feels like, you know, a uniform continent, but really, it's very different. Like, just payment methods are different. Culture is very different. For instance, when I was working at Back Market in France, one of the branding aspects of Back Market was its humor. Like, we would be making a lot of jokes on the website, and it would work really well in France. Like, people would love the brand. But then you expand to other countries, and they just don't find that funny at all. Like, it's not helping at all, and they're expecting a different tone of voice. So, it's not just, okay, I need to translate my own page; it's I need to internationalize for this market. I guess my advice is do it country by country. Sometimes I see companies going like, oh, we opened in 20 different countries, and you go, how even do you do that? And spend some time understanding how people are using your product or, like, a similar product locally because you would be surprised by what you learn. Sometimes there's different capabilities. For instance, when Drivy went to the UK, there's so much more you can learn. 
There's the government database that you can look up, and it really helps with managing risk. If people are known to steal cars, you can kind of figure it out. I'm simplifying a bit, but you can use this. You don't have that in France because we just don't have this solution. But if you go to Nordic countries, for instance, they have way more electric vehicles, so maybe the product doesn't work as well. So, it's really understanding what's different locally and being willing to invest, to adapt. Because if you go, okay, I'm going to open in the Netherlands but you don't adopt the payment methods that are used in the Netherlands, you might as well not open at all. So, it's either you do it properly and you kind of figure out what properly means for your product, or you postpone, and you do it well later. Like, right now, I'm struggling a bit with my app because it's open. So, it's on the App Store, so it's open globally. And it's a SaaS, so it's simpler, but I struggle with language. So, it's in French and English. I spoke both of this language, obviously, French better than English. But I think I'm doing okay with both. But I also built it in Spanish because I speak some Spanish fairly poorly, and I wanted to try to hit a different market like the Mexican market that are doing boxing quite a lot. But the quality doesn't seem there. Like, I don't have the specific boxing lingo, so I'm contemplating just rolling it back, like, removing the Spanish language until I get it really well, maybe with a translator dedicated to it that knows boxing in Spanish. Because I work with translators that would translate, but they don't really know that, yeah, like a jab in boxing. In Spanish, they might also say, "Jab." They won't translate it to, like, [inaudible 38:31]. SAMI: Yeah. At thoughtbot, we have one of our clients they wanted to release their app also internationally. And so, we had also kind of a lot of these problems. We even had to handle...so, in some languages, you go from left to right, right to left. So, that kind of also changed a lot of the way you would design things is mainly for people who are going from left to right. I mean, that's thinking kind of more Europe, U.S.-centric. And then, you could be releasing your app into a different country where they read the other direction. So, yeah, a lot of this stuff is really interesting, especially the culture, like you're saying. Do they find this humor funny? And then, how do they translate things? Which, in my head, I think, could you use AI to do that. Which is a nice segue into, like, the mandatory question about AI, which we can't let you go until we ask you. MARC: [laughs] SAMI: So, okay, obviously, I'm going to ask you about your thoughts on AI and where you think we're headed. But I've seen something interesting, which I don't know if this is something that resonates with you as well. I've seen a bit of a trend where the more experienced developers or more senior developers I talk to seem to be a bit more calm and less concerned. Whereas I would consider myself as less experienced, and I feel, like, kind of more anxious, more nervous, more jumping on the bandwagon sort of feeling of keeping an eye on it. So, I guess, with your experience, what are your thoughts on AI? Where do you think we are headed? MARC: That's a big question, and it feels like it's changing month to month. It feels way more interesting than other trends before. Like, I'm way more excited about the capabilities of AI than, like, NFTs or stuff like this. 
I'm actively using AI tooling in my app. I was using some AI at Back Market. So, it's interesting. There's a bunch of things you can be doing. Personally, I don't think that it's going to, like, make programming irrelevant, for instance. It will just change a bit how you will build things just like...so, we talked about what changed in the past. For instance, at some point, you would need a team of people moving around physical computers and servers and just hooking them up to be able to have a website. But now, most people would just use a cloud provider. So, all those people either they work for the cloud provider, or they're out of a job. But really what happened is most shifted into something different, and then we focused on something different. Instead of learning how to handle a farm of servers, we learned how to, I don't know, handle more concurrency in our models. And I think when I look back, I feel like, technically, maybe, I don't know, 70%, 80% of what I learned is now useless. Like, I spent years getting really good at handling Internet Explorer as a web developer. Now it's just gone, so it's just gone forever. And it feels like there's some practice that we're having right now that will be gone forever thanks to AI or because of AI, depending on how you look at it. But then there'll be new things to do. I'm not sure yet what it will be, but it will create new opportunities. There are some things that look a bit scary, like, or creepy. But I'm not worried about jobs or things like this. I'm a bit concerned about people learning programming right now because, yeah, there's a lot of hand-holding, and there's a lot of tools that you have to pay to get access to this hand-holding. So, if you're a student right now in school learning programming and your school is giving you some AI assistant, like Copilot or whatever, and this assistant is really good, but suddenly it goes away because you're not paying anymore, or, like, the model change, if you don't know how to code anymore, then it's a problem. Or maybe you're not struggling as much. And you're not digging deep enough, and so you're learning slower. And you're being a bit robbed of the opportunity to learn by the AI. So, it's just giving you the solution. But it's just, like, the way I use it right now, so I don't have an assistant enabled, but I usually have, like, a ChatGPT window open somewhere. It's more like a better Stack Overflow or a more precise Stack Overflow. And that helps me a lot, and that's really convenient. Like, right now, I'm building mostly using Swift and Swift UI, but I'm mainly a Ruby and JavaScript developer. So, I'm struggling a lot and being able to ask really simple questions. I had a case just this morning where I asked how to handle loading of images without using the assets folder in Xcode. I just couldn't figure it out, but it's really simple. So, it was able to tell me, like, right away, like, five options on how to do it, and I was able to pick the one that would fit. So, yeah, really interesting, but yeah, I'm not that worried. The only part I would be worried is if people are learning right now and relying way too much on AI. RÉMY: Well, at least it's positive for our job. Thank you for making us believe in a bright future, Marc. MARC: [laughs] RÉMY: All right. Thank you so much, Marc, for joining us. It was a real pleasure. Before we leave, Marc, if you want to be contacted, if people want to get a hold of you, how can you be contacted? 
MARC: There's two ways: either LinkedIn, look up Marc G Gauthier. Like, the middle initial is important because Marc Gauthier is basically John Smith in France. My website, which is marcgg.com. You can find my blog. You can find a way to hire me as a coach or advisor. That's the best way to reach out to me. RÉMY: Thank you so much. And thank you, Sami, as well. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have any questions or comments, you can email us at hosts@giantrobots.fm. You can find me on social media as rhannequin. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time.  AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Merge Conflict
419: An Ideal Mobile Developer Setup on macOS

Merge Conflict

Play Episode Listen Later Jul 15, 2024 38:36


James is setting up a fresh macOS install and is ready to get .NET MAUI ready to go. Now the question is how... Xcode, VS Code, Android Studio, SDKs, emulators, simulators, and so much more! Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Merge Conflict: Twitter, Facebook, Website, Chat on Discord Music : Amethyst Seer - Citrine by Adventureface ⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐ Machine transcription available on http://mergeconflict.fm

Side Project Spotlight
#70: Dipping Our Toes in Predictive Code Completion

Side Project Spotlight

Play Episode Listen Later Jul 15, 2024 51:18


The trio is back from celebrating America to talk about Apple's horribly named "predictive code completion" in Xcode 16. Kotaro tries to use it with some interesting results and we compare Apple's current implementation with other similar products like Microsoft's Intellicode and contrast it with something like Copilot or Apple's forthcoming Swift Assist product. We also engage in an interesting discussion about when you should or should not use these code generation tools. ## Topics Discussed: - Introductions - Kotaro installs macOS 15 beta - Kotaro tries “predictive code completion” in Xcode 16 - Steve's experience with Intellicode in Visual Studio - DocC generation with Copilot in VS Code - Generating unit tests with Copilot - When and when not to use these tools? - Future experiments - Wrap-up - PhillyCocoa.org - One More Thing - AzamSharp.school Intro music: "When I Hit the Floor", © 2021 Lorne Behrman. Used with permission of the artist.

CppCast
libunifex and std::execution

CppCast

Play Episode Listen Later Jun 28, 2024 61:56


Jessica Wong and Ian Peterson join Timur and Phil. Ian and Jessica talk to us about libunifex and other async code projects at Meta, how it has evolved into the proposed std::execution, and what structured concurrency is. Show Notes News Xcode 16 beta The std library that ships with Xcode 16 supports "hardening" libc++ hardening modes "What's the deal with std::type_identity?" - Raymond Chen "C++ programmer's guide to undefined behavior: part 1 of 11" - PVS Studio "C++ Brain Teasers: Exercise Your Mind" - Anders Schau Knatten Links "std::execution" - P2300R9 "async_scope – Creating scopes for non-sequential concurrency" - P3149R3 "Notes on structured concurrency, or: Go statement considered harmful" Folly Coro

Infinitum
Višestruki strani plaćenik

Infinitum

Play Episode Listen Later Jun 15, 2024 78:26


Ep 236. I am Jakoby on X: trying to turn in that bounty to microsoft is one of the biggest regrets of my life. PSA: Bartender Mac App Under New Ownership, But Lack of Transparency Raises Concerns Craig Hockenberry: A lot of bad things can be done with Bartender's permissions: Accessibility lets you take control of the Mac and Screen Recording lets you see everything going on. It's quite possible someone just bought a valuable back door. Ars Technica: Windows Recall demands an extraordinary level of trust that Microsoft hasn't earned The Verge: There's a secret smart home radio in your new Mac 2024 winners and finalists - Apple Design Awards - Apple Developer WWDC24 Guides WWDC24: Platforms State of the Union Apple Developer on YouTube Apple Intelligence Preview iOS 18 Preview iPadOS 18 Preview macOS Sequoia Preview watchOS 11 Preview iPhone - Compare Models 15 Pro vs 14 Pro vs 13 Pro iPhone RAM list: How much every model has WS figured it out: Apple Passes Microsoft to Become World's Most Valuable Company Again Dave Rahardja: This cloud architecture is fascinating. jesse squires: Kind of ridiculous that the first 2 default configurations of the latest M3 MacBook Pro (up to $1800!!!) can't even do full Xcode 16. Najnovejši operacijski sistem za iPhone tudi v slovenščini Slovenščina na Applovih napravah. Končno. - Tehnična podpora Thomas A.: After almost two month long review process, Apple has rejected UTM SE from the iOS App Store as well as from notarization for third party app stores. Steve Troughton-Smith: Apple needs to read the terms of the DMA again; Apple can't reject UTM from distribution in third party marketplaces, in just the same way it can't prevent Epic from building an App Store. App Review is going to land them yet another clash with the EU, and potential fine-worthy rule violation Japan Passes Law to Allow Third-Party App Stores on the iPhone The Ineffable Importance of Corporate Communications - TidBITS Apple TV Go: How a USB-C Mod Spiraled into an iPad-Based tvOS Workstation Thomas Godden: Today I learned that SIM cards have a decently powerful MCU on them and run Java. Java Card Acknowledgments: Recorded 15.6.2024. Intro music by Vladimir Tošić, the old site is here. Logo by Aleksandra Ilić. Episode artwork by Saša Montiljo, his corner on DeviantArt.

Magic Rays of Light
WWDC 2024 Special

Magic Rays of Light

Play Episode Listen Later Jun 12, 2024 46:44


Sigmund and Devon recap the Apple TV and entertainment announcements at WWDC – including tvOS 18, visionOS 2, Immersive Video updates, and more – and score their event predictions.

Opening Chat
- Apple TV Go: How a USB-C Mod Spiraled into an iPad-Based tvOS Workstation

WWDC
- Coming to Apple TV+
- visionOS 2: The MacStories Overview
- Apple Announces Vision Pro Launching in More Countries Later This Month
- iOS and iPadOS 18: The MacStories Overview
- Audio and Home: The MacStories Overview
- tvOS 18: The MacStories Overview
- watchOS 11: The MacStories Overview
- macOS Sequoia: The MacStories Overview
- A Look at Code Completion and Swift Assist Coming in Xcode 16
- Apple Intelligence: The MacStories Overview
- Apple Announces New Features Coming to Its Services This Fall

News
- Apple Design Award Winners Announced
- Theater: The Future of Cinema

Releases
- Presumed Innocent
- Camp Snoopy

Extras
- Apple Developer

Up Next
- WWDC24

Send us a voice message all week via iMessage or email to magic@macstories.net.
Sigmund Judge | Follow Sigmund on X, Mastodon, or Threads
Devon Dundee | Follow Devon on Mastodon or Threads
Join Club MacStories. Subscribe to Apple TV+. Subscribe to MLS Season Pass.

Connected
502: Overcome by the Thinness

Connected

Play Episode Listen Later May 15, 2024 79:21


Federico Viticci, Stephen Hackett, and Myke Hurley

With Myke on vacation, Stephen and Federico discuss what's going on with iPadOS, then cover the news from Google I/O and OpenAI's recent event.

This episode of Connected is sponsored by:
- Jam: Developer-friendly bug reports in 1 click.
- Squarespace: Save 10% off your first purchase of a website or domain using code CONNECTED.
- Tailscale: Secure remote access to shared resources. Sign up today.

Links and Show Notes:
- Get Connected Pro: Preshow, postshow, no ads.
- Submit Feedback
- Upgrade #512: Pros Are the Little Boats - Relay FM
- Johan in Rome
- Apple announces new accessibility features, including Eye Tracking - Apple
- Apple Marks Global Accessibility Awareness Day with a Preview of OS Features Coming Later This Year - MacStories
- Get alerts for smoke and carbon monoxide detectors - Apple Support
- STS on Mastodon: "If you install Xcode, both iPhone and iPad simulators run out of the exact same OS root."
- Apple still isn't done building its dream iPad - Fast Company
- Spring Update | OpenAI
- Google I/O 2024: all the news from the developer conference - The Verge
- Watch this screaming, rainbow-clad musician demo Google's AI DJ - The Verge
- Marc Rebillet - YouTube
- Project Astra: the future of AI at Google is fast, multi-modal assistants like Gemini Live - The Verge
- Google is overhauling its search results page with AI overviews and Gemini organization - The Verge
- Google Search adding 'Web' filter that skips AI answers
- It's the End of Go

Stacktrace
204: “Ship a prompt”

Stacktrace

Play Episode Listen Later May 14, 2024 63:16


Stacktrace is back! John and Rambo check their hype levels for WWDC24, and discuss how AI might fit into Apple's plans for this year's releases. Also, Xcode wishes, and the challenges of building distributed systems.

The Rebound
491: A Fundamental Misunderstanding of Podcasts

The Rebound

Play Episode Listen Later Apr 17, 2024 49:53


Everyone loves the new AI pins, Dan has fun with setting up a new iCloud account and Lex is mad at Xcode.
- The Humane pin is not getting good reviews.
- Jason wonders if anyone but a tech giant can make the next big thing.
- Limitless is another AI pin we have questions about.
- Lex bought a CarPlay head unit.
If you want to help out the show and get some great bonus content, consider becoming a Rebound Prime member! Just go to prime.reboundcast.com to check it out! You can now also support the show by buying shirts, iPhone cases, hats and more items featuring our catchphrase, "TECHNOLOGY"! Are we right?!