Podcasts about vpc

  • 150 PODCASTS
  • 379 EPISODES
  • 41m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Dec 19, 2025 LATEST

Best podcasts about vpc

Latest podcast episodes about vpc

Le Panier
#367 - Damart: how do you modernize an iconic brand without betraying it?

Le Panier

Play Episode Listen Later Dec 19, 2025 64:46


In this episode, Laurent Kretz talks with Bérénice Maîtrepierre, head of e-commerce at Damart, the iconic brand that has been keeping the French warm since 1953. Today, Damart faces a new challenge: combining heritage with modernity. Retail is holding strong with more than 90 stores. Print remains strategic: one online purchase in two is preceded by a catalog. And digital keeps growing: €50 million in sales and 20 million visits. Bérénice explains how to modernize a heritage brand without losing its DNA. Everything is designed to tell the Damart story differently, online and in store.

On the agenda:
00:00:00 - Introduction
00:02:12 - Introducing Damart
00:09:54 - Competition in thermal wear, SEO/SEA, brand-awareness challenges
00:16:15 - The target customer: age, habits, and behaviors
00:22:37 - The role of print: its influence on conversion
00:28:11 - Catalog references, mail order (VPC), and digital adaptation
00:31:48 - Digital = 20% of revenue (≈ €50M)
00:37:04 - How to prioritize tech development
00:51:02 - E-merchandising and customer experience
01:03:38 - Conclusion

And a few final notes: follow Le Panier on Instagram @lepanier.podcast! Sign up for the newsletter at lepanier.io to excel in e-commerce! Listen to episodes on Apple Podcasts, Spotify, or Podcast Addict. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.

InfosecTrain
AWS RAM Explained: Mastering Secure Multi-Account Resource Sharing

InfosecTrain

Play Episode Listen Later Dec 18, 2025 5:04


Managing complex multi-account environments often leads to resource duplication, high operational overhead, and ballooning cloud costs. In this episode, we break down AWS Resource Access Manager (RAM), a powerful service that allows you to create resources once and share them securely across your entire organization. Discover how to centralize your infrastructure while maintaining granular control, ensuring your architecture is both scalable and cost-effective without compromising security.
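As a rough sketch of the sharing model the episode describes, the snippet below assembles the parameters for AWS RAM's CreateResourceShare call. The resource ARN and account IDs are placeholders, and the parameter names follow the AWS RAM API as exposed by boto3; verify against the current documentation before relying on them.

```python
# Sketch: create one resource (here, a hypothetical Transit Gateway) and
# share it across accounts with AWS RAM instead of duplicating it.

def build_resource_share(name, resource_arns, principals, external=False):
    """Assemble CreateResourceShare parameters (boto3 uses these key names)."""
    return {
        "name": name,
        "resourceArns": list(resource_arns),
        "principals": list(principals),       # account IDs, OU ARNs, or org ARN
        "allowExternalPrincipals": external,  # False keeps sharing org-internal
    }

share = build_resource_share(
    name="shared-network",
    resource_arns=["arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0abc"],
    principals=["222222222222", "333333333333"],
)

# With real credentials, this payload would be sent via:
#   boto3.client("ram").create_resource_share(**share)
print(share["name"], len(share["principals"]))
```

Keeping `allowExternalPrincipals` off is what preserves the "granular control" the episode mentions: only principals inside your AWS Organization can consume the share.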

Code Story
The Railsware Way - Mistakes & Lessons in Product Evolution, with Oleksii Ianchuk

Code Story

Play Episode Listen Later Dec 3, 2025 26:14


Today, we are dropping our final episode in the series "The Railsware Way", sponsored by our good friends at Railsware. Railsware is a leading product studio with two main focuses: services and products. They have created amazing products like Mailtrap, Coupler, and TitanApps, while also partnering with teams like Calendly and Bright Bytes. They deliver amazing products, and have happy customers to prove it.

In this series, we are digging into the company's methods around product engineering and development. In particular, we cover relevant topics to not only highlight their expertise, but to educate you on industry trends alongside their experience.

In today's episode, we are speaking with Oleksii Ianchuk, Product Lead at Railsware, specifically for Mailtrap. Though he doesn't like to limit his activities to product development, Oleksii has spent six years in product and project management, and is keen on searching for insights and putting them to work, as well as gauging the effects of his input.

Questions:
• The story of Mailtrap starts with accidentally sending test emails to real users in 2011. How did Mailtrap evolve from an internal "fail" to a platform serving hundreds of thousands of users? What lessons did you learn about turning problems into opportunities?
• What made you decide to expand from email testing into Email API/SMTP delivery, and why was it harder than expected? What specific challenges around deliverability, spam fighting, and infrastructure caught you off guard?
• Can you walk us through the "splitting the product" mistake and its long-term consequences? Your team decided to separate testing and sending into different repositories and isolated VPC projects. What seemed like a good engineering decision at the time: how did this create problems as you scaled, and what would you do differently?
• You spent a year struggling with Redshift before switching to Elasticsearch. What did that teach you about technology decisions? You ran tests, evaluated alternatives, and still picked the wrong database for your use case. How do you balance thorough research with the reality that you can't always predict what will work until you're in production?
• When do you buy external expertise versus rely on your internal team? How do you decide when to hire outside knowledge, and how do you find the right consultants for niche problems?
• Why didn't existing Mailtrap users immediately adopt the Email API/SMTP feature, and what did that teach you? You expected current users to quickly transition to the new sending functionality. What did you learn about switching costs, user perception, and the challenge of changing how people think about your product?
• What business insights around deliverability, spam prevention, and compliance surprised you most? Email delivery isn't just about infrastructure; there's a whole ecosystem of postmasters, anti-spam systems, and compliance requirements. What aspects of this business were most unexpected, and how did they shape your product strategy?
• Looking at Mailtrap's 13-year journey, what's your philosophy on "failing fast" versus "building solid foundations"?

Links:
https://railsware.com/
https://www.linkedin.com/in/yanch/

Our Sponsors:
* Check out Incogni: https://incogni.com/codestory
* Check out NordProtect: https://nordprotect.com/codestory

Support this podcast at https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

VMware Communities Roundtable
#749 - Virtual Private Cloud capabilities in VCF 9 with Stephen Debarros

VMware Communities Roundtable

Play Episode Listen Later Nov 25, 2025


Stephen is a VMware trainer who teaches classes on NSX and now also covers VCF networking. Eric and Stephen talk about the setup and security aspects of VPCs and how the average admin can now tackle this configuration.

EM360 Podcast
Why Do Most ‘Full-Stack Observability' Tools Miss the Network?

EM360 Podcast

Play Episode Listen Later Nov 25, 2025 24:06


Tech leaders are often led to believe that they have "full-stack observability." The MELT framework (metrics, events, logs, and traces) became the industry standard for visibility. However, Robert Cowart, CEO and Co-Founder of ElastiFlow, believes that this MELT framework leaves a critical gap. In the latest episode of the Tech Transformed podcast, host Dana Gardner, President and Principal Analyst at Interarbor Solutions, sits down with Cowart to discuss network observability and its vital role in achieving full-stack observability.

The speakers discuss the limitations of legacy observability tools that focus on MELT and how this leaves a significant and dangerous blind spot. Cowart emphasises the need for teams to integrate network data enriched with application context to enhance troubleshooting and security measures.

What's Beyond MELT?

Cowart explains that with the MELT framework of metrics, events, logs, and traces, the things being monitored or observed have traditionally been servers and applications. "Organisations need to understand their compute infrastructure and the applications they are running on. All of those servers are connected to networks, and those applications communicate over the networks, and users consume those services again over the network," he added. "What we see among our growing customer base is that there's a real gap in the full-stack story that has been told in the market for the last 10 years, and that is the network."

The lack of insight results in a persistent blind spot that delays problem-solving, hides user-experience issues, and leaves organisations vulnerable to security threats. Cowart notes that while performance monitoring tools can identify when an application call to a database is slow, they often don't explain why. "Was the database slow, or was the network path between them rerouted and causing delays?" he asks. "If you don't see the network, you can't find the root cause." The outcome is longer troubleshooting cycles, isolated operations teams, and an expensive "blame game" among DevOps, NetOps, and SecOps.

ElastiFlow approaches it differently, focusing observability on network connectivity: understanding who is communicating with whom and how that communication behaves. This data not only speeds up performance insights but also acts as a "motion detector" within the organisation. Monitoring east-west, north-south, and cloud VPC flow logs helps organisations spot unusual patterns that indicate internal threats or compromised systems used for launching external attacks. "Security teams are often good at defending the perimeter," Cowart says. "But once something gets inside, visibility fades. Connectivity data fills that gap."

Isolated Monitoring to Unified Experience

Cowart believes that observability can't just be about green lights...

The DevOps Kitchen Talks's Podcast
DKT83 - DevOps Mock Interview #4 (Junior/Middle DevOps Engineer)

The DevOps Kitchen Talks's Podcast

Play Episode Listen Later Sep 26, 2025 97:10


A mock interview for a junior/early-middle DevOps engineer: CI/CD, Git branching, AWS (VPC, S3), Kubernetes (probes, DaemonSets), Terraform. We walk through the fundamentals, typical questions, and common mistakes in plain language.

The Tech Blog Writer Podcast
3417: Inflection AI and the Rise of Contextual Intelligence

The Tech Blog Writer Podcast

Play Episode Listen Later Sep 11, 2025 32:10


Here's the thing. Most enterprise AI pitches talk about scale and speed. Fewer talk about trust, tone, and culture. In this conversation with Inflection AI's Amit Manjhi and Shruti Prakash, I explore a different path for enterprise AI, one that combines emotional intelligence with analytical horsepower, enabling teams to ask more informed questions of their data and receive answers that are grounded in context.

Amit's story sets the pace. He is a three-time founder, a YC alum, and a CS PhD who has solved complex problems across mobile, ad tech, and data. Shruti complements that arc with a product lens shaped by real operational trenches, from clean rooms to grocery retail analytics. Together, they built BoostKPI during the pandemic, transforming natural language into actionable insights, and then joined Inflection AI to help refocus the company on achieving enterprise outcomes. Their shared north star is simple to say yet tricky to execute: make data analysis conversational, accurate, and emotionally aware so people actually use it.

We unpack Inflection's shift from Pi's consumer roots to privacy-first enterprise tools. That history matters because it gives the team a head start on EQ. When you combine a deep well of human-to-AI conversations with modern LLMs, you get systems that explain, probe, and adapt rather than dump charts and call it a day. Shruti breaks down what dialogue with data looks like in practice. Think back-and-forth exchanges that move from "what happened" to "why it happened," then on to "where else this pattern appears" and "what to do next," all grounded in an organization's language and values.

Amit takes us under the hood on deployment choices and ownership. If a customer wants on-prem or VPC deployment, they get it. If they want to fine-tune models to their vernacular, they can. The model, the insights, and the guardrails remain in the customer's control.

I enjoyed the honesty around adoption. Chasing AGI makes headlines, but it rarely helps a merchandising manager spot an early drop in lifetime value or a CX lead understand churn risk before quarter end. The duo keeps the conversation grounded in everyday questions that drive numbers and reduce meetings. They describe a path where EQ and IQ come together to form what Shruti calls contextual intelligence, and where brands can trust AI agents to assist without losing ownership or voice.

If you care about making data useful to more people, and you want AI that sounds like your company rather than a generic assistant, this one is for you. We cover startup lessons, the reality of cofounding as a couple during lockdowns, and how Inflection is working with large enterprises to bring conversational analysis to real workloads. It is a grounded look at where enterprise AI is heading, and a timely reminder that technology should elevate humans, not replace them.

Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist: https://crst.co/OGCLA

AWS Morning Brief
Elemental Innovation Isn't, Resource Explorer Doesn't

AWS Morning Brief

Play Episode Listen Later Sep 8, 2025 4:57


AWS Morning Brief for the week of September 8th, with Corey Quinn.

Links:
• Amazon disrupts watering hole campaign by Russia's APT29
• AWS IAM launches new VPC endpoint condition keys for network perimeter controls
• RDS Data API now supports IPv6
• Now Open — AWS Asia Pacific (New Zealand) Region
• AWS Resource Explorer is now available in AWS Asia Pacific (Taipei) Region
• Protect your Amazon Route 53 DNS zones and records
• Efficiently verify Amazon S3 data at scale with compute checksum operation
• AWS Elemental celebrates 10 years of innovation
• Choosing the right AWS live streaming solution for your use case

Adventures of Alice & Bob
Ep. 86 - When Your VPC Partner Gets Pwned // Brian Wagner

Adventures of Alice & Bob

Play Episode Listen Later Aug 29, 2025 54:52


In this episode, James Maude sits down with Brian Wagner, CTO at Revenir, whose cybersecurity story started at just 15, building Microsoft Access databases for a medical hospice. From teenage entrepreneur to AWS security specialist, Brian's path has been anything but ordinary. He pulls back the curtain on his time with the elite Zipline incident response team where he confronted a catastrophic VPC peering breach that spiraled into data theft and blackmail. Together, James and Brian dissect how vendor network compromises can silently open doors into your cloud and why Brian insists that true security isn't something you bolt on later - it's a culture you build from day one.

Cables2Clouds
Cloud Networking Basics: VPC - AWS vs Azure vs Google Cloud

Cables2Clouds

Play Episode Listen Later Aug 27, 2025 40:54 Transcription Available


What happens when three major cloud providers each reimagine network design from scratch? You get three completely different approaches to solving the same fundamental problem.

The foundation of cloud networking begins with the virtual containers that hold your resources: AWS's Virtual Private Clouds (VPCs), Azure's Virtual Networks (VNets), and Google Cloud's VPCs (yes, the same name, very different implementation). While they all serve the same basic purpose—providing logical isolation for your workloads—their design philosophies reveal profound differences in how each provider expects you to architect your solutions.

AWS took the explicit control approach. When you create subnets within an AWS VPC, you must assign each to a specific Availability Zone. This creates a vertical architecture pattern where you're deliberately placing resources in specific physical locations and designing resilience across those boundaries. Network engineers often find this intuitive because it matches traditional fault-domain thinking. However, this design means you must account for cross-AZ data transfer costs and explicit resiliency patterns.

Azure flipped the script with their horizontal approach. By default, subnets span across all AZs in a region, with Microsoft's automation handling the resilience for you. This "let us handle the complexity" philosophy makes initial deployment simpler but provides less granular control. Meanwhile, Google Cloud went global, allowing a single VPC to span regions worldwide—an approach that simplifies global connectivity but introduces new challenges for security segmentation.

These architectural differences aren't merely academic—they fundamentally change how you design for resilience, manage costs, and implement security. The cloud introduced "toll booth" pricing for data movement, where crossing availability zones or regions incurs charges that didn't exist in traditional data centers. Understanding these nuances is crucial whether you're migrating existing networks or designing new ones.

Want to dive deeper into cloud networking concepts? Let us know what topics you'd like us to cover next as we explore how traditional networking skills translate to the cloud world.

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
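To make the AWS-style "vertical" model concrete, here is a small stdlib-only Python sketch (the CIDR block and AZ names are illustrative) that carves a VPC CIDR into one subnet per Availability Zone, the kind of explicit per-AZ placement AWS expects you to plan:

```python
import ipaddress

def subnets_per_az(vpc_cidr: str, azs: list[str], new_prefix: int) -> dict[str, str]:
    """Carve a VPC CIDR into one subnet per AZ (AWS-style explicit placement)."""
    blocks = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix)
    return {az: str(next(blocks)) for az in azs}

# Illustrative: a /16 VPC split into /20 subnets across three AZs.
layout = subnets_per_az("10.0.0.0/16", ["us-east-1a", "us-east-1b", "us-east-1c"], 20)
for az, cidr in layout.items():
    print(az, cidr)
# us-east-1a 10.0.0.0/20
# us-east-1b 10.0.16.0/20
# us-east-1c 10.0.32.0/20
```

In the Azure model, by contrast, a single subnet would span all three zones and no such per-AZ carve-up is required; planning the carve-up (and the cross-AZ traffic it implies) is the extra work and extra control the AWS approach gives you.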

InfosecTrain
Top AWS Interview Questions & Answers 2025 | Land Your Cloud Job – Part 1

InfosecTrain

Play Episode Listen Later Aug 17, 2025 71:03


Got an AWS interview coming up? These are the real questions recruiters are asking in 2025—plus the winning answers that get you hired. Perfect for beginners, cloud enthusiasts, and anyone targeting DevOps or AWS roles, this session covers the most frequently asked AWS interview questions with expert, concise answers you can apply immediately.

AWS Morning Brief
The Most Expensive Toggle In The World

AWS Morning Brief

Play Episode Listen Later Aug 11, 2025 5:07


Episode Summary: AWS Morning Brief for the week of August 11th, 2025, with Corey Quinn.

Links:
• AWS Cloud Visibility Best Practices
• This Ars article
• AWS European Sovereign Cloud to be operated by EU citizens
• Amazon killing a user's account
• Mountpoint for Amazon S3 CSI driver v2: Accelerated performance and improved resource usage for Kubernetes workloads
• Streamlining outbound emails with Amazon SES Mail Manager
• AWS Lambda now supports GitHub Actions to simplify function deployment
• Anthropic's Claude Opus 4.1 now in Amazon Bedrock
• Amazon CloudWatch introduces organization-wide VPC flow logs enablement
• Understanding and Remediating Cold Starts: An AWS Lambda Perspective
• Amazon SQS increases maximum message payload size to 1 MiB
• OpenAI open weight models now available on AWS
• Best practices for analyzing AWS Config recording frequencies
• Amazon EKS adds safety control to prevent accidental cluster deletion
• AWS Console Mobile App now offers access to AWS Support
• Amazon EC2 now supports force terminate for EC2 instances
• Amazon DynamoDB adds support for Console-to-Code
• Using generative AI for building AWS networks
• Simplify network connectivity using Tailscale with Amazon EKS Hybrid Nodes
• Cost tracking multi-tenant model inference on Amazon Bedrock

Association RevUP
One Thing Your Partnerships Are Missing and How EDUCAUSE Found It

Association RevUP

Play Episode Listen Later Jul 31, 2025 13:59


"We trust them with their money, but we can't trust them with their expertise?"Podcast mic drop. Imagine the progress that could be made if associations consistently invited partners (and their expertise) into the conversation.That's what John Bacon, ASAE's Vice President of Enterprise Sales, is challenging associations to do: go beyond the dollars and recognize partners as strategic allies. This 14-minute episode will give you tips on how to do it.Learn how EDUCAUSE , an association committed to advancing higher education through technology, is bringing partners together in a unique way to advance their industry.Just a year in, partnerships have doubled and the program is at capacity according to Leah Lang (EDUCAUSE).If you're ready to not only increase revenue but also advance your industry by bringing partners into the conversation, give this episode a listen!About Association RevUP: A PAR podcast that celebrates the stories of associations who are utilizing business to bring about real change in their industries. Episodes are less than 15-minutes, written and produced to keep you engaged, and full of actionable insights. Hosted by Carolyn Shomali.- ⁠⁠VPC, Inc.⁠⁠, the boutique production company PAR trusts with our in-person event the RevUP Summit, and the parter of this podcast.- ⁠⁠MyPar.org⁠⁠: learn more about the PAR member community- ⁠⁠RevUP Summit⁠⁠: November 4-6, 2025 in Annapolis, MD. The only association event focused on revenue health.

Association RevUP
APRO's Advocacy Fellowship and How Momentum Drives the Future

Association RevUP

Play Episode Listen Later Jul 7, 2025 13:45


"How do we continue to sustain this for another 30 years?" If sustainability, advocacy, partnerships, or a new generation of members is of importance for your association, this is a 13-minute episode you won't want to miss. Find out how the Association of Progressive Rental Organizations (APRO) is engaging a new generation of members to advocate for its industry in Washington. And how a long-term relationship between APRO and vendor-member Ashley Furniture was the key to making it happen. You'll hear from APRO CEO Charles Smitherman and Ashley Furniture's Mike Kays on a mentorship/fellowship opportunity in Washington. Plus, learn why momentum is vital to your association's success. You'll learn how to build it in a way that is unstoppable from best-selling author and keynote speaker Brant Menswar.

About Association RevUP: A PAR podcast that celebrates the stories of associations who are utilizing business to bring about real change in their industries. Episodes are less than 15 minutes, written and produced to keep you engaged, and full of actionable insights. Hosted by Carolyn Shomali.
- VPC, Inc., the production company PAR trusts with our in-person event the RevUP Summit, and the partner of this podcast
- MyPar.org: learn more about the PAR member community
- RevUP Summit: November 4-6, 2025 in Annapolis, MD
- Brant Menswar
- APRO Fellowship Program

Virtually Speaking Podcast
VMware Cloud Foundation 9 Vision and Strategy

Virtually Speaking Podcast

Play Episode Listen Later Jun 17, 2025 20:45


Welcome to the first episode of the Virtually Speaking Podcast's VMware Cloud Foundation 9 series! In this inaugural installment, hosts Pete Flecha and John Nicholson sit down with Paul Turner, Broadcom's leader of VCF product management, to kick off an in-depth exploration of VMware Cloud Foundation 9. This series will dive deep into the platform's latest innovations, capabilities, and transformative potential for private cloud computing.

Key Highlights:
• The rise of private cloud and its cost-efficiency compared to public cloud services
• Sovereign cloud capabilities and regulatory compliance
• Innovations in GPU as a service
• Native support for VMs and Kubernetes in a single platform
• Advances in storage (vSAN) with enhanced performance and resilience
• Networking improvements with native VPC setup in vCenter
• VMware's partnerships with hyperscalers and 500+ cloud service providers

Join us as we unpack the exciting developments in VCF 9 across multiple episodes.

Cables2Clouds
The Fine Line Between Brilliant and Bizarre Cyber Tactics (Fortnightly News Update)

Cables2Clouds

Play Episode Listen Later Jun 4, 2025 25:26 Transcription Available


Tim and Chris discuss major cybersecurity acquisitions and innovations, examining how these changes will impact enterprise security and cloud architecture.

• Zscaler acquires Red Canary MDR (Managed Detection and Response) to fill gaps in their platform despite potential integration challenges
• AWS Network Firewall now supports multiple VPC endpoints without requiring Transit Gateway deployment
• AWS exits the private 5G market, pivoting to partnerships with established telecommunications providers
• CheckPoint acquires Veritai Cybersecurity to enhance their Infinity platform with "virtual patching" capabilities
• North Korean IT workers using sophisticated techniques to infiltrate Western companies by posing as legitimate remote employees

Check the news document for additional stories we didn't have time to cover, including a project called MPIC focused on preventing BGP attacks with certificate validation.

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Valley Presbyterian Church
5.18.25 - Message: Supporting Queer Youth

Valley Presbyterian Church

Play Episode Listen Later May 28, 2025 23:51


VPC is committed to being a place of welcome for everyone. We are always seeking new ways to spread our arms wider to the world. Over the years, our welcome of LGBTQIA+ folks has grown organically as we have listened to the stories in our congregation and come to understand how important it is to have a clear commitment to full inclusion. We are so proud to be a partner with kin•dom community, an organization that was founded by Rev. Pepa Paniagua, who grew up at VPC. This organization provides safe spaces of belonging for queer youth. And this coming Sunday, we are honored to have Rev. Dr. John Leedy, the current Executive Director of kin•dom, preach in our Sunday morning service.

GrowthCap Insights
Private Credit Investor: Victory Park's Brendan Carroll

GrowthCap Insights

Play Episode Listen Later May 21, 2025 23:20


In this episode we speak with Brendan Carroll, Co-Founder and Senior Partner at Victory Park Capital (VPC), a global leader in alternative asset management specializing in private asset-backed credit. Founded in 2007 and now a majority-owned affiliate of Janus Henderson Group, VPC offers comprehensive financing and capital markets solutions through its affiliate platform, Triumph Capital Markets. With a global presence across 25 cities, VPC is at the forefront of innovative investment strategies. Brendan leads strategic initiatives, firm operations, and the sourcing of high-impact investment opportunities at VPC. As a key member of VPC's Investment and Valuation Committees, Brendan's leadership is integral to driving the firm's success and delivering exceptional value to its investors. Brendan supports Lurie Children's Hospital. To learn more about this organization click here. I am your host RJ Lumba.  We hope you enjoy the show.  If you like the episode, click to follow.

AWS Bites
143. Is App Runner better than Fargate?

AWS Bites

Play Episode Listen Later May 8, 2025 42:09


Picture this. You've got a web app built with Rust and Solid.js. It started life running on a dusty on-prem server, but now it's time to move it to the cloud. The clock is ticking. You could take the well-worn AWS path: set up a VPC, configure subnets, attach an ALB, define IAM roles, and deploy with Fargate. Or you could try something different.

In this episode of AWS Bites, we share the real story of migrating a monolithic containerized app to AWS App Runner. It promises to take your code, build it, deploy it, and scale it with minimal effort. But does it really deliver? We compare App Runner with Fargate based on hands-on experience. You'll learn where App Runner shines, where it gets in the way, and how we handled everything from custom domains to background job processing. You'll also hear when we would still choose Fargate, and why. If you've ever hoped for a Heroku-like experience on AWS, or you want to simplify your container deployments without giving up too much control, this episode is for you.

AWS Bites is brought to you in association with fourTheorem. At fourTheorem, we believe serverless should be simple, scalable, and cost-effective — and we help teams do just that. Whether you're diving into containers, stepping into event-driven architecture, or scaling a global SaaS platform on AWS, our team has your back. Visit https://fourTheorem.com to see how we can help you build faster, better, and with more confidence using AWS cloud!

Association RevUP
How AGU's Personalized Tech Powers Young Member Engagement (S2.E3)

Association RevUP

Play Episode Listen Later May 5, 2025 12:43


"I think what we're doing is trailblazing." We don't often hear 'trailblazing' associated with associations, but that's exactly how Thad Lurie describes the most recent initiative of the American Geophysical Union (AGU) in episode 3.Effectively reaching the youngest career professionals is an ongoing challenge for associations in the quest to remain relevant. In less than 13-minutes, you'll learn how AGU is using personalization to provide members with the content and connections they are looking for. And, how they are adapting the new technology to meet immediate and evolving member needs.The Association RevUP Podcast is presented by the Professionals for Association Revenue (PAR) and hosted by Carolyn Shomali.Learn more about ⁠PAR⁠ and its annual conference, ⁠The RevUP Summit. ⁠Special Thanks to podcast partner, the event production company ⁠VPC, Inc.⁠Helpful Links:Learn more about AGUAGU partner: Tasio Labs

Cables2Clouds
We Nearly Became Invulnerable! (No More CVE) - NC2C034

Cables2Clouds

Play Episode Listen Later Apr 23, 2025 24:10 Transcription Available


The tech world gives and takes away as Google introduces Cloud WAN while MITRE nearly loses CVE funding, showcasing both innovation and vulnerability in our digital infrastructure landscape. Politics increasingly intersects with technology as we examine controversial security clearance revocations alongside much-needed technical improvements in cloud networking.

• Google Cloud Next introduces CloudWAN service with two use cases: high-performance data center connectivity and premium branch networking
• Google's approach differs from AWS, encouraging single global VPC deployments across regions
• MITRE loses funding for the CVE program, threatening the global vulnerability tracking system
• CISA provides 11-month bridge funding, but fragmentation begins as EU launches alternative vulnerability tracking
• Azure announces general availability of route maps for Virtual WAN, bringing traditional networking capabilities to cloud
• Former CISA director Chris Krebs targeted in federal investigation for debunking 2020 election fraud claims
• Security clearance revocations increasingly used as political weapons against technology professionals

Subscribe to Cables to Clouds Fortnightly News and tell a friend about the show to stay informed about the evolving cloud technology landscape.

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Telemetry Now
Telemetry News Now: Starlink enters provider space, OpenAI raises $40B, AWS announces support for VPC route server, Comcast acquires Nitel

Telemetry Now

Play Episode Listen Later Apr 4, 2025 30:09


In this news episode, Phil Gervasi and Justin Ryburn cover some of the latest tech headlines, such as Starlink entering the service provider space with Community Gateways, OpenAI raising $40 billion for AGI development, Comcast acquiring Nitel to expand its channel distribution strategy, and AWS announcing support for VPC route server running BGP. Tune in for the latest news and a quick roundup of upcoming network industry events.

Cables2Clouds
Identity Crisis: What is Cloud Networking?

Cables2Clouds

Play Episode Listen Later Apr 2, 2025 46:08 Transcription Available


What exactly is cloud networking? This seemingly simple question quickly descends into a fascinating philosophical debate as we welcome back Nico Vibert, Senior Staff Technical Marketing Engineer at Isovalent/Cisco, to tackle this identity crisis head-on.
The conversation begins with a startling observation from Nico about analyst reports that group wildly different vendors together under the "cloud networking" umbrella. From there, we explore how defining cloud networking has become increasingly complex as technologies evolve and converge. We trace the origins back to AWS's introduction of VPC in 2009 and discuss how different cloud providers approach networking based on their unique company cultures.
One clear consensus emerges: true cloud networking must be API-driven. Whether consumed directly via APIs or through infrastructure-as-code tools like Terraform, programmability stands as a non-negotiable requirement. But beyond this foundation, the boundaries blur significantly when examining various technologies that might qualify.
Does Kubernetes networking fall under the cloud networking umbrella? What about middle-mile providers like Equinix or Megaport that physically connect clouds? Are CDNs part of cloud networking, or something entirely different? We dissect these questions without settling on definitive answers, highlighting how technology's rapid evolution makes categorization increasingly difficult.
Looking ahead, we explore how AI is reshaping cloud networking in two critical ways: networks optimized for AI workloads and AI-enhanced network management. Cloud providers are investing billions in infrastructure upgrades, developing custom silicon to reduce dependency on GPU manufacturers, signaling massive transformation on the horizon.
Whether you're a network engineer, cloud architect, or technology leader trying to understand this evolving landscape, this episode provides valuable perspective on cloud networking's past, present, and future directions.
Connect with Nico: https://www.linkedin.com/in/nicolasvibert/
Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Fortnightly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
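One concrete way to picture the episode's "API-driven" consensus: infrastructure-as-code tools like Terraform accept their configuration as plain data (Terraform reads a JSON form of its configuration language), so a network can be generated programmatically instead of clicked together in a console. This is a minimal sketch, not anything from the episode itself; the resource name and CIDR block are made up for illustration.

```python
import json

def vpc_config(name: str, cidr: str) -> dict:
    """Return a Terraform-JSON fragment declaring a single AWS VPC.

    The dict mirrors the structure Terraform expects in a *.tf.json file:
    resource -> resource type -> resource name -> arguments.
    The name and CIDR here are illustrative placeholders.
    """
    return {
        "resource": {
            "aws_vpc": {
                name: {
                    "cidr_block": cidr,
                    "tags": {"Name": name},
                }
            }
        }
    }

# Emit the configuration as data; a pipeline could write this to
# main.tf.json and hand it to `terraform apply`.
config = vpc_config("demo", "10.0.0.0/16")
print(json.dumps(config, indent=2))
```

The point is less the specific tool than the shape of the workflow: the network definition is a serializable artifact that an API, a CI pipeline, or a code review process can operate on.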

Association RevUP
AAVSB's Culture of Innovation and How to Think Like a Startup (S2.E2)

Association RevUP

Play Episode Listen Later Mar 31, 2025 18:58


"We need to take these things that are ideas and figure out if they even have legs before we spend a bunch of time and money and invest in them."
What would it look like if your association acted like a startup? That's the question explored in episode 2 with Chrissy Bagby of the American Association of Veterinary State Boards (AAVSB) and Elizabeth Engel of Spark Consulting. Learn how AAVSB is implementing a culture of innovation where employees at all levels of the organization have learned how to identify and develop non-dues revenue ideas with the help of a proven framework.
Learn how to understand your audience, their needs and potential solutions to their problems before investing valuable time and resources moving in the wrong direction.
The Association RevUP Podcast is presented by the Professionals for Association Revenue (PAR) and hosted by Carolyn Shomali.
Learn more about PAR and its annual conference, The RevUP Summit. Special thanks to podcast partner, the event production company VPC, Inc.
Helpful Links:
Lean Startup by Eric Ries
Skateboard to Car: MVP analogy
Video: Eric Ries 2011 Google Talk

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday!
If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!
Unsupervised Learning is a podcast that interviews the sharpest minds in AI about what's real today, what will be real in the future and what it means for businesses and the world - helping builders, researchers and founders deconstruct and understand the biggest breakthroughs. Top guests: Noam Shazeer, Bob McGrew, Noam Brown, Dylan Patel, Percy Liang, David Luan
Full Episode on Their YouTube
Timestamps
* 00:00 Introduction and Excitement for Collaboration
* 00:27 Reflecting on Surprises in AI Over the Past Year
* 01:44 Open Source Models and Their Adoption
* 06:01 The Rise of GPT Wrappers
* 06:55 AI Builders and Low-Code Platforms
* 09:35 Overhyped and Underhyped AI Trends
* 22:17 Product Market Fit in AI
* 28:23 Google's Current Momentum
* 28:33 Customer Support and AI
* 29:54 AI's Impact on Cost and Growth
* 31:05 Voice AI and Scheduling
* 32:59 Emerging AI Applications
* 34:12 Education and AI
* 36:34 Defensibility in AI Applications
* 40:10 Infrastructure and AI
* 47:08 Challenges and Future of AI
* 52:15 Quick Fire Round and Closing Remarks
Transcript
[00:00:00] Introduction and Podcast Overview
[00:00:00] Jacob: well, thanks so much for doing this, guys. I feel like we've we've been excited to do a collab for a while. I
[00:00:13] swyx: love crossovers. Yeah. Yeah. This, this is great. Like the ultimate meta about just podcasters talking to other podcasters. Yeah. It's a lot. Podcasts all the way up.
[00:00:21] Jacob: I figured we'd have a pretty free ranging conversation today but brought a few conversation starters to, to, to kick us off.
[00:00:27] Reflecting on AI Surprises and Trends
[00:00:27] Jacob: And so I figured one interesting place to start is you know, obviously it feels that this world is changing like every few months.
Wondering as you guys reflect back on the past year, like what surprised you the most?
[00:00:36] Alessio: I think definitely reasoning models we kinda on the, on the right here. Like, oh, that, well, I, I I think there's, there's like the, what surprised us in a good way.
[00:00:44] May maybe in a, in a bad way. I would say in a good way. Reasoning models, and I think the release of them right after the NeurIPS "scaling is dead" talk by Ilya. I think there was maybe like a, a little. It's so over and then we're so back. In like such a short, short period. It was really [00:01:00] fortuitous
[00:01:00] Jacob: timing though, like right.
[00:01:01] As pre-training died, I mean, obviously I'm sure within the labs they knew pre-training was dying and had to find something. But you know, from the outside it was it, it felt like one right into the other.
[00:01:09] Alessio: Yeah. Yeah, exactly. So that, that was a good surprise,
[00:01:12] swyx: I would say, if you wanna make that comment about timing, I think it's suspiciously neat that like, because we know that Strawberry was being worked on for like two years-ish.
[00:01:20] Like, and we know exactly when Noam joined OpenAI, and that was obviously a big strategic bet by OpenAI. So like, for it to transition, so transition so nicely when like, pre-training is kind of tapped out to, into like, oh, now inference time is, is the new scaling law is like conv very convenient. I, I, I like if there were an Illuminati, this would be what they planned.
[00:01:41] Or if we're living in a simulation or something. Yeah.
[00:01:44] Open Source Models and Their Impact
[00:01:44] swyx: Then you said open source
[00:01:45] Alessio: as well? Yeah. Well, no, I, I think like open source. Yeah. We're discussing this on the negative. I would say the relevance of open source. I would specifically say open models.
Yeah, I was surprised the lack, like the Llamas of the world by the lack of adoption.
[00:01:56] And I mean, people use it obviously, but I would say nobody's [00:02:00] really like a huge fanboy, you know, I think the local Llama community and some of the more obvious use cases really like it. But when we talk to like enterprise folks, it's like, it's cool, you know? And I think people love to argue about licenses and all of that, but the reality is that it doesn't really change the adoption path of, of AI.
[00:02:18] So
[00:02:19] swyx: yeah, the specific stat that I got from Ankur from Braintrust mm-hmm. In one of the episodes that we did was I think he estimated that open source model usage in work in enterprises is at like 5% and going down.
[00:02:31] Jacob: And it feels like you're basically all these enterprises are in like use case discovery mode, where it's like, let's just take what we think is the most powerful model and figure out if we can find anything that works.
[00:02:39] And, you know, so much of, of, of it feels like discovery of that. And then, right, as you've discovered something, a new generation of models are out and so you have to go do discovery with those. And you know, I think obviously we're probably optimistic that the that the open source models increase in uptake.
[00:02:50] It's funny, I was gonna say my biggest surprise in the last year was open source related, but it was just how fast open source caught up on the reasoning models. It was kind of unclear to me, like over time whether there would be, you know, [00:03:00] a compounding advantage for some of the closed source models where in the, okay, in the early days of, of scaling you know, there was a, a tight time loop, but over time, you know, would would the gap increase?
[00:03:08] And if anything it feels like it shrunk.
You know, and I think DeepSeek specifically was just really surprising in how, you know, in many ways if the value of these model companies is like you have a model for a period of time and you're the only one that can build products on top of that model while you have it.
[00:03:21] Like, God, that time period is a lot shorter than a, than I thought it was gonna be a year ago.
[00:03:25] swyx: Yeah. I mean, again, I I, I don't like this label of how fast open source caught up because it's really how fast DeepSeek caught up. Right. And now we have, like, I think some of it is that DeepSeek is basically gonna stop open sourcing models.
[00:03:36] Yeah. So like there, there's no Team Open Source, there's just different companies and they choose to open source or not. And we got lucky with DeepSeek releasing something and then everyone else is basically distilling from DeepSeek and those are distillations. Catching up is such an easier lower bar than like actually catching up, which is like you, you are like from scratch.
[00:03:56] You're training something that like is competitive on that front. I don't know if [00:04:00] that's happening. Like basically the only player right now is we're waiting for Llama 4.
[00:04:03] Jordan: I mean, it's always an order of magnitude cheaper to replicate what's already been done than to create something fundamentally new.
[00:04:09] And so that's why I think DeepSeek overall was overhyped. Right? I mean obviously it's a good open source, new entrant, but at the same time there's nothing new fundamentally there other than sort of doing it executing what's already been done really well.
[00:04:21] Alessio: Yeah,
[00:04:21] Jordan: right.
[00:04:21] Alessio: So Well, but I think the traces is like maybe the biggest thing, I think most previous open models is like the same model, just a little worse and cheaper.
[00:04:30] Yeah. Like R1 is like the first model that had the full traces.
So I think that's like a net unique thing in fair, open source. But yeah, I, I think like we talked about DeepSeek in our end of year 2023 recap, and we're mostly focused on cheaper inference. Like we didn't really have deep, see, DeepSeek V3
[00:04:47] swyx: was out then, and we were like, that was already like talking about fine-grained mixture of experts and all that.
[00:04:51] Like that's a great receipt to
[00:04:52] Jacob: have
[00:04:52] swyx: to be like, yeah.
[00:04:52] Jacob: End
[00:04:53] swyx: of year 20. Yeah. That's a,
[00:04:54] Jacob: that's a, that's, that's an
[00:04:55] swyx: impressive one. You follow the right whale believers in Twitter. It's, it's like [00:05:00] pretty obvious. I actually had like so, you know, I used to be in finance and, and a lot, a lot of my hedge fund and PE friends called me up.
[00:05:06] They were like, why didn't you tip us off on DeepSeek? And I'm like, well, I mean, it's been there. It's, it's actually like kind of surprising that like, Nvidia like fell like what, 15% in one day? Yeah. Because DeepSeek and I, I think it's just like whatever the market, public market narrative decides is a story, becomes the story, but really like the technical movements are usually.
[00:05:26] One to two years in the making. Before that,
[00:05:27] Jacob: basically these people were telling on themselves that they didn't listen to your podcast. They've been on the end of year '22, '23. No, no,
[00:05:32] swyx: no. Like yeah, we weren't, we weren't like banging the drum. So like it's also on us to be like, no, like this. This is an actual tipping point.
[00:05:38] And I think I like as people who are like, our function as podcasters and industry analysts is to raise the bar or focus attention on things that you think matter. And sometimes we're too passive about it. And I think I was too passive there.
I'd be, I'd be happy to own up on that.
[00:05:52] Jacob: No, I feel like over time you guys have moved into this more general role of like taking stances of things that are or aren't important and, you know I feel like you've done that with MCP of [00:06:00] late and a bunch of
[00:06:00] swyx: things.
Yeah.
[00:06:01] Challenges and Opportunities in AI Engineering
[00:06:01] swyx: So like the, the general push is AI engineering, you know, like it's gotta, gotta wrap the shirt. And MCP is part of that, but like the, the general movement is what can engineers do above the model layer to augment model capabilities. And it turns out it's a lot. And turns out we went from like, making fun of GPT wrappers to now I think the overwhelming consensus GPT wrappers is the only thing that's interesting.
[00:06:20] Yeah.
[00:06:21] Jacob: I remember like, Aravind from Perplexity came on our podcast and he was like, I'm proudly a wrapper. Like, you know, it's like anyone that's like talking about like, you know, differentiation, like pre-product market fit is like a ridiculous thing to, to say, like, build something people want and then yeah.
[00:06:33] Over time you can kind of worry about that.
[00:06:35] swyx: Yeah. I, I interviewed him in 2023 and I think he may have been the first person on our podcast to like, probably be a GPT wrapper. Yeah. And yeah, and obviously he's built a huge business on that. Totally. Now, now we now we all can't get enough of it. I have another one for, Oh, nice.
[00:06:47] That was Alessio's one and we, we perhaps individual answers just to be interesting in the same Uber on the way up. Yeah. You just like in the, in different Oh, I was driving too. Oh, you were driving. So I actually, I mean, it was a Tesla mostly drove mine was [00:07:00] actually, it is interesting that low-code builders did not capture the AI builder market.
[00:07:04] Right. AI builders being Bolt, Lovable, low-code builders being Zapier, Airtable, Retool, Notion.
Any of those, like you're not technical. You can build software.
[00:07:14] misc: Yeah.
[00:07:14] swyx: Somehow all of them missed it. Why? It's bizarre. Like they should have the DNA, I don't know. They should have. They already have the reach, they already have the, the distribution.
[00:07:25] Like why? I I have no idea. The ability to
[00:07:27] Jacob: fast follow too. Like I'm surprised there's Yeah. There's just
[00:07:29] swyx: nothing. Yeah. What do you make of that? I, it seems and you know, not to come back to the AI engineering future, like it takes a, a certain kind of. Founder mindset or AI engineer mindset to be like, we will build this from whole cloth and not be tied to existing paradigms.
[00:07:45] I think, 'cause I like, if I was, if I'm to, you know, you know, Wade or who's, who's, who's the Zapier person then, you know, Mike. Mike who has left the Zapier. Yeah. What's the, yeah. Like you know, Zapier, when they decided to do Zapier AI, they [00:08:00] were like, oh, you can use natural language to make Zap actions, right?
[00:08:03] When Notion decided to do Notion AI, they were like, oh, you can like, you know write documents or, you know, fill in tables with, with AI. Like, they didn't do the, the, the, the next step because they already had their base and they were like, let's improve our baseline. And the other people who actually tried to, to create from whole cloth were like, we, we got no prior preconceptions.
[00:08:24] Like, let's see what we can, what kinda software people can build with like from scratch, basically. I don't know that, that's my explanation. I dunno if you guys have any retros on the AI builders?
[00:08:33] Jacob: Yeah. Or, or, or did they kind of get lucky getting, you know starting that product journey? Like right as the models were reaching the inflection point?
There's the timing
[00:08:40] swyx: issue. Yeah. Yeah, yeah. Yeah. Yeah, I don't know.
Like I, I, to some extent, I think the only reason you and I are talking about it is that they, both of them have reported like ridiculous numbers. Like zero to 20 million in three months, basically, both of them. Jordan, did you have a, a big surprise?
[00:08:55] Jordan: Yeah, I mean, some of what's already been discussed. I guess the only other thing would be on the Apple side in particular, I [00:09:00] think, I think you know, for the last text message summary, like, but they're
[00:09:04] Jacob: funny. They're funny at how bad they had, how off they're, they're viral. Yeah.
[00:09:08] Jordan: I mean, so like for the last couple years we've seen so many companies that are trying to do personal assistants, like all these various consumer things, and one of the things we've always asked is, well, Apple is in prime position to do all this.
[00:09:18] And then with Apple Intelligence, they just. Totally messed up in so many different ways. And then the whole BBC thing saying that the guy shot himself when he didn't. And just like, there's just so many things at this point that I would've thought that they would've ironed out their, their AI products better, but just didn't really catch on,
[00:09:35] Jacob: you know, second on this list of, of generally overly broad opening questions would be anything that you guys think is kind of like overhyped or under hyped in the AI world right now?
[00:09:43] Alessio: Overhyped: agent frameworks. Sorry. Not naming any particular ones. I'm sorry. Not, not not, yeah, exactly. It's not, I, I would say they're just overall a chase to try and be the framework when the workloads are like in such flux. Yeah. That I just think is like so [00:10:00] hard to reconcile the two. I think what Harrison and LangChain has done so amazingly, it's like product velocity.
[00:10:05] Like, you know, the initial abstractions were maybe not the ending abstraction, but like they were just releasing stuff every day trying to be on top of it.
But I think now we're like past that, like what people are looking for now. It's like something that they can actually build on mm-hmm. And stay on for the next couple of years.
[00:10:23] And we talked about this with Bret Taylor on our episode, and it feels like, it's like the jQuery era Yeah. Of like agents and LLMs. It's like, it's kinda like, you know, single file, big frameworks, kinda like a lot of players, but maybe we need React. And I think people are just still trying to build jQuery. Like, I don't really see a lot of people doing React like,
[00:10:43] swyx: yeah. Maybe the, the only modification I'd make about that is maybe it's too early even for frameworks at all. And the thing that, and do you think
[00:10:50] Jacob: there's enough stability in the underlying model layer and, and patterns to, to have this,
[00:10:54] swyx: the thing is the protocol and not the framework?
[00:10:56] Jacob: Yeah.
[00:10:56] swyx: Because frameworks inherently embed protocols, but if you just focus on a protocol, maybe that [00:11:00] works. And obviously MCP is. The current leading mm-hmm. Area. And you know, I think the comparison there would be, instead of just jQuery, it is XMLHttpRequest, which is like the, the thing that enabled Ajax.
[00:11:10] And that was the, the, the, the, the sort of inciting incident for JavaScript being popular as a language.
[00:11:16] Jordan: I would largely agree with that. I mean, I think on the, the React side of things, I think we're starting to see more frameworks sort of go after more of that, I guess like Mastra is sort of like on the TypeScript side and more of like a sort of Mastra.
Yeah, yeah, yeah, yeah. The traction is really impressive there. And so I think we're starting to see more surface there, but I think there's still a big opportunity. What do you have for for an over or under hyped on the under hype side?
You know, I actually, I, I know I mentioned Apple already, but I think the private cloud compute side with PCC, I actually think that could be really big.
[00:11:45] It's under the radar right now. Mm-hmm. But in terms of basically bringing. The on device sort of security to the cloud. They've done a lot of architecturally interesting things there. Who's they? Apple. Oh, okay. On the PCC side. And so I actually think of that.
[00:11:58] swyx: So you're negative on Apple [00:12:00] Intelligence, but also on Apple Cloud,
[00:12:01] Jordan: on the more of the local device.
[00:12:04] Sort of, I think there'll be a lot of workloads still on device, but when you need to speak to the cloud for larger LLMs, I think that Apple has done really interesting thing on the privacy side.
[00:12:13] Alessio: Yeah. We did the seed of a company that does that, so Yeah. Especially as things become more co that you set 'em up on purpose.
[00:12:18] So that felt like a perfect Yeah, no, I was like, let's go Jordan, you guys concluding before this episode? Tell me about that company after. We'll chat after, but, but yes, I, I think that's like the unique the thing about LLM workflows is like you just cannot have everything be single tenant, right?
[00:12:35] Because you just cannot get enough GPUs. Like even like large enterprises are used to having VPCs and like everything runs privately. But now you just cannot get enough GPUs to run in a VPC. So I think you're gonna need to be in a multi-tenant architecture, and you need, like you said, like single tenant guarantees in a multi-tenant environment.
[00:12:52] So yeah, it's an interesting space.
[00:12:55] swyx: Yeah. What about you, swyx? Under hyped, I want to say [00:13:00] memory. Just like stateful AI.
As part of my keynote on, on for just like every, every conference I do, I do a keynote and I try to do the task of like defining an agent, just, you know, always evergreen content, every content for a keynote.
[00:13:14] But I did it in a, in a way that it was like I think like a, what a researcher would do. Like you, you survey what people say and then you sort of categorize and, and go like, okay, this is the, the. What everyone calls agents and here are the groups of definitions. Pick and choose. Right. And then it was very interesting that the week after that OpenAI launched their Agents SDK and kind of formalized what they think agents are.
[00:13:34] Cloudflare also did the same with us and none of them had memory. Yeah, it's very strange. The, pretty much like the only big lab o obviously there, there's conversation memory, but there's not memory memory like in like a, like a let's store facts about you across sessions and like, you know, exceed the, the context length.
[00:13:54] And here's the, if you, if you're look, if you look closely enough, there's a really good implementation of memory inside of [00:14:00] MCP when they launched with the initial set of servers. They had a memory server in there, which I, I would recommend as like, that's where you start with memory. But I think like if there was a better, I.
[00:14:10] Memory abstraction, then a lot of our agents would be smarter and could learn on, on the job, which is something that we all want. And for some reason we all just like ignored that because it's just convenient to, and, but do you feel like
[00:14:24] Jacob: it's being ignored or it's just a really hard problem and like lots of, I feel like lots of people are working on it.
[00:14:27] Just feels like it's, it's proven more challenging.
[00:14:29] swyx: Yeah. Yeah. Yeah. So, so Harrison has LangMem, which I think now he's like, you know, relaunched again. And then we had Letta come speak at our mm-hmm.
Our conference I don't know, Zep, I think there's a bunch of other memory guys, but like, something like this I think should be normal in the stack.
[00:14:44] And basically I think anything stateful should be interesting to VCs 'cause it's databases and, you know, we know how those things make money.
[00:14:51] Jacob: I think on the over hype side, the only thing I'd add is like, I'm, I'm still surprised how many net new companies there are training models. I thought we were kind of like past that.
And
[00:14:58] swyx: I would say they died end of last year. And now, [00:15:00] now they've resurfaced. Yeah. I mean they, that's one of the questions that you had down there of like, yeah. Sorry. Is there an opportunity for net new model players? I wouldn't say no. I don't know what you guys think.
[00:15:08] Alessio: I, I don't have a reason to say no, but I also don't have a reason to say, this is what is missing and you should have a new model company do it.
[00:15:15] But again, I'm an add here. Like, all these guys wanna
[00:15:17] swyx: pursue AGI, you know, all, they all want to be like, oh, we'll, we'll like hit, you know, SOTA on all the benchmarks and like, they can't all do it. Yeah.
[00:15:25] Jacob: I mean, look, I don't know if Ilya has the secret secret approach up his sleeve of of something beyond test time compute.
[00:15:29] Mm-hmm. But it was funny, I, we had Noam Shazeer on the podcast last week. I was asking him like, you know, is, is there like some sort of other algorithmic breakthrough? What would he make of Ilya? And he's like, look, I think what he is implicitly said was test time compute gets to the point where these models are doing AI engineering for us.
[00:15:43] And so, you know, at that point they'll figure out the next algorithm breakthrough. Yeah. Which I thought was was pretty interesting.
[00:15:47] Jordan: I agree with you folks. I think that we're most interested, at least from our side and like, you know, foundation models for specific use cases and more specialized use cases.
[00:15:55] Mm-hmm. I guess the broader point is if there is something like that, that these companies can latch onto [00:16:00] and being there sort of. Known for being the best at. Maybe there's a case for that. Largely though I do agree with you that I don't think there should be, at this point, more model companies. I think it's like
[00:16:09] these
[00:16:09] Jordan: unique data
[00:16:09] Jacob: sets, right?
[00:16:10] I mean, obviously robotics has been an area we've been really interested in. It's entirely different set of data that's required, you know, on top of like a, a good VLM and then, you know, biology, material sciences, more the specific use cases basically. Yeah. But also specific, like specific markets. A lot of these models are super generalizable, but like, you know finding opportunities to, you know, where, you know, for a lot of these bio companies, they have wet labs, like they're like running a ton of experiments or you know, same on the material sciences side.
[00:16:31] And so I still feel like there's some, some opportunities there, but the core kind of like LLM agent space is it's tough, tough to compete with the big ones.
[00:16:38] Alessio: Yeah. Agree. Yeah. But they're moving more into product. Yeah. So I think that's the question is like, if they could do better vertical models, why not do that instead of trying to do Deep Research and Operator?
[00:16:50] And these different things. Mm-hmm. I think that's what I'm, in my mind, it's like the agents coming
[00:16:53] swyx: out too.
[00:16:54] Alessio: Well. Yeah. In my, in my mind it's like financial pressure. Like they need to monetize in a much shorter timeframe [00:17:00] because the costs are so high.
But maybe it's like, it's not that easy to, do[00:17:04] Jacob: you think they would be, that it would be a better business model to like, do a bunch of vertical?[00:17:07] Well, it's more like[00:17:07] Alessio: why wouldn't they, you know, like you make less enemies if you're like a model builder, right? Yeah. Like, like now with deep research and like search, now perplexity like an enemy and like a, you know, Gemini deep research is like more of an enemy. Versus if they were doing a finance model, you know?[00:17:25] Mm-hmm. Or whatever, like they would just enable so many more companies and they always have, like they had as one of the customer case studies for GBT search, but they're not building a finance based model for them. So is it because it's super hard and somebody should do it? Or is it because the new models.[00:17:41] Are gonna be so much better that like the vertical models are useless anyways. Like this is better lesson. Exactly.[00:17:46] Jacob: It still seems to be a somewhat outstanding question. I, I'd say like, all the signs of the last few years seem to be like a general purpose model is like the way to go. And, you know, you know, like training a hyper-specific model in this, in, in a domain is like, you know, maybe it's cheaper and faster, but it's not gonna be like higher quality.[00:17:59] But [00:18:00] also like, I think it's still an, I mean, we were talking to, to no and Jack Ray from Google last week, and they were like, yeah, this is still an outstanding, like, we, we check this every time we have a new model. Like whether there's you know, there that still seems to be holding. I remember like a few years ago, it felt like all the rage was like the, it was like the Bloomberg GPT model came out.[00:18:14] Everyone was like, oh, you gotta like, you know, massive data. Yeah. I had[00:18:17] swyx: a GPA, I had DP of AI of Bloomberg present on that. Yeah. 
That must be a really[00:18:20] Jacob: interesting episode to go back on because I feel like, like very shortly thereafter, the next opening AI model came out and just like beat it on all sorts of[00:18:25] swyx: No, it, it was a talk.[00:18:26] We haven't released it yet, but yeah, I mean it's basically they concluded that the, the closed models were better so they just Yeah. Stopped. Interesting. Exactly. So I feel like that's been the but he's I, I would be. He's very insistent that the work that they did, the team he assembled, the data that he collected is actually useful for more than just the model.[00:18:42] So like, basically everything but the model survived. What are the other things? The data pipeline. Okay. The team that they, they, they assembled for like fine tuning and implementing whatever models they, they ended up picking. Yeah, it seems like they are happy with that. And they're running with that.[00:18:57] He runs like 12, 13 [00:19:00] teams at Bloomberg just working. Jenny, I across the company.[00:19:03] Jacob: I mean, I guess we've, we've all kind of been alluding it to it right now, but I guess because it's a natural transition. You know, the other broad opening I have is just what we're paying most attention to right now. And I think back on this, like, you know, the model company's coming into the product area.[00:19:13] I mean, I think that's gonna be like, I'm fascinated to see how that plays out over the next year and kind of these like frenemy dynamics and it feels like it's gonna first boil up on like cursor anthropic and like the way that plays out over the next six months I think will be. What, what is Cursor?[00:19:26] swyx: Anthropic is, you mean Cursor versus anthropic or, yeah. And I[00:19:29] Jacob: assume, you know, over time Anthropic wants to get more into the application side of coding Uhhuh. 
And you know, I assume over time Cursor will wanna diversify off of, you know, just using the Anthropic model.[00:19:39] swyx: It's interesting that now Cursor is now worth like 10 billion, nine, nine, 10 billion.[00:19:43] Yeah. And like they've made themselves hard to acquire. Like I would've said, like, you should just get yourself to five, 6 billion and join OpenAI. And like all the training data goes through OpenAI and that's how they train their coding model. Now it's not as complicated. Now they need to be an independent company.[00:19:57] Jacob: Increasingly, it seems the model companies want to get into the [00:20:00] product layer. And so seeing over the next six, 12 months, does having the best model, you know, let you kind of start from a cold start on the product side and, and get something in market? Or are the, you know, companies with the best products, even if they eventually have to switch to a somewhat worse, tiny bit worse model, does it not, you know, where do the developers ultimately choose to go?[00:20:16] I think that'll be super interesting. Yeah.[00:20:18] Alessio: Don't you think that Devin is more in trouble than Cursor? I, I feel like Anthropic, if anything, wants to move more towards, I don't think they wanna build the IDE. Like if I think about coding, it's like kind of like, you know, you look at it like a cube, it's like the IDE is like one way to get the code, and then the agent is like the other side.[00:20:33] Yeah. I feel like Anthropic wants to more be on the agent side and then hand you off to Cursor when you want to go in depth, versus like trying to build the Claude IDE. I think that's not, I would say, I don't know how you think the[00:20:46] swyx: existence of Claude Code doesn't, doesn't support what you say.
Like maybe they would, but[00:20:52] Jacob: assume, like I assume both just converge eventually, where you want, have, where will you be able to do both?[00:20:57] So,[00:20:57] swyx: so in order to be, so we're, we're talking [00:21:00] about coding agents, whether it's sort of, what is it, inner loop versus outer loop, right? Like inner loop is inside Cursor, inside your IDE, inside of a git commit, and outer loop is between git commits on, on the cloud. And I think, like, to be an outer loop coding agent, you have to be more of a, like, we will integrate with your code base, we'll sign your whatever.[00:21:17] You know, security thing that you need to sign. Yeah. That kind of schlep. I don't think the model labs wanna do that schlep, they just want to provide models. So that, that, that's, that would be my argument against, like, why Cognition should still have, have, have some moat against Anthropic, just simply because Cognition would do the schlep and the biz dev and the infra that Anthropic doesn't really care about.[00:21:39] Jacob: I know the schlep is pretty sticky though. Once you do it,[00:21:41] swyx: it's very sticky. Yeah. Yeah. I mean it's, it's, it's interesting. Like, I, I think the natural winner of that should be Sourcegraph. But there's another[00:21:47] Jacob: unprompted portfolio plug. Nice. We, I mean they, they're[00:21:51] swyx: big supporters, like very friendly with both Quinn and Beyang, and they've, they've done a lot of work with Cody, but like, no, not much work on the outer [00:22:00] loop stuff yet.[00:22:01] But like any company where like they have already had, like, we've been around for 10 years, we, we like have all the enterprise contracts, you already trust us with your code base.
Why would you go trust like Factory or Cognition as, like, you know, 2-year-old startups who like just came outta MIT? Like, I don't know.[00:22:17] Product Market Fit in AI[00:22:17] Jacob: I guess switching gears to the, to the application side, I'm curious for both of you, like, how do you kind of characterize what has genuine product market fit in AI today? And I guess, more on your side, the investing side, is it more interesting to invest in the category of stuff that works today, or kind of where the capabilities are going long term?[00:22:35] Alessio: That's hard. You're asking me to do my job for you, like, man, that's easy, that's a layup. Tell us all your investing[00:22:40] theses. Yeah, yeah, yeah. I, I, I would say we, well, we only really do mostly seed investing, so it's hard to invest in things that already work. Yeah. That's fair. Or are really late. So we try to, but, but we try to be at the cusp of, like, you know, usually the investments we like to make, there's like really not that much market risk.[00:22:57] It's like, if this works. Obviously people are gonna [00:23:00] use it, but like it's unclear whether or not it's gonna work. So that's kind of more what we skew towards. We try not to chase as many trends, and I don't know, I, you know, I was a founder myself, and sometimes I feel like it's easy to just jump in and do the thing that is hot. But like becoming a founder to do something that is like underappreciated, or like doesn't yet work, shows some level of, like, grit and self-belief, like you, you actually really believe in the thing.[00:23:25] So that alone for me is like, kind of makes me skew more towards that.
And you do a lot of angel investing too, so I'm curious how,[00:23:31] swyx: Yeah, but I don't regard, I don't put that in my mental framework of things. Like, I come at this much more as a content creator or market analyst, of like, yeah, it, it really does matter to me what has product market fit, because.[00:23:45] People, I have to answer the question of what is working now when, when people ask me,[00:23:50] Jacob: do you feel like, relative to the, the obviously the hype and discourse out there, like, you know, do you feel like there's a lot of things that have product market fit, or like a few things? Like, where, a few things? Yeah.[00:23:58] swyx: I was gonna say this, so I have a list [00:24:00] of, like, two years ago I wrote the Anatomy of Autonomy post, where it was like the, the first, like, what's going on in agents and, and, and what is actually making money. Because I think there's a lot of gen AI skeptics out there. They're all like, these, these things are toys.[00:24:13] They're, they're unreliable. And, you know, why, why, why are you dedicating your life to these things. And I think for me, the product market fit bar at the time was a hundred million dollars, right? Like, what use cases can reasonably fit a hundred million dollars. And at the time it was like Copilot, it was Jasper.[00:24:30] No longer, but mm-hmm. You know, in that category of like, help you write. Yeah. Which I think, I think was, was helpful. And then, and Cursor I think was on there as, as a, as, as, as like a coding agent plus plus. I think that list will just grow over time, of like the form factors that we know to work, and then we can just adapt the form factors to a bunch of other things.[00:24:47] So like the, the one that's most recently added to this is deep research.[00:24:52] misc: Yeah.[00:24:52] swyx: Right. Where anything that looks like deep research, whether it's a Grok version, Gemini version, Perplexity version, whatever.
He has an investment [00:25:00] that, that he likes called Brightwave that is basically deep research for finance.[00:25:02] Yeah. And anything that is like long-running agentic reporting, where it's starting to take more and more of the job away from you and, and just give you a much more thorough report. I think it's going to work. And that has some PMF. It obviously has PMF, I would say. I, I went through this exercise of trying to handicap how much money OpenAI made from launching OpenAI deep research.[00:25:25] I think it's billions. Like the, the SKU upgrade from like $20 to $200, it has to be billions in ARR. Maybe not all of them will stick around, but like, that is some amount of PMF. Didn't they have to immediately drop it down[00:25:38] Jacob: to the $20 tier?[00:25:39] swyx: They expanded access. I don't, I wouldn't say, which I thought was[00:25:42] Jacob: really telling of the market.[00:25:43] Right. It's like, where you have a, you know, I think it's gonna be so interesting to see what they're actually able to get in that $200 or $2,000 tier, which we all think, is, you know, has a ton of potential. But I thought it was fascinating. I don't know whether it was just to get more people exposure to it, or the fact that like Google had a similar product obviously, and, and other folks did too.[00:25:59] But [00:26:00] it was really interesting how quickly they dropped it down.[00:26:02] swyx: I don't, I think that's just a more general policy of, no matter what they have at the top tier, they always want to have smaller versions of that in the, in the lower tiers. Yeah. And just get people exposure to it. Just, yeah, just get exposure.[00:26:12] The brand of being first to market and, and like the default choice, yeah, is paramount to OpenAI.[00:26:18] Jacob: though. I thought that whole thing was fascinating 'cause Google had the first product, right? Yeah.
And no, like, you know, I, we[00:26:24] swyx: interviewed them. I, I, I, straight up to their faces, I was like, OpenAI mogged you.[00:26:28] And they were like, yeah, well, actually curious, what's[00:26:30] Jacob: it, this is totally off topic, but whatever. Like, what is it going to take for Google? Google just released some great models like a, a few weeks ago. Like, I feel like it's happening. The stuff they're shipping is really cool. It's happening. Yeah, but I, I, I also, I feel like, at least in the, you know, broader discourse, it's still like a drop in the bucket relative to[00:26:45] swyx: Yeah.[00:26:45] I mean, I, I can riff on, on this. I, but I, I think it's happening. I think it takes some time, but like, my Gemini usage is up. Like, I, I use, I use it a lot more, for anything from like summarizing YouTube videos to the [00:27:00] native image generation, yeah, that they just launched, to like Flash Thinking.[00:27:02] So yeah, multimodal stuff's great. Yeah. I run, you know, and I run like a daily sort of news recap called AI News that is 99% generated by models, and I do a bake-off between all the frontier models every day. And it's every day. Like, does it switch? Yes, it does switch. And I, man, I manually do it.[00:27:18] And Flash wins most days. So, so like, I think it's happening. I was thinking about tracking myself, like, number of opens of ChatGPT versus Gemini. And at some point it will cross. I think that Gemini will be my main, and, and I, I like, that will slowly happen for a bunch of people.[00:27:37] And, and, and then that will, that'll shift. I, I think that's, that's really interesting. For developers, this is a different question. Yeah. It's Google getting over itself of having Google Cloud versus Vertex versus AI Studio, all these like five different brands, slowly consolidating it. It'll happen, just slowly, I guess.[00:27:53] Alessio: Yeah.[00:27:54] Yeah.
I, I mean, another good example is like you cannot use the thinking models in Cursor. Yeah. And I know [00:28:00] Logan Kilpatrick says that they're working on it, but I, I think there's all these small things, where like, if I cannot easily use it, I'm really not gonna go out of my way to do it. But I do agree that when you do use them, their models are, are great.[00:28:12] So yeah. They just need better, better bridges.[00:28:15] swyx: You had one of the questions in the prep.[00:28:16] Debating Public Companies: Google vs. Apple[00:28:16] swyx: What public company are you long and short? And mine is Google versus, versus Apple, like, long, short. That was also my[00:28:23] Jacob: combo. I, I feel like, yeah, I mean, it does feel like Google's really cooking right now.[00:28:26] swyx: Yeah. So okay, coming back to what has product market fit[00:28:29] Jacob: now,[00:28:29] swyx: now that we come[00:28:30] Jacob: back from my complete total sidetrack,[00:28:33] Customer Support and AI's Role[00:28:33] swyx: there's also customer support.[00:28:35] We were talking in, on the car about Decagon and Sierra, obviously Bret, Bret Taylor is founder of Sierra. And yeah, it seems like there's just this, these layers of agents that'll, like, I think you just look at like the income statement or like the, the org chart of any large scaled company, and you start picking them off one by one.[00:28:51] What, like, is interesting knowledge work? And they would just kind of eat. Things slowly from the outside in. Yeah, that makes sense.[00:28:57] Alessio: I, I mean, the episode with the, [00:29:00] with Bret, he's so passionate about developer tools, and yeah, he did not do a developer tools company. We spent like two hours talking about developer tools and like, all, all of that stuff.[00:29:10] And it's like, they're a customer support company. I'm like, man, that says something. You know what I mean? Yeah.
It's like when you have somebody like him, who can, like, raise any amount of money from anybody to do anything. Yeah. To pick customer support as the market to go after, while also being the chairman of OpenAI, like, that shows you that, like, these things have moats and are longstanding. Like, they're gonna stick around, you know?[00:29:32] Otherwise, he's smarter than that. So yeah, that's a, that's a space where maybe initially, you know, I would've said, I don't know, it's like the most exciting thing to, to jump into. But then if you really look at the shape of, like, how the workforces are structured and like how the cost centers of like the business really end up, especially for more consumer facing businesses, like a lot of it goes into customer support.[00:29:54] AI's Impact on Business Growth[00:29:54] Alessio: The whole AI story of the last two years has been cost cutting. Yeah. I think now we're gonna switch more towards growth revenue. [00:30:00] Totally. You know, like you've seen Jensen, like last year at GTC he was saying the more you buy, the more you save; this year it's the more you buy, the more you make. So we're hot off the[00:30:08] Jacob: press.[00:30:10] We were there. We were there. Yeah. I do think that's one of the most interesting things about the, this first wave of apps, where it's like, almost the easiest thing that you could, you could get real traction with was stuff that, you know, for lack of a better way to frame it, like stuff that people had already been comfortable outsourcing to BPOs or something, and kind of implicitly said, like, hey, this is a cost center.[00:30:24] Like we are willing to take some performance cut for cost, in the past.
You know, the, the irony of that, or what I'm really curious to see how it plays out, is, you know, you, you could imagine that is the area where price competition is going to be most fierce, because it's already stuff that, you know, that people have said, hey, we don't need the, like, hundred percent best version of that.[00:30:42] And I wonder, you know, whether this next wave of apps may prove actually even more defensible, as you get these capabilities that actually, you know, increase top line or whatnot. Where, like, you take AI go-to-market, for example, you'd pay like twice as much for something, 'cause there's just a kind of very clean ROI story to it.[00:30:59] And so [00:31:00] I wonder ultimately whether the, like, this next set of apps actually ends up being more interesting than the, than the first wave.[00:31:05] Alessio: Yeah,[00:31:05] Voice AI and Scheduling Solutions[00:31:05] Jordan: I think a lot of the voice AI ones are interesting too, because you don't need a hundred percent precision recall to actually, you know, have a great product.[00:31:12] And so for example, we looked into a bunch of, you know, scheduling intake companies, for example, like home services, right? For electricians and stuff like that. Today they miss 50% of their calls. So even if the AI is only effective, say, 75% of the time, yeah, it's crazy, right? So if it's effective 75% of the time, that's totally fine, because that's still a ton of increased revenue for the customer, right?[00:31:32] And so you don't need that hundred percent accuracy. Yeah. And so as the models and the reliability of these agents are getting better, it's totally fine, because you're still getting a ton of value in the meantime.[00:31:41] swyx: Yeah.
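Jordan's back-of-envelope math can be written out explicitly. This is just an illustrative sketch of the arithmetic above; the function name and the numbers (100 calls, 50% human answer rate, 75% AI success rate) are hypothetical figures matching the ones quoted in the conversation, not data from any real company.

```python
# Illustrative: an imperfect AI receptionist still recovers most missed calls.
# The AI only gets a shot at calls humans already miss, so even 75% success
# on that slice is pure upside over the baseline.

def captured_calls(total_calls: int, human_answer_rate: float, ai_success_rate: float) -> float:
    """Calls successfully handled when the AI picks up everything humans miss."""
    answered_by_humans = total_calls * human_answer_rate
    missed = total_calls * (1 - human_answer_rate)
    recovered_by_ai = missed * ai_success_rate
    return answered_by_humans + recovered_by_ai

baseline = captured_calls(100, human_answer_rate=0.5, ai_success_rate=0.0)   # 50.0
with_ai = captured_calls(100, human_answer_rate=0.5, ai_success_rate=0.75)   # 87.5

print(f"baseline: {baseline}, with AI: {with_ai}, uplift: {with_ai / baseline - 1:.0%}")
# → baseline: 50.0, with AI: 87.5, uplift: 75%
```

Which is the point being made: a 75%-reliable agent applied to a 50% miss rate is a 75% lift in handled calls, with no change needed on the human side.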
One, this is, I don't know how related this is, but one of my favorite meetings, it is related, one of my favorite meetings at AI Engineer Summit, like, I do these, this is our first one in New York, and you meet a different crew there than, than you meet here.[00:31:55] Like, everyone here loves developer tools, loves infra. Over there, they're actually more interested in [00:32:00] applications. It's kind of cool. I met this, like, bootstrapped team that, like, they're only doing appointment scheduling for vets. They, they, yeah. And like, they're like, this is a, this is an anomaly. We don't usually come to engineering summits, 'cause we usually go to vet summits and, like, talk to the, they're, they're like, you know, they, they're, they're literally, I'm sure it's a[00:32:16] Jordan: massive pain point.[00:32:17] They're willing to pay a lot of money.[00:32:20] Alessio: Yeah. But, but, but this is like my point about saving versus making more. It's like, if an electrician takes two x more calls, do they have the bandwidth to actually do two x more in-house, or do they gotta hire? Well, yeah, exactly. That's the thing, is like, I don't think today most businesses are, like, structured to just, like, overnight two, three x the bandwidth, you know?[00:32:38] I think that's like a startup thing. Like, mo, most businesses, then you make an[00:32:42] swyx: electrician agent. Well, no, totally. How do you, how do you make a recruiting agent for electricians, like[00:32:49] Alessio: electricians. Great. That's a good point. How do you do Lambda School for electricians? It's hilarious.[00:32:53] Jacob: Whack-a-mole for the bottlenecks in these businesses.[00:32:55] Like, oh, now we have a ton of demand. Like, cool.
Like, where do we go?[00:32:58] swyx: Yeah.[00:32:59] Exploring AI Applications in Various Fields[00:32:59] swyx: So just to [00:33:00] round out the, this PMF thing, I think this is relevant in a certain sense of, like, it's pretty obvious that the killer agents are coding agents, support agents, deep research, right? Roughly, right. We've covered all those three already.[00:33:10] Then, then, then you have to sort of turn to offense and go, like, okay, what's next? And like, what, what about, I[00:33:16] Jacob: mean, I also just like summarization of, of voice and conversation, right? Yep. Absolutely. We actually had that on there. I[00:33:21] swyx: just, I didn't put it as agent. Because it seems less agentic, you know? But yes, still, still a good AI use case.[00:33:26] That one I, I've seen, I would mention Granola and, what's the other one? Monterey, I think. Bridge was one I wanted to mention. I was gonna say Bridge. Yeah, Bridge. Okay. So I'll just, I'll call out what I had on my slides. Yeah. For, for the agent engineering thing. So it was screen sharing, which I think is actually kind of, kind of underrated.[00:33:42] Like, an AI watching you as you do your work and just, like, offering assistance. Outbound sales, so instead of support, just being more outbound. Hiring, you say[00:33:51] Jacob: outbound sales has product market fit?[00:33:53] swyx: No, it, it, it will. It's coming. Oh, on the comp. Yeah. I totally agree with that. Yeah. Hiring, like the recruiting side. Education, like the, [00:34:00] the sort of like personalized teaching, I think.[00:34:02] I'm kind of shocked we haven't seen more there. Yeah. Yeah. I don't know if that's like, like, it's like Duolingo is the thing. Amigo.[00:34:08] Jacob: Yeah. I mean, Speak and some of these, like, you know,[00:34:10] swyx: Speak, Practice, yeah. Interesting.
And then finance, I, there's, there's a ton of finance use cases, we can talk about that, and then personal AI, which we also had a little bit of, but I think personal AI is harder to monetize. But I, I think those would be, like, what I would say is up and coming, in terms of, like, that's what I'm currently focusing on.[00:34:27] Jacob: I feel like this question's been asked a few different ways, but I'm, I'm curious what you guys think. It's like, is it like, if we just froze model capabilities today, like, is there, you know, trillions of dollars of application value to be unlocked? Like, like AI education? Like, if we just stopped today all model development, like with this current generation of models, we could probably build some pretty amazing education apps.[00:34:44] Or like, how much of this, how much of, of all this is, like, contingent upon just like, okay, people have had two years with GPT-4 and, like, you know, I don't know, six months with the reasoning models. Like, how much is contingent upon it just being more time with these things, versus, like, the models actually have to get better?[00:34:58] I dunno, it's a hard question, so I'm gonna just throw it [00:35:00] to you.[00:35:00] Alessio: Yeah. Well, I think the societal thing, it's maybe harder, especially in education. You know, like, can you basically, like, DOGE the education system? Probably you should, but like, can you? I, I think it's more of a human,[00:35:14] Jacob: but people pay for all sorts of, like, get-ahead things outside of class, and you know, certainly in other countries there's a ton of consumer spend on education.[00:35:21] It feels like the market opportunity is there.[00:35:23] swyx: Yeah. And, and private education, I think, yeah. Public, public is a very different, yeah.
One of my most interesting quests from last year was kind of reforming Singapore's education system to be more sort of AI native. Just what you were doing on the side? While you were, yes.[00:35:38] That's a great, that's a great side quest. My stated goal is for Singapore to be the first country that has Python as a first language, as a, as a national language. Anyway, so, but the, the pushback I got from the Ministry of Education was that the teachers would be unprepared to do it.[00:35:53] So it's like, it was really interesting, like, immediate pushback: the de facto teachers union being, like, [00:36:00] resistant to change, and, like, okay. That, that's par for the course. Anyway, so not, not to dwell too much on that, but like, yeah, I mean, like, I, I think, like, education is one of those things that everyone, like, has strong opinions on, 'cause they all have kids, or were all in the education system. But like, I think it's gonna be, like, the, the domain specific, like, like Speak is such an amazing example of, like, top down. Like, we will go through the idea maze and we'll go to Korea and teach them English. Like, it's like, what the hell? And I would love to see more examples of that.[00:36:29] Like, just, like, really focused. Like, don't try to solve everything. Just, just do your thing really, really well.[00:36:34] Defensibility in AI Applications[00:36:34] Jacob: On this trend of, of difficult questions that come up, I'm gonna just ask you the one that my partners like to ask me every single Monday, which is, how do you think about defensibility at the, at the app layer?[00:36:41] Alessio: Oh[00:36:41] Jacob: yeah, that's great. Just gimme an answer I can copy paste and just, like, you know, have network effects. Auto, auto response.[00:36:47] swyx: Honestly, like, network effects.
I think people don't prioritize those enough, because they're trying to make the single player experience good. But then, then they neglect the [00:37:00] multiplayer experience.[00:37:00] I always think about, like, load-bearing episodes, like, you know, for a podcast that you do once a week. And, like, you know, some of those you don't really talk about ever again. And others you keep mentioning every single podcast. And one of the, this is obviously gonna be the last one, I think the recap episodes for us are pretty load-bearing.[00:37:15] Like, we, we refer to them every three months or so. And, like, one of them, for me, is Chai Research, even though that wasn't, like, a super popular one among the broader community outside of Chai, the Chai community. For those who don't know, Chai Research is basically a Character AI competitor.[00:37:32] Right. They were bootstrapped, they were founded at the same time, and they have outlasted Character AI, de facto. Right. It's funny, like, I, I would love to ask Will a bit more about, like, the whole Character thing, but good luck getting past the Google comms. But like, so he, like, he doesn't have his own models. Basically, he has his own network of people submitting models to be run.[00:37:54] And I think, like. That is, like, short term going to be hurting him, because he doesn't have [00:38:00] proprietary IP. But long term he has the network effect to make him robust to any changes in the future. And I think, like, I wanna see more of that, where, like, he's basically looking at himself as kind of a marketplace, and he's identified the choke point, which is the app, or the, the sort of protocol layer, that interfaces between the users and the model providers.[00:38:18] And then make sure that the money kind of flows through, and that works. I, I wish that more AI builders or AI founders emphasized network effects.
'cause that, that's the only thing that you're gonna have at the end of the day. Yeah. And, like, brand feeds into network effects, you know.[00:38:34] Jacob: Yeah, I guess, you know, harder in, in the enterprise context.[00:38:36] Right. But I mean, I feel, it's funny, we do this exercise and I feel like we talk a lot about, like, you know, obviously there's, you know, kind of the velocity and the breadth you're able to kind of build of product surface area. There's just, like, the ability to become a brand in a space. Like, I'm shocked that even in, like, six, nine months, how an individual company can become synonymous with, like, an entire category.[00:38:52] And like, then they're in every room for customers, and, like, all the other startups are, like, clawing their way to try and get in, like, one twentieth of those rooms.[00:38:59] Jordan: There's a [00:39:00] bunch of categories where we talk about at IC, and it's like, oh, pricing compression's gonna happen, not as defensible. And so ACVs are gonna go down over time.[00:39:08] In actuality, some of these, the ACVs have doubled, we've seen, and the reason for that is just, you know, people go to them and pay for that premium of being that brand.[00:39:16] Jacob: Yeah. I mean, one thing I'm struck by is, there's been, there was such a head fake in the early days of, of AI apps, where people were like, we want this amazing defensibility story, and then what's the easiest defensibility story?[00:39:24] It's like, oh, like. Totally unique data set, or, like, train your own model or something. And I feel like that was just, like, a total head fake, where I don't think that's actually useful at all.
It's, you sound much less articulate when you're like, well, the defensibility here is, like, the thousand small things that this company does to make, like, the user experience, design, everything just, like, delightful. And just, like, the speed at which they move, to kind of both create a really broad product, but then also, every three, six months when a new model comes out, it's kind of an existential event for, like, any company.[00:39:49] 'cause if you're not the first to, like, figure out how to use it, someone else will. Yeah. And so velocity really matters there. And it's funny, in, in kinda our internal discussions, we've been like, man, that sounds pretty similar to, like, how we thought about, like, application SaaS [00:40:00] companies. That there isn't some, like, revolutionary reason; you don't sound like a genius when you're like, here's why application SaaS company A is so much better than B.[00:40:07] But it's, like, a lot of little things that compound over time.[00:40:10] Infrastructure and AI: Current Trends[00:40:10] Jacob: What about the infrastructure space, guys? Like, I'm curious, you know. What, how do you guys think about where the interesting categories are here today, and, you know, like, where, where, where do you wanna see more startups, or, or where do you think there are too many?[00:40:21] Alessio: Yeah. Yeah, we call it kind of the L-L-M-O-S. But I would say[00:40:24] swyx: not we, I mean, Andrej, Andrej calls it the LLM OS[00:40:27] Alessio: Well, but yeah, we, well, everyone else just copies whatever, too. And Andrej, the three of us call it the LLM OS. Well, we have just, like, the Four Wars of AI framework, yeah, yeah, that we use, and the LLM OS is one of them. But yeah, I mean, code execution is one.[00:40:39] We've been banging the drum, everybody now knows, we're investors in E2B. Mm-hmm. Memory, you know, is one that we kind of touched on before. Super interesting. Search, we talked about.
I, I think those are more not traditional infra, not, like, the bare metal infra. It's more, like, the infra around the tools for agents, you know?[00:40:57] Which I think is where a lot of the value is gonna [00:41:00] be. The security[00:41:00] swyx: ones. Yeah.[00:41:01] Alessio: Yeah. And cybersecurity. I mean, there's so much to be done there. And it's more, like, basically any area where AI is being used by the offense, AI needs to be applied on the defense side, like email security, you know, identity, like all these different things.[00:41:16] So we've been doing a lot there, as well as, you know, how do you rethink things that used to be costly, like red teaming, that maybe used to be a checkbox in the past. Today they can be actually helpful. Yeah. To make you secure your app. And there's this whole idea of, like, semantics, right? That now the models can be good at.[00:41:32] You know, in the past everything is about syntax. It's kind of, like, very basic, you know, constraint rules. I think now you can start to infer semantics, from things that are beyond just, like, simple recognition, to, like, understanding why certain things are happening a certain way. So in the security space, we're seeing that with binary inspection, for example.[00:41:51] Like, there's kinda, like, the syntax, but then there are, like, semantics, of, like, understanding what the code overall is really trying to do, even though this [00:42:00] individual syntax is, like, seeing something specific. Not to get too technical, but yeah, I, I think infra overall, it's, like, a super interesting place if you're making use of the model. If you're just, I'm less bullish.[00:42:13] Not, not that it's not a great business, but I think it's a very capital intensive business, which is, like, serving the models. Mm-hmm. Yeah. I think that infra is, like, great, people will make money, but yeah, I, I, I don't think there's as much of an interest from, from us at[00:42:25] Jordan: least. Yeah.
How, how do you guys think about what OpenAI and the big research labs will encompass, as part of the developer and infra category?[00:42:31] Yeah.[00:42:31] Alessio: That, that's why I, I would say search is the first example, one of the things we used to mention, on, you know, we had Exa on the podcast, and Perplexity obviously has an API. The basic idea[00:42:44] swyx: is, if you go into, like, the ChatGPT custom GPT builder, like, what are the checkboxes? Each of them is a startup.[00:42:50] Alessio: Yeah. And, and now they're also APIs. So now search is also an API, we will see what the adoption is. There's the, you know, in traditional infra, like, everybody wants to be [00:43:00] multi-cloud, so maybe we'll see the same, where ChatGPT search, or OpenAI search API, is, like, great with the OpenAI models, because you get it all bundled in, but their price is very high.[00:43:11] If you compare it to, like, you know, Exa, I think it's, like, five times the price for the same amount of searches. Which makes sense if you have a big OpenAI contract, but maybe if you're just picking best in breed, you wanna compare different ones. Yeah. Yeah, they don't have a code execution one.[00:43:26] I'm sure they'll release one soon. So they wanna own that too, but yeah. Same question we were talking about before, right? Do they wanna be an API company or a product company? Do you make more money building ChatGPT search, or selling a search API?[00:43:38] swyx: Yeah. The, the broader lesson, instead of, like, going, we did applications just now.[00:43:42] And then, what do you think is interesting in infrastructure? Like, it's not 50/50, it's not, like, equal weighted. Like, it, it's just very clearly the application layer has, like. Been way more interesting.
Like yes, there, there's interesting infrastructure plays and I even want to like push back on like the, the, the whole GPU serving thing because like Together [00:44:00] AI is doing well, Fireworks, I mean I was, that worked.[00:44:02] Alessio: It's like data[00:44:02] Jacob: centers[00:44:03] Alessio: and inference[00:44:03] Jacob: providers,[00:44:04] Alessio: the,[00:44:04] swyx: you know,[00:44:04] Alessio: I think it's not like the capital[00:44:06] swyx: Oh, I see.[00:44:07] Alessio: I for, for again, capital efficiency. Yeah. Much larger funds. So you, I'm sure you have GPU clouds. Yeah.[00:44:13] swyx: Yeah. So that's, that's, that is one thing I have been learning in, in that you know, I think I have historically had dev tools and infra bias and so has he, and we've had to learn that applications actually are very interesting and also maybe kind of the killer application of models, in a sense that you can charge for utility and not for cost.[00:44:33] Right? Whereas like most infrastructure reduces to cost plus. Yeah. Right. So, and like, that's not where you wanna be for AI. So that's, that's interesting for, for me. I thought it would be interesting for me to be the only non-VC in the room to be saying what is not investible. 'Cause like then I then, you know, you can, I, I won't be canceled for saying like, your, your whole category is, we have a great thing where like, this thing's[00:44:54] Jacob: not investible and then like three months later we're desperately chasing.[00:44:56] Exactly. Exactly. So you don't wanna be on record, the space changes so [00:45:00] fast. It's like you gotta, every opinion you hold, you have to like, hold it quite loosely. Yeah.[00:45:02] swyx: I'm happy to be wrong in public, you know, I think that's how you learn the most, right? Yeah. 
So like, fine-tuning companies is something I struggled with and still, like, I don't see how this becomes a big thing.[00:45:12] Like you kind of have to wrap it up in a broader enterprise AI company, like a services company, like a Writer AI, where like they will find you and it's part of the overall offering. Mm-hmm. But like, that's not where you spike. Yeah, it's kind of interesting. And then I, I'll, I'll just kind of mention AI DevOps and like, there's a lot of AI SRE out there, seems like.[00:45:32] There's a lot of data out there that that should be able to be plugged into your code base or, or, or your app so it can self-heal or whatever. It's just, I don't know if that's like, been a thing yet. And you guys can correct me if you're, if I'm wrong. And then the, the last thing I'll mention is voice real-time infra, again, like very interesting, very, very hot.[00:45:49] But again, how big is it? Those are the, the main three that I'm thinking about for things I'm struggling with.[00:45:54] Jordan: Yeah. I guess a couple comments on the A-I-S-R-E side. I actually disagree with that one. Yeah. I think that the [00:46:00] reason they haven't sort of taken off yet is because the tech is just not there quite yet.[00:46:04] And so it goes back to the earlier question, do we think about investing towards where the companies will be when the models improve versus now? I think that's going to be, in the short term we'll get there, but it's just not there just yet. But I think it's an interesting opportunity overall.[00:46:18] swyx: Yeah. My pushback to you is, well it's monitoring a lot of logs, right?[00:46:22] Yeah. And it's basically anomaly detection rather than. Like there's, there's a whole bunch of like stuff that can happen after you detect the anomaly, but it's really just anomaly detection. And we've always had that, you know, like it's, this is like not a Transformers LLM use case. 
This is just regular anomaly detection.[00:46:38] Jordan: It's more in terms of like, it's not going to be an autonomous SRE for a while. Yeah. And so the question is how, how much can the latest sort of AI advancements increase the efficacy of going, bringing your MTTR

The MAD Podcast with Matt Turck
Why This Ex-Meta Leader is Rethinking AI Infrastructure | Lin Qiao, CEO, Fireworks AI

The MAD Podcast with Matt Turck

Play Episode Listen Later Mar 27, 2025 59:14


In 2022, Lin Qiao decided to leave Meta, where she was managing several hundred engineers, to start Fireworks AI. In this episode, we sit down with Lin for a deep dive on her work, starting with her leadership on PyTorch, now one of the most influential machine learning frameworks in the industry, powering research and production at scale. Now at the helm of Fireworks AI, Lin is leading a new wave in generative AI infrastructure, simplifying model deployment and optimizing performance to empower all developers building with Gen AI technologies. We dive into the technical core of Fireworks AI, uncovering their innovative strategies for model optimization, Function Calling in agentic development, and low-level breakthroughs at the GPU and CUDA layers.
Fireworks AI
Website - https://fireworks.ai
X/Twitter - https://twitter.com/FireworksAI_HQ
Lin Qiao
LinkedIn - https://www.linkedin.com/in/lin-qiao-22248b4
X/Twitter - https://twitter.com/lqiao
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) Intro
(01:20) What is Fireworks AI?
(02:47) What is PyTorch?
(12:50) Traditional ML vs GenAI
(14:54) AI's enterprise transformation
(16:16) From Meta to Fireworks
(19:39) Simplifying AI infrastructure
(20:41) How Fireworks clients use GenAI
(22:02) How many models are powered by Fireworks
(30:09) LLM partitioning
(34:43) Real-time vs pre-set search
(36:56) Reinforcement learning
(38:56) Function calling
(44:23) Low-level architecture overview
(45:47) Cloud GPUs & hardware support
(47:16) VPC vs on-prem vs local deployment
(49:50) Decreasing inference costs and its business implications
(52:46) Fireworks roadmap
(55:03) AI future predictions

Des Montres et Vous
#121 Review of the VPC Type 37HW: Does It Live Up to All Its Promises?

Des Montres et Vous

Play Episode Listen Later Mar 14, 2025 13:03 Transcription Available


Hello everyone and welcome to DM&V, I hope you are doing well! Today, I'm reviewing a singular watch that has generated a lot of ink since its release and that intrigues a great many enthusiasts, especially as it is one of those watches that very few people have had in hand to really be able to judge it. This watch is the VPC Type 37HW, created by one of the members of Fratello, one of the most influential watch media outlets in the world. So, when professionals this passionate, who have handled so many pieces, release their own watch, you naturally expect something extremely polished, and you want to know whether the piece lives up to all its promises. In this episode, I will retrace the history of this watch, but also share my impressions after several days on the wrist: - Does it meet my expectations? - Is it now among my favorites? - Did Fratello tick every box with this carefully considered piece? I'll tell you everything! But before we start, note that this episode is available in audio on all podcast platforms and also in video on my YouTube channel Des Montres & Vous. If you like the channel and its content, don't hesitate to like, subscribe, and turn on notifications so you don't miss anything and to help DM&V grow.  For those listening as a podcast, remember to leave a 5-star rating and a comment, it's always appreciated. By the way, I want to thank you for joining the DM&V circle in such numbers, the WhatsApp group I created in February where we talk about our shared passion. You are already more than 700, it's incredible! For those not yet in the circle, nothing could be simpler: https://chat.whatsapp.com/F96PntzE9C5GVqxFC7xpBX Happy listening! 
Building A Watch Brand Episode 1 — Introduction https://www.fratellowatches.com/building-a-watch-brand-follow-thomas-as-he-develops-a-watch-of-his-own/
Building A Watch Brand Episode 2 — Brand and name https://www.fratellowatches.com/building-a-watch-brand-episode-2-revealing-the-name-and-brand-concept/
Building A Watch Brand Episode 3 — Finances, risk mitigation, and a designer https://www.fratellowatches.com/building-a-watch-brand-ep-3-designer-finances/
Building A Watch Brand Episode 4 — Unveiling the watch concept https://www.fratellowatches.com/building-a-watch-brand-episode-4-unveiling-the-watch-concept/
Building A Watch Brand Episode 5 — The first design ideation sketches https://www.fratellowatches.com/building-a-watch-brand-episode-5-the-first-design-ideation-sketches/
Building A Watch Brand Episode 6 — Caliber, pouch, case and bracelet updates, typography https://www.fratellowatches.com/building-a-watch-brand-ep-6/
Building A Watch Brand Episode 7 — Dial design https://www.fratellowatches.com/building-a-watch-brand-episode-7-dial-design/
Building A Watch Brand Episode 8 — Geeking out on typography https://www.fratellowatches.com/building-a-watch-brand-episode-8-geeking-out-on-typography/
Building A Watch Brand Episode 9 — 3D Modeling, Tech Development, And Opening Up On The Mentally Challenging Side https://www.fratellowatches.com/building-a-watch-brand-episode-9-3d-modeling-tech-development-and-opening-up-on-the-mentally-challenging-side/
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Association RevUP
SIO's 6M Win and the Case for C-Suite Deals (S2:E1)

Association RevUP

Play Episode Listen Later Mar 3, 2025 17:10


"Clinical research is not cheap." Season 2 of the Association RevUP podcast celebrates the stories of associations that are using business to bring about real change in their industries. In this debut episode, discover how the Society of Interventional Oncology and Jena Eberly Stack united multiple industry partners to fund a high-cost clinical trial with real-world health impacts. Then Dan Cole and Brittany Shoul share 5 ways your association can level up negotiations for your next big deal, no matter the size. About Association RevUP: Episodes are less than 20 minutes, written and produced to keep you engaged, and full of actionable insights. Hosted by Carolyn Shomali.
- VPC, Inc., the production company PAR trusts with our in-person event, the RevUP Summit, and the partner of this podcast.
- MyPar.org: learn more about the PAR member community
- RevUP Summit: November 4-6, 2025 in Annapolis, MD

Des Montres et Vous
#120 Quitting Everything to Become a Watch Photographer at Fratello: The Surprising Journey of Morgan Saignes.

Des Montres et Vous

Play Episode Listen Later Feb 28, 2025 68:04


Hello everyone and welcome to DM&V, I hope you are doing well. Today I welcome a guest many of you have been eagerly awaiting: Morgan Saignes, one of the most talented watch photographers of the moment, well known for having shot the most beautiful pieces under the Fratello banner! In this relaxed episode, Morgan looks back on his passion for watches: how it all began, how photography came into the picture, how he joined the Fratello team, but also which pieces are his favorites after having handled several hundred, even several thousand of them... Of course, we couldn't do this episode without talking about the VPC project... already a blockbuster (review coming soon on DM&V). As usual, this episode is available in audio on all podcast platforms and also in video on my YouTube channel Des Montres & Vous. If you like the channel and its content, don't hesitate to like, subscribe, and turn on notifications so you don't miss anything and to help DM&V grow. For those listening as a podcast, remember to leave a 5-star rating and a comment, it's always appreciated. Happy listening! Here are the references mentioned in this episode: - Silo, the series available on Apple TV (2 seasons) - Foundation, the series available on Apple TV (2 seasons) - Dark Matter, the series available on Apple TV (1 season) Morgan's Instagram: morgansaignes Morgan's website: morgansaignes.com

AWS Morning Brief
The AWS Chatbot Disappointment

AWS Morning Brief

Play Episode Listen Later Feb 17, 2025 6:31


AWS Morning Brief for the week of February 17, with Corey Quinn. Links:
Amazon DynamoDB now supports auto-approval of quota adjustments
Amazon Elastic Block Store (EBS) now adds full snapshot size information in Console and API
Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20250103
Amazon Redshift Serverless announces reduction in IP Address Requirements to 3 per Subnet
AWS Deadline Cloud now supports Adobe After Effects in Service-Managed Fleets
AWS Network Load Balancer now supports removing availability zones
AWS CloudTrail network activity events for VPC endpoints now generally available
Harness Amazon Bedrock Agents to Manage SAP Instances
Timestamp writes for write hedging in Amazon DynamoDB
Updating AWS SDK defaults – AWS STS service endpoint and Retry Strategy
Learning AWS best practices from Amazon Q in the Console
Automating Cost Optimization Governance with AWS Config
Amazon Q Developer in chat applications rename - Summary of changes - AWS Chatbot

[i3] Podcast
Victory Park Capital's Brendan Carroll – Private Credit, Insurance Friendly Strategies

[i3] Podcast

Play Episode Listen Later Jan 29, 2025 35:56


Brendan Carroll is a Senior Partner at Victory Park Capital, which he co-founded in 2007. In this episode, we discussed the evolution of private credit investments, how they can be made more friendly to insurers, and the acquisition of VPC by Janus Henderson Investors. Enjoy the show!
Overview of podcast with Brendan Carroll, Victory Park Capital
01:00 Getting started in investing
03:00 Setting up the firm in the GFC
06:00 Asset-backed lending
07:30 Insurance-friendly strategies
09:00 Insurers are the fastest growing LP types
13:00 The industry has always been competitive
17:30 Historic data sees through cycle
18:30 Business review triggers
28:00 The Janus Henderson acquisition
31:00 Developing ChatVPC

The DevOps Kitchen Talks's Podcast
DKT72 - 2025 News from the World of AWS, Terraform, and DevOps

The DevOps Kitchen Talks's Podcast

Play Episode Listen Later Jan 28, 2025 83:24


Well then? Ready for 10,000 subscribers by the end of the year? We've been ready for two years now, but it still hasn't happened :) 2025 has arrived, and we decided to start with a news episode.

AWS Morning Brief
A Legend Moves On

AWS Morning Brief

Play Episode Listen Later Dec 23, 2024 7:29


AWS Morning Brief for the week of December 23, with Corey Quinn. Links:
Amazon AppStream 2.0 introduces client for macOS
Amazon EC2 instances support bandwidth configurations for VPC and EBS
Amazon Timestream for InfluxDB now supports Internet Protocol Version 6 (IPv6) connectivity
Amazon WorkSpaces Thin Client now available to purchase in India
AWS Backup launches support for search and item-level recovery
AWS Mainframe Modernization now supports connectivity over Internet Protocol version 6 (IPv6)
AWS Marketplace now supports self-service promotional media on seller product detail pages
AWS re:Post now supports Spanish and Portuguese
AWS Resource Explorer supports 59 new resource types
AWS offers a self-service feature to update business names on AWS Invoices
Announcing CloudFormation support for AWS Parallel Computing Service
Announcing Node Health Monitoring and Auto-Repair for Amazon EKS - AWS
And that's a wrap!
Best practices for creating a VPC for Amazon RDS for Db2
How the Amazon TimeHub team handled disruption in AWS DMS CDC task caused by Oracle RESETLOGS: Part 3
How to detect and monitor Amazon Simple Storage Service (S3) access with AWS CloudTrail and Amazon CloudWatch
Enforce resource configuration to control access to new features with AWS
Maximizing your cloud journey: Engaging an AWS Solutions Architect

The Pure Report
Purely Cloud Guest Series: Cloud Block Store on AWS Deep Dive Architecture and Deployment

The Pure Report

Play Episode Listen Later Dec 18, 2024 30:19


It's Wednesday morning, and that means another edition of the Purely Cloud Guest Series featuring co-host Ondrej Bursik and the Cloud Technical Product Specialist team delivering all the technical details around Pure's growing portfolio of cloud solutions. In this one, Ondrej and Pavel Kovar take you deep into Pure's Cloud Block Store for AWS offering, including architectural decisions and deployment best practices. Learn about the technical aspects of Cloud Block Store (CBS), including virtual drives and shelves, controllers, networking best practices, and VPC endpoints. From there, Ondrej and Pavel take you through deployment recommendations via portal or CLI and deployment steps to ensure success across security and user management aspects. Thanks for checking out this episode of the Pure Report podcast, and for more information on Cloud Block Store for AWS, go to: https://www.purestorage.com/products/cloud-block-storage/cbs.html

Cloud Security Podcast
Centralized VPC Endpoints - Why It Works for AWS Networking

Cloud Security Podcast

Play Episode Listen Later Dec 17, 2024 48:41


In this episode, Meg Ashby, a senior cloud security engineer, shares how her team tackled AWS's centralized VPC interface endpoints, a design often seen as an anti-pattern. She explains how they turned this unconventional approach into a cost-efficient and scalable solution, all while maintaining granular controls and network visibility. She shares why centralized VPC endpoints are considered an AWS anti-pattern, how to implement granular IAM controls in a centralized model, and the challenges of monitoring and detecting VPC endpoint traffic. Guest Socials: Meg's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
- Cloud Security BootCamp
Questions asked:
(00:00) Introduction
(02:48) A bit about Meg Ashby
(03:44) What is VPC interface endpoints?
(05:26) Egress and Ingress for Private Networks
(08:21) Reason for using VPC endpoints
(14:22) Limitations when using centralised endpoint VPCs
(19:01) Marrying VPC endpoint and IAM policy
(21:34) VPC endpoint specific conditions
(27:52) Is this solution for everyone?
(38:16) Does VPC endpoint have logging?
(41:24) Improvements for the next phase
Thank you to our episode sponsor Wiz. Cloud Security Podcast listeners can also get a free cloud security health scan by going to wiz.io/csp
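The "marrying VPC endpoint and IAM policy" discussion above centers on endpoint policies: a centralized interface endpoint can carry its own policy that restricts which accounts and resources it will serve. The sketch below shows what such a policy might look like; the account IDs, bucket ARN, and endpoint details are made-up placeholders, not taken from the episode, and the boto3 call is shown for shape only.

```python
import json

def build_endpoint_policy(allowed_account_ids, resource_arns):
    """Build a restrictive VPC endpoint policy document (plain dict).

    Only principals from the listed AWS accounts may use the endpoint,
    and only against the listed resource ARNs.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "*",
                "Resource": resource_arns,
                "Condition": {
                    "StringEquals": {"aws:PrincipalAccount": allowed_account_ids}
                },
            }
        ],
    }

# Hypothetical account IDs and bucket ARN, for illustration only.
policy = build_endpoint_policy(
    ["111111111111", "222222222222"],
    ["arn:aws:s3:::shared-artifacts/*"],
)
print(json.dumps(policy, indent=2))

# Attaching the policy to a centralized interface endpoint would look
# roughly like this (needs real credentials and IDs, so not run here):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(
#     VpcId="vpc-0123abcd",
#     ServiceName="com.amazonaws.us-east-1.s3",
#     VpcEndpointType="Interface",
#     PolicyDocument=json.dumps(policy),
# )
```

This is the granularity lever the episode refers to: the endpoint policy is evaluated alongside the caller's IAM policy, so a shared endpoint can still reject traffic from accounts or toward resources it was never meant to serve.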

AWS Bites
137. Transit Gateway Explained

AWS Bites

Play Episode Listen Later Dec 13, 2024 18:37


In this episode, David Lynam provides an overview of AWS Transit Gateway, which aims to simplify complex network connectivity between VPCs, VPNs, and on-premises networks. We discuss the limitations of using VPC peering and the benefits Transit Gateway provides through its hub-and-spoke model. The main components of Transit Gateway are explained, including attachments, route tables, associations, and route propagation. We go through some example use cases like sharing Transit Gateways across accounts, network isolation for compliance, routing traffic through security services, and bandwidth/scaling capabilities. In this episode, we mentioned the following resources:
- How Amazon VPC Transit Gateways work
Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on X/Twitter:
- https://twitter.com/eoins
- https://twitter.com/loige
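To make the hub-and-spoke argument from the episode concrete: a full mesh of N VPCs needs N*(N-1)/2 peering connections, while a Transit Gateway needs only one attachment per VPC, with each spoke routing the other spokes' CIDRs toward the TGW. A small sketch of that arithmetic, with made-up CIDRs and TGW ID:

```python
def peering_connections(n_vpcs):
    # Full mesh: every pair of VPCs needs its own peering connection.
    return n_vpcs * (n_vpcs - 1) // 2

def spoke_routes(spoke_cidrs, tgw_id):
    """For each spoke VPC, route every *other* spoke's CIDR via the TGW."""
    routes = {}
    for cidr in spoke_cidrs:
        routes[cidr] = [
            {"destination": other, "target": tgw_id}
            for other in spoke_cidrs
            if other != cidr
        ]
    return routes

cidrs = ["10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"]
print(peering_connections(len(cidrs)))  # 6 peerings vs only 4 TGW attachments
print(spoke_routes(cidrs, "tgw-0abc1234")["10.0.0.0/16"])
```

The gap widens quickly: ten VPCs would need 45 peerings but still only ten TGW attachments, which is the scaling benefit discussed in the episode.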

AWS Morning Brief
A Return to Greatness, or Degenerate Day 3?

AWS Morning Brief

Play Episode Listen Later Dec 9, 2024 18:39


AWS Morning Brief for the week of December 9, with Corey Quinn. Links:
AWS announces access to VPC resources over AWS PrivateLink
Announcing Amazon Aurora DSQL (Preview)
Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio
AWS announces Amazon CloudWatch Database Insights
Amazon DynamoDB global tables previews multi-Region strong consistency
Amazon EC2 introduces Allowed AMIs to enhance AMI governance
Announcing Amazon EC2 I8g instances
Announcing Amazon EKS Auto Mode
Announcing Amazon EKS Hybrid Nodes
Announcing Amazon Elastic VMware Service (Preview)
Announcing Amazon FSx Intelligent-Tiering, a new storage class for FSx
Amazon Q Developer can now automate code reviews
Amazon Q Developer announces automatic unit test generation to accelerate feature development
Amazon S3 adds new default data integrity protections
Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata
Amazon S3 launches storage classes for AWS Dedicated Local Zones
Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads
AWS announces Amazon SageMaker Lakehouse
AWS Control Tower launches managed controls using declarative policies
AWS announces AWS Data Transfer Terminal for high-speed data uploads
Amazon Web Services announces declarative policies
Introducing AWS Glue 5.0
AWS announces Invoice Configuration
AWS Marketplace now offers EC2 Image Builder components from independent software vendors
AWS announces AWS Security Incident Response for general availability
Announcing AWS Transfer Family web apps
Buy with AWS accelerates solution discovery and procurement on AWS Partner websites
Oracle Database@AWS is now in limited preview
PartyRock improves app discovery and announces upcoming free daily use
Announcing the preview of Amazon SageMaker Unified Studio
VPC Lattice now includes TCP support with VPC Resources
Announcing the 2024 Geo and Global AWS Partners of the Year
Amazon MemoryDB Multi-Region is now generally available
Top announcements of AWS re:Invent 2024
Sponsor
The Duckbill Group: https://www.duckbillgroup.com/

Hipsters Ponto Tech
Careers: Data Specialist at AWS, with Erika Nagamine – Hipsters Ponto Tech #440

Hipsters Ponto Tech

Play Episode Listen Later Dec 3, 2024 44:36


Today is all about careers! In the debut episode of the podcast's special series, we talk with Erika Nagamine, an AWS Golden Jacket holder, about her trajectory, her decisions, and the power that curiosity had in propelling her throughout her career. Here's who took part in this conversation:
Paulo Silveira, the host who likes certifications
André David, the cohost who is still going
Erika Nagamine, Solutions Architect, Data & AI - Analytics Specialist at AWS

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 2, 2024 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. 
Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. 
Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. 
And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. 
And then there are different types of testing: is it regression or smoke or whatever. So back then we only had one IDE extension, with unit tests as the focus. A year and a half later, that first IDE extension supports more types of testing and is context-aware. We index local repos, but also tens of thousands of repos for Fortune 500 companies. We have another agent, another tool: the open source one is PR-Agent and the commercial one is Qodo Merge. And then we have another open source called Cover-Agent, which is not yet a commercial product; that's coming very soon. It's very impressive. It could be that people are already approving automated pull requests that they're not even aware of in really big open source projects. So once we have enough of these, we will also launch another agent. So for the first year and a half, what we did is grow our offering, mostly on the side of: does this code actually work? Testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. And for the first year, everything was bottom-up, getting to 1 million installations; that was 2023. 2024 was starting to monetize, to feel how it is to make the first buck. So we did the teams offering; it went well with thousands of teams, et cetera. And then we started, just a few months ago, to do enterprise with everything you need, which is a lot of things that are discussed in the last post that was just released by Qodo. So that's how we call it at Qodo. Just opening the brackets: our company name was Codium AI, and we renamed to Qodo, and we call our models Qodo as well. So back to my point, we started the enterprise motion and already have multiple Fortune 100 companies. And with that, we raised a Series A of $40 million. And what's exciting about it is that it enables us to develop more agents. That's our focus. I think it's very different.
We're not coming very soon with an IDE or something like that. Swyx [00:06:01]: You don't want to fork VS Code? Itamar [00:06:03]: Maybe we'll fork JetBrains or something, just to be different. Swyx [00:06:08]: I noticed that, you know, I think the promise of general-purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Qodo Gen, Qodo Merge, and then there's a third one. What's the name of it? Itamar [00:06:17]: Yeah. Qodo Cover. Which is like a commercial version of Cover-Agent. It's coming soon. Swyx [00:06:23]: Yeah. It's very similar with Factory AI, also doing like droids. They all have a special purpose, but people don't really want general-purpose agents. Right. The last time you were here, we talked about AutoGPT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general-purpose agent, I don't know what to do with it. Eric [00:06:42]: Yeah. Itamar [00:06:43]: I totally agree with that. We've been seeing it for a while, and I think it will stay like that despite computer use, et cetera, that supposedly can just replace us. You can just prompt it to be, hey, now be a QA person or a developer. I still think that there are a few reasons why you see dedicated agents. Again, I'm a bit more focused; my head is more on complex software for big teams and enterprise, et cetera. Even think about permissions and what the data sources are: just the same way you manage permissions for users and developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally touched a point that not many people think about. And of course, then what you can think of, like maybe there are different tools, tool use, et cetera.
But just the first point by itself is a good reason why you want to have different agents. Alessio [00:07:40]: Just to compare that with Bolt.new, you're almost focused on, like, the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bolt.new, it's almost like, I was using it the other day, there's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on how you're going to implement this. This is what I want to do. And you build a beautiful app with it. What do people ask as the next step? You know, going back to general versus specific, have you had people say, hey, this is great to start, but then I want a specific Bolt.new-dot-whatever-else to do a more vertical integration and kind of development? What do people say? Eric [00:08:18]: Yeah, I think you kind of hit it head on, which is, the way that we've kind of talked about it internally is, it's like people are using Bolt to go from like 0.0 to 1.0. That's kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's very unique about Bolt. The, you know, working on existing enterprise applications is, I mean, crazy important, because when you look at the Fortune 500, I mean, these code bases, some of these have been around for 20, 30-plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, what's been actually pretty interesting is we see there are kind of two different users that are coming in, and it's very distinct. It's like people that are developers already. And then there's people that have never really written software, or if they have, it's been very, very minimal.
And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to GitHub or just downloading it and, you know, opening Cursor or whatever to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they actually just live in this thing. And that was one of the weird things when we launched: within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, was, you know, like, oh, Bolt is the Cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not what we kind of thought. Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else. Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison, because Cursor is a local IDE product. But when we actually dug into it, and we have people that are using our product saying, I'm not using Cursor, I was like, what? And it turns out there are hundreds of thousands of people that we have seen that were using Cursor and trying to build apps with it who are not traditional software devs, but were heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with Cursor. So when Bolt came out, they're like, wow, this thing's amazing, because it kind of inverts the complexity, where it's like, you know, it's not an IDE, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting.
We've had the first startups now launch off of Bolt entirely, where, you know, tomorrow I'm doing a live stream with this guy named Paul, who's built an entire CRM using this thing, you know, with backend, et cetera. And people have made their first money on the internet, period, you know, launching this with Stripe or whatever have you. So those are kind of the two main categories of folks that we see using Bolt, though. Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think we have two families of tools. One is like, we re-imagine software development. I think Bolt is there, and I think Cursor is more like an evolution of what we already have. It's like taking the IDE, and it's amazing, and it's, okay, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel's v0 and maybe Replit in that area. And then in the area of let's expedite, let's change, let's progress with what we already have, you can see Cursor and Qodo; we're different between ourselves, Cursor and Qodo, but definitely I think that comparison doesn't make sense. Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made $4 million of revenue in four weeks. So this is actually working, you know. What do you think that is? There have been so many people demoing coding agents on Twitter, and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, was there anything in the development that was interesting, and maybe how does that compare to building your own agents? Eric [00:12:08]: We had no idea, honestly. Like, we've been pretty blown away, and things have just kind of continued to grow faster since then.
We're like, oh, today is week six. So I kind of came back to the point you just made, right, where you kind of outlined: there's kind of this new market of rethinking software development, and then there's heavily augmenting existing developers. And, you know, AI code gen being extremely good is allowing existing developers to crank out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good, and we saw this over the past, you know, from the beginning of the year when we tried to first build, that it's actually lowered the barrier for people that aren't traditionally software engineers. But the key thing is, imagine you've never written software before, right? My co-founder and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together, and we've been building stuff ever since. And this is back in, like, the mid-2000s or whatever; there was nothing free online on the internet to learn how to code from. For our 13th birthdays, we asked our parents for, you know, O'Reilly books, because you couldn't get this at the library, right? And so instead of an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? And so when we built StackBlitz, kind of the key thesis, like seven years ago, the insight we had was that, hey, it seems like the browser has a lot of new APIs, like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And, you know, basically there's this missing capability of the web.
Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that: Visual Studio for Windows, Xcode for Mac. The web has no built-in primitive for this. And so our built-in kind of nerd instinct on this was, that seems like a huge hole, and it's, you know, a very valuable problem to solve. So if you want to set up those environments, you know, this is what we spent the past seven years doing. And the reality is, existing developers have that running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state-of-the-art frontier models. And the people that have the most pain with getting stuff set up locally are people that don't code. I think that's been, you know, really the big explosive reason: no one else has been trying to make dev environments work inside of a browser tab, you know, basically ever, other than our company, largely because there wasn't an immediate demand or need. So I think we kind of found ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer, in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma, I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of, you know, our impetus and vision from the get-go. But, you know, the AI code gen stuff that's come out has just been, you know, an order-of-magnitude multiplier on how magic that is, right?
So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? 
And then I thought, I actually used Squarespace for the wedding website for my wife and I, back in 2021, so I'm familiar. You know, it was faster; I know how to code. I was like, this is faster. Right. And I thought back, and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There are like a million things you can configure in that thing. When you come to Bolt, there's a text box. You just say, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And when my mom came, she said, uh, I'm Pat Simons, I was a nurse in the seventies, you know, and here's the things I did, and a website came out. So coming back to why we're seeing this sort of growth: this is the simplest interface, I think, maybe ever created to actually build and deploy a website. And then that website my mom made, she's like, okay, this looks great. And there's one button, you just click it, deploy, and it's live, and you can buy a domain name, attach it to it. And, you know, it's as simple as it gets, and it's getting even simpler with some of the stuff we're working on. But anyways, it's been really interesting to see some of the usage like that. Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, StackBlitz investor. Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause. Swyx: Eric actually reached out to show me Bolt before the launch.
And we, you know, we talked a lot about, like, the framing of what we were going to talk about, how we marketed the thing, but also, like, what Bolt was going to need, like a whole sort of infrastructure. swyx: Netlify, I was a maintainer, but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Biilmann talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, and it would have a live URL with no sign-in. swyx: And so that was the origin story of Netlify. And it just persists to today. And it's really nice and interesting that both Bolt and Cognition's Devin and a bunch of other sort of agent-type startups all use Netlify to deploy because of this one feature. They don't really care about the other features. swyx: But just because it's easy for computers to use and talk to, like, if you build an interface for computers specifically, that's easy for them to navigate, then it will be used by agents. And I think that's a learning that a lot of developer tools companies are having. That's my Bolt launch story, and now I've said all that stuff. swyx: And I just wanted to come back to, like, the WebContainer thing, right? Like, I think you put a lot of weight on the technical moats. I think you also are just, like, very good at product. So you've, like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did. swyx: Don't shortchange yourself on product. But I think specifically [00:19:00] on infra, on, like, the sandboxing, like, this is a thing that people really want. Alessio has backed E2B, which we'll have on at some point, talking about, like, the sort of server-full side.
But yours is, you know, inside of the browser, serverless. swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't cost you anything. I think that's interesting. I think in theory, we should be able to, like, run tests, because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday. swyx: We talked about this. But ideally, you should be able to have a fully agentic loop: running code, seeing the errors, correcting code, and just kind of self-healing, right? Like, I mean, isn't that the dream? Eric: Totally. swyx: Yeah. Eric: Totally. At least in Bolt, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because, like, with WebContainer, you know, there's a lot of stuff: you go Google, like, you know, turn Docker container into Wasm. Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose-built to, you know, run in a browser tab. And the reason being is, you know, those Docker-to-Wasm things will give you an image that's like 60 to 100 megabytes, you know, maybe more, and our OS, you know, kind of clocks in at, I think, maybe a megabyte or less or something like that. I mean, it's, you know, really stripped down. swyx: This is basically, I understand, the task involved:
Mapping every single Linux call to some kind of WebAssembly implementation. Eric: More or less. And then there are a lot of things, actually, like when you're looking at a dev environment, there are a lot of things that you don't need that a traditional OS is going to have, right? Like, you know, audio drivers; there's just tons of things. Oh, yeah. Right. Yeah. You can just kind of toss them. Or alternatively, what you can do, and this is the nice thing, this kind of comes back to the origins of browsers, which is, you know, at the beginning of the web, in the late nineties, there were two very different visions for the web, where Alan Kay vehemently [00:21:00] disagreed with the idea that it should be document-based, which is, you know, Tim Berners-Lee's vision, and that's kind of what ended up winning: this document-based kind of browsing-documents-on-the-web thing. Eric: Alan Kay, he's got this, like, very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with, like, a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, is that both of those folks ended up being right. Eric: Documents were actually the pragmatic way that the web worked, and the web became the most ubiquitous platform in the world, to the degree now that this is why WebAssembly has been invented: we need to do more low-level things in a browser, same thing with WebGPU, et cetera.
And so all these APIs, you know, to build an operating system came to the browser. Eric: And that was actually the realization we had in 2017: holy heck, you know, service workers, which were designed for allowing your app to work offline. That was kind of the key one, where it was like, wait a second, you can actually now run web servers within a [00:22:00] browser, like you can run a server that you open up. Eric: That's wild. Like full Node.js. That capability. Like, I can have a URL that's programmatically controlled by a web application itself. Boom. The web can build the web. The primitive is there. Everyone at the time, like, we talked to people that worked on, you know, Chrome and V8, and they were like, uhhhh. Eric: You know, like, I don't know. But it's one of those things you just kind of have to go do to find out. So we spent a couple of years, you know, working on it, and 2021 is when we kind of put the first, like, beta of WebContainer online. But... swyx: In partnership with Google, right? Like, Google actually had to help you get over the finish line with stuff. Eric: A hundred percent, because, well, you know, over the years when we were doing the R&D on the thing, kind of the biggest challenge... The two ways that you can test how powerful and capable a platform is, the two types of applications, are, one, video games, right, because they're just very compute-intensive, a lot of calculations that have to happen, right? Eric: The second one is IDEs, because you're talking about virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data, you know, a good amount of compute power, right, to effectively, you know, build an app-in-app sort of thing. Eric: So those are the stress tests.
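The service-worker realization Eric describes, a tab answering its own HTTP requests, can be sketched in a few lines. This is a hedged illustration only: the file map and route table below are invented for the example, and WebContainer itself virtualizes a full Node.js operating system rather than serving a static lookup.

```typescript
// A pure router that maps request paths to response bodies. Keeping it a
// plain function means the serving logic is testable outside a browser.
// The file contents here are made up for the example.
const files: Record<string, { body: string; type: string }> = {
  "/": { body: "<h1>Served from inside the browser</h1>", type: "text/html" },
  "/app.js": { body: "console.log('hello');", type: "text/javascript" },
};

function route(path: string): { status: number; body: string; type: string } {
  const file = files[path];
  return file
    ? { status: 200, body: file.body, type: file.type }
    : { status: 404, body: "not found", type: "text/plain" };
}

// Service-worker glue: intercept fetches for this origin and answer them
// locally, so the tab effectively acts as its own web server. Guarded so
// the module also loads under plain Node, where `self` does not exist.
const sw = (globalThis as any).self;
if (sw && typeof sw.addEventListener === "function") {
  sw.addEventListener("fetch", (event: any) => {
    const path = new URL(event.request.url).pathname;
    const r = route(path);
    event.respondWith(
      new Response(r.body, { status: r.status, headers: { "Content-Type": r.type } })
    );
  });
}
```

Registered as a service worker, every navigation and asset request within scope is resolved without a network round trip, which is the "missing primitive" being described.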
So if your platform is missing stuff, those are the things where you find out. Those are the people building games and IDEs. They're the ones filing bugs on operating-system-level stuff. And for us, browser-level stuff. Eric [00:23:47]: Yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. And they were amazing, because, I mean, just making Chrome DevTools able to debug... I mean, it wasn't originally built for debugging an operating system, right? They've been phenomenal working with us and just really pushing the limits, and it's a rising tide that's lifted all boats, because now there are a lot of different types of applications that you can debug with Chrome DevTools that are running in a browser that runs more reliably, because of just the stress testing that we, and, you know, games that are coming to the web, are kind of pushing as well. But... Itamar [00:24:23]: That's awesome. About the testing, I think most, let's say, coding assistants of different kinds will need this loop of testing. And I would even add code review, to some extent, which you mentioned. How is testing different from code review? Code review could be, for example, PR review, a code review that is done at the point when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the testing specifically. Yeah. It's been there since the first slide. Yeah. We're consistent.
So if we go back to the testing, I think it's not surprising that for us testing is important, and for Bolt testing is important, but I want to shed some light on a different perspective of it. Let's think about autonomous driving: those startups that are doing autonomous driving for the highway and autonomous driving for the city. And I think we saw autonomy on the highway much faster, reaching, I don't know, level four or so, much faster than those in the city. Now, in both cases, you need testing, quote unquote, you know, verification and validation that you're doing the right thing on the road, that you're reading it right, et cetera. But it's probably so different in the city that it could actually be different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I were them, I would be looking at you and being a bit scared; that's what you're disrupting, what you just said. Then basically I would say that, for example, the UI/UX is freaking important, because you're more aiming for the end user, in this case maybe an end user that doesn't know how to develop. For developers it's also important, but let alone those that do not know how to develop: they need a slick UI/UX. And I think that's one reason, for example, Cursor has really good technology. I don't know what's under the hood, but at least what they're saying. But I think their UI/UX is also great, and that's a lot because they did their own IDE. While if you're aiming for the city AI, suddenly there's a lot of testing and code review technology that matters, which for the highway is not necessarily that important. For example, let's talk about integration tests. Probably a lot of what you're building at the moment involves isolated applications. Maybe the vision or the end game is having one solution for everything.
It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference, and integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, I think the idea of looping and verifying your tests and verifying your code in different ways, testing or code review, et cetera, seems to be important in both the highway AI and the city AI, but in different ways, and it's even more critical for the city, with even more variety. Actually, I was looking to ask you what kind of loops you guys are doing. For example, when I'm using Bolt, and I'm enjoying it a lot, I do see that sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and so on, which was already a common notion a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it... Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us writing an operating system from scratch because what ended up being important about that is that, to your point, compared to, you know, running Cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. An error happens, and it could be like a million different things, right? There could be some config, there could be, it could be God knows what, right? The thing with WebContainer is, because we wrote the entire thing from scratch, it's actually a unified image, basically.
And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt: we instrumented stuff at the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do locally, but to do that in a way that works across any operating system whatsoever would just be insanely difficult to do right and reliably. And that's what you saw when you used Bolt: when an error actually occurs, whether it's in the build process or the actual web application itself failing or anything in between, we can actually capture those errors. And today it's a very primitive way of how we've implemented it, largely because the product just didn't exist 90 days ago. So we're like, we've got some work ahead of us, and we've got to do some more hiring, but basically we present it and say, hey, here are the things that went wrong. There's a fix-it button and an ignore button, and you can just hit fix it. And then we take all that telemetry, run it through our agent, and say, here's the state of the application, here are the errors that we got from Node.js or the browser or whatever, and, like, dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. Closing the loop and having a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, that's kind of a key part of our agent. I think Claude did a really good job with artifacts.
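The fix-it loop just described, capture errors from the instrumented runtime, fold them into one prompt, and hand it to the agent, can be sketched roughly as follows. The telemetry shape and the prompt wording here are assumptions for illustration, not Bolt's actual internals.

```typescript
// Sketch of the "fix it" loop: errors surfaced by the instrumented runtime
// are collected, then folded into a single prompt for the code-gen agent.
interface RuntimeError {
  source: "build" | "node" | "browser";
  message: string;
}

const telemetry: RuntimeError[] = [];

function captureError(source: RuntimeError["source"], message: string): void {
  telemetry.push({ source, message });
}

// Fold everything captured so far into one prompt and clear the buffer,
// mirroring a single press of the "Fix it" button.
function buildFixItPrompt(appState: string): string {
  const errors = telemetry.map((e) => `- [${e.source}] ${e.message}`).join("\n");
  telemetry.length = 0;
  return (
    `Here is the state of the application:\n${appState}\n` +
    `Here are the errors that occurred:\n${errors}\n` +
    `Propose a minimal fix.`
  );
}

// Example: one build error and one browser error become one agent request.
captureError("build", "Cannot find module './Header'");
captureError("browser", "TypeError: undefined is not a function");
const fixItPrompt = buildFixItPrompt("React + Vite starter, 12 files");
```

The design point is that a controlled runtime makes the telemetry uniform, so the agent always sees errors in one predictable shape instead of whatever a user's local OS happens to emit.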
I think, you know, us and kind of everyone else have kind of taken their approach of actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And so actually the core of Bolt, you know, we actually made open source. So you can actually go and check out the system prompts, et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at it. There's not a lot of stuff that's open source in this realm. That was one of the fun things that we thought would be cool to do. And people seem to like it. I mean, there are a lot of forks and people adding different models and stuff. So it's been cool to see. Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening-day demo, and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code. Eric [00:30:52]: Because I want that. Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi-LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun. Eric [00:31:03]: Heck yeah. Alessio [00:31:04]: Just to touch on the environment, Itamar: you maybe go into the most complicated environments, where even the people that work there don't know how to run them. How much of an impact does that have on your performance? Like, you know, is most of the work you're doing actually figuring out the environment and the libraries? Because I'm sure they're using outdated versions of languages, they're using outdated libraries, they're using forks that have not been on the public internet before.
How much of the work that you're doing is there versus at the LLM level?

Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, is because it really matters. Like, what's the tech stack? How complicated is the software? It's hard to figure it out when you're dealing with the real world, with any enterprise environment, while maybe sometimes, like, I think you do enable in Bolt to install stuff, but it's quite a controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact of just installing our software without yet doing anything, making it work, just installing it, because we work with enterprise and Fortune 500, etc. Many of them want an on-prem solution.

Swyx [00:32:22]: So you have how many deployment options?

Itamar [00:32:24]: Basically, we did a matrix, say 96 options, because, you know, there are different dimensions. Like, for example, one dimension: we connect to your code management system, to your Git. So are you having GitHub, GitLab? Subversion? Is it on cloud or deployed on-prem? Just an example. Which model do you agree to use, its APIs or ours? Like we have ours. Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake of a name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. So we got...

Swyx [00:33:02]: I'm interested in these learnings, like things that you changed your mind on.

Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand.
By the way, when I thought about Bolt.new, I also thought about whether it's a problem, because when I think about Bolt, I do think about a couple of companies that are already called this way.

Swyx [00:33:19]: Cursed companies. You could call it Codium just to...

Itamar [00:33:24]: Okay, thank you. Touche. Touche.

Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt. One of our investors, you can imagine, they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.

Itamar [00:33:43]: At this point, we actually have four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated more to code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like, dedicated for code?

Swyx [00:34:04]: There's code indexing, and then you can do sort of like HyDE for code. And then you can embed the descriptions of the code.

Itamar [00:34:12]: Yeah, but you do see a lot of types of models that are dedicated for embedding, in different spaces, different fields, etc. And I'm not aware of one. And I know that if you go to Bedrock and try to find one, there are a few embedding models, but none of them are specialized for code.

Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?

Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 options of deployment, so I'm closing the brackets for us. So one dimension is like what Git deployment you have, what models do you agree to use. Another could be if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which are different. Do you use Kubernetes or do you not? Because we want to exploit that.
There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with Fortune 500 enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, that's just the deployment part. And then now to the software: we see a lot of different challenges. For example, some companies, they actually did a good job building a lot of microservices. Let's not get into whether it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has its own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look, like where to find things. So just doing a good indexing for that is a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, mark different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. This is a repo we want to grow, etc. And let that be part of your indexing. And only then do things actually work for enterprise, and they don't get to a fatigue of, oh, this is awesome. Oh, but it's starting to annoy me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot in enterprise. Ooh, spicy. Yeah. I saw snapshots of people, and we have customers that are Copilot users as well. And also I saw research, some of it is public by the way, between 38 to 50% retention for users using Copilot in enterprise. So it's not so good.
By the way, I don't think it's that bad, but it's not so good. So I think that's a reason, because, yeah, it helps you auto-complete, and especially if you're working on your repo alone, but if it needs that context of the remote repos in your code base, that's hard. So to make things work, there's a lot of work on that, like giving the controllability to the tech leads, to the developer platform or developer experience department in the organization, to influence how things are working. A short example: if you have really old legacy code, probably some of it is not so good anymore. If you just fine-tune on this code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Qodo, you can have a markdown of best practices by the tech leads, and Qodo will include that and relate to that, and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software.

Eric [00:38:32]: I just want to say, what you're doing is extremely impressive, because it's very difficult. I mean, the business of StackBlitz, kind of before Bolt came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but across the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, cloud stuff? Or are they just running stuff on GPUs? Like, what is that? How are these folks approaching that?
Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge.

Itamar [00:39:15]: Everything you said and more. Like, for example, and I don't think any of these is bad, like, they made their decision. For example, some people, they're like, I want only AWS and VPC on AWS, no matter what. And then some of them, like there is a subset, I will say, willing to take models only from Bedrock and not ours. And we have a problem, because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS, to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your models on GPUs or Inferentia, like the new version coming out, then our models can run on that. But everything you said is right. Like, we see on-prem deployments where they have their own GPUs. We see Azure, where you're using Azure OpenAI. We see cases where you're running on GCP and they want OpenAI. Like this cross kind of case, although there is Gemini, or even Sonnet, I think, is available on GCP, just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that matrix that I mentioned, to start clicking each one of the blocks there.

Eric [00:40:35]: A few months is impressive. I mean, honestly, that's just, okay, every one of these enterprises, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That is extremely impressive. Hats off.

Itamar [00:40:50]: It could be, by the way, for example, oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need a private link or whatever, like every time like that.
It's not, and you do need to think about it if you want to work with an enterprise. And it's important. Like, I understand their, I respect their point of view.

Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...

Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for us as a startup, because it means that we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our AlphaCodium, which is open source.

Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.

Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.

Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...

Itamar [00:41:43]: Yeah. So just shortly, and then we can double click on AlphaCodium. AlphaCodium is an open source tool. You can go and try it, and it lets you compete on Codeforces. This is a website and a competition, and you can actually reach a master level, like 95%, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it into different, smaller blocks. And then the models are doing a much better job. Like we all know it by now, that taking small tasks and solving them... by the way, even O1, which is supposed to be able to do system two thinking, as Greg from OpenAI hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented just a month ago. OpenAI released that now they are doing 93rd percentile with O1 at IOI, the International Olympiad in Informatics. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with AlphaCodium and did better.
Like it just shows, and there is a big difference between the preview and the IOI one. It shows that these models are still not system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think we didn't see anything even close to system two thinking. I can elaborate later. But closing the brackets: we take AlphaCodium as our principle of thinking. We take tasks and break them down into smaller tasks, and then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and Sonnet and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have it all air-gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down into different blocks, is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk about how we do that, then the prompt matters less. What I want to say is, all this, like, as a startup trying to do different deployments, getting all the juice that you can get from models, etc., is a big problem. And one needs to think about it. And one of our mitigations is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.

Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it.
This is fun to watch.

Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in, because for us too with Bolt, we've started thinking, because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.

Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper and a blog or whatever.

Swyx [00:45:22]: The AlphaCodium GitHub, and we'll put all this in the show notes.

Itamar [00:45:25]: Yeah. And even the new results of O1, we published them.

Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, there's true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.

Itamar [00:46:02]: I have something to share about the open source. Most of our tools have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case there is, in my opinion, a relatively good strategy, where a lot of the parts are open source, but then you have the deployment and the environment, which is not, right? If I get it correctly. And then there's a clear, almost Hugging Face model.
Yeah, you can do that, but why should you try to deploy it yourself? Deploy it with us. But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are, I wanted to ask you, for example, about some of them. In our case, one day we looked at one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread over the IDE to Git. And for each agent, we have a few startups or big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you: Bolt is doing so well, and then you open source it. So I think I know what my answer was, I gave it before, but it's still interesting to hear what you think.

Eric [00:47:29]: GeoHot said back, I don't know what he was up to at this exact moment, but I think on Comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there are kind of strategic things you have to keep in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, can really make a lot of sense. Because it is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right?
I think it's kind of healthy because it keeps, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality soon, right? And actually feel that incrementally so you can kind of adjust course. And that's for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at and the code is open source to this stuff. What's great about these, what's not. So anyways, NetNet, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer going. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. You have to make cores bypass requests, like connected databases, which all require server-side proxying or acceleration. 
And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think there are going to be a lot more of these in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts in Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm, in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, lots of things being announced, with folks kind of adopting it. But yeah, I think effectively...

Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.

Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people were not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...

Swyx [00:51:16]: What's your advice as a founder?

Eric [00:51:18]: You're right. And so going into it, candidly, we were like, Bolt.new, this thing is super cool. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind-blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business, because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever.
We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as OpenAI or Anthropic, where what ChatGPT is to OpenAI's APIs, Bolt is to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now the revenue side is extremely lopsided toward Bolt.

Itamar [00:52:16]: I think if you ask me what's my advice, I think you have three options. One is to focus on Bolt. The other is to focus on WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money, and I think it's big bucks, because you're going to be chased by competitors. And I think it will be challenging to do both. And maybe you can, I don't know. We do see these numbers right now, raising above $100 million, even without having a product. You can see these.

Eric [00:52:49]: It's excellent advice. And I think what's been amazing, but also kind of challenging, is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think us and all the investors in the company that we're sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now. Most things, you have the kind of TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se
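The 96 deployment options Itamar mentions in this conversation are just the cross-product of a handful of independent dimensions. A toy sketch of that counting argument; the dimension names and values below are illustrative assumptions (not Qodo's actual configuration), chosen so the product happens to come out to 96:

```python
from itertools import product

# Hypothetical deployment dimensions, loosely modeled on the ones discussed:
# Git provider, model source, network isolation, and cloud.
dimensions = {
    "git": ["github", "gitlab", "bitbucket", "on-prem"],
    "models": ["vendor-api", "self-hosted"],
    "network": ["saas", "vpc", "air-gapped"],
    "cloud": ["aws", "azure", "gcp", "on-prem"],
}

# Every supported combination is one deployment "option" that has to be
# built, tested, and kept working.
options = list(product(*dimensions.values()))
print(len(options))  # 4 * 2 * 3 * 4 = 96
```

The point of the sketch is how fast the matrix grows: adding one more binary dimension (say, Kubernetes vs. not) would double the count to 192.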

The MongoDB Podcast
EP. 248 Stream Processing Simplified: Joe Niemiec on MongoDB's Latest Innovations

The MongoDB Podcast

Play Episode Listen Later Nov 29, 2024 10:27


Join us for an insightful conversation with Joe Niemiec, Senior Product Manager for Streaming at MongoDB, recorded live at MongoDB Local London. In this video, Joe explains the fundamentals of stream processing and how it empowers developers to run continuous aggregation queries on real-time data. Discover practical use cases, including monitoring oil well pumps and smart grid applications, that showcase the power of stream processing in various industries. Joe also discusses the latest enhancements, such as expanded regional support, VPC peering for secure connections, and improved Kafka integration. Whether you're new to MongoDB or looking to enhance your data processing capabilities, this video is packed with valuable information to help you get started with stream processing!
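The "continuous aggregation" idea Joe describes can be pictured as fixed (tumbling) windows over a stream of sensor readings. Below is a toy sketch of the concept in plain Python, not MongoDB's actual stream-processing API; the pump-pressure readings are made up:

```python
from collections import defaultdict

def tumbling_window_avg(events, window_secs):
    """Group (timestamp, value) readings into fixed windows and average each.

    A stream processor does this incrementally as events arrive; here we
    fold over a finite list to keep the example self-contained.
    """
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[ts // window_secs].append(value)
    return {w * window_secs: sum(v) / len(v) for w, v in sorted(buckets.items())}

# Simulated pump-pressure readings: (epoch seconds, psi)
readings = [(0, 100.0), (5, 102.0), (12, 98.0), (17, 96.0)]
print(tumbling_window_avg(readings, 10))  # {0: 101.0, 10: 97.0}
```

In a real deployment the aggregation runs continuously over an unbounded stream (e.g. from Kafka), emitting each window's result as soon as the window closes.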

AWS Bites
136. 20 Amazing New AWS Features

AWS Bites

Play Episode Listen Later Nov 29, 2024 17:39


In this pre-re:Invent 2024 episode, Luciano and Eoin discuss some of their favorite recent AWS announcements, including improvements to AWS Step Functions, Lambda runtime updates, DynamoDB price reductions, ALB header injection, Cognito enhancements, VPC public access blocking, and more. They share their thoughts on the implications of these new capabilities and look forward to seeing what else is announced at the conference. Overall, it's an exciting time for AWS developers with many new features to explore. Very important: no focus on GenAI in this episode :) AWS Bites is brought to you, as always, by fourTheorem! Sometimes, AWS is overwhelming and you might need someone to provide clear guidance in the fog of cloud offerings. That someone is fourTheorem. Check them out at ⁠fourtheorem.com⁠ In this episode, we mentioned the following resources: The repo containing the code of the AWS Bites website: https://github.com/awsbites/aws-bites-site Orama Search: https://orama.com/ JSONata in AWS Step Functions: https://aws.amazon.com/blogs/compute/simplifying-developer-experience-with-variables-and-jsonata-in-aws-step-functions/ EC2 Auto Scaling improvements: https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-ec2-auto-scaling-highly-responsive-scaling-policies/ Node.js 22 is available for Lambda: https://aws.amazon.com/blogs/compute/node-js-22-runtime-now-available-in-aws-lambda/ Python 3.13 runtime: https://aws.amazon.com/blogs/compute/python-3-13-runtime-now-available-in-aws-lambda/ Aurora Serverless V2 now scales to 0: https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-aurora-serverless-v2-scaling-zero-capacity/ Episode 95 covering Mountpoint for S3: https://awsbites.com/95-mounting-s3-as-a-filesystem/ One Zone caching for Mountpoint for S3: https://aws.amazon.com/about-aws/whats-new/2024/11/mountpoint-amazon-s3-high-performance-shared-cache/ Appending to S3 objects: 
https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-append.html 1 million S3 Buckets per account: https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3-up-1-million-buckets-per-aws-account/ DynamoDB cost reduction: https://aws.amazon.com/blogs/database/new-amazon-dynamodb-lowers-pricing-for-on-demand-throughput-and-global-tables/ ALB Headers: https://aws.amazon.com/about-aws/whats-new/2024/11/aws-application-load-balancer-header-modification-enhanced-traffic-control-security/ Cognito Managed Login: https://aws.amazon.com/blogs/aws/improve-your-app-authentication-workflow-with-new-amazon-cognito-features/ Cognito Passwordless Authentication: https://aws.amazon.com/blogs/aws/improve-your-app-authentication-workflow-with-new-amazon-cognito-features/ VPC Block Public Access: https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-block-public-access/ Episode 88 where we talk about VPC Lattice: https://awsbites.com/88-what-is-vpc-lattice/ Direct integration between Lattice and ECS: https://aws.amazon.com/blogs/aws/streamline-container-application-networking-with-native-amazon-ecs-support-in-amazon-vpc-lattice/ Resource Control Policies: https://aws.amazon.com/blogs/aws/introducing-resource-control-policies-rcps-a-new-authorization-policy/ Episode 23 about EventBridge: https://awsbites.com/23-what-s-the-big-deal-with-eventbridge/ EventBridge latency improvements: https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-eventbridge-improvement-latency-event-buses/ AppSync web sockets: https://aws.amazon.com/blogs/mobile/announcing-aws-appsync-events-serverless-websocket-apis/ Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on X/Twitter: - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://twitter.com/eoins⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ - ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://twitter.com/loige⁠⁠⁠⁠

DMRadio Podcast
Responsible Enterprise RAG

DMRadio Podcast

Play Episode Listen Later Nov 14, 2024 51:58


Building AI Assistants and Agents for mission-critical enterprise applications presents challenges related to accuracy, security, and explainability. An effective end-to-end Retrieval Augmented Generation (RAG) platform can mitigate AI "hallucinations," reduce bias, and ensure high-precision, contextually accurate outputs. This enhances compliance with copyright standards and delivers consistently trustworthy results. Robust security measures, including access controls, defenses against prompt injection attacks, and adherence to SOC2 and HIPAA compliance requirements, are essential for protecting sensitive data. Additionally, explainability for results and actions aids compliance, simplifies troubleshooting, and builds trust in AI-driven applications. Such a platform should offer sophisticated capabilities, extending beyond basic vector search by combining semantic and lexical retrieval, advanced reranking, real-time updates, and flexible user-defined functions, while remaining easy to use through a simple API. This empowers developers to build high-performing AI-driven applications without needing deep expertise in complex RAG systems. By integrating a plug-and-play RAG solution, enterprises can lower costs, simplify custom solutions, and benefit from ongoing model and infrastructure updates, all while maintaining compliance and scalability through flexible deployment options like SaaS, on-premises, or VPC hosting. Learn more on this episode of DM Radio!
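The description above mentions combining semantic and lexical retrieval with reranking. A minimal sketch of hybrid scoring, using a toy term-overlap score in place of BM25 and a precomputed list in place of embedding similarities; the documents, scores, and weight are all illustrative:

```python
def lexical_score(query, doc):
    # Fraction of query terms that appear in the document (stand-in for BM25).
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, docs, semantic_scores, alpha=0.5):
    """Blend a (precomputed) semantic similarity with lexical overlap.

    semantic_scores[i] stands in for an embedding cosine similarity;
    alpha controls the semantic/lexical balance.
    """
    scored = [
        (alpha * semantic_scores[i] + (1 - alpha) * lexical_score(query, doc), doc)
        for i, doc in enumerate(docs)
    ]
    return [doc for _, doc in sorted(scored, reverse=True)]

docs = ["VPC peering setup guide", "Cooking with cast iron", "Hosting RAG in a VPC"]
ranked = hybrid_rank("vpc hosting", docs, semantic_scores=[0.6, 0.1, 0.9])
print(ranked[0])  # "Hosting RAG in a VPC"
```

Production RAG platforms add a separate reranking stage on top of this first-pass blend, but the core idea, fusing two complementary relevance signals, is the same.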

AWS Podcast
#689: Diving Deep into AWS Transit Gateway

AWS Podcast

Play Episode Listen Later Oct 14, 2024 28:31


In this episode, Simon Elisha and Brett Looney dive deep into the AWS Transit Gateway, a cloud-scale router that connects VPCs and other networking resources in AWS. They explain how Transit Gateway works, its advantages over VPC peering, and its scalability and resilience. They also touch on concepts like attachments, route tables, and VRF (virtual routing and forwarding). The conversation highlights the benefits of Transit Gateway for both experienced network administrators and newcomers to networking in the cloud. https://aws.amazon.com/transit-gateway
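One scalability point from the episode, Transit Gateway versus VPC peering, comes down to simple counting: a full mesh of peered VPCs needs a connection for every pair, while a Transit Gateway hub needs one attachment per VPC. A quick back-of-the-envelope check:

```python
def peering_connections(n_vpcs):
    # Full mesh: every VPC pair needs its own peering connection.
    return n_vpcs * (n_vpcs - 1) // 2

def tgw_attachments(n_vpcs):
    # Hub-and-spoke: one attachment per VPC to the Transit Gateway.
    return n_vpcs

for n in (5, 20, 100):
    print(n, peering_connections(n), tgw_attachments(n))
# 5 -> 10 vs 5; 20 -> 190 vs 20; 100 -> 4950 vs 100
```

The quadratic growth of the mesh is why peering becomes unmanageable well before 100 VPCs, and why a hub-and-spoke router like Transit Gateway exists.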

Hipsters Ponto Tech
Por Dentro da AWS e Amazon.com.br – Hipsters Ponto Tech #432

Hipsters Ponto Tech

Play Episode Listen Later Oct 8, 2024 37:48


Today is a day to talk about the cloud! In this episode, we explore the surprising relationship between AWS and Amazon Brazil, and the important questions around sizing, scalability and, of course, security when it comes to the cloud. Here's who joined the conversation: André David, the host who keeps an ear out for keywords; Vinny Neves, co-host and Tech Lead at UsTwo; Bruno Toffolo, Principal Software Development Engineer at Amazon; and Gaston Perez, Principal Solutions Architect at AWS.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023, and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation. However the larger venue and more spread out schedule did allow a lot more hallway conversations with attendees as well as more community presentations including our recent guest Alistair Pullen of Cosine as well as deeper dives from OpenAI including our recent guest Michelle Pokrass of the API Team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full lightly edited Q&A with Sam Altman.

Show notes and related resources (some of these used in the final audio episode below):
* Simon Willison Live Blog
* swyx live tweets and videos
* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session
* Fireside Chat Q&A with Sam Altman

Timestamps:
* [00:00:00] Intro by Suno.ai
* [00:01:23] NotebookLM Recap of DevDay
* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling
* [00:19:16] Olivier Godement, Head of Product, OpenAI
* [00:36:57] Romain Huet, Head of DX, OpenAI
* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison
* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)
* [01:18:31] Sam Altman + Kevin Weill Q&A
* [02:03:07] Notebook LM Recap of Podcast

Transcript
[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT-4o in flight. Fine tune the future, data in sight.
Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there, stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral NotebookLM Deep Dive crew, my new AI podcast nemesis, give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here, all about OpenAI's Dev Day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM: Seems like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI.
Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. 
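The tool-dispatch loop the hosts are describing — the model emits a structured call, the app executes it and feeds the result back — can be sketched in a few lines. Everything here (the tool name, the venue data, the wire format) is invented for illustration; the real Realtime API has its own event schema.

```python
# Minimal sketch of app-side function calling: the model requests a
# lookup, the app runs it and packages the result to send back.
# All names and data below are illustrative, not the real API.

# A toy "database" standing in for a real lookup service.
NEARBY = {"Fort Mason": ["Greens Restaurant", "The Interval", "Radhaus"]}

def get_nearby_restaurants(location: str) -> list[str]:
    """Toy tool: look up restaurants near a venue."""
    return NEARBY.get(location, [])

# Registry mapping tool names (as declared to the model) to handlers.
TOOLS = {"get_nearby_restaurants": get_nearby_restaurants}

def dispatch(call: dict) -> dict:
    """Run a model-requested tool call and package the result."""
    handler = TOOLS[call["name"]]
    result = handler(**call["arguments"])
    return {"tool": call["name"], "result": result}

# Simulate the model asking for a lookup, as in the travel-agent demo.
reply = dispatch({"name": "get_nearby_restaurants",
                  "arguments": {"location": "Fort Mason"}})
```

The point is that the model never touches the database itself; it only names a tool and arguments, and the app stays in control of execution.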
And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? 
And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] NotebookLM: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic.
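The "frequent buyer program" analogy for prompt caching above can be made concrete with a toy cost model. The 50% discount and the word-count "tokenizer" are simplifications for the sketch, not OpenAI's actual billing logic.

```python
# Toy model of prompt caching: a repeated prompt is billed at a
# discount. All numbers here are illustrative, not real pricing.

CACHE_DISCOUNT = 0.5     # cached tokens billed at half price (toy rate)
PRICE_PER_TOKEN = 1.0    # arbitrary unit price for the sketch

_seen_prompts: set[str] = set()

def billed_cost(prompt: str) -> float:
    """Return the cost of a prompt, discounted if it was seen before."""
    tokens = len(prompt.split())   # crude stand-in for real tokenization
    if prompt in _seen_prompts:
        return tokens * PRICE_PER_TOKEN * CACHE_DISCOUNT
    _seen_prompts.add(prompt)
    return tokens * PRICE_PER_TOKEN

system_prompt = "You are a helpful travel agent for Fort Mason"
first = billed_cost(system_prompt)    # full price on first use
second = billed_cost(system_prompt)   # discounted on the repeat
```

The real feature works on prompt *prefixes* rather than whole prompts, which is why stable system prompts benefit the most.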
Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different approach.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] NotebookLM: Wait, structured outputs? Didn't we just talk about that?[00:07:41] NotebookLM 2: We did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you.
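The point of structured outputs — constrain the model to a schema so the app can parse replies mechanically — can be illustrated with a hand-rolled validator. The real feature enforces a JSON Schema on the server side; the field list below is made up for the sketch.

```python
import json

# The shape we want every model reply to conform to (illustrative).
SCHEMA = {"name": str, "price_usd": float, "in_stock": bool}

def parse_structured(reply: str) -> dict:
    """Parse a model reply and check it against the expected schema."""
    data = json.loads(reply)
    for field, ftype in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"bad type for {field}")
    return data

# A well-formed reply parses straight into a dict the app can use.
reply = '{"name": "chocolate strawberries", "price_usd": 1415.92, "in_stock": true}'
order = parse_structured(reply)
```

With server-side enforcement the malformed case ideally never happens; the app-side check is still a useful backstop.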
It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, and Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful about how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it.
At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Bigio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered strawberries.[00:10:15] Ilan: Wait, how much would that be? I think that'll be around, like, 1,415.92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Latent Space. Oh, thank you.
I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was the building on demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. 
Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. 
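The streaming event flow Ilan describes — a call starts, argument deltas arrive, then a done event, plus a convenience event carrying the complete call — amounts to a small accumulator on the client. The event names below are invented stand-ins, not the actual Realtime API names.

```python
# Sketch of folding streamed function-call events into one call.
# Event names are illustrative, not the real Realtime API names.

def accumulate(events: list[dict]) -> dict:
    """Fold streaming events into a completed function call."""
    name, args = None, []
    for ev in events:
        if ev["type"] == "function_call.started":
            name = ev["name"]
        elif ev["type"] == "function_call.argument_delta":
            args.append(ev["delta"])          # partial JSON text
        elif ev["type"] == "function_call.done":
            return {"name": name, "arguments": "".join(args)}
    raise ValueError("stream ended without a done event")

# A toy stream, echoing the strawberry-order demo.
stream = [
    {"type": "function_call.started", "name": "place_order"},
    {"type": "function_call.argument_delta", "delta": '{"quantity": '},
    {"type": "function_call.argument_delta", "delta": '400}'},
    {"type": "function_call.done"},
]
call = accumulate(stream)
```

As Ilan notes, the API also emits one event with the fully assembled call, so in practice you can skip the delta bookkeeping and just use that.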
Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool, fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then after every function call was also invoked invoking like a generation. And so that is another difference here. It's like it's inherently almost like in a loop, be just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played with that too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome.
I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, you can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on ChatGPT. Yeah.[00:16:29] swyx: There will never be a case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well.
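The "command architecture" idea — suppress audio output and force every model turn to be a function call, so speech becomes reliable UI control — might look like this on the app side. The session flags and command names here are hypothetical, not confirmed API surface.

```python
# Sketch of voice-driven UI control. The session is configured
# (hypothetically) for text/function output only, so every model
# turn must be a command the app can execute deterministically.

SESSION_CONFIG = {
    "modalities": ["text"],        # hypothetical flag: no audio responses
    "tool_choice": "required",     # hypothetical flag: always emit a call
}

# The app's command surface, keyed by the tool names declared to the model.
UI_COMMANDS = {
    "scroll": lambda direction: f"scrolled {direction}",
    "open_tab": lambda url: f"opened {url}",
}

def handle_turn(call: dict) -> str:
    """Execute the forced function call as a UI command."""
    command = UI_COMMANDS[call["name"]]
    return command(**call["arguments"])

# The user says "scroll down"; with audio suppressed, the model can
# only respond with a function call like this one.
result = handle_turn({"name": "scroll", "arguments": {"direction": "down"}})
```

The reliability win Ilan describes comes from removing the free-form persona entirely: the model's only output channel is a call into a closed command set.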
Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: Prompt caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, Anthropic caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.5 and got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed.
I'm sure that was part of you and Roman, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire API team for like debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX team.[00:19:07] swyx: Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The Latent Space crew then talked to Olivier Godement, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godement. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day?[00:19:43] Olivier Godement: It essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.
To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the whole in 2022, when like, you know, like, people were trying to build a chat GPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case that can be enabled. 
And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had prompt caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. They were all fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigations worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency?
We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Stripe, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? Because when you do evals and fine tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that pit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily.
For vision let's say I'm like trying to, one of your examples was Grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] Olivier Godement: For sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know, testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right?
Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like be off the shelf, like GPT-4o essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have[00:26:15] Olivier Godement: an automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. But I think, like, that evaluation and distillation product are essentially a first good step in that direction. It's like, hey, it's you. I set it by that direction, and you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples, like, your bill will get increased, like, you know, pretty pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one.
Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the tasks that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of two birds, one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLMs, building with LLMs, is like, yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually build evals, understand, like, all the corner cases, and you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust, which I think is one of your partners. Mm-Hmm. They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: Yours is kind of like a low code experience, right, for evals.
Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like, evals. But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data, we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, it doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, caching or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whenever I talk to developers, I feel like good models are just half of the story to build a good app. There's a ton more you need to do. Evaluation is the perfect example. Like, you know, you can have the best model in the world, but if you're in the dark, like, you know, it's really hard to gain the confidence. And so our philosophy is[00:30:11] Olivier Godement: the whole, like, software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that OpenAI can build everything. Like there is just too much to build, frankly.
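The code-based eval idea floated here can be made concrete with a tiny harness. This is a hypothetical sketch, not OpenAI's eval platform: `run_eval` just scores a completion function against labeled samples, and the stub completer stands in for a real model call (e.g. a wrapper around a chat completions request).

```python
# Minimal code-based eval harness: score a completion function on labeled data.
# Hypothetical sketch -- the dataset and exact-match grader are stand-ins, and
# `complete` would normally wrap a real model call.
from typing import Callable

def run_eval(dataset: list[dict], complete: Callable[[str], str]) -> float:
    """Return exact-match accuracy of `complete` over (prompt, expected) pairs."""
    correct = 0
    for sample in dataset:
        answer = complete(sample["prompt"]).strip().lower()
        if answer == sample["expected"].strip().lower():
            correct += 1
    return correct / len(dataset)

if __name__ == "__main__":
    # Stub completer standing in for a real model.
    fake_model = lambda prompt: "paris" if "France" in prompt else "unknown"
    dataset = [
        {"prompt": "Capital of France?", "expected": "Paris"},
        {"prompt": "Capital of Japan?", "expected": "Tokyo"},
    ]
    print(run_eval(dataset, fake_model))  # 0.5: one right, one wrong
```

Swapping the stub for a real completion call turns the same loop into the regression check Olivier describes: run it against every new model snapshot and compare scores.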
And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, our evaluation, because we think that it actually makes sense to have like in one spot, Like, you know, all of that. Like, there is some sort of virtual circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked well Well, I mean, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense. 
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly your first and make it feel, like, you know, an actual good, like, you know, product experience. 
So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one people, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The model spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script, I want to say, like, vulgar words, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically there will always be, like, you know, a set of behaviors that we will, you know, just like forbid, frankly, because they're illegal or against our terms of service. But then there will be, like, you know, some more, like, risky, like, themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose, like, controllable, like, safety, like, knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the direction. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah.[00:34:35] swyx: 'Cause right now it's you, it is whatever you decide. And then it's, that's it.
That, that, that would be the main reason I don't use OpenAI voice is because of[00:34:42] Olivier Godement: it's over policed. Over refuses, over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching, you know, what feedback do you want? Any, anything specific as well, like from, especially from today, anything that you are unsure about, that you are like, our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the OpenAI direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10,000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so, is O1 smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback.
Essentially, what I would love to get from developers, I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that, usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate you guys putting this on.
I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that, you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a GPT app to get the plan with one, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for, you know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah.
I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor, like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like ChatGPT side by side. But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up Xcode, I don't know anything about iOS programming. I don't know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT-4 do some of that? And it certainly can. But I think it will just start spitting out code, right?
And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is Mini or Preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If, for your kind of brainstorming or your kind of science part, you need some broader knowledge, then reaching for O1 Preview is better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some WebSockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice?
And so with this new demo, what I showed was, like... So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service, like, center that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know, you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's WebSockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, are there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools.
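To make the "sending sockets back and forth" picture slightly more concrete, here is a rough sketch of the client-side payloads for the realtime API. The `session.update` and `response.create` event types and the tool schema follow the beta announcement as I understand it, so treat the exact field names as illustrative; the `place_call` tool is hypothetical, standing in for the Twilio demo above.

```python
# Sketch of client-to-server events for a realtime WebSocket session.
# No network code here -- just the JSON payloads a client would send.
import json

def session_update(instructions: str, tools: list[dict]) -> str:
    """Configure the session: system-style instructions plus callable tools."""
    return json.dumps({
        "type": "session.update",
        "session": {"instructions": instructions, "tools": tools},
    })

def response_create() -> str:
    """Ask the server to start generating a response (text and/or audio)."""
    return json.dumps({"type": "response.create"})

# Hypothetical tool, standing in for the phone-call function in the demo.
place_call = {
    "type": "function",
    "name": "place_call",
    "description": "Place an outbound phone call",
    "parameters": {
        "type": "object",
        "properties": {"number": {"type": "string"}},
        "required": ["number"],
    },
}

event = session_update("You are a travel assistant.", [place_call])
```

A client would send `event` right after the WebSocket opens, then `response_create()` after each user turn; server-to-client traffic (audio deltas, tool-call events) comes back on the same socket.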
But we think that for instance, if you want to have something very robust on your client side, maybe you want to have WebRTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like LiveKit and Agora if you want to, if you want to use them. And I'm sure we'll have many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the ChatGPT app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and reference clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LiveKit and Agora, so they also have their own, like, ways to help you get started that plug natively with the real time API. So depending on the use case, people can decide what to use. If you're working on something that's completely client side or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API endpoint or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think Dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is in the spotlight.
That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know, we want to learn from them. Most of what we shipped this morning, including things like prompt caching, the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback, from high level feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes, that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us to ship the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps where, you know, you, you want, like, low latency, low cost, but high performance, high quality on the use case, distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from four oh to four oh mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case, inspired by an email client. Yeah, this was really good. Cool, man! Thanks so much for having me.
Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the voice mode API we discussed in her episode, now that it is finally announced. She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day, returning guest Michelle and then returning guest co host Fork.[00:48:03] Alessio: Fork, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using, like, GPT-4, which wrote me the JavaScript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a Cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into Cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Claude and ChatGPT. Yeah.
It's interesting.[00:48:39] swyx: Yeah. I've converted to Cursor and O1 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code, Copilot, so yep, same here. Team Copilot. Copilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and may the best fork win.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hackathon for the API team.
And some folks built like a little hack that you could use to, like, use Vim with voice mode, so like, control Vim, and you would tell it, like, nice, write a file, and it would, you know, know all the Vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like a WebGL solar system, you could talk to. That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably
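One technical aside on the distillation tooling discussed earlier: the core workflow is logging a larger model's completions and reusing them as supervised fine-tuning data for a smaller one. A minimal data-prep sketch, with the teacher outputs stubbed and the JSONL shape following the standard chat fine-tuning format:

```python
# Distillation data prep: turn (prompt, teacher_answer) pairs into
# fine-tuning JSONL, one chat transcript per line.
import json

def to_finetune_jsonl(pairs: list[tuple[str, str]], system: str) -> str:
    """Render teacher completions as supervised fine-tuning examples."""
    lines = []
    for prompt, answer in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},  # teacher output
            ]
        }))
    return "\n".join(lines)

# Stub teacher outputs; in practice these would be logged completions
# from the larger model on your real traffic.
teacher_outputs = [
    ("Summarize: the API now supports prompt caching.", "Prompt caching is live."),
    ("Summarize: evals can run on stored completions.", "Evals run on logs."),
]
jsonl = to_finetune_jsonl(teacher_outputs, "You are a concise summarizer.")
```

The resulting file is what you would upload for a fine-tuning job on the smaller model; running an eval over the same prompts is then what tells you how much quality the student gave up.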

AWS Morning Brief
The HPC Team Starts a War

Sep 30, 2024 · 5:18


AWS Morning Brief for the week of September 30, with Corey Quinn. Links:
- Amazon Aurora MySQL now supports RDS Data API
- Introducing Amazon EC2 C8g and M8g Instances
- Amazon EC2 Instance Connect now supports IPv6
- Amazon SNS now delivers SMS text messages via AWS End User Messaging
- AWS CloudFormation Git sync now supports pull request workflows to review your stack changes
- AWS CloudTrail launches network activity events for VPC endpoints (preview)
- AWS Serverless Application Repository now supports AWS PrivateLink
- AWS announces general availability for Security Group Referencing on AWS Transit Gateway
- Generative AI Cost Optimization Strategies
- Customizing your HPC environment: building AMIs for AWS Parallel Computing Service
- Making traffic lights more efficient with Amazon Rekognition
- Whose contract is it anyway? How AWS Marketplace works
- Switch your file share access from Amazon FSx File Gateway to Amazon FSx for Windows File Server

InfluenceWatch Podcast
Episode 332: Voter Participation, Unless You Like NASCAR

Sep 6, 2024 · 28:16


The Internal Revenue Service rule regarding charities in voter registration is explicit: “voter education or registration activities with evidence of bias that (a) would favor one candidate over another; (b) oppose a candidate in some manner; or (c) have the effect of favoring a candidate or group of candidates, will constitute prohibited participation or intervention.” Now questions are being raised about whether the Voter Participation Center, a charitable organization that is part of the left's “voting machine,” is giving “evidence of bias” after the Washington Free Beacon revealed through Meta Platforms' advertising disclosure tools that VPC was excluding fans of “NASCAR, golf, Jeeps, or other interests and hobbies typically associated with Republican men” from seeing its voter-registration advertisements. Here to discuss this and other stories from the world of dubiously nonpartisan civic engagement are CRC president Scott Walter and Parker Thayer, author of CRC's special report on “How Charities Secretly Help Win Elections.”
Links:
- Do You Like NASCAR? This 'Non-partisan' Voter Registration Group Doesn't Want To Help You
- Voter Participation Center (VPC)
- How Charities Secretly Help Win Elections
- The Voter Participation Center: A Tax-Exempt Turnout Machine for Democrats
Follow us on our socials:
- Twitter: @capitalresearch
- Instagram: @capitalresearchcenter
- Facebook: www.facebook.com/capitalresearchcenter
- YouTube: @capitalresearchcenter

Association RevUP
Ep 5: RevUP Corporate Connections

Association RevUP

Play Episode Listen Later Jun 24, 2024 23:54


“Maybe you don't always think of yourself this way, but associations are authority.”

When associations operate with that mindset, corporate outreach becomes less of an "ask" and more of a business opportunity. In Episode 5, Lori Zoss Kraska (Growth Owl, LLC) shares how association business teams can effectively connect with Fortune 1000 companies to develop partnerships, including:

- Why corporate partners matter for your association
- Who the decision makers are and how to connect
- Tips for structuring your outreach
- 3 types of partnerships that are increasingly intriguing to corporations

Then, Josh Slayton (Holmes Corporation) shares a real-world example of how associations and corporations are partnering alongside higher ed to impact the nation's labor shortage, all while increasing non-dues revenue.

Additional Links:
- Gallup workforce research
- Growth Owl, LLC
- Holmes Corporation
- RevUP Summit 2024 (PARPOD code to register)
- VPC, Inc: Elevate your next in-person event!

HODINKEE Podcasts
New Watches From Audemars Piguet And More; A Visit To The Patek Philippe Museum

HODINKEE Podcasts

Play Episode Listen Later Jun 6, 2024 48:38


This week, we discuss a few new releases on our desks from VPC and Anoma, which leads to a short discussion of the growing small-brand scene. We also jump into our thoughts on the new Audemars Piguet [Re]Master02. Finally, we talk about brand exhibits and spend some time chatting about a recent visit to the Patek Philippe Museum in Geneva, and what made our jaws drop at the mecca of museums.

Show Notes
- 2:50: Hands-On: The VPC Type 37HW
- 3:00: Series on Fratello about the development of VPC
- 7:30: Anoma Watch (hands-on review to come)
- 8:20: Amida Digitrend
- 11:40: The B/1 Is An Audaciously Designed Watch By Newcomer Toledano & Chan
- 15:05: Introducing: The Audemars Piguet [RE]Master02
- 17:25: AP Chronicles on the original ref. 5159
- 25:00: Introducing: The Audemars Piguet Mini Royal Oak
- 29:00: AP Chronicles 20mm Royal Oak from 1997
- 37:35: Patek Philippe Museum
- 44:30: Raúl Pagès Turns The Page With His Old-School Régulateur à Détente
- 45:30: Steel Patek Philippe 96 Sector Dial at Sotheby's
- 46:30: The 'Imperial Patek Philippe' Owned By The Last Emperor Of China Sells For $6.2 Million
- 47:45: Buying, Selling, Collecting: A Rare, Complicated Patek Calatrava You Might Not Have Known About Is Coming Up For Auction

Whiskey&Watches
Episode 193: Buzzy Reactions to Watches&Wonders

Whiskey&Watches

Play Episode Listen Later Apr 23, 2024 52:54


Yes, we are riffing on the title from our friends at Spirit of Time. We have the original Buzz though, so it is fair. Spence and Buzz talk about W&W and the VPC Type 37 HW, among other shenanigans. Enjoy!