Podcasts about Datadog

Monitoring platform for cloud applications

  • 446 podcasts
  • 1,116 episodes
  • 41m avg duration
  • 5 new episodes weekly
  • Latest: Jul 15, 2025
Datadog

Popularity trend: 2017–2024

Best podcasts about Datadog


Latest podcast episodes about Datadog

Develpreneur: Become a Better Developer and Entrepreneur
What Happens When Software Fails? Tools and Tactics to Recover Fast

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Jul 15, 2025 26:32


In this episode of Building Better Developers with AI, Rob Broadhead and Michael Meloche revisit a popular question: What Happens When Software Fails? Originally titled When Coffee Hits the Fan: Developer Disaster Recovery, this AI-enhanced breakdown explores real-world developer mistakes, recovery strategies, and the tools that help turn chaos into control. Whether you're managing your first deployment or juggling enterprise infrastructure, you'll leave this episode better equipped for the moment when software fails.

When Software Fails and Everything Goes Down
The podcast kicks off with a dramatic (but realistic) scenario: CI passes, coffee is in hand, and then production crashes. While that might sound extreme, it's a situation many developers recognize. Rob and Michael cover some familiar culprits:
- Dropping a production database
- Misconfigured cloud infrastructure costing hundreds overnight
- Accidentally publishing secret keys
- Over-provisioned "default" environments meant for enterprise use
Takeaway: Software will fail. Being prepared is the difference between a disaster and a quick fix.

Why Software Fails: Avoiding Costly Dev Mistakes
Michael shares an all-too-common situation: connecting to the wrong environment and running production-breaking SQL. The issue wasn't the code; it was the context. Here are some best practices to avoid accidental failure:
- Color-code terminal environments (green for dev, red for prod)
- Disable auto-commit in production databases
- Always preview changes with a SELECT before running DELETE or UPDATE
- Back up databases or individual tables before making changes
These simple habits can save hours, or days, of cleanup.
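The preview-then-delete habit can be sketched in code. This is a minimal illustration (not from the episode) using Python's built-in sqlite3; the table, column, and row values are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real database
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 1), (2, 0), (3, 0)])

# 1. Preview: run a SELECT with the same WHERE clause first
to_delete = conn.execute(
    "SELECT COUNT(*) FROM users WHERE active = 0").fetchone()[0]
print(f"About to delete {to_delete} rows")

# 2. Back up the table before changing it
conn.execute("CREATE TABLE users_backup AS SELECT * FROM users")

# 3. Run the destructive statement inside an explicit transaction
try:
    with conn:  # commits on success, rolls back on exception
        cur = conn.execute("DELETE FROM users WHERE active = 0")
        assert cur.rowcount == to_delete, "deleted a different number of rows than previewed"
except Exception:
    print("Rolled back; restore from users_backup if needed")
```

Wrapping the destructive statement in an explicit transaction means a failed sanity check rolls everything back instead of half-applying, which is the point of disabling auto-commit in production.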
How to Recover When Software Fails
Rob and Michael outline a reliable recovery framework that works in any team or tech stack:
- Monitoring and alerts: Tools like Datadog, Prometheus, and Sentry help detect issues early
- Rollback plans: Scripts, snapshots, and container rebuilds should be ready to go
- Runbooks: Documented recovery steps prevent chaos during outages
- Postmortems: Blameless reviews help teams learn and improve
- Clear communication: Everyone on the team should know who's doing what during a crisis
Pro Tip: Practice disaster scenarios ahead of time. Simulations help ensure you're truly ready.

Essential Tools for Recovery
Tools can make or break your ability to respond quickly when software fails. Rob and Michael recommend:
- Docker & Docker Compose for replicable environments
- Terraform & Ansible for consistent infrastructure
- GitHub Actions, GitLab CI, Jenkins for automated testing and deployment
- Chaos engineering tools like Gremlin and Chaos Monkey
- Snapshot and backup automation to enable fast data restoration
Michael emphasizes: containers are the fastest way to spin up clean environments, test recovery steps, and isolate issues safely.

Mindset Matters: Staying Calm When Software Fails
Technical preparation is critical, but so is mindset. Rob notes that no one makes smart decisions in panic mode. Having a calm, repeatable process in place reduces pressure when systems go down. Cultural and team-based practices:
- Use blameless postmortems to normalize failure
- Avoid root access in production whenever possible
- Share mistakes in standups so others can learn
- Make local environments mirror production using containers
Reminder: Recovery is a skill, one you should build just like any feature. Think you're ready for a failure scenario? Prove it.
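None of this replaces a real monitoring stack like the ones named above, but the detect-early idea can be sketched. A hypothetical health probe with a consecutive-failure threshold (the URL, timeout, and threshold are assumptions for illustration, not from the episode):

```python
import urllib.request
import urllib.error

def check_health(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def should_alert(history: list, threshold: int = 3) -> bool:
    """Alert only after `threshold` consecutive failures, to avoid flapping alerts."""
    return len(history) >= threshold and not any(history[-threshold:])

# Simulated probe results: two transient blips recover, then a real outage
history = [True, False, True, False, False, False]
result = should_alert(history)
print(result)
```

Real monitoring tools add the parts this sketch skips: scheduling, alert routing, deduplication, and dashboards.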
This week, simulate a software failure in your development environment:
- Turn off a service your app depends on
- Delete (then restore) a local database from backup
- Use Docker to rebuild your environment from scratch
- Trigger a mock alert in your monitoring tool
Then answer these questions: How fast can you recover? What broke that you didn't expect? What would you do differently in production? Recovery isn't just theory; it's a skill you build through practice. Start now, while the stakes are low.

Final Thought
Software fails. That's a reality of modern development. But with the right tools, smart workflows, and a calm, prepared team, you can recover quickly, and even improve your system in the process. Learn from failure. Build with resilience. And next time something breaks, you'll know exactly what to do.

Stay Connected: Join the Develpreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
- System Backups – Prepare for the Worst
- Using Dropbox To Provide A File Store and Reliable Backup
- Testing Your Backups – Disaster Recovery Requires Verification
- Virtual Systems On A Budget – Realistic Cloud Pricing
- Building Better Developers With AI Podcast Videos – With Bonus Content
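The "turn off a service your app depends on" drill can be scripted. A minimal sketch, with a hypothetical in-process stub standing in for the dependency and exponential backoff as the recovery tactic:

```python
import time

class FlakyService:
    """Stub dependency for a local failure drill (hypothetical, not a real API)."""
    def __init__(self, fail_times: int):
        self.fail_times = fail_times

    def call(self) -> str:
        if self.fail_times > 0:
            self.fail_times -= 1
            raise ConnectionError("service down")
        return "ok"

def call_with_retry(service, attempts: int = 5, base_delay: float = 0.01) -> str:
    """Retry with exponential backoff; re-raise once attempts are exhausted."""
    for i in range(attempts):
        try:
            return service.call()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)

start = time.monotonic()
result = call_with_retry(FlakyService(fail_times=3))
elapsed = time.monotonic() - start
print(result, f"recovered in {elapsed:.3f}s")
```

Measuring `elapsed` answers the first drill question, "how fast can you recover?", for this one failure mode; swapping the stub for a real service you stop and restart makes the drill realistic.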

Data Gen
#216 - Datadog : Adopter une approche Analytics Engineering et Self-Service

Data Gen

Play Episode Listen Later Jul 14, 2025 19:05


Marguerite Vial is the former Analytics Engineering Manager at Datadog, an American tech company founded by two French entrepreneurs and listed on the NY stock exchange. Today the company has more than 6,500 employees and a presence in 33 countries worldwide. We cover:

Software Defined Talk
Episode 528: You can't spell Clippy without CLI

Software Defined Talk

Play Episode Listen Later Jul 11, 2025 69:37


This week, we discuss the return of command line tools, Kubernetes embracing VMs, and the steady march of Windows. Plus, thoughts on TSA, boots, and the “old country." Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/W0_6hPybAsQ?si=QfWLvts-ueyATxtz) 528 (https://www.youtube.com/live/W0_6hPybAsQ?si=QfWLvts-ueyATxtz) Runner-up Titles Pool Problems Stockholm Pool Syndrome The Old Country VMs aren't going anywhere Vaya con Dios This is why we have milk Why's he talking about forks? Commander Claude The Columbia Music House business plan Beat this horse into glue Reinvent Command Line “Man falls in love with CLI” Windows is always giving you the middle button CBT - Claude Behavioral Therapy. Rundown TSA tests security lines allowing passengers to keep their shoes on (https://www.washingtonpost.com/travel/2025/07/08/tsa-shoe-policy-airport-security/) KubeCon Is Starting To Sound a Lot Like VMCon (https://thenewstack.io/kubecon-is-starting-to-sound-a-lot-like-vmcon/?link_source=ta_bluesky_link&taid=6866ce8d6b59ab0001f28d23&utm_campaign=trueanthem&utm_medium=social&utm_source=bluesky) Claude Code and the CLI Gemini CLI: your open-source AI agent (https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/) Cursor launches a web app to manage AI coding agents (https://techcrunch.com/2025/06/30/cursor-launches-a-web-app-to-manage-ai-coding-agents/) AI Tooling, Evolution and The Promiscuity of Modern Developers (https://redmonk.com/sogrady/2025/07/09/promiscuity-of-modern-developers/) Introducing OpenCLI (https://patriksvensson.se/posts/2025/07/introducing-open-cli) Windows Windows 11 has finally overtaken Windows 10 as the most used desktop OS (https://www.theverge.com/news/699161/microsoft-windows-11-usage-milestone-windows-10) Microsoft quietly implies Windows has LOST millions of users since Windows 11 debut — are people really abandoning ship? 
[UPDATE] (https://www.windowscentral.com/software-apps/windows-11/windows-11-10-lost-400-million-users-3-years) Relevant to your Interests Public cloud becomes a commodity (https://www.infoworld.com/article/4011196/public-cloud-becomes-a-commodity.html?utm_date=20250625131623&utm_campaign=InfoWorld%20US%20%20All%20Things%20Cloud&utm_content=slotno-1-title-Public%20cloud%20becomes%20a%20commodity&utm_term=Infoworld%20US%20Editorial%20Newsletters&utm_medium=email&utm_source=Adestra&aid=29276237&huid=) Nvidia Ruffles Tech Giants With Move Into Cloud Computing (https://www.wsj.com/tech/ai/nvidia-dgx-cloud-computing-28c49748?mod=hp_lead_pos11) Which Coding Assistants Retain Their Customers and Which Ones Don't (https://www.theinformation.com/articles/coding-assistants-retain-customers-ones?utm_source=ti_app&rc=giqjaz) Anthropic destroyed millions of print books to build its AI models (https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/) Sam Altman upstages the critics (https://www.platformer.news/sam-altman-hard-fork-live/?ref=platformer-newsletter) The Ultimate Cloud Security Championship | 12 Months × 12 Challenges (https://cloudsecuritychampionship.com/) Denmark to tackle deepfakes by giving people copyright to their own features (https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence) Why Automattic CEO Matt Mullenweg went to war over WordPress (https://overcast.fm/+AAQLdsK48uM) How Nintendo locked down the Switch 2's USB-C port and broke third-party docking (https://www.theverge.com/report/695915/switch-2-usb-c-third-party-docks-dont-work-authentication-encryption) CoreWeave to acquire Core Scientific in $9 billion all-stock deal (https://www.cnbc.com/2025/07/07/coreweave-core-scientific-stock-acquisition.html) Valve conquered PC gaming. What comes next? 
(https://www.ft.com/content/f4a13716-838a-43da-853b-7c31ac17192c) Datadog stock jumps 10% on tech company's inclusion in S&P 500 index (https://www.cnbc.com/2025/07/02/datadog-stock-jumps-sp-500-index-inclusion.html) Jack Dorsey just released a Bluetooth messaging app that doesn't need the internet (https://www.engadget.com/apps/jack-dorsey-just-released-a-bluetooth-messaging-app-that-doesnt-need-the-internet-191023870.html) Unlock the Full Chainguard Containers Catalog – Now with a Catalog Pricing Option (https://www.chainguard.dev/unchained/unlock-the-full-chainguard-containers-catalog-now-with-a-catalog-pricing-option) Oracle stock jumps after $30 billion annual cloud deal revealed in filing (https://www.cnbc.com/2025/06/30/oracle-orcl-stock-cloud-deal.html) Docker State of App Dev: AI (https://www.docker.com/blog/docker-state-of-app-dev-ai/) X Chief Says She Is Leaving the Social Media Platform (https://www.nytimes.com/2025/07/09/technology/linda-yaccarino-x-steps-down.html) Meta's recruiting blitz claims three OpenAI researchers (https://techcrunch.com/2025/06/25/metas-recruiting-blitz-claims-three-openai-researchers/) OpenAI Reportedly Shuts Down for a Week as Zuck Poaches Its Top Talent (https://gizmodo.com/openai-reportedly-shuts-down-for-a-week-as-zuck-poaches-its-top-talent-2000622145?utm_source=tldrnewsletter) OpenAI Leadership Responds to Meta Offers: ‘Someone Has Broken Into Our Home' (https://www.wired.com/story/openai-meta-leadership-talent-rivalry/) Sam Altman Slams Meta's AI Talent-Poaching Spree: ‘Missionaries Will Beat Mercenaries' (https://www.wired.com/story/sam-altman-meta-ai-talent-poaching-spree-leaked-messages/) Report: Apple looked into building its own AWS competitor (https://9to5mac.com/2025/07/03/report-apple-looked-into-building-its-own-aws-competitor/) Cloudflare launches a marketplace that lets websites charge AI bots for scraping 
(https://techcrunch.com/2025/07/01/cloudflare-launches-a-marketplace-that-lets-websites-charge-ai-bots-for-scraping/) The Open-Source Software Saving the Internet From AI Bot Scrapers (https://www.404media.co/the-open-source-software-saving-the-internet-from-ai-bot-scrapers/) Nonsense After 8 years of playing D&D nonstop, I've finally tried its biggest alternative (https://www.polygon.com/tabletop-games/610875/dnd-alternative-dungeon-crawl-classics-old-school) TSA tests security lines allowing passengers to keep their shoes on (https://www.washingtonpost.com/travel/2025/07/08/tsa-shoe-policy-airport-security/) Conferences Sydney Wizdom Meet-Up (https://www.wiz.io/events/sydney-wizdom-meet-up-aug-2025), Sydney, August 7. Matt will be there. SpringOne (https://www.vmware.com/explore/us/springone?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/watch?v=f_xOudsmUmk). Explore 2025 US (https://www.vmware.com/explore/us?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/shorts/-COoeIJcFN4). SREDay London (https://sreday.com/2025-london-q3/), Coté speaking, September 18th and 19th. Civo Navigate London (https://www.civo.com/navigate/london/2025), Coté speaking, September 30th. Texas Linux Fest (https://2025.texaslinuxfest.org), Austin, October 3rd to 4th. CFP closes August 3rd (https://www.papercall.io/txlf2025). CF Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Frankfurt, October 7th, 2025. AI for the Rest of Us (https://aifortherestofus.live/london-2025), Coté speaking, October 15th to 16th, London. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Park ATX Free Parking Codes (https://www.austintexas.gov/page/park-atx): FREE15ATX1 and FREE15ATX2 Matt: Careless People (https://en.wikipedia.org/wiki/Careless_People) Coté: Wizard Zines bundle (https://wizardzines.com/zines/all-the-zines/), Julia Evans. Photo Credits Header (https://unsplash.com/photos/grayscale-photo-of-airplane-on-airport-OxJ6RftLbEA)

Worldwide Exchange
Tariff Risks Rising, Investors Unphased, Big Tech Focus 7/9/25

Worldwide Exchange

Play Episode Listen Later Jul 9, 2025 43:30


President Trump promises to send letters to at least seven more countries today after threatening new tariffs on copper and pharmaceuticals. Plus, despite initial swings, investors appear to be taking it all in stride, focusing on the positives. US stock futures point to a muted open. And later, Big Tech in focus with several story lines at play, from Apple and Nvidia to software maker Datadog.

CNBC Business News Update
Market Open: Stocks Higher, Nvidia Becomes First Ever $4 Trillion Company, DataDog Is In The S&P 500 Index Today 7/9/25

CNBC Business News Update

Play Episode Listen Later Jul 9, 2025 3:39


From Wall Street to Main Street, the latest on the markets and what it means for your money. Updated regularly on weekdays, featuring CNBC expert analysis and sound from top business newsmakers. Anchored by CNBC's Jessica Ettinger.

Alles auf Aktien
Regenbogen-Rendite und Was tun mit 100.000 Euro?

Alles auf Aktien

Play Episode Listen Later Jul 4, 2025 18:58


In today's episode, financial journalists Daniel Eckert and Christoph Kapalschinski discuss a diversity ETF, fresh confusion around Robinhood, and record highs for Independence Day. Also covered: Datadog, Commerzbank, SAP, ASML, Novo Nordisk, BASF, Nvidia, Apple, Krispy Kreme, Hewlett Packard Enterprise, OpenAI, Yum, Kia, Bitcoin, Société Générale, Banco Santander, Coca-Cola, National Bank of Greece, Allianz, Michelin, Unilever, Sony, Scottish Mortgage Investment Trust, Berkshire Hathaway, the iShares Refinitiv Inclusion and Diversity ETF (WKN: A2DVK8), iShares EUR Ultrashort Bond ETF (WKN: A1W375), Xtrackers II EUR Overnight Rate Swap (WKN: DBX0A2), Vanguard FTSE All-World ETF accumulating (WKN: A2PKXG), and Vanguard FTSE All-World ETF distributing (WKN: A1JX52). We welcome feedback at aaa@welt.de. Find even more "Alles auf Aktien" at WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. [Here at WELT.](https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html.) [Here](https://open.spotify.com/playlist/6zxjyJpTMunyYCY6F7vHK1?si=8f6cTnkEQnmSrlMU8Vo6uQ) you'll find the Saturday-episode classics playlist on Spotify! Disclaimer: The stocks and funds discussed in the podcast are not specific recommendations to buy or invest. The hosts and the publisher accept no liability for any losses arising from acting on these thoughts or ideas. Listening tip: for everyone who wants to know even more, you can hear Holger Zschäpitz every week in the finance and business podcast "Deffner&Zschäpitz". +++ Advertising +++ Want to learn more about our advertising partners?
[**You'll find all info & discounts here!**](https://linktr.ee/alles_auf_aktien) Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

OHNE AKTIEN WIRD SCHWER - Tägliche Börsen-News
“Chip-Duopol: Synopsys & Cadence" - Datadog, Tripadvisor, Luxus & Zollfreie T-Shirts

OHNE AKTIEN WIRD SCHWER - Tägliche Börsen-News

Play Episode Listen Later Jul 4, 2025 13:44


Listening to stocks is good. Buying stocks is better. With our partner Scalable Capital, you can do that without limits via a trading flat rate, or regularly via a savings plan. All further info here: scalable.capital/oaws. Stocks + WhatsApp = sign up here. Prefer a newsletter? That works too. The book to go with the podcast? Read it now. Datadog joins the S&P 500. Tripadvisor into Starboard's hands. Olo into a Thoma Bravo fund. Watches of Switzerland has a love-hate relationship with the USA. Shop Apotheke delivers. T-shirts with little tariff: Gildan Activewear (WKN: 915121) from Canada makes it possible. Trump says no. Trump says yes. That's the stock market in a nutshell. Cadence (WKN: 873567) and Synopsys (WKN: 883703) are pleased. There is plenty else to like about this business too, were it not for the valuation. This podcast from July 4, 2025, 3:00 a.m. is provided to you by Podstars GmbH (Noah Leidinger).

Digest & Invest by eToro
DV402 - Datadog jumps on S&P 500 inclusion, Market awaits US Jobs Report & Why Tomorrow Will be Quiet - 3rd July

Digest & Invest by eToro

Play Episode Listen Later Jul 3, 2025 4:29


In today's episode of The Daily Voice, Sam reviews the main headlines from yesterday and previews the day ahead.

Alles auf Aktien
KI-Aktien als Schutzschild und Sommer-Booster fürs Depot

Alles auf Aktien

Play Episode Listen Later Jun 30, 2025 22:11


In today's episode, financial journalists Anja Ettel and Holger Zschäpitz talk about AI stocks as a shield and a summer booster for your portfolio, along with Trump's "Triple B", the Bezos wedding, and what else will matter this week. Also covered: Alphabet, Nvidia, ASML, Arista Networks, Crowdstrike, Datadog, Palantir, Netflix, John Deere, Walmart, Parker-Hannifin, Shell, Xtrackers Artificial Intelligence & Big Data (WKN: A2N6LC), iShares Global Clean Energy Transition ETF (WKN: A0MW0M), Microsoft, Plug Power, Enphase, Jinko Solar, SMA Solar, Carnival, Royal Caribbean, Marriott International, Hilton, Booking, Airbnb, iShares Stoxx Europe 600 Travel & Leisure (WKN: A0H08S), Intercontinental, Ryanair, Evolution, Accor, and US Global Investors Travel ETF (WKN: A3CPGE). We welcome feedback at aaa@welt.de. Find even more "Alles auf Aktien" at WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. [Here at WELT.](https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html.) [Here](https://open.spotify.com/playlist/6zxjyJpTMunyYCY6F7vHK1?si=8f6cTnkEQnmSrlMU8Vo6uQ) you'll find the Saturday-episode classics playlist on Spotify! Disclaimer: The stocks and funds discussed in the podcast are not specific recommendations to buy or invest. The hosts and the publisher accept no liability for any losses arising from acting on these thoughts or ideas. Listening tip: for everyone who wants to know even more, you can hear Holger Zschäpitz every week in the finance and business podcast "Deffner&Zschäpitz". +++ Advertising +++ Want to learn more about our advertising partners? [**You'll find all info & discounts here!**](https://linktr.ee/alles_auf_aktien) Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

Late Night Linux All Episodes
Hybrid Cloud Show – Episode 33

Late Night Linux All Episodes

Play Episode Listen Later Jun 27, 2025 32:17


How much observability and monitoring is really needed, the tooling people actually use (from Datadog and Grafana Cloud to open source options like Prometheus, Loki, and Tempo), and how to approach observability without overcomplicating things. Support us on Patreon and get an ad-free RSS feed with early episodes sometimes.

Hybrid Cloud Show
Hybrid Cloud Show – Episode 33

Hybrid Cloud Show

Play Episode Listen Later Jun 27, 2025 32:17


How much observability and monitoring is really needed, the tooling people actually use (from Datadog and Grafana Cloud to open source options like Prometheus, Loki, and Tempo), and how to approach observability without overcomplicating things. Support us on Patreon and get an ad-free RSS feed with early episodes sometimes.

Conversations with Tyler
Austan Goolsbee on Central Banking as a Data Dog

Conversations with Tyler

Play Episode Listen Later Jun 25, 2025 58:40


Austan Goolsbee is one of Tyler Cowen's favorite economists—not because they always agree, but because Goolsbee embodies what it means to think like an economist. Whether he's analyzing productivity slowdowns in the construction sector, exploring the impact of taxes on digital commerce, or poking holes in overconfident macro narratives, Goolsbee is consistently sharp, skeptical, and curious. A longtime professor at the University of Chicago's Booth School and former chair of the Council of Economic Advisers under President Obama, Goolsbee now brings that intellectual discipline—and a healthy dose of humor—to his role as president of the Federal Reserve Bank of Chicago. Tyler and Austan explore what theoretical frameworks Goolsbee uses for understanding inflation, why he's skeptical of monetary policy rules, whether post-pandemic inflation was mostly from the demand or supply side, the proliferation of stablecoins and shadow banking, housing prices and construction productivity, how microeconomic principles apply to managing a regional Fed bank, whether the structure of the Federal Reserve system should change, AI's role in banking supervision and economic forecasting, stablecoins and CBDCs, AI's productivity potential over the coming decades, his secret to beating Ted Cruz in college debates, and more. Read a full transcript enhanced with helpful links, or watch the full video on the new dedicated Conversations with Tyler channel. Recorded March 3rd, 2025. Help keep the show ad free by donating today! Other ways to connect Follow us on X and Instagram Follow Tyler on X Follow Austan on X Sign up for our newsletter Join our Discord Email us: cowenconvos@mercatus.gmu.edu Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Ardan Labs Podcast
Cybersecurity, Music, and RapDev with Ryan Henrich

Ardan Labs Podcast

Play Episode Listen Later Jun 18, 2025 93:27


In this engaging conversation, Ryan Henrich shares his journey in the cybersecurity field, discussing his current role at RapDev, the evolution of cybersecurity careers, and his early experiences with hacking. He reflects on his high school years, his passion for music, and the impact of technology on learning. The discussion also dives into the challenges faced in early career roles, the importance of problem-solving, and the lessons learned from mistakes.
00:00 Introduction
00:30 What is Ryan Doing Today?
09:30 First Memory of a Computer
12:00 Highschool Interests / Stories
20:00 Searching for Information
30:00 Entering University
38:00 Skill in Music
42:30 First Security Job
55:00 Lessons Learned
1:02:00 Entering the Cloud
1:19:00 Why Buy Security
1:30:00 Staying Relevant
1:34:40 Contact Info
Connect with Ryan:
Linkedin: https://www.linkedin.com/in/ryanhenrich
Email: ryan.henrich@rapdev.io
Mentioned in this Episode:
RapDev: https://www.rapdev.io/
Datadog: https://www.datadoghq.com/
ServiceNow: https://www.servicenow.com/
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs

All TWiT.tv Shows (MP3)
Untitled Linux Show 207: Distro-Hopping Distro

All TWiT.tv Shows (MP3)

Play Episode Listen Later Jun 16, 2025 87:14


There's a new Linux phone, but it stretches the definition of "affordable". Another government is going Libre, Xlibre continues to divide, and Apple brings WSL to their platform. Nano has an update with a secret feature, the kernel may get an API, and Rocky hits 10! For tips we have Uptime Kuma and Datadog for system monitoring, and a bug report from pw-cli, for something that really should work. It's fun, don't miss it! And don't miss the show notes at https://bit.ly/4jJIA6x Enjoy! Host: Jonathan Bennett Co-Hosts: Ken McDonald and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Artisan Développeur
Développeur entrepreneur aux USA avec Julien Delange 2025

Artisan Développeur

Play Episode Listen Later Jun 10, 2025 47:16


How does a French developer living in the USA think, one who founded a startup and sold it to Datadog? If you're asking yourself that question, come listen to my interview with Julien Delange. We talk about the state of the market, the place of AI, and what's happening in the USA right now. Julien's profile: https://www.linkedin.com/in/juli1/ Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

EUVC
VC | E487 | Building GTM Teams with Gia Scinto, Talent Partner at The Cole Group [Path to Market – Seedcamp Series]

EUVC

Play Episode Listen Later Jun 8, 2025 55:15


In this episode of Path to Market, Seedcamp's Natasha Lytton and Pipeline Ventures' Micah Smurthwaite are joined by Gia Scinto, Partner at The Cole Group and one of the most seasoned go-to-market talent experts in tech. Gia has helped build out executive teams at category-defining startups like Stripe, Airbnb, Datadog, Canva, and Confluent, and previously led talent at Y Combinator and Andreessen Horowitz. Gia shares hard-earned lessons from years of recruiting top-tier GTM leaders and partnering directly with founders at every stage, from pre-seed to IPO. In this conversation, she breaks down how to hire your first sales leader, how to evaluate candidates for stage fit and values alignment, and how to avoid common hiring pitfalls that can cost startups months of momentum. From sales methodology and hiring frameworks to founder mindset and onboarding tactics, this episode is packed with tactical insights for founders, operators, and investors alike. Here's what's covered:
02:34 Building the First GTM Talent Function in VC
06:25 From a16z to YC: Supporting Founders Across Stages
09:42 First Sales Hire vs. Later-Stage Leadership
13:38 The Anatomy of a Great Recruiting Process
22:26 Best Interview Questions for Sales Roles
29:45 How to Pitch Senior Candidates at Early Stage
33:39 What GTM Leaders Want to Hear
44:35 Why Sales Hires Fail — and How to Avoid It
47:36 Systems, Team Design & Ops from 0 to $10M
51:42 Advice for GTM Candidates: How to Pick Your Next Role

De Nederlandse Kubernetes Podcast
#96 Java, Kubernetes & GC: Finding the Sweet Spot

De Nederlandse Kubernetes Podcast

Play Episode Listen Later Jun 3, 2025 29:32


In this episode, we dive into a topic that many teams only start paying attention to when it's already too late: Garbage Collection in Java microservices. And we do so together with Usama Nasir, Staff Software Engineer at GetYourGuide. While you might think Kubernetes solves everything, Usama shares how his team at GetYourGuide was still caught off guard by mysterious Out of Memory errors. The culprit? Microservices may be small, but that doesn't mean Java's memory management just takes care of itself. We talk about how Java's memory model really works, why different garbage collectors (like G1GC or ZGC) perform completely differently under pressure, and how small decisions can have a big impact on performance. Usama explains how observability with tools like Datadog turned out to be essential, and why sometimes it's actually smarter to allocate less memory to your containers. But the most important takeaway? Garbage Collection isn't just "a Java thing." It's a shared responsibility between developers and DevOps/SREs. Only together can you find that sweet spot between speed, stability, and scalability.
Send us a message: https://acc-ict.com/live
Support the show. Like and subscribe! It helps out a lot.
You can also find us on: De Nederlandse Kubernetes Podcast - YouTube, Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok, De Nederlandse Kubernetes Podcast
Where can you meet us: Events
This podcast is powered by: ACC ICT - IT-Continuïteit voor Bedrijfskritische Applicaties | ACC ICT
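The episode's Java specifics aren't reproduced here, but one core idea, that reference cycles need a tracing collector rather than plain reference counting, can be sketched with Python's gc module (Python stands in for Java purely for illustration):

```python
import gc

class Node:
    """Two of these referencing each other form a cycle that reference
    counting alone can never free; a tracing collector must find it."""
    def __init__(self):
        self.ref = None

gc.collect()          # start from a clean slate
a, b = Node(), Node()
a.ref, b.ref = b, a   # build the cycle
del a, b              # refcounts stay above zero: only the cycle holds them now

unreachable = gc.collect()  # a tracing pass finds and frees the cycle
print(f"objects reclaimed as unreachable: {unreachable}")
```

The same principle is why Java GC behavior (pause times, heap sizing) deserves attention even in small microservices: the collector, not the developer, decides when that memory comes back.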

The Next Wave - Your Chief A.I. Officer
This AI Tool Can Build Any SaaS App in Minutes

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later May 28, 2025 40:47


Episode 60: Can you really build an $8 billion SaaS startup by yourself using AI agents? Nathan Lands (https://x.com/NathanLands) sits down with Matan Grinberg (https://x.com/matansf), a physicist, AI founder, and creator of Factory AI—one of Silicon Valley's best-kept secrets. Matan has published papers alongside luminaries and built a company trusted by top VCs and tech insiders. In this episode, Nathan and Matan dive deep into the power and practicality of Factory AI—an agentic software platform that allows anyone to build full-featured SaaS applications using only natural language. After years of focusing on large enterprise clients and remaining under the radar, Factory AI is now opening up to everyone and revealing what's possible when state-of-the-art “droids” (purpose-built AI agents) collaborate to automate the entire software development lifecycle. Watch them attempt to build a DocuSign competitor in minutes live on the show, and explore how AI is changing the future of engineering, entrepreneurship, and creative problem-solving. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Enterprise-Focused Product Expansion (05:45) Engineering Task Automation Tools (07:01) Quick Project Setup Outline (10:43) AI Revolutionizing Software Development (14:29) Customer-Centric Problem Solving (18:10) Progress Through Efficiency Improvements (19:22) Agency: The New Success Metric (24:54) Expanding Product to Small Teams (25:38) Unified Platform for Software Development (30:44) Importance of Foundational Knowledge (33:55) Technology: Rise, Apex, and Decline (35:40) Future Technology Beyond Smartphones — Mentions: Want the ultimate guide to use Gemini's game-changing features? 
Get it here: https://clickhubspot.com/wdn
Promo link for 14 day free trial w 10M extra free tokens: LINK
Matan Grinberg: https://www.linkedin.com/in/matan-grinberg/
Factory: https://www.factory.ai/
Docusign: https://www.docusign.com/
Shaun Maguire: https://x.com/shaunmmaguire
Sequoia: https://www.sequoiacap.com/
Datadog: https://www.datadoghq.com/
Sentry: https://sentry.io/
Perplexity: https://www.perplexity.ai/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
—
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

How to Trade Stocks and Options Podcast by 10minutestocktrader.com
I Spent $475,995 on These 2 Stocks Yesterday ‼️

How to Trade Stocks and Options Podcast by 10minutestocktrader.com

Play Episode Listen Later May 15, 2025 58:04


Are you looking to save time, make money, and start winning with less risk? Then head to https://www.ovtlyr.com.

How did traders pull in over 100% gains in just one day? This isn't hype—it's strategy. In this video, you'll see how OVTLYR members used a simple, repeatable system to crush the market with two trades: Apple ($AAPL) and SMBCI. This isn't about “buy and hope.” This is about clarity, precision, and execution.

You'll discover how to:
➡️ Spot bullish setups with the Outlier Nine—a powerful framework combining trend, signal, sector strength, and more
➡️ Use options rolling to reduce risk, lock in profits, and keep the trade alive
➡️ Avoid the trap of high win-rate systems that still lose money (looking at you, credit spreads)
➡️ Create a defined entry and exit plan—before ever placing a trade
➡️ Leverage tools like ATR, fear and greed metrics, and sector breadth to time moves with confidence

This strategy doesn't rely on guessing, hype, or hoping a stock “comes back.” It focuses on data, discipline, and market structure—allowing traders to sit in cash for weeks, then strike fast when the edge is real.

You'll also get:
⭐ Real trade breakdowns (including 106% gains on Datadog and 9.21% account growth in a single trade)
⭐ Heatmap-based exits that eliminate emotion
⭐ A complete walkthrough of how OVTLYR 3.0 identifies high-conviction opportunities in seconds

Forget the noise. Forget the fake gurus. This is the system real traders use to make smart moves and manage risk in real time. And if you're tired of the stress, second-guessing, and watching others win while you're stuck—this video shows exactly how to change that!

Gain instant access to the AI-powered tools and behavioral insights top traders use to spot big moves before the crowd. Start trading smarter today

Alles auf Aktien
Robotaxi-Attacke auf Europa und aussichtsreiche Nasdaq-Nachzügler

Alles auf Aktien

Play Episode Listen Later May 15, 2025 18:42


In today's episode of “Alles auf Aktien,” financial journalists Philipp Vetter and Holger Zschäpitz talk about two successful IPOs, another blow for Bayer, and a crash at Tui. Also on the agenda: Etoro, Pfisterer Holding, Super Micro, AMD, Nvidia, Coreweave, Cisco, Eon, Daimler Truck, Brenntag, Renk, Hapag Lloyd, Baidu, WeRide, Uber, General Motors, Mercedes-Benz, BMW, Volkswagen, Pony.AI, Momenta Technology, Tesla, Alphabet, Archer Aviation, Marvell Technology, Broadcom, The Trade Desk, Datadog, MongoDB, Adobe, Diamondback Energy, Regeneron Pharmaceuticals, Warner Bros Discovery, Rheinmetall, Siemens Energy.

We welcome feedback at aaa@welt.de.

You can find even more “Alles auf Aktien” at WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. [Here at WELT.](https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html.) [Here](https://open.spotify.com/playlist/6zxjyJpTMunyYCY6F7vHK1?si=8f6cTnkEQnmSrlMU8Vo6uQ) you can find the Saturday-episode classics playlist on Spotify!

Disclaimer: The stocks and funds discussed in the podcast do not constitute specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed.

Listening tips: For everyone who wants to know even more: you can hear Holger Zschäpitz every week on the finance and business podcast “Deffner&Zschäpitz.” Also from WELT: in the daily podcast “Das bringt der Tag,” we talk with WELT experts to give you the key background on one top political topic of the day.

+++ Advertising +++ Want to learn more about our advertising partners? [**You can find all the info & discounts here!**](https://linktr.ee/alles_auf_aktien)

Imprint: https://www.welt.de/services/article7893735/Impressum.html
Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

CarahCast: Podcasts on Technology in the Public Sector
Datadog Enhances Public Services with IT Observability and AI-Driven Analytics

CarahCast: Podcasts on Technology in the Public Sector

Play Episode Listen Later May 14, 2025 21:14


Access the podcast to hear Greg Reeder, Senior Director of Public Sector Marketing at Datadog, and Martha Dorris, Founder of DCI Consulting, discuss how agencies increase agility and efficiency with innovative customer experience strategies, digital transformation and proactive application monitoring tools. Listen to practical use cases from the State Department, IRS and CBP showcasing how human-centric design increases engagement and public trust.

More or Less with the Morins and the Lessins
#98: OpenAI Acquires Windsurf for $3B, Google's Stock Dips

More or Less with the Morins and the Lessins

Play Episode Listen Later May 9, 2025 56:20


The countdown to Episode 100 is on, but this week, it's more news and less Sam. Jessica, Brit, and Dave dig into a jam-packed week across tech, AI, VC, and culture—starting with Dave topping a surprise list of top angel investors (yes, above Peter Thiel), raising questions about who really shows up on cap tables and why no women made the cut.

The crew unpack:
Windsurf's $3B acquisition by OpenAI
Apple's potential move to replace Google with its own AI search in Safari
A historic week of M&A with DoorDash, Datadog, and Function Health, and
Why tech was (mostly) absent from this year's Met Gala

We're also on ↓
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
Spotify: https://podcasters.spotify.com/pod/show/moreorlesspod

Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit

00:00 Trailer
01:24 Nearly 100 episodes
02:35 Top one angel investor: David Morin
07:22 Big M&As and vibe coding
20:09 Digital media
22:09 Is Google search dying?
30:13 New Netflix interface
32:18 Fund entities and structures (Sam, don't listen!)
44:37 Pop culture corner: Barry Diller
46:20 Pop culture corner: Met Gala
54:44 Happy birthday to Jess
55:26 Outro

Investing In Florida Technology
The Founder Aesthetic: What Great Builders Have in Common

Investing In Florida Technology

Play Episode Listen Later May 8, 2025 45:40


What separates the top 1% of venture capitalists from the rest? For Roger Ehrenberg, Managing Partner at Eberg Capital, it's the ability — and the appetite — to invest before the crowd, before the product is built, and before there's even proof of concept. In a recent episode of the Skin in the Game VC podcast, Roger joined Tom Wallace and Saxon Baum to share how he turned a late-career pivot into one of the most impressive track records in early-stage venture capital.Roger didn't come from the startup world. He spent nearly two decades on Wall Street, running billion-dollar trading desks at Citi and Deutsche Bank. From the outside, it looked like a career anyone would want — but for Roger, it had run its course. Tired of internal politics and craving something more entrepreneurial, he walked away. Around the same time, he'd been dabbling in angel investing on the side. That small experiment — backing builders before product-market fit — quickly turned into a full-time obsession.He began writing a blog, Information Arbitrage, to share his thinking publicly. The blog gained traction. Founders started reaching out. Other investors began to follow his thesis. At a time when the idea of a “New York tech ecosystem” was almost laughable, Roger had the clarity to see where it could go — and the conviction to act. By early 2010, he scraped together a $17 million first close. That first fund would eventually land at $50 million, and IA Ventures was born.But the money was only part of the story. What set Roger apart then — and still does — is how early he's willing to go. He prefers backing companies before the market even knows they exist. In fact, he often writes the first check before there's a line of code written. This isn't blind optimism. It's founder-first investing grounded in deep research and sharp intuition.Roger's track record speaks for itself. He was an early backer of The Trade Desk when it was just a deck. 
He seeded Datadog, TubeMogul, and multiple other companies before they became category leaders. The common thread? Founders who could not only see the future but build their way into it. To Roger, great founders share something intangible: what he calls “aesthetic and empathy.”“Great founders understand where their product stops and where the customer starts,” he said. That could mean designing APIs that developers love or building consumer apps that feel inevitable. Either way, the best founders have an intuitive sense of product, user behavior, and market timing. Roger knows how to find them — or maybe, they know how to find him. That's the power of publishing, he says. His blog didn't just clarify his thesis — it attracted the right people. It helped him raise a fund when few believed in early-stage investing outside Silicon Valley.Since then, IA Ventures has grown to four funds and backed dozens of successful startups. Roger has since passed the torch to his partners and launched his next chapter: Eberg Capital. Now, he invests alongside his sons in a new wave of innovation — spanning sports, media, entertainment, and the evolving world of fandom.But whether he's backing a Marlins ownership stake, investing in Formula 1, or writing angel checks to creator economy startups, one thing hasn't changed: Roger Ehrenberg still goes early. He still backs founders before the world sees their potential. And more often than not, he's right.Listen to the full episode with Roger Ehrenberg now. Hosted on Acast. See acast.com/privacy for more information.

Ransquawk Rundown, Daily Podcast
US Market Open: Sentiment hit after HKMA says it is diversifying into non-US assets, Bunds volatile on Merz

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later May 6, 2025 5:30


Sentiment in the equities complex hit after HKMA said it has been lowering its duration in US treasury holdings; the exchange fund has been diversifying into non-US assets; ES -0.7%, NQ -1%.
Germany's CDU leader Merz fails to be elected as Chancellor, a decision which has sparked pressure in European bourses leading to underperformance in the DAX 40.
USD on the backfoot, JPY leads the majors, EUR upside stalled in reaction to Merz updates.
Bunds boosted on Merz, though the move has since pared; Gilts underperform.
Crude and gold remain firm amid escalating geopolitics.
Looking ahead, US International Trade, Canadian Exports/Imports, NZ HLFS Unemployment Rate, EIA STEO, Comments from BoE's Breeden, Supply from the US. Earnings from AMD, Supermicro, Rivian, Tempus AI, Celsius, Datadog, Constellation Energy, UniCredit, Intesa Sanpaolo & Ferrari.
Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

Ransquawk Rundown, Daily Podcast
Europe Market Open: APAC gains capped by disappointing Chinese Caixin Services PMI

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later May 6, 2025 5:30


APAC stocks were mostly higher but with gains capped following disappointing Chinese Caixin Services PMI.
European equity futures indicate a slightly lower open with Euro Stoxx 50 future down 0.1% after the cash market finished flat on Monday.
DXY failed to hold above the 100 mark, EUR/USD sits on a 1.13 handle, USD/JPY was unable to maintain its footing above 144.
Crude futures have clawed back nearly all the losses seen in reaction to the weekend's OPEC+ output hike.
Looking ahead, highlights include EZ PMI (Final), US International Trade, Canadian Exports/Imports, NZ HLFS Unemployment Rate, EIA STEO, BoE's Breeden, Supply from Germany & US.
Earnings from AMD, Supermicro, Rivian, Tempus AI, Celsius, Datadog, Constellation Energy, Fresenius Medical Care, Zalando, Continental, UniCredit, Intesa Sanpaolo & Ferrari.
Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

Software Huddle
Rewriting in Rust + Being a Learning Machine with AJ Stuyvenberg

Software Huddle

Play Episode Listen Later May 6, 2025 81:36


Today's guest is AJ Stuyvenberg, a Staff Engineer at Datadog working on their Serverless observability project. He had a great article recently about how they rewrote their AWS Lambda extension in Rust. It's a really interesting look at a big, hard project, from thinking about when it's a good idea to do a rewrite to talking about their focus on performance and reliability above all else and what he thinks about the Rust ecosystem. Beyond that, AJ is just a learning machine, so I got his thoughts on all kinds of software development topics, from underrated AWS services and our favorite databases to the AWS Free Tier and the annoyances of a new AWS account. Finally, AJ dishes out some career advice for curious, ambitious developers.

The New Stack Podcast
Prequel: Software Errors Be Gone

The New Stack Podcast

Play Episode Listen Later May 5, 2025 5:13


Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.

The urgency behind Prequel's mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change.

Learn more from The New Stack about the latest observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Lenny's Podcast: Product | Growth | Career
Inside Devin: The world's first autonomous AI engineer that's set to write 50% of its company's code by end of year | Scott Wu (CEO and co-founder of Cognition)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later May 4, 2025 92:31


Scott Wu is the co-founder and CEO of Cognition, the company behind Devin—the world's first autonomous AI software engineer. Unlike other AI coding tools, Devin works like an autonomous engineer that you can interact with through Slack, Linear, and GitHub, just like with a remote engineer. With Scott's background in competitive programming and a previous AI-powered startup, Lunchclub, teaching AI to code has become his ultimate passion.

What you'll learn:
1. How a team of “Devins” are already producing 25% of Cognition's pull requests, and they are on track to hit 50% by year's end
2. How each engineer on Cognition's 15-person engineering team works with about five Devins each
3. How Devin has evolved from a “high school CS student” to a “junior engineer” over the past year
4. Why engineering will shift from “bricklayers” to “architects”
5. Why AI tools will lead to more engineering jobs rather than fewer
6. How Devin creates its own wiki to understand and document complex codebases
7. The eight pivots Cognition went through before landing on their current approach
8. The cultural shifts required to successfully adopt AI engineers

Brought to you by:
Enterpret—Transform customer feedback into product growth
Paragon—Ship every SaaS integration your customers want
Attio—The powerful, flexible CRM for fast-growing startups

Where to find Scott Wu:
• X: https://x.com/scottwu46
• LinkedIn: https://www.linkedin.com/in/scott-wu-8b94ab96/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Scott Wu and Devin
(09:13) Scaling and future prospects
(10:23) Devin's origin story
(17:26) The idea of Devin as a person
(22:19) How a team of “Devins” are already producing 25% of Cognition's pull requests
(25:17) Important skills in the AI era
(30:21) How Cognition's engineering team works with Devins
(34:37) Live demo
(42:20) Devin's codebase integration
(44:50) Automation with Linear
(46:53) What Devin does best
(52:56) The future of AI in software engineering
(57:13) Moats and stickiness in AI
(01:01:57) The tech that enables Devin
(01:04:14) AI will be the biggest technology shift of our lives
(01:07:25) Adopting Devin in your company
(01:15:13) Startup wisdom and hiring practices
(01:22:32) Lightning round and final thoughts

Referenced:
• Devin: https://devin.ai/
• GitHub: https://github.com/
• Linear: https://linear.app/
• Waymo: https://waymo.com/
• GitHub Copilot: https://github.com/features/copilot
• Cursor: https://www.cursor.com/
• Anysphere: https://anysphere.inc/
• Bolt: https://bolt.new/
• StackBlitz: https://stackblitz.com/
• Cognition: https://cognition.ai/
• v0: https://v0.dev/
• Vercel: https://vercel.com/
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Assembly: https://en.wikipedia.org/wiki/Assembly_language
• Pascal: https://en.wikipedia.org/wiki/Pascal_(programming_language)
• Python: https://www.python.org/
• Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox
• Datadog: https://www.datadoghq.com/
• Bending the universe in your favor | Claire Vo (LaunchDarkly, Color, Optimizely, ChatPRD): https://www.lennysnewsletter.com/p/bending-the-universe-in-your-favor
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Windsurf: https://windsurf.com/
• COBOL: https://en.wikipedia.org/wiki/COBOL
• Fortran: https://en.wikipedia.org/wiki/Fortran
• Magic the Gathering: https://magic.wizards.com/en
• Aura frames: https://auraframes.com/
• AirPods: https://www.apple.com/airpods/
• Steven Hao on LinkedIn: https://www.linkedin.com/in/steven-hao-160b9638/
• Walden Yan on LinkedIn: https://www.linkedin.com/in/waldenyan/

Recommended books:
• How to Win Friends & Influence People: https://www.amazon.com/How-Win-Friends-Influence-People/dp/0671027034
• The Power Law: Venture Capital and the Making of the New Future: https://www.amazon.com/Power-Law-Venture-Capital-Making/dp/052555999X
• The Great Gatsby: https://www.amazon.com/Great-Gatsby-F-Scott-Fitzgerald/dp/0743273567

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

TechCrunch Startups – Spoken Edition
Datadog acquires AI-powered observability startup Metaplane

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Apr 28, 2025 4:05


Cloud monitoring and security platform Datadog on Wednesday announced that it has acquired Metaplane, an AI-powered data observability startup, for an undisclosed amount. In a press release, Datadog said that the deal “accelerates” its expansion into data observability, building on the launch of related products. Learn more about your ad choices. Visit podcastchoices.com/adchoices

SaaS Connection
#160 Guillaume Duvaux, Founder GTM Coach. Accélérer son go-to-market pour atteindre le million d'ARR.

SaaS Connection

Play Episode Listen Later Apr 14, 2025 72:08


For this week's episode, I welcome Guillaume Duvaux, GTM Coach and founder of The First One Million, a newsletter and coaching program for SaaS founders in the launch phase.

Guillaume has an impressive track record: after starting at Algolia as its 11th employee, he helped grow the company from under €1 million to €65M in ARR. He then moved to Datadog before founding Terrality and becoming a GTM coach, working with founders such as those of Poolside, where he launched the go-to-market and contributed to their $620M raise.

In this episode, we covered:
The 5 essential variables to master to find product-market fit
Why founders must make the first sales themselves
How to build a rigorous go-to-market system based on weekly iterations
The importance of accountability during the first months
Why manual outbound is still the most effective channel at the early stage
His new community offering around The First One Million

An episode full of concrete advice for anyone looking to go from 0 to $1 million in ARR faster and more efficiently.

You can follow Guillaume on LinkedIn and subscribe to his newsletter or join the waitlist for his program.

Happy listening!

To support SaaS Connection in 1 minute⏱ (and 2 seconds): subscribe to SaaS Connection on your favorite platform so you never miss an episode

NoLimitSecu
Sécurisation de la chaîne d’approvisionnement logicielle

NoLimitSecu

Play Episode Listen Later Apr 6, 2025 37:46


Episode #497, devoted to securing the software supply chain, with Christophe Tafani-Dereeper.

References:
https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto
https://github.com/DataDog/guarddog
https://github.com/DataDog/malicious-software-packages-dataset/
https://github.com/DataDog/supply-chain-firewall/
https://github.com/sigstore/cosign
https://in-toto.io/
https://slsa.dev/
https://deps.dev/
https://www.sigstore.dev/
https://openssf.org/package-analysis/
https://openssf.org/projects/scorecard/
https://github.com/google/osv-scanner/

The post Sécurisation de la chaîne d'approvisionnement logicielle appeared first on NoLimitSecu.
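Among the references above, Sigstore's cosign is the most hands-on entry point: signing an artifact and verifying the signature is a short round trip at the command line. A minimal key-based sketch (the registry path is a hypothetical placeholder; cosign also supports keyless signing via an OIDC identity):

```
cosign generate-key-pair                                   # writes cosign.key / cosign.pub
cosign sign --key cosign.key registry.example.com/app:1.0  # attach a signature to the image
cosign verify --key cosign.pub registry.example.com/app:1.0
```

Verification fails if the image digest changes after signing, which is the basic supply-chain guarantee the episode discusses; tools like in-toto and SLSA layer provenance about how the artifact was built on top of this.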

Category Visionaries
Jan Willem Rombouts, CEO & Founder of Beebop AI: $5.5 Million Raised to Power Grid Orchestration for the Clean Energy Transition

Category Visionaries

Play Episode Listen Later Apr 4, 2025 33:15


Beebop AI is pioneering a new middleware layer for power grid orchestration, securing $5.5 million in funding to help utilities and energy retailers optimize energy consumption and costs. In this episode of Category Visionaries, I sat down with Jan Willem Rombouts, CEO and Founder of Beebop AI, to discuss how his background at Goldman Sachs and experience building his first energy tech company shaped his approach to solving one of the energy transition's biggest challenges: balancing power grids in an increasingly renewable-powered world.

Topics Discussed:
Jan Willem's journey from Goldman Sachs' trading floor during the financial crisis to energy tech entrepreneurship
The painful lessons learned building Restore, which pioneered virtual power plants and was later acquired by Centrica
How Beebop AI creates a middleware layer that orchestrates power consumption across customer devices like EVs, solar panels, and heat pumps
Why power grid orchestration is critical to making renewable energy both reliable and affordable
Beebop's strategic flywheel connecting utilities and device manufacturers
The go-to-market strategies that helped Beebop gain traction with major European utilities

GTM Lessons For B2B Founders:

Engineer network effects into your go-to-market strategy: Beebop designed a utility-to-OEM flywheel where each new utility customer helps bring device manufacturers onto their platform, creating a powerful network effect. Jan Willem explained: "What we designed was that we would first contract these utilities... our anticipation was that they would be able to engage with these OEMs, with these manufacturers more easily, to essentially invite them to integrate with our platform." This approach turns customers into channel partners who can open doors that would be difficult for a startup to access directly.

Break through complex sales cycles with land-and-expand: When selling to utilities and large corporations with notoriously long sales cycles, Beebop starts with a low-cost, high-value initial offering focused on insights and business case validation. Jan Willem noted: "Our initial proposition is very low cost and very high value... we allow them to see what the business case is... to create somewhat of a solid launching pad on which we can then expand and go to actual operationalization." This approach shortens time-to-value and creates internal champions.

Focus on customer economics, not just your technology: Despite having complex technology, Beebop leads customer conversations with how their solution impacts key metrics like customer lifetime value, margin, churn, and customer acquisition costs. "Before we have explained anything about how new our software is, where it positions in the technology stack, we just show what kind of awesome products they can build... creating tens of percentages of discounts on their energy bills."

Design for global scale from day one: Based on lessons from his first company, Jan Willem deliberately architected Beebop to work with market structures that are universal across regions: "What we did this time... is we chose markets that have a universal footprint and so that look essentially the same whether you're in the UK or you're in Texas or you're in Germany or you're in Sweden." This approach avoids the scaling challenges of having to constantly adapt to different regulatory environments.

Bring process to event marketing: Beebop transformed their trade show approach by adopting a disciplined, metrics-driven strategy learned from Datadog's former CMO. Jan Willem shared: "The big learning for me was to be super intentional. If you go to a trade show, be super clear about exactly how many marketing qualified, how many sales qualified leads you want out of it, and then engineer a team with different roles and responsibilities." This systematic approach yields measurable ROI from events that many startups struggle to achieve.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Datacenter Technical Deep Dives
How To Learn Like A Rockstar!

Datacenter Technical Deep Dives

Play Episode Listen Later Apr 3, 2025


Amanda Ruzza is a DevOps Engineer, world-famous jazz bassist, and a Services Architect at Datadog! In this episode she shares how she “migrated” traditional music-studying techniques into learning cloud and all things tech related: "Study is fun and it's all about falling in love with the journey."

Hunters and Unicorns
How Dan Fougere Scaled 4 Billion-Dollar IPOs (And What He'd Do Differently)

Hunters and Unicorns

Play Episode Listen Later Apr 2, 2025 57:15


In this episode of THE PLAYBOOK UNIVERSE, we sit down with legendary CRO Dan Fougere, a pivotal force behind FOUR $1B IPOs. From engineering roots to building elite go-to-market engines, Dan unpacks the battle-tested frameworks that turned great ideas into generational companies. He shares hard-won lessons from PTC, BladeLogic, Medallia, and Datadog—including how to scale a sales org, recruit world-class talent, and align GTM with visionary founders. Whether you're a startup founder, CRO, or early-stage sales leader, this is a masterclass you don't want to miss. Dan also reveals the make-or-break mindset shifts that helped him navigate adversity, imposter syndrome, and the constant pressure of building under extreme constraints.   This episode is packed with insights you simply won't find in any sales playbook.

PodRocket - A web development podcast from LogRocket
Debugging apps with Deno and OpenTelemetry with Luca Casonato

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Mar 27, 2025 24:55


Luca Casonato, member of the Deno core team, delves into the intricacies of debugging applications using Deno and OpenTelemetry. Discover how Deno's native integration with OpenTelemetry enhances application performance monitoring, simplifies instrumentation compared to Node.js, and unlocks new insights for developers! Links https://lcas.dev https://x.com/lcasdev https://github.com/lucacasonato https://mastodon.social/@lcasdev https://www.linkedin.com/in/luca-casonato-15946b156 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Luca Casonato.

She Said Privacy/He Said Security
How AI Is Revolutionizing Contract Reviews for Legal Teams

She Said Privacy/He Said Security

Play Episode Listen Later Mar 27, 2025 33:00


Farah Gasmi is the Co-founder and CPO of Dioptra, the accurate and customizable AI agent that drafts playbooks and consistently redlines contracts in Microsoft Word. Dioptra is trusted by some of the most innovative teams, like Y Combinator and Wilson Sonsini. She has over 10 years of experience building AI products in healthcare, insurance, and tech for companies like Spotify. Farah is also an adjunct professor at Columbia Business School in NYC. She teaches a Product Management course with a focus on AI and data products. Laurie Ehrlich is the Chief Legal Officer at Dioptra, a cutting-edge legal tech startup revolutionizing contract redlining and playbook generation with AI. With a background leading legal operations and commercial contracting at Datadog and Cognizant, Laurie has deep expertise in scaling legal functions to drive business impact. She began her career in intellectual property law at top firms and holds a JD from NYU School of Law and a BS from Cornell. Passionate about innovation and diversity in tech, Laurie has also been a champion for women in leadership throughout her career. In this episode… Contract review can be time-consuming and complex, especially when working with third-party agreements that use unfamiliar language and formats. Legal teams often rely on manual review processes that make it challenging to maintain consistency across contracts, contributing to inefficiencies and increased costs. That's why businesses need an effective solution that reduces the burden of contract analysis while supporting legal and strategic decision-making. Dioptra, a legal tech startup, helps solve these challenges by leveraging AI to automate first-pass contract reviews, redline contracts, and generate playbooks. The AI agent analyzes past agreements to identify patterns, standard language, and key risk areas, allowing teams to streamline the review process. 
It supports a range of use cases — from NDAs to real estate deals — while improving consistency and reducing review time. Dioptra also enhances post-execution analysis by enabling companies to assess past agreements for compliance and risk exposure. In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Farah Gasmi, Co-founder and Chief Product Officer at Dioptra, and Laurie Ehrlich, the Chief Legal Officer at Dioptra, about how AI is used to streamline contract reviews. Together, they discuss how Dioptra accelerates contract reviews, supports security and privacy through strict data controls, and enables organizations to build smarter, more consistent contract processes — without removing the need for expert human judgment. Farah and Laurie also delve into the importance of AI-driven consistency in contract negotiation, vendor security evaluations, and how companies can safeguard sensitive data when using AI tools.

The Fintech Blueprint
How Metronome is Building the Revenue Engine for AI, with CEO Scott Woody

The Fintech Blueprint

Play Episode Listen Later Mar 25, 2025 49:32


Lex interviews Scott Woody, CEO and founder of Metronome, a usage-based billing platform. Scott shares his journey from academia to entrepreneurship, detailing his experiences at UC Berkeley, D.E. Shaw, and Stanford, where he studied biophysics. His tenure at Dropbox, where he tackled billing system challenges, inspired the creation of Metronome. The discussion highlights Metronome's real-time billing data capabilities, which aim to improve business efficiency and customer experience. Scott also explores the broader implications of AI in fintech, emphasizing the shift towards usage-based business models and the importance of real-time data. Notable discussion points: Metronome emerged from firsthand frustrations at Dropbox, where Scott Woody experienced how rigid billing systems slowed growth, confused customers, and blocked real-time insights. He built Metronome as a flexible, real-time billing engine that merges usage data with pricing logic—powering the monetization infrastructure for top AI companies today. Real-time billing isn't just about invoices—it's a strategic revenue lever. For AI and SaaS businesses alike, Metronome enables teams to run dynamic experiments, optimize GPU allocation, and make last-minute decisions to hit quarterly targets—turning billing into a core growth engine. The rise of AI is accelerating a shift to usage-based models. As AI becomes specialized labor across verticals (from loan collection to customer service), companies are rapidly replatforming, and entire industries may flip from seat-based to outcome-based pricing within quarters—Metronome is positioned as the "payment processor" for this AI economy. MENTIONED IN THE CONVERSATION Topics: Metronome, Dropbox, Datadog, OpenAI, AI, AGI, machine learning, pricing models, financial services, business optimization, operational frameworks, analytics, financial modeling ABOUT THE FINTECH BLUEPRINT 
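As a rough illustration of the usage-based billing model discussed here, a billing engine merges raw usage events with pricing logic to produce an invoice. This is a hypothetical, stdlib-only sketch with made-up tier prices, not Metronome's actual engine:

```python
from collections import defaultdict

# Hypothetical tiered pricing: first 1,000 units at $0.10 each, the rest at $0.07.
TIERS = [(1000, 0.10), (float("inf"), 0.07)]

def invoice(events):
    """Aggregate raw usage events per customer and apply tiered pricing."""
    usage = defaultdict(int)
    for customer, units in events:
        usage[customer] += units

    bills = {}
    for customer, total in usage.items():
        remaining, cost, prev_cap = total, 0.0, 0
        for cap, price in TIERS:
            in_tier = min(remaining, cap - prev_cap)
            cost += in_tier * price
            remaining -= in_tier
            prev_cap = cap
            if remaining <= 0:
                break
        bills[customer] = round(cost, 2)
    return bills

# Two metered events for "acme" (1,500 units total) and one for "globex".
print(invoice([("acme", 600), ("acme", 900), ("globex", 200)]))
```

Because the raw events are kept rather than pre-summed, the same pipeline can answer "what if" pricing questions in real time, which is the strategic lever Scott describes.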

The MongoDB Podcast
EP. 257 Optimizing MongoDB: Deep Dive into Database Performance, Reliability, and Cost Efficiency with Observability Tools

The MongoDB Podcast

Play Episode Listen Later Feb 28, 2025 66:07


In this episode of MongoDB TV, join Shane McAllister along with MongoDB experts Sabina Friden and Frank Sun as they explore the powerful observability suite within MongoDB Atlas. Discover how these tools can help you optimize database performance, reduce costs, and ensure reliability for your applications. From customizable alerts and query insights to performance advisors and seamless integrations with enterprise tools like Datadog and Prometheus, this episode covers it all. Whether you're a developer, database administrator, or just getting started with MongoDB, learn how to leverage these observability tools to gain deep insights into your database operations and improve your application's efficiency. Tune in for a live demo showcasing how MongoDB's observability suite can transform your database management experience. Perfect for anyone looking to enhance their MongoDB skills and take their database performance to the next level.

FedScoop Radio
Achieving zero trust with full-scale observability | Datadog's Emilio Escobar

FedScoop Radio

Play Episode Listen Later Feb 18, 2025 18:12


Datadog CISO Emilio Escobar joins SNG host Wyatt Kash in a sponsored podcast discussion on how federal agencies must leverage comprehensive observability and monitoring to overcome escalating cyber threats. This segment was sponsored by Datadog.

Doppelgänger Tech Talk
OpenAI & Anthropic Roadmaps | Earnings von AppLovin Reddit TheTradeDesk Airbnb Adyen #432

Doppelgänger Tech Talk

Play Episode Listen Later Feb 14, 2025 71:17


While Apple will introduce a new family member next week, OpenAI is taking its time. Anthropic has looked at how we use AI and forecasts $34.5 billion in revenue in 2027. Arm is building chips for Meta. For Chris, we finally take a look at AppLovin's numbers. On top of that, there are earnings from Reddit, TheTradeDesk, Airbnb, and Adyen. Discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Apple (00:02:50) OpenAI (00:05:15) Anthropic (00:17:30) Arm (00:19:30) AppLovin (00:27:45) Reddit (00:37:20) Adyen (00:40:00) TheTradeDesk (00:43:10) Robinhood (00:50:00) Coinbase (00:50:40) Airbnb (00:58:50) Datadog (00:59:00) Boulevard Corner Shownotes OpenAI delays its o3 AI model in favor of a "unified" next-generation version TechCrunch Anthropic forecasts rapid growth to $34.5 billion in revenue in 2027 The Information Anthropic's "Index" tracks the AI economy Axios Arm secures Meta as first customer for ambitious new chip project Reuters Google Maps actually displays "(Gulf of America)" while Google ... lilyraynyc

Ransquawk Rundown, Daily Podcast
Europe Market Open: Geopolitics in the driving seat ahead of US data

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later Feb 13, 2025 2:35


APAC stocks traded somewhat mixed albeit with a mostly positive bias among the major indices following the two-way price action across global markets owing to hot US CPI data and geopolitical optimism. US President Trump posted on Truth that he had a lengthy and highly productive phone call with Russian President Putin, and they agreed to have their respective teams start negotiations immediately. Trump then said he spoke to Ukrainian President Zelensky and the conversation went very well. US President Trump did not sign the reciprocal tariffs order on Wednesday after stating that he may, while the White House schedule showed President Trump is to sign executive orders on Thursday at 13:00EST/18:00GMT. Fed Chair Powell offered a note of caution on the latest CPI reading and said the Fed targets PCE inflation, which is a better measure, and stated they will know what PCE readings are late on Thursday after the PPI data. European equity futures indicate a higher cash market open with Euro Stoxx 50 futures up by 1.1% after the cash market closed with gains of 0.3% on Wednesday. Looking ahead, highlights include German Final CPI, UK GDP Estimate and Services, Swiss CPI, US Jobless Claims, PPI, IEA OMR, Supply from Italy & US, Comments from ECB's Cipollone. Earnings from Datadog, Baxter, Deere, Duke Energy, GE Healthcare, PG&E, Coinbase, Draftkings, Applied Materials, Airbnb, Palo Alto, Roku, Wynn, Siemens, Delivery Hero, Commerzbank, Nestle, Orange, British American Tobacco, Unilever, Barclays & Moncler. Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

Ransquawk Rundown, Daily Podcast
US Market Open: Crude subdued with continued focus on geopolitics, USD lower into PPI & Executive Orders

Ransquawk Rundown, Daily Podcast

Play Episode Listen Later Feb 13, 2025 3:17


US President Trump did not sign the reciprocal tariffs order on Wednesday after stating that he may, while the White House schedule showed President Trump is to sign executive orders on Thursday at 13:00EST/18:00GMT. Stocks mostly firmer on constructive geopolitical updates; US futures are mixed ahead of PPI. USD softer as markets weigh a potential Ukraine peace deal and the lack of reciprocal tariffs (so far). Bonds attempt to recoup CPI-driven losses into PPI, though geopolitics is capping. Crude continues the Russia/Ukraine downside seen in the prior session; reports suggested Israel/Hamas had come to an understanding, but this was subsequently denied by Israeli PM Netanyahu's Office. Looking ahead, US Jobless Claims, PPI, Supply from the US. Earnings from Datadog, Baxter, Deere, Duke Energy, GE Healthcare, PG&E, Coinbase, Draftkings, Applied Materials, Airbnb, Palo Alto, Roku, Wynn. Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk

CFO Thought Leader
1070: From Finance Leader to Entrepreneur: A CFO's Journey to the CEO Office | Damon Fletcher, CEO, Caliper

CFO Thought Leader

Play Episode Listen Later Feb 9, 2025 42:24


Like many seasoned finance executives, Damon Fletcher saw Snowflake as a game-changer in cloud-based data management. While a senior finance executive at Tableau, he championed its adoption, recognizing its ability to scale analytics and streamline enterprise data operations. But he also discovered a challenge familiar to many finance leaders—the hidden costs that come with cloud consumption-based pricing. At Tableau, Fletcher tells us, the company's Snowflake costs grew exponentially, mirroring a broader trend in tech where companies struggle to control cloud spend. This realization led Fletcher beyond the CFO office. In 2023, he co-founded Caliper, a company dedicated to bringing greater cost transparency and AI-powered efficiency to cloud spending. Fletcher tells us that AI is central to Caliper's approach. The platform leverages machine learning forecasting to predict cloud usage trends and generative AI to surface actionable cost-saving recommendations. Unlike traditional cloud cost tools, Caliper provides deep insights across Snowflake, AWS, and Datadog, allowing finance and DevOps teams to pinpoint inefficiencies in real time.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPi packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project to a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.Logfire: bringing OTEL to AIOpenTelemetry recently merged Semantic Conventions for LLM workloads which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platform got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. 
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.Agents are the killer app for graphsPydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”Why Graphs?* More natural for complex or multi-step AI workflows.* Easy to visualize and debug with mermaid diagrams.* Potential for distributed runs, or “waiting days” between steps in certain flows.In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.Full Video EpisodeLike and subscribe!Chapters* 00:00:00 Introductions* 00:00:24 Origins of Pydantic* 00:05:28 Pydantic's AI moment * 00:08:05 Why build a new agents framework?* 00:10:17 Overview of Pydantic AI* 00:12:33 Becoming a believer in graphs* 00:24:02 God Model vs Compound AI Systems* 00:28:13 Why not build an LLM gateway?* 00:31:39 Programmatic testing vs live evals* 00:35:51 Using OpenTelemetry for AI traces* 00:43:19 Why they don't use Clickhouse* 00:48:34 Competing in the observability space* 00:50:41 Licensing decisions for Pydantic and LogFire* 00:51:48 Building Pydantic.run* 00:55:24 Marimo and the future of Jupyter notebooks* 
00:57:44 London's AI sceneShow Notes* Sam Colvin* Pydantic* Pydantic AI* Logfire* Pydantic.run* Zod* E2B* Arize* Langsmith* Marimo* Prefect* GLA (Google Generative Language API)* OpenTelemetry* Jason Liu* Sebastian Ramirez* Bogomil Balkansky* Hood Chatham* Jeremy Howard* Andrew LambTranscriptAlessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation. 
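A toy, stdlib-only sketch of the coercion behavior Samuel describes above: fields declared with type hints, compatible inputs coerced by default (the string "123" becomes the integer 123), and a strict mode that rejects instead. This is illustrative only; Pydantic's real implementation is far more involved and now lives in Rust:

```python
from dataclasses import dataclass
from typing import get_type_hints


def validate(cls, data, strict=False):
    """Coerce `data` to the types declared on `cls`, or reject in strict mode."""
    out = {}
    for field, typ in get_type_hints(cls).items():
        value = data[field]
        if isinstance(value, typ):
            out[field] = value
        elif strict:
            raise TypeError(f"{field}: expected {typ.__name__}, got {type(value).__name__}")
        else:
            out[field] = typ(value)  # lax mode: "123" -> 123
    return cls(**out)


@dataclass
class User:
    id: int
    name: str


print(validate(User, {"id": "123", "name": "Sam"}))  # the string id is coerced
# validate(User, {"id": "123", "name": "Sam"}, strict=True)  # would raise TypeError
```

Exactly when coercion should kick in is the hard part Samuel alludes to; this sketch punts on all of those edge cases.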
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structure output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structure output console in open source that people were talking about or was it just a random?Samuel [00:02:26]: No, very much not. So I originally. Didn't implement JSON schema inside Pydantic and then Sebastian, Sebastian Ramirez, FastAPI came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him or 50 like emails as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it will it kind of can be one source of source of truth for structured outputs and tools.Swyx [00:03:09]: Before we dive in further on the on the AI side of things, something I'm mildly curious about, obviously, there's Zod in JavaScript land. 
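The "one source of truth" idea mentioned above, type hints driving both validation and JSON Schema, can be sketched in a few lines. Again a toy for illustration, not Pydantic's actual `model_json_schema()`:

```python
from dataclasses import dataclass
from typing import get_type_hints

# Minimal mapping from Python types to JSON Schema type names (toy subset).
JSON_TYPES = {int: "integer", str: "string", float: "number", bool: "boolean"}


def json_schema(cls):
    """Derive a JSON Schema object from a class's type hints."""
    props = {name: {"type": JSON_TYPES[typ]} for name, typ in get_type_hints(cls).items()}
    return {
        "title": cls.__name__,
        "type": "object",
        "properties": props,
        "required": list(props),
    }


@dataclass
class User:
    id: int
    name: str


print(json_schema(User))
```

The same hints that drive validation produce the schema an LLM provider needs for structured outputs and tool definitions, which is why this pairing proved so convenient.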
Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner.Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLAMP, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release open-source-free Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model... The metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it didn't, it would have never have made financial sense in most companies. 
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. 
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
But then we have the problem that by default, they're not type safe because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking. Ironically using Pydantic to try and make up for the fact that like fundamentally that graphs are not typed type safe. Well, I like Pydantic, but it did, that's not a real solution to have to go and run the code to see if it's safe. There's a reason that starting type checking is so powerful. And so we kind of, from a lot of iteration eventually came up with a system of using normally data classes to define nodes where you return the next node you want to call and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is. Yeah. Inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have interact with gen AI, right? It's going to be like web. There's no longer be like a web department in a company is that there's just like all the developers are building for web building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent, a container that contains a system prompt function. Tools, structure, result, dependency type model, and then model settings. Are the graphs in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. 
That means our agent implementation, without changing its API at all, is now actually a graph under the hood; it is built using our graph library. So graphs are basically a lower-level tool that allows you to build these complex workflows. Our agents are technically one of the many graphs you could go and build, and we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows, where the current agent assumptions don't work, and that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for, and my "yeah, but you could do that in standard flow control in Python" became a less and less compelling argument to me, because I've maintained those systems that end up with spaghetti code. And I could see the appeal of this structured way of defining the workflow of my code. And it's really neat that just from your code, just from your type hints, you can get out a Mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it. I think the question always is... I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is a little bit of a nice compromise: it looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean.
And that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will soon add support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can distribute the graph and run it across computers, and also, I mean, the other bit that's really valuable, across time. Because it's all very well if you look at lots of the graph examples that, like, Claude will give you. If it gives you an example, it gives you this lovely enormous Mermaid chart of, for example, the workflow for managing returns if you're an e-commerce company. But what you realize is that some of those lines are literally one function calling another function, and some of those lines are "wait six days for the customer to print their piece of paper and put it in the post." And if you're writing your demo project or your proof of concept, that's fine, because you can just say, "and now we call this function." But when you're building in real life, that doesn't work. And now, how do we manage that concept, to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph, and it continues to run.
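The idea Samuel describes, nodes as dataclasses whose `run` method's return annotation defines the outgoing edges, and a run loop that just calls nodes until it gets an end, can be sketched in a few lines of plain Python. This is an illustrative toy, not Pydantic AI's actual graph API; the node names and logic are invented:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Union, get_type_hints, get_args, get_origin


class End:
    """Sentinel node: the run loop stops when a node returns this."""


@dataclass
class Validate:
    order_id: int

    def run(self) -> Union["Refund", End]:
        # Toy logic: even order ids get refunded, odd ones end immediately.
        return Refund(self.order_id) if self.order_id % 2 == 0 else End()


@dataclass
class Refund:
    order_id: int

    def run(self) -> End:
        return End()


def edges(node_cls) -> set[str]:
    """Build the graph's edges by introspecting the return type of `run`."""
    ret = get_type_hints(node_cls.run)["return"]
    targets = get_args(ret) if get_origin(ret) is Union else (ret,)
    return {t.__name__ for t in targets}


def run_graph(node):
    """Call a node, get a node back, call that node... done on End.

    Resuming six days later is just calling run_graph again with a
    stored node as the start point."""
    while not isinstance(node, End):
        node = node.run()
    return node
```

Because the edges come from type hints, a static type checker catches a node returning something the annotation doesn't allow, which is the type-safety property the add-edge-by-name style can't give you.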
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but can Pydantic AI actually resume, you know, six days later, like you said? Or is this just a theoretical thing we can do someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation, and it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; we're going to add that soon. But the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now, and obviously Instructor, you're also going into sort of the more orchestrated things, like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that, yet at least. We're just building a Python library. And what's crazy about our graph implementation is, sure, there's a bit of magic in introspecting the return type, you know, extracting things from unions, stuff like that. But the actual calls, as I say, are literally: call a function, get back a thing, and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days, saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job, to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.

Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think, just generally, also having been a workflow-engine investor and participant in this space, it's a big space. Everyone needs different functions. The one thing that I would say is that, as a library, you don't have that much control over the infrastructure. I do like the idea that each new agent, or whatever your unit of work is, should spin up inside its own sort of isolated boundaries, whereas in yours, I think, everything runs in the same process. But you ideally want to spin out its own little container of things.

Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? In theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring it right now, because I'm super excited about that as a compute layer for some of this stuff, where, exactly what you're saying, you can basically run everything as an individual worker function and distribute it. And it's resilient to failure, et cetera, et cetera.

Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be truly serverless at once.
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line.

Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome, but because I use it the whole time, I then get frustrated. So yeah, I'm sure I will get there soon.

Swyx [00:19:32]: There's a side tangent on Cloudflare: is Python fully supported? I actually wasn't aware of what the status of that thing is.

Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser via WebAssembly, is supported now by Cloudflare. They're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend and the primary maintainer of Pyodide, works for Cloudflare, and that's basically what he's doing: working out how to get Python running on Cloudflare's network.

Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. So maybe there's a way that you'd build... you have just a different build of Pydantic, and that ships with whatever your distro for Cloudflare Workers is.

Samuel [00:20:36]: Yes, that's exactly what... so Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. And you're doing exactly that, right? You're using Rust to compile to WebAssembly, and then you're calling that shared library from Python.
And it's unbelievably complicated, but it works. Okay.

Swyx [00:20:57]: Staying on graphs a little bit more, and then I want to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents: there are single agents, there's agent delegation, programmatic agent hand-off, which seems to be what OpenAI Swarm would be like, and then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?

Samuel [00:21:21]: Yeah, roughly. Okay.

Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well.

Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production-ready. So, yeah, which makes sense. Awesome. In fact, it was specifically asking how we could give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my "just call the next agent with Python code" was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that, and that led us to graphs. Yeah.

Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? The way that I would phrase this is, I think Anthropic did a very good public service, and a surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.

Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.

Swyx [00:22:26]: Tell me if you're not.
Yeah, I mean, that was the first, I think, authoritative view of what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just curious if you have community names, or top-five patterns of graphs.

Samuel [00:22:44]: I don't have top-five patterns of graphs. I would love to see what people are building with them, but it's only been a couple of weeks. And part of the point is that, because they're relatively unopinionated about what you can go and do with them, you can do lots and lots of things with them, but they don't have the structure to go and have specific names, as much as perhaps some other systems do. I think what our agents are, which have a name and I can't remember what it is, is basically this system of: decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit. That's one form of graph, which, as I say, our agents are effectively one implementation of, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just "yep, I built a graph", or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.

Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs, and then they have to learn a whole bunch of DSLs, and then they're like, actually, I didn't need that, I need this, and they scale back a little bit.

Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
But yeah.

Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of or care about that. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises.

Samuel [00:24:24]: I haven't. I probably should, because I should probably get better at selling to enterprises. But no, not right now.

Swyx [00:24:29]: The argument is really that, instead of putting everything in one model, you have more control, and maybe more observability, if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.

Samuel [00:24:47]: And it makes complete sense. One of the things we've seen with agents is that they work well when they work well. But even if you have the observability through Logfire, so that you can see what was going on, if you don't have a nice hook point to say "hang on, this has all gone wrong", you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. What you need to be able to do is effectively iterate through these runs so that you can have your own control flow, where you're like, okay, we've gone too far. And that's where one of the neat things about our graph implementation comes in: you can basically call next in a loop, rather than just running the full graph, and therefore you have this opportunity to break out of it. But yeah, basically it's the same point, which is that if you have too big a unit of work, to some extent whether or not it involves gen AI, but obviously it's particularly problematic in gen AI, you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.

Swyx [00:25:39]: Oh, drop on this.
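The "call next in a loop" pattern can be sketched without any framework: expose each node transition as a step, and let the caller own the loop so it can impose its own limits. All names here are illustrative, not Pydantic AI's actual API:

```python
# Sketch of "call next in a loop" instead of running the whole graph:
# the caller owns the loop, so it can break out on its own limits.

def iter_graph(node):
    """Yield each node before running it; stop when `run()` returns None."""
    while node is not None:
        yield node
        node = node.run()


class Hello:
    def run(self):
        return World()


class World:
    def run(self):
        return None  # end of the graph


def run_with_budget(start, max_steps=10):
    """Run the graph, but error out once a step budget is exceeded,
    rather than only discovering the runaway at the very end."""
    steps = []
    for i, node in enumerate(iter_graph(start)):
        if i >= max_steps:  # our own control flow, not the graph's
            raise RuntimeError("step budget exceeded")
        steps.append(type(node).__name__)
    return steps
```

The same shape lets you cap cost or wall-clock time instead of step count; the point is that the hook exists between every pair of nodes.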
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this, and then the machine learning researchers look at us and laugh and say, "that's cute", and then they just train a bigger model and wipe us out in the next training run. So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect it. For example, you could do chain of thought with graphs: you could manually orchestrate a nice little graph that does reflect, think about whether you need more inference-time compute, you know, that's the hot term now, then think again, and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.

Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it wasn't exponential. But my main point was: if models are getting better as quickly as you say they are, then we don't need agents, and we don't really need any of these abstraction layers. We can just give our model access to the Internet, cross our fingers, and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less you trust them, the more you need to give them a script to go through.
Whereas if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and send them off to go and have coffee with people, right? And the same is true of models: the more intelligent they are, the less we need to structure what they go and do and constrain the routes that they take.

Swyx [00:27:42]: Yeah, agree with that. So I'm happy to move on. The other parts of Pydantic AI are worth commenting on, and this is my last rant, I promise. Obviously, every framework needs to do its sort of model adapter layer, which is: oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw it in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.

Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear; that seems to be the API that some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.

Swyx [00:28:28]: I agree with that.

Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: we run, on every commit, at least every commit to main, tests against the live models. Not lots of tests, but a handful of them. Oh, okay. And we had a point last week where, yeah, GLA was failing every single run; one of its tests would fail. I think we might even have commented that one out at the moment. So all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.

Swyx [00:29:01]: My rant here is that versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put it to you, and you know, we can agree to disagree: this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or, what's the other one in JavaScript, Portkey, and that's their job. They focus on that one thing, and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So, for example, if I wanted to use DeepSeek, I'm out of luck, because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, a defined piece of infrastructure that people have?

Samuel [00:29:49]: I think if a company who were well known, who were respected by everyone, had come along and done this at the right time, maybe a year and a half ago, and said "we're going to be the universal AI layer", that would have been a credible thing to do. I've heard varying reports of how good LiteLLM is, is the truth, and it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is, I don't think it is that much work unifying the models. I get where you're coming from; I kind of see your point. I think the truth is that everyone is centralizing around OpenAI's API. OpenAI's API is the one to do. So DeepSeek support that, Groq support that, Ollama also does it.
I mean, if there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality, it's well type-checked, and it uses Pydantic, so I'm biased, but I think it's pretty well respected anyway.

Swyx [00:30:57]: There are different ways to do this, because also it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which, to one extent or another, effectively host multiple models, but they don't unify the API. They do unify the auth, as I understand it, although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There are a lot of APIs. I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of routing models and picking models is evals: how do you actually figure out which one you should be using? First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. You know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. You have this test model, where, just through Python, you try and figure out what the model might respond without actually calling the model, and then you have the function model, where people can kind of customize outputs. Any other fun stories from there, or is it just "what you see is what you get", so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get.
On evals, I think: watch this space. It's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals; it would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. It's something that we're going to be working on soon, both in Pydantic AI and in Logfire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. So let's talk about evals. There's kind of the vibe, yeah. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have Logfire, which is kind of your observability product, which I tried before; it's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And as people think about evals, what are the right things to measure? What is the right number of samples that you need to actually start making decisions?

Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200, for, you know, by definition, 15% of the work. But the exact "how many examples do you need?"...
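The back-of-the-envelope claim can be made concrete: for a mean-style metric, the standard error shrinks like 1/√n, so going from 30 to 200 examples only tightens the error bar by about 2.6×, despite costing over 6× the work. A quick sketch (the numbers are illustrative, not from the episode):

```python
import math


def se_ratio(n_small: int, n_large: int) -> float:
    """How much wider the error bar is with n_small vs n_large samples.

    The standard error of a mean scales like sigma / sqrt(n), so sigma
    cancels out of the ratio."""
    return math.sqrt(n_large / n_small)


def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a pass rate p over n eval cases."""
    return z * math.sqrt(p * (1 - p) / n)
```

With a 70% pass rate, 30 cases give roughly a ±16-point margin and 200 cases roughly ±6 points: better, but nowhere near 6.7× better, which is the intuition behind "30 gets you most of the value of 200."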
For example, that's a much harder question to answer, because it's deep within how models operate. In terms of Logfire, one of the reasons we built Logfire the way we have, where we allow you to write SQL directly against your data and we try to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it, we think that's valuable. Because even if we come along and offer you an evals framework on top of Logfire, it won't be right in all regards, and we want people to be able to go and innovate. Being able to write their own SQL, connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well: we do a bunch of testing of what's possible by basically writing SQL directly against Logfire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. It's a relatively new project, and a lot of it is still being added at the moment, but the basic idea is that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And so having that unification allows us to go and basically compare different libraries, compare different models, much better. That stuff is in a very early stage of development. One of the things we're going to be working on pretty soon is, I suspect, making Pydantic AI the first agent framework that implements those semantic attributes properly. Because, again, we have control, and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
With the exception of Langchain, where they have their own observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into AI workflows? There's kind of the question of what a trace is, and whether a span is an LLM call, or the agent, or the broader thing you're tracking. How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, which I think may now have been merged, from someone at IBM, talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's by any means the common use case, but I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM: for the actual LLM call, what data you would send to your telemetry provider and how you would structure it. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet implemented, is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is that you can, in the end, send whatever attributes you like. But yeah, there's quite a lot of churn in that space around exactly how we store the data. I think one of the most interesting things, though, is that if you think about observability traditionally, sure, everyone would say "our observability data is very important, we must keep it safe", but actually companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be a basically different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of sensitive data is going to be sent. And I think that's why companies like Langsmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud-hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context; you're just storing everything. And then you're going to offer a kind of self-hosted version of the platform, basically? Yeah.

Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see "password" as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting that we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance depends on a third party. You know, if you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that.
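A minimal sketch of the kind of key-based scrubbing Samuel describes (the key list and redaction marker are illustrative, not Logfire's actual implementation) makes the GenAI problem obvious: scrubbing matches on attribute keys, but a chat message is one opaque string, so nothing keyed ever matches the PII inside it.

```python
SENSITIVE_KEYS = {"password", "api_key", "card_number"}  # illustrative list


def scrub(attributes: dict) -> dict:
    """Redact values whose *key* looks sensitive; recurse into nested dicts."""
    out = {}
    for key, value in attributes.items():
        if key.lower() in SENSITIVE_KEYS:
            out[key] = "[REDACTED]"
        elif isinstance(value, dict):
            out[key] = scrub(value)
        else:
            out[key] = value
    return out
```

Given `{"password": "hunter2", "prompt": "my card number is 4111..."}`, the first field is redacted but the prompt passes through untouched: the sensitive data lives inside the free text, which is exactly why GenAI observability data is a different order of magnitude of sensitivity.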
Here you're going to have spans that maybe take a long time to complete because the GLA API is not working, or because OpenAI is overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. We send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing gen AI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts. Yeah.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build their own? Why does everybody want to build?
They want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTel, and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLLMetry: interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens to the agent frameworks, and what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But I mean, I guess this is kind of semi-public, because I was on the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not natively implemented. And obviously they're having quite a tough time. And I hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.

Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.

Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there are some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
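To make the semantic-attributes discussion concrete, here is a minimal sketch of what GenAI span attributes look like. The attribute names follow an early draft of the OTel GenAI semantic conventions and the values are invented, so treat this as illustrative rather than definitive:

```python
# Sketch of span attributes per an early draft of the OTel GenAI semantic
# conventions. The spec is still churning, so check the current conventions
# before relying on exact keys; values here are made up.
genai_attributes = {
    "gen_ai.system": "openai",            # which provider handled the call
    "gen_ai.request.model": "gpt-4o",     # hypothetical model name
    "gen_ai.request.top_p": 1.0,          # the one sampling knob in the draft
    "gen_ai.usage.prompt_tokens": 187,
    "gen_ai.usage.completion_tokens": 42,
}

# OTel also lets you send attributes the spec hasn't standardized yet,
# e.g. reasoning tokens, which the draft didn't cover:
genai_attributes["gen_ai.usage.reasoning_tokens"] = 12
```

Since OTel attributes are just key-value pairs, a backend that stores them as a JSON blob can accept new keys like this without a schema change.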
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out for instrumentation of. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.

Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.

Samuel [00:42:54]: I mean, what's neat about OTel is you can always go and send another attribute, and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer: this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.

Swyx [00:43:14]: Or you fall behind because you've got other things going on.

Samuel [00:43:17]: Yeah, yeah. That's fair.

Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was only familiar with LogFire because of your Series A announcement; I actually thought you were making a separate company. I remember some amount of confusion when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has two products: an open source thing and an observability thing, correct? Yeah. I was just kind of curious, any learnings building LogFire?
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?

Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized that Timescale wasn't right. ClickHouse and Timescale both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.

Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect, because, you know, Timescale is an extension on top of Postgres, not super meant for high volume logging. But yeah, tell us those decisions.

Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on, because apparently it does now. So they've obviously gone and built their proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything had to be a map, and maps are a pain for looking up JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make them top-level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. Like by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they changed it to raise an error and just say you have to have the same unit. Then I complained a bit more, and as I understand it now, they convert between units. But stuff like that, when a lot of what you're doing is comparing the duration of spans, was really painful. Also things like you can't subtract two datetimes to get an interval; you have to use the date_sub function. But the fundamental thing is, because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your developers are going to write the SQL; once it's written and it's working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with ClickHouse, and in fact Timescale, is that the ultimate architecture, the Snowflake-style architecture of binary data in object store queried with some kind of cache from nearby: they both have it, but it's closed source, and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, they would want to be taking their 80% margin, and that would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it.
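The interval edge case is easy to reproduce in miniature. This sketch contrasts unit-aware comparison with the unit-ignoring behaviour described above; broken_compare is a made-up stand-in for the old behaviour, not ClickHouse's actual code:

```python
from datetime import timedelta

# Python's timedelta normalizes units internally, so comparisons are correct:
assert timedelta(seconds=1) > timedelta(microseconds=2000)

# The old behaviour described above amounted to comparing raw magnitudes
# while ignoring units. This stand-in function is purely illustrative:
def broken_compare(a_value: int, a_unit: str, b_value: int, b_unit: str) -> bool:
    """Return True if interval a is 'longer' than b -- units ignored entirely."""
    return a_value > b_value

# 2 nanoseconds wrongly compares as "longer" than 1 second:
assert broken_compare(2, "nanoseconds", 1, "second")
```

Normalizing every duration to a single unit, or using a proper interval type, before comparing avoids this whole class of bug.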
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string contains. And it's just Rust code, and I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we could do, and something we needed. I was able to go and implement that in a weekend using the JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that would be much harder if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? But I think you obviously have an open source first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better startup, faster moving and just headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff.
But, you know, my last question on LogFire is really, you know, at some point you run out of community goodwill, just because, like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you going to really spike here? What's the differentiator here?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud first. The same is going to happen to GenAI. And so whether or not you're trying to compete with Datadog or with Arize and Langsmith, you've got to do first-class, general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done, if you search Datadog logging Python, and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where you can only use the open source, like, one-year-old version of it. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. So Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company: we've got to be up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not trying to walk a line of "it's open source, but really we want to make it hard to deploy so you probably want to pay us." We're trying to be straight that it's paid for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first one I saw, this new, I don't know if it's like a product you're building, is pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is again completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or like what the spend is. The other thing we wanted to b

CaSE: Conversations about Software Engineering
New Hosts and Formats, Observability Costs and Training

CaSE: Conversations about Software Engineering

Play Episode Listen Later Feb 3, 2025 81:42


The CaSE Podcast returns with new hosts and a renewed focus on software architecture, reliability engineering, and data engineering. In this episode we start with discussing the cost of observability, sparked by Coinbase's leaked $65 million Datadog bill, raising questions about how much organizations should spend on monitoring. We also discuss the most important content of observability training for software architects. We close with Alex's current thoughts on home automation while renovating his house.

Go To Market Grit
#225 CEO Lattice, Sarah Franklin: Trailblazer

Go To Market Grit

Play Episode Listen Later Jan 13, 2025 72:01


Guest: Sarah Franklin, CEO of Lattice

As the CEO of a growing company, Lattice's Sarah Franklin has learned that one of her most important contributions is taking a leap of faith. "You have to have the courage to be the first one to do it," she says, "and to show that it can be done, and to pave the way so that then your team feels trust."

Sarah cautions, though, that sometimes courage is deciding to stop and go a different direction. As agentic AI becomes more common, the people building companies like Lattice should look to the "cautionary tales" of how social media and mobile phones have changed society, she says.

"We can have the courage to say, what are the outcomes that we want to prevent? Or what are the outcomes that we want to make sure happen? This all takes courage, because it's all unknown."

Chapters:
(01:14) - Schooling in Mexico
(04:09) - Raising brave children
(10:28) - Sarah's upbringing
(13:29) - The pursuit of money
(16:23) - Measuring success
(19:28) - Learnings, not regrets
(22:55) - Make an impact
(26:44) - Pitching Trailhead
(32:56) - Elevating a B2B company
(35:27) - How to colonize Mars
(38:39) - Marketing, the Salesforce way
(44:21) - Dolphining and truth-tellers
(50:56) - Renewed purpose
(56:30) - The challenges of being CEO
(01:00:18) - Pave the way
(01:03:25) - "Humanizing AI"
(01:06:57) - Handling controversy
(01:11:04) - Who Lattice is hiring and what "grit" means to Sarah

Mentioned in this episode: FaceTime, Salesforce, Marc Benioff, Mahatma Gandhi, Instagram, the Fortune 500, Java, Jerry Maguire, National Parks, Nike, Michael Jordan, Apple and "Think Different," Sara Varni, Scott Holden, Andy Kofoid, Databricks, Datadog, Behind the Cloud, Oracle, Microsoft, Elon Musk, Amazon AWS, George Hu, Mike Rosenbaum, Cheryl Feldman, Zac Otero, Guidewire Software, AI agents, Indiana Jones and the Last Crusade, and LinkedIn.

Links:
Connect with Sarah: LinkedIn
Connect with Joubin: Twitter, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

AWS for Software Companies Podcast
Ep072: From Alerts to Action - How Datadog Manages Security Incidents with AI

AWS for Software Companies Podcast

Play Episode Listen Later Dec 30, 2024 23:44


Dr. Yanbing Li, Chief Product Officer at Datadog, outlines how the company has integrated AI and automation into its incident response framework, helping customers manage both traditional security challenges and emerging AI-specific risks.

Topics Include:
Introduced talk about incident response and CISO liability
Datadog founded 14 years ago for cloud-based development
Platform unifies observability and security for cloud applications
Current environment has too many fragmented security products
SEC requires material incident reporting within four days
Datadog's incident response automates Slack room creation
Response team includes Legal, Security, Engineering, and Product
System tracks non-material incidents to identify concerning patterns
Real-time telemetry data drives incident management automation
On-call capabilities manage escalation workflows
Datadog uses own products internally for incident response
Company focuses on reducing time to incident detection
AI brings new risks: hallucination, data leaks, design exploitation
Bits.ai launched as LLM-based incident management co-pilot
Tool synthesizes events and generates incident summaries
Bits.ai suggests code remediation and creates synthetic tests
Security built into AI products from initial design
Prompt injection prevented through structured validation approach
Sensitive data anonymized before LLM processing
Engineering and security teams collaborate closely on AI
LLM observability becoming critical for production deployments
Customers need monitoring for hallucinations and token usage
Datadog extends infrastructure monitoring into security naturally
Company maintains strong partnership with AWS
Q&A covered Bits.ai proactive capabilities and enterprise differentiation

Participants:
Yanbing Li – Chief Product Officer, Datadog

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/

Screaming in the Cloud
Replay - Hacking AWS in Good Faith with Nick Frichette

Screaming in the Cloud

Play Episode Listen Later Dec 26, 2024 32:32


On this Screaming in the Cloud Replay, we're taking you back to our chat with Nick Frichette. He's the maintainer of hackingthe.cloud, and holds security and solutions architect AWS certifications, and in his spare time, he conducts vulnerability research at Hacking the Cloud. Join Corey and Nick as they talk about the various kinds of cloud security researchers and touch upon offensive security, why Nick decided to create Hacking the Cloud, how AWS lets security researchers conduct penetration testing in good faith, some of the more interesting AWS exploits Nick has discovered, how it's fun to play keep-away with incident response, why you need to get legal approval before conducting penetration testing, and more.

Show Highlights
(0:00) Intro
(0:42) The Duckbill Group sponsor read
(1:15) What is a Cloud Security Researcher?
(3:49) Nick's work with Hacking the Cloud
(5:24) Building relationships with cloud providers
(7:34) Nick's security findings through cloud logs
(13:05) How Nick finds security flaws
(15:31) Reporting vulnerabilities to AWS and "bug bounty" programs
(18:41) The Duckbill Group sponsor read
(19:24) How to report vulnerabilities ethically
(21:52) Good disclosure programs vs. bad ones
(28:23) What's next for Nick
(31:27) Where you can find more from Nick

About Nick Frichette
Nick Frichette is a Staff Security Researcher at Datadog, specializing in offensive security within AWS environments. His focus is on discovering new attack vectors targeting AWS services, environments, and applications. From his research, Nick develops detection methods and preventive measures to secure these systems.
Nick's work often leads to the discovery of vulnerabilities within AWS itself, and he collaborates closely with Amazon to ensure they are remediated. Nick has also presented his research at major industry conferences, including Black Hat USA, DEF CON, fwd:cloudsec, and others.

Links
Hacking the Cloud: https://hackingthe.cloud/
Determine the account ID that owned an S3 bucket vulnerability: https://hackingthe.cloud/aws/enumeration/account_id_from_s3_bucket/
Twitter: https://twitter.com/frichette_n
Personal website: https://frichetten.com

Original Episode
https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/hacking-aws-in-good-faith-with-nick-frichette/

Sponsor
The Duckbill Group: duckbillgroup.com

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20Sales: What I Learned Scaling Datadog from $60M to $1BN in ARR | How to do Outbound in 2024 | Why Discounting is Dangerous and Contract Sizes are Misleading with Dan Fougere

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Nov 22, 2024 70:05


Dan Fougere is one of the most successful sales leaders of the last decade. Most recently, Dan was Chief Revenue Officer for Datadog, growing revenues from $60 million to $1BN ARR. Before Datadog, Dan was Head of Global Sales at Medallia, where he created the Medallia sales playbook. In addition, Dan is also a minority owner of the New York Yankees.

In Today's Episode with Dan Fougere:

1. Lessons Scaling Sales to $1BN in ARR at Datadog: What did Datadog do that Dan wishes they hadn't done? What did they not do that Dan wishes they had done? What does Dan know about scaling sales to $1BN in ARR that he wishes he had known at the beginning? What stage of the scaling process was hardest? Why?

2. How to Hire the Best Sales Team: What are the top signals of the best sales candidates? How does Dan structure the interview process for new candidates? How does Dan use tasks and take-home assignments to test candidates? What does Dan think of hiring panels? What are the biggest hiring mistakes Dan has made? What did he learn?

3. Discounting, Logos and Deal Reviews: Is discounting always wrong? How should sales leaders use it? How important is the quality of logo in the early days vs revenue in the door? What is the right way to structure deal reviews? What makes good vs great? Is outbound dead in 2024? Advice to founders on outbound?