Podcasts about Bedrock

Lithified rock under the regolith

  • 1,299 podcasts
  • 2,927 episodes
  • 53m avg duration
  • 5 weekly new episodes
  • Latest episode: May 13, 2025

Latest podcast episodes about Bedrock

The Tech Blog Writer Podcast
3276: How AWS is Building the Infrastructure for AI at Scale

May 13, 2025 · 22:46


What happens when access to advanced AI models is no longer the real differentiator, and the true advantage lies in how businesses leverage their own data? At the AWS Summit in London, I sat down with Rahul Pathak, Vice President of Data and AI Go-to-Market at AWS, to unpack this question and explore how organisations are moving beyond experimentation and into large-scale generative AI adoption. Recorded live on the show floor, this conversation explores how AWS is supporting customers at every layer of their AI journey. From custom silicon innovations like Trainium and Inferentia to scalable services like Bedrock, Q Developer, and SageMaker, AWS is giving businesses the infrastructure, tools, and flexibility to innovate with confidence. Rahul shared how leading organisations such as BT Group, SAP, and Lonely Planet are already applying these tools to reduce costs, speed up development cycles, and deliver tailored experiences that would have been unthinkable just a few years ago. A key theme that emerged in our discussion is that data, not just models, is the true foundation of effective AI. Rahul explained why unifying data across silos is critical and how AWS is helping companies create more intelligent applications by connecting what they uniquely know about their business to powerful AI capabilities. We also addressed the operational realities of AI deployment. From moving proof-of-concept projects into production to meeting the growing demand for responsible AI, the challenges are shifting. Organisations are now focused on trust, security, transparency, and measurable value. If you're leading digital transformation and wondering how to scale AI solutions that deliver on business outcomes, this episode provides practical insight from someone at the center of the industry. How will your business stand out in a world where every company has access to AI models, but only a few know how to apply them with purpose?

The Brewing Network Presents | Dr. Homebrew
Dr. Homebrew | Episode #274 We Talk Wine Vocabulary With Jake From Bedrock Wine Co.

May 13, 2025 · 99:04


Splitting the wine and beer worlds up was probably not the marketing plan craft beer should have gone with. So many flavors and descriptors cross those fermented boundaries that the two beverages are more alike than not. Sort of. On today's show, we bring on Jake from Bedrock Wine Co., a small winery in the heart of California's wine country. Jake manages the vineyards out there, and has a very solid knowledge on how grapes shape the final product, and how we can perhaps bring some of that vocabulary over to beer. Plus, he's very handsome. Learn more about your ad choices. Visit megaphone.fm/adchoices

Battle Drill Daily Devotional
Monday: Building Your Life on Bedrock Trust

May 12, 2025 · 3:20


What foundation are you building your life upon? This thought-provoking devotional examines the difference between intellectual belief and radical trust through the lens of the psalmist's bold declaration. #dailydevotional #dailydevotion #dailybible #bibleverse #biblestudy #battledrill #christian #christianity #ukchristian #christiantiktok #christiantok #SalvationArmy #Maidenhead #WildernessWisdom #SoulShelter #FearlessFaith #SacredSolitude #DivineTrust #SpiritualStrength #SurrenderControl #FaithJourney #SoulNourishment #DivineProtection #TemptationTruths #AncientWisdom #ModernApplication #SpiritualGrowth #ChristCentred Read more ... Click here to read today's devotional - https://www.dropbox.com/scl/fi/j54egg1xw09wzv7s19xvf/250511.pdf?rlkey=d9j8f7qbcyxav0utba8zsh7jq&dl=0  Click on the link - https://linktr.ee/battlefieldpodcasts - to listen, watch or subscribe to this podcast.

Wet Jeans
Moms Are The Bedrock of Society

May 11, 2025 · 41:51


Thank you to the moms out there. The good ones. I discuss that part of the show at the end of the show. Thank you for listening to my show. Support the show

UK Trance Society Podcast
Episode 226 (Mixed by Chinmayi) - The Drive Home

May 11, 2025 · 93:10


THE DRIVE HOME When playing out, we tend to build our sets up from the quieter to the more banging sounds. This time I've switched the order. This set would be good on the drive home from work or from the club. What goes up must come down so let's make the journey a pleasant one! 1 Café del Mar (Extended Mix) Deepcry 2 Dead Synthy (Extended Mix) Sasha, Marsh 3 Hide U (Extended Mix) Isidoros, Kosheen, Marten Lou 4 Chimera (Rospy Extended Remix) Kyau & Albert 5 Mermaids (Extended Mix) PARAFRAME 6 Alone (Extended Mix) Lycii 7 Lost Mind (DJ Version) Taglo 8 Must Be The Love (Simon Doty Extended Remix) BT, Nadia Ali, ARTY 9 MELODEMON (Extended Mix) MRPHLNDR 10 Love Made Me Do It (Guy J Remix) Moshic 11 Fraser River (Extended Mix) Jochem Hamerling 12 Heaven Scent (Marsh Remix) Nick Muir, Bedrock, John Digweed 13 Breathe (Extended Mix) PARAFRAME 14 Angel In The Dark (Extended Mix) Nathan Nicholson, Massano, Anyma (ofc) 15 The Wanderer (Extended Mix) Skyda 16 Saltwater feat. Moya Brennan (Ilan Bluestone Extended Remix) Chicane, Moya Brennan 17 Next To You (Extended Mix) Romain Garcia 18 Hanami (Extended Mix) Illumia 19 I Know (Original Mix) Illumia 20 Best Thing (Original Mix) Fagin, LAR, Marg Pappas Enjoy! Chinmayi

Balance Selections Podcast
Balance Selections 325: Moshic

May 9, 2025 · 98:58


By the early 2000s, when much of the electronic world was chasing trance highs and vocal hooks, Moshic Shlomi—known mononymously as Moshic—was charting a darker path. His take on progressive house was not just brooding; it was cinematic, steeped in tension, mood, and layered melancholy. Releasing early work as Argonout, Moshic made his mark with tracks on Cyber Records and appearances on compilations from Global Underground to In Search of Sunrise, showcasing a sound both atmospheric and emotionally rich. Now leading his own label, Contrast, he continues to explore the space between light and shadow, with recent productions finding a home on imprints like Bedrock and Bonzai Progressive. His Balance Selections mix delivers on the promise of the label's name: a carefully curated journey of contrasts that never loses its introspective core. Featuring tracks from N-TCHBL, Stereo Underground, and Moshic himself, the mix isn't afraid to drift into the melancholic edges of progressive house. @Moshic

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

In this episode, host Krish Palaniappan welcomes back Ramya Ganesh to discuss Amazon Bedrock and its applications in AI and cloud computing. Ramya shares her extensive experience with AWS, particularly in cybersecurity and AI, and explains the differences between Bedrock and SageMaker. The conversation delves into practical use cases, such as code generation and architectural diagrams, while also addressing the challenges and considerations when integrating Bedrock into existing applications. The episode concludes with insights on prototyping with AWS AI tools and the future of AI development. In this conversation, Krish Palaniappan and Ramya Ganesh delve into the intricacies of using AWS Bedrock for model selection and application development. They explore the open-source nature of certain applications, the importance of selecting the right model for specific problems, and the nuances of model configurations. The discussion also covers how to compare different models and the next steps for integrating these models into applications.

Cloud Unplugged
Big Retail Cyber Attack: Amazon's AI Offensive & the Google AI Opt‑Out Illusion

May 7, 2025 · 33:16


In this 30‑minute episode, Jon and Lewis unpick the coordinated ransomware wave that struck Britain's high‑street giants. They trace the attack chain that emptied Co‑op shelves, froze M&S online orders and attempted, but failed, to extort Harrods. Lewis takes a look at Amazon's latest generative‑AI arsenal: Amazon Q's new developer‑first agents, the multimodal Nova Premier family running on Bedrock, and AWS's landmark decision to let any SaaS vendor list in Marketplace regardless of where the software runs, a direct play to become the app store for the whole cloud economy. Finally, they ask whether enterprises can really keep their data out of Google's AI engines.
Hosts:
https://www.linkedin.com/in/jonathanshanks/
https://www.linkedin.com/in/lewismarshall/

Die2
Sandkasten Rocker - Die2 #301

May 4, 2025 · 83:30


Whether old or young, there's something for everyone today. Among other things, we talk about a robotic lawnmower with AI, e-scooter stunt riders, vandals on the tram, Better on Bedrock, disgusting mushrooms, The Last of Us season 2, Conquest Dark, being thrilled by Oblivion Remastered, Castle Craft, and Jack sunk deep into Minecraft. Send your questions or topics under the hashtag #die2onair
Links to this episode's topics:
► ARC Raiders https://store.steampowered.com/app/1808500/ARC_Raiders/
► Castle Craft https://store.steampowered.com/app/2086680/Castle_Craft/
► Better On Bedrock https://www.minecraft.net/de-de/marketplace/pdp?id=6c3a6979-dc77-41c6-b19e-0071dabedf71
► In the world's most secret house, built into a mountain (house tour) https://youtu.be/OMUfmlPs_w4
► I Played The #1 Minecraft Bedrock Mod https://youtu.be/9r_ZYTODZlw
► Gotye - Somebody That I Used To Know (ACAPELLA) https://youtu.be/I4tWI2NrpoY
Die2 on Twitter https://twitter.com/die2onair

Balance Selections Podcast
Balance Selections 324: Aubrey Fry

May 2, 2025 · 122:18


Aubrey Fry is a rising force in electronic music, blending progressive house, techno, and breaks with a raw, hypnotic sound. With releases on Last Night On Earth, Bedrock, and The Soundgarden, and support from legends like Sasha, Digweed, and Guy J, he recently added Balance Music to that esteemed list with the fantastic St John EP, a collaboration with Nick Stoynoff. Not to be outdone in the live arena, his high-energy DJ sets and growing global presence are cementing his reputation as one to watch. On this Balance Selections mix, the Welshman delivers a two-hour excursion that builds with the intensity of a pressure cooker. Featuring tracks from Gai Barone, Super Flu, Jody Barr, and more, it's an energetic journey that's not afraid to put the hammer down. @aubreyfry

Your Unity
Episode #530 with Contagious feat. Shane Cross

May 2, 2025 · 111:00


Your Unity #530 with Contagious feat. Shane Cross Recorded Live in Adelaide, Australia 02/05/2025 01. Dusky - Squeezer (Original Mix) [17 Steps] 02. Mark Knight - Your Love (Original Club Mix) [Toolroom Records] 03. Durante - Never B Alone (Extended Mix) [Anjunadeep] 04. Spencer Brown - Wannamaker [Anjunabeats] 05. Estiva, GAALIA - Feel Alive (Extended Mix) [Colorize (Enhanced)] 06. Shane Cross - Were You Wrong (Deep Mix) [White Label] Premium Pick 07. Romain Garcia - Alone (Extended Mix) [Anjunadeep] 08. Motorcycle - As The Rush Comes (Michael Tsukerman Into The Dark Mix) [White Label] 09. SHATO - Everlast (DJ Version) [Euphonic Visions] 10. Faithless, Zoë Johnston - Crazy English Summer (Maor Levi Remix) [White Label] 11. ARTY, Nadia Ali & BT - Must Be The Love (Simon Doty Extended Remix) [Armada Music] Spector Selector 12. Grum - Amnesia (Extended Mix) [Being] 13. Above & Beyond, Zoë Johnston - Carry Me Home (Extended Mix) [Anjunabeats] 14. Ernesto & Bastian - Thrill (Original Mix) [High Contrast Recordings] 15. Stephen Kirkwood, emse - Take My Heart (Extended Mix) [Anjunabeats] 16. Ridgewalkers feat. El vs. Dakota - Find Koolhaus (Shane Cross Mashup) [White Label] Prestigious Pick 17. Sia - Buttons (Markus Schulz Coldharbour Remix) [Ultra Records] 18. Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Extended Remix) [Bedrock Records] 19. Andy Moor, Lange, Kyau & Albert - Made of Stadium Four (Shane Cross Mashup) [White Label] 20. Karen Overton - Your Loving Arms (BLR Extended Remix) [A State of Trance] 21. Tenishia, Kirsty Hawkshaw - Reasons To Forgive (Original Mix) [Armind (Armada)]

Anchor Faith Church
Bedrock Beliefs - The Reigning Church

Apr 30, 2025 · 49:07


Stay Connected With Us
Website: anchorfaith.com
Anchor Faith Church Facebook: www.facebook.com/anchorfaith
Anchor Faith Church Instagram: www.instagram.com/anchorfaith
Pastor Earl Glisson Facebook: www.facebook.com/earlwglisson
Pastor Earl Glisson Instagram: www.instagram.com/earlglisson

NewChurchLIVE.tv: Pastor Chuck Blair

A Bedrock of Trust. We live in an age where trust feels elusive—slipping through our fingers just when we reach for it. How do we return to a deeper, steadier, bedrock trust in God, and rediscover the peace of being gently held in His care? From the service aired on 4/27/25. If you enjoyed this episode, be sure to subscribe and review our podcast wherever you get your podcasts. It is the #1 way to support this podcast, and it's free! Go to the main podcast page, scroll down, and at the bottom you'll find a place to rate the podcast and leave a review. Follow us on Facebook, Instagram, and YouTube @newchurchlive. Visit our website and make a donation to support our church community. Video of Service HERE

The Wine Makers on Radio Misfits
The Wine Makers & Bedrock Conversations – Introducing S.O.R.B.E.T.

Apr 22, 2025 · 61:48


What began with a few DMs dunking on misleading “cover crop” Instagram posts—or maybe even earlier, on Napa Valley school bus rides more than 25 years ago—has grown into something much bigger: a groundbreaking tasting event focused on the future of responsible winegrowing. In this first-ever crossover episode of The Wine Makers Podcast and Bedrock Conversations, Katie Bundschu, Morgan Twain-Peterson MW, and Sam Coturri join Chris Cotrell, Alli Badar, Brian Casey, and Bart Hansen to introduce S.O.R.B.E.T.—the Sonoma Organic Regenerative Biodynamic Education Tasting. This inaugural event celebrates Sonoma Valley's critical role in sustainable and ecological winegrowing—past, present, and future. The only two rules for participation: 1) Wines must be farmed organically, regeneratively, or biodynamically. 2) Wines must come from Sonoma Valley. The result? A dynamic mix of wineries—from garagistes to corporate producers, from emerging natural wine stars to established legacy brands—offering everything from edgy small-batch bottles to ultra-premium pours. The crew shares the origin of the event, what it means for the region, and why this kind of collective action is more important than ever. [EP 367] Save the Date: August 17th at Fort Mason, San Francisco. Get tickets here: Eventbrite – S.O.R.B.E.T. Follow along: @s.o.r.b.e.tasting @bedrockwines @abbotspassage @sixteen600 Winemakers interested in participating: Reach out for an application—we'd love to have you join.

Bedrock Wine Conversations
Announcing S.O.R.B.E.T. (Sonoma Organic Regenerative Biodynamic Educational Tasting) w/ Sam Coturri and Katie Bundschu

Apr 22, 2025 · 61:46


In the first-ever crossover episode between Bedrock Wine Conversations and The Wine Makers Podcast, Chris and Morgan, along with Katie Bundschu (Gundlach Bundschu/Abbot's Passage), join Sam Coturri and the Wine Makers crew to discuss the first S.O.R.B.E.T. Standing for Sonoma Organic Regenerative Biodynamic Educational Tasting, the August 17th event at Fort Mason in San Francisco will showcase wines from the 2,500+ acres of responsibly farmed vineyards within the Sonoma Valley appellation. At around 20% organic, Sonoma Valley has been a historic leader in pushing forward the conversation about progressive farming practices (California agriculture in total is 4-5%), dating back to the 60s and 70s. Morgan, Katie, and Sam, all multi-generational winegrowers (Katie is 6th!), explain the motivations for the tasting, chatting about everything from the importance of farming for the next generation to putting a spotlight on the highly historic appellation of Sonoma Valley, which is often confused with the greater Sonoma County. Vineyard participants will include Bedrock Vineyard, Montecillo Vineyard, and Rossi Ranch, while other wineries and wines will be featured from Bucklin, Donum, Hanzell, Hamel, Kamen, Guthrie Family Wines, Stewart Cellars, Repris, Fresc., Marioni, Korbin Kameron, Laurel Glen, Kivelstadt, Once&Future, Under the Wire and more.

The Wine Vault
Episode 470 - Bedrock Wine Co. Sonoma County Syrah

Apr 21, 2025 · 66:03


Bedrock Wine Co. In this episode, Rob and Scott review a beauty of a Syrah from Sonoma by Bedrock Wine Co. So come join us, on The Wine Vault.

Fred Nova - in the mix
Fred Nova - here and now

Apr 21, 2025 · 75:40


here and now - Fred Nova in the mix: deep, melancholic & melodic house 15 tracks and remixes from artists like Dusky, Ivory, Denis Horvat, Birds of Mind, Gumm, Ajna, Kirik, Guy J, Sasha, Artche,... Released on great labels like Anjunadeep, Exit Strategy, Vokabularium, Sum Over Histories, Lossless, Get Physical, Systematic, TAU, Magnifik, Memory Remains, Bedrock, Last Night On Earth and many more. Enjoy the moment!

Detroit Voice Brief
Detroit Free Press Voice Briefing Friday April 18, 2025

Apr 18, 2025 · 3:43


Montcalm County has Michigan's first measles outbreak of 2025
How GM and Bedrock would demolish 2 RenCen towers on Detroit's riverfront
Chris Brown leads powerhouse lineup as Tycoon Festival makes Detroit debut Saturday

Comunidad Cristiana Emanuel
Josiah Martinez | Bedrock Edition

Apr 17, 2025 · 51:10


Josiah Martinez | Bedrock Edition by Comunidad Cristiana Emanuel

AWS for Software Companies Podcast
Ep095: AI and Cybersecurity - How SentinelOne Is Changing the Game

Apr 16, 2025 · 15:20


SentinelOne's Ric Smith shares how Purple AI, built on Amazon Bedrock, helps security teams handle increasing threat volumes while facing budget constraints and talent shortages.
Topics include:
  • Introduction of Ric Smith, President of Product, Technology and Operations
  • SentinelOne overview: cybersecurity company focused on endpoint and data security
  • Customer range: small businesses to Fortune 10 companies
  • Products protect endpoints, cloud environments, and provide enterprise observability
  • Ric oversees 65% of company operations
  • Purple AI launched on AWS Bedrock
  • Purple AI helps security teams become more efficient and productive
  • Security teams face budget constraints and talent shortages
  • Purple AI helps teams manage increasing alert volumes
  • Top security challenge: increased malware variants through AI
  • AI enables more convincing spear-phishing attempts
  • Identity breaches through social engineering are increasing
  • Voice deepfakes used to bypass security protocols
  • Future threats: autonomous AI agents conducting orchestrated attacks
  • SentinelOne helps with productivity and advanced detection capabilities
  • SentinelOne primarily deployed on AWS infrastructure
  • Using SageMaker and Bedrock for AI capabilities
  • Best practice: find partners for AI training and deployment
  • Customer insight: Purple AI made teams more confident and creative
  • AI frees security teams from constant anxiety
  • SentinelOne's hyper-automation handles cascading remediation tasks
  • Multiple operational modes: fully automated or human-in-the-loop
  • Agent-to-agent interactions expected within 24 months
  • Common misconception: generative AI is infallible
  • AI helps with the "blank slate problem," providing starting frameworks
  • AI content still requires human personalization and review
  • AWS partnership provides cost efficiency and governance benefits
Participants:
  • Ric Smith – President – Product, Technology and Operations, SentinelOne
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/

Grace Audio Treasures
The bedrock of Christian hope!

Apr 14, 2025 · 4:06


Luke 24:5-6, "Why do you look for the living among the dead? He is not here; He has risen!" There is no greater proclamation in all the universe than this: Jesus Christ is risen from the dead! This singular truth distinguishes Christianity from every religion. Our Redeemer is not buried in a tomb, but He is reigning from His throne in Heaven. The One who was crucified in weakness now lives in resurrection power and eternal glory. The resurrection of Jesus is not a peripheral truth--it is the very bedrock of our hope. Without it, the cross would be meaningless, and our hope would be in vain. The resurrection . . . affirms His deity, authenticates His mission, and guarantees the salvation of His people. Consider the comfort this brings to the believer. In a fallen world where suffering, sorrow, and death are ever present--we cling to a living Savior. Our faith is not an intellectual adherence to a dry dogma, but a vital relationship with a living Savior. His victory over the grave is our assurance that . . . sin's penalty has been paid, God's wrath has been satisfied, and eternal life has been secured! Moreover, the same Spirit who raised Jesus now indwells every child of God, empowering him to . . . mortify sin, pursue holiness, and persevere in the Christian life. Is your heart weary today? Fix your eyes on the risen Christ. He is not distant or detached--He walks with His people, just as He walked with the two disciples on the road to Emmaus. He . . . opens the Scriptures, warms the heart, and strengthens the soul. He intercedes for us at the right hand of God, bearing our names upon His heart and pleading the merits of His blood,

Retail Daily Minute
Amazon Debuts Nova Sonic AI, Walmart Expands Drone Delivery with Zipline, and Sam's Club Accelerates Growth

Apr 10, 2025 · 5:12


Welcome to Omni Talk's Retail Daily Minute, sponsored by Mirakl. In today's Retail Daily Minute:
  • Amazon Launches Nova Sonic Voice AI – Amazon unveils Nova Sonic, a next-gen voice model built on Bedrock, offering faster, more natural interactions—80% cheaper than OpenAI's models and now powering Alexa Plus.
  • Zipline and Walmart Bring 30-Minute Drone Delivery to Texas – Walmart expands its drone delivery pilot with Zipline to the Dallas-Fort Worth area, offering customers in Mesquite ultra-fast service using precision tethered drones.
  • Sam's Club to Open 15 Stores a Year, Renovate All 600 – Sam's Club plans a major expansion and full remodel of its U.S. locations, focusing on digital-first design and e-commerce integration despite economic headwinds and tariffs.
The Retail Daily Minute has been rocketing up the Feedspot charts, so stay informed with Omni Talk's Retail Daily Minute, your source for the latest and most important retail insights. Be careful out there!

Bedrock Wine Conversations
060 - 2023 Bedrock Detert Vineyard Release & an Interview with Tom Garrett, Winemaker & Owner of Detert Family Wines

Apr 4, 2025 · 153:39


In this two-part episode, Morgan and Chris talk about the upcoming special release of Bedrock's first vintage of Detert Vineyard Cabernet Franc on Tuesday, 4/8. They discuss the legendary, historically important site, the gravitas of working with the fruit, and how the wine came together. In part two, Morgan and Chris interview vineyard owner Tom Garrett, discussing his family's long history in Napa, his journey into wine—including founding his own wineries—the origins of the vineyard, and what makes Detert Vineyard the most historic and greatest site for Cabernet Franc in California and one of the finest in the world.

Your Unity
Episode #526 with Contagious

Apr 4, 2025 · 109:18


Your Unity #526 with Contagious Recorded Live in Adelaide, Australia 04/04/2025 01. Otherwish - Give Me Love (Extended Mix) [This Never Happened] 02. Pete Tong, MoBlack, Max Zotti & Monolink - Apocalypse (Extended Mix) [MoBlack Records] 03. Maty Owl - Tarot (Extended Mix) [Anjunadeep Explorations] 04. Killen. - Mojo (Extended Mix) [Anjunadeep] 05. Corren Cavini, Chris Howard - Tell Me (Extended Mix) [Purified Records] 06. Tinlicker, Hero Baldwin - I Started A Fire (Extended Version) [[PIAS] ÉLECTRONIQUE] Spector Selector 07. Thysma - Save Me (Extended Mix) [Zerothree] 08. Kx5, HAYLA - Escape (Sparrow & Barbossa Remix) [mau5trap] 09. Rezident, Von Boch & Elissa Mielke - Hold On (Extended Mix) [Anjunadeep] 10. Elysian vs Kryder - Now We Are Free (Extended Mix) [Kryteria] 11. Cosmic Gate - ID 1 (Extended Mix) [Wake Your Mind Records] 12. Kasablanca - Dawn (CVMRN Extended Mix) [Anjunabeats] 13. ANUQRAM - Safari (Extended Mix) [Anjunabeats] Premium Pick 14. Luttrell, Molly Moonwater - Something Right (Super Flu Extended Mix) [Anjunadeep] 15. Joris Voorn, Yotto feat. White Lies - Seventeen (Extended Mix) [Spectrum] 16. Estiva - Shores (Extended Mix) [Colorize (Enhanced)] 17. Ezequiel Arias - Perfect Dream (Extended Mix) [Anjunadeep] 18. Fatima Yamaha - What's A Girl To Do (Capulet Re-Edit) [White Label] 19. John Monkman, Tailor - Place To Be (Extended Mix) [Anjunadeep] Prestigious Pick 20. Tom Staar, Ansolo - Totem (Original Mix) [SIZE Records] 2014 21. Fisher - Stay (Extended Mix) [Catch & Release] 22. Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Extended Remix) [Bedrock Records] 23. Zankee Gulati - Mind Opener (Marsh Extended Remix) [Meanwhile] 24. Luttrell - Space (Dusky Extended Mix) [Anjunadeep] 25. ilan Bluestone, Jasper Blunk - 40 (Extended Mix) [Anjunabeats]

The Party Life (Radio Show)
Episode 615: EPISODE 615 (04-04-2025) ft Cassian (Aus)

Apr 3, 2025 · 119:37


Tracklist: www.thepartylife.com.au/615
The Party Life: www.linktr.ee/thepartylife
Dj Fuel: www.djfuel.com.au
Cassian: https://www.instagram.com/cassian
--------------
DJ Fuel Mix 1
Above & Beyond, Zoe Johnston - Quicksand (Don't Go) [Anjunabeats]
Mhammed El Alami & HKL - Sunrise [Hypersia Records]
The Superjesus X The Journey - Something Good
Chusap, RA & Twoface feat. Bently - No Extras [Hussle]
GABBEH, Vivace - Promises [LOWBR]
Skytech & Fafaq - Ladadi [Armada]
Solstay & VMS - Enter [Vicious Black]
CID - Pass Out [Night Service Only]
Mason - Beat Of The Drum [Club Sweat]
Cedric Gervais - BAD GIRL [Delecta Records]
Matt Sassari, BLR - Tambur [SASS]
MaRLo & Mila Josef - Time Can Heal [Reaching Altitude]
The Night Slug - Daddy's Home [TMRW Music]
Lady Gaga - Poker Face (ELAC EDIT) [Free DL]
Tune Of The Week - DJ Fuel's Pick Of The Week
Phillip Castle - Feel It [Interplay]
House Cut Of The Week - It's All About House Music
Bust-R, Ben Renna, Felixx - Midnight [Planet Lush]
Vibe Radar - is it Left-Field or Down-Tempo, Pop, maybe Breakbeat or D'n'B, whatever it is, it's been picked by the Vibe Radar for being a cool record.
DJ Fuel ft JESSCA - Taste [Pumping Records]
Fuel Flashback / Fast Forward
Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Remix) [Bedrock Records]
Guest Mix - Cassian
Cassian & SCRIPT & BELLADONNA - Where I'm From
Mila Journée & Olivier Giacomotto - Smash It
Adam Port & Stryv ft. Malachii - Move (Anyma & Cassian Remix)
Cassian - S.O.S
Cassian X ICEHOUSE - Great Southern Land
Da Hool - Meet Her At The Love Parade (Yotto & Cassian Remix)
Dimitri Vangelis & Wyman X Steve Angello - Payback (Kevin de Vries & Cassian Remix)
Anyma & Cassian ft. Poppy Baskcomb - Save Me (Sphere Version)
Anyma - Pictures Of You (Cassian Remix)
Jimi Jules vs. Joshlane - My City's On Fire vs. System Overload (Cassian Mashup)
Kevin de Vries - Dance With Me (Acappella)
Anyma & Argy & Son Of Son - Voices In My Head
Cassian - Dun Dun
RÜFÜS DU SOL - On My Knees (Cassian Remix)
SCRIPT - Can You Hear Me?
Gotye - Somebody That I Used To Know (Acappella)
John Summit & HAYLA - Shiver (Cassian Remix)
https://www.thepartylife.com.au
https://www.djfuel.com.au
TPL Podcast Social Media Links: https://linktr.ee/thepartylife
DJ Fuel Social Media Links: https://linktr.ee/djfuel
Support the artists featured on the show by following our playlist and playing their music.
Spotify Playlist: http://bit.ly/TPLplaylist
A Music Podcast with the latest music in 2025 & the best in Dance Music, Progressive House, Melodic House & Techno, EDM, Trance Music, House Music, Afro-House, Tech-House, Drum & Bass + everything in between.

Tech Driven Business
Inside Insights: The Evolving Landscape of SAP and Agentic AI with Geoff Scott

Apr 1, 2025 · 25:18


In this latest episode, Geoff Scott of ASUG rejoins Mustansir Saifuddin to discuss the rapidly evolving landscape of AI within the SAP ecosystem, specifically focusing on the impact of partnerships like SAP and Microsoft's collaboration on Copilot and Joule. Listen in as we explore how these advancements will shape enterprise operations in 2025 and beyond, and why you can't afford to ignore this technological shift. Geoff Scott, CEO and Chief Community Officer of ASUG, believes that the connections ASUG makes for our members have the potential to become career-defining relationships that inspire innovation and success for their organizations. His forward-thinking leadership prioritizes helping our members make the most of their investment in SAP technologies. To that end, Geoff works closely with customers, members, the SAP Executive Board, and the extensive partner ecosystem to amplify the voice of the SAP customer. Past positions include CIO for TOMS Shoes, where he led the implementation of SAP; CIO at JBS; and senior leadership positions at Ford Motor Company. Before becoming CEO, Geoff was an ASUG member and served on the board. Geoff has served on several philanthropic boards and is the founding member of the Denver CIO Executive Council. Connect with Us: LinkedIn: Mustansir Saifuddin, Innovative Solution Partners. X: @gscott16, @Mmsaifuddin. YouTube, or learn more about our sponsor Innovative Solution Partners to schedule a free consultation. Episode Transcript: [00:00:00] Mustansir Saifuddin: Welcome to Tech Driven Business, brought to you by Innovative Solution Partners. I'm honored to have Geoff Scott, CEO of ASUG, rejoin me to discuss the rapidly evolving landscape of AI within the SAP ecosystem, specifically focusing on the impact of partnerships like SAP and Microsoft's collaboration on Copilot and Joule.
[00:00:26] Mustansir Saifuddin: Listen in as we explore how these advancements will shape enterprise operations in 2025 and beyond, and why you can't afford to ignore this technological shift. [00:00:39] Mustansir Saifuddin: Thanks for coming back on our podcast. Geoff, it was really nice to have you back. You remember, you know you came on last year and we dove into this whole [00:00:48] Geoff Scott: Oh. [00:00:49] Mustansir Saifuddin: gen AI topic. Everybody remembers that, you know, it was a very hot topic last year and, you know, everybody was going in that direction. Now, fast forwarding everything to this year and say, Hey, what is going on? And this year, SAP has had some major announcements, as we all know about the partnerships that we leverage the power of AI within the SAP ecosystem. And what I see with the majority of SAP clients using Microsoft in the enterprises. There is a lot of opportunity in SAP and Microsoft , you know, the whole partnership, especially around copilot and SAP Joule. I believe it'll make a big impact. [00:01:30] Geoff Scott: I'm surprised you have me back. I was very nervous. It's a year later. I was like, okay, this is never gonna happen again. I, I disinvited myself from future podcasts. [00:01:39] Mustansir Saifuddin: Well, I have you back [00:01:42] Mustansir Saifuddin: on. [00:01:43] Mustansir Saifuddin: and I am telling you that it is more exciting than what we were talking about last year, and I think this is what I want to get some thoughts on, Hey, what's going on? What's your take on how these partnerships are coming together and what are we going to see in 2025? [00:02:01] Geoff Scott: Well, the good news is that what we see in 2025 is no apparent slowdown in any of this technology. 
You know, but what's interesting is we, in the SAP space, [00:02:13] Geoff Scott: are not necessarily meeting that challenge head on, and we probably are not moving as quickly as we should to capture the amount of opportunity that's out there. I, I think AI is real. It's gonna continue to evolve at a furious pace, and that necessitates that we as technology practitioners determine how we best leverage that technology. [00:02:36] Geoff Scott: You, you talked about Microsoft Copilot, Joule, right? I mean AWS Bedrock, Google Gemini, you know, now we've got other LLMs popping out all over the place. Right? DeepSeek, which just popped up very quickly. So there's just a tremendous amount of movement here and it's really hard [00:02:57] Geoff Scott: to stay abreast of it. And I think the opportunity to jump in and start leveraging this is mission critical and what I think it really necessitates, and you talked about some announcements from SAP that I think double or triple down on this notion that AI is here, so if you really want to take your SAP data and make it AI enabled using Joule or using any other series of tools, [00:03:24] Geoff Scott: it's gonna necessitate that we as technology practitioners start to do some fairly radical things with our data. Number one is we start to de-customize everywhere we can and move the responsibility for code back to SAP so that they are responsible for figuring out how to make the AI work, not us. [00:03:42] Geoff Scott: So, how do we over time de-customize and how do we over time think about the necessity of adopting SaaS based solutions such as SAP's Public Cloud? 
Many of our of our community members are implementing private cloud right [00:04:00] now through Rise which is great, but ultimately if we recreate all those customizations downstream, then we have to figure out how to make them AI enabled, and I think that's where we're gonna find ourselves under continuing amounts of stress as the business innovates faster and faster. [00:04:17] Geoff Scott: We typically in the SAP ecosystem, think about our innovations on a stair step model. And what I mean by that is we do an upgrade, we sit on that upgrade for a couple of years, as long as we possibly can. You know, and then we do an upgrade again. And the challenge I think that's gonna present is that there's so much innovation happening and, all these things are moving at such a speed that if we're not continuously innovating, [00:04:39] Geoff Scott: we are gonna find ourselves further and further behind. I, I'd like to see our SAP data be the sole source of truth inside our enterprises and an innovation gold mine. [00:04:49] Geoff Scott: And to do that, I think we have to de-customize. I think we have to be able to, innovate faster. I think we have to be able to look at this data, do a lot more work around archiving and getting the old stuff, swept up and moved out. Master data is gonna become a major, major opportunity for all of us. [00:05:05] Geoff Scott: And if we do all those things really, really well. We will have a fighting chance at making our enterprises very savvy. And on top of the latest trends versus trying to perpetually catch up. [00:05:16] Mustansir Saifuddin: It's a race, the way I look at it, and I think , you summed it up very well, and I think that leads me to my question into this whole topic of collaboration. Let's take that right now. What would you tell your SAP users about the power of Microsoft and SAP's collaboration? [00:05:33] Mustansir Saifuddin: How will it positively impact their day-to-day operations? Let's start with that. 
[00:05:38] Geoff Scott: Well, I, I think you set this up really well. We, we know from an ASUG research perspective that most SAP customers are also Microsoft customers. And that partnership has gone back almost as long as SAP and Microsoft have been in business. You know, there's some pictures I've seen of Bill Gates and Hasso Plattner, the two founders of both organizations working together early on. [00:06:04] Geoff Scott: So this is a partnership that goes back a long, long time and it's a tremendously powerful partnership. And it indicates to me that these are organizations that work very well together, very closely together and collaborate. I mean, almost everyone I know who works in SAP also uses Excel spreadsheets, also uses PowerPoint slides, [00:06:23] Geoff Scott: also creates Word documents. I do these almost every single day. It makes perfect sense to me that a tool such as Microsoft Copilot and SAP's Joule would be working in harmony together. And I think we're seeing some interesting innovation from both organizations where they're able to demonstrate that. [00:06:39] Geoff Scott: I saw some really cool, rag based technologies, a few weeks ago where a copilot can reach out and grab some data from SAP and bring it back. Likewise Joule is being able to show some similar capabilities. For most customers, as much as we'd like to have one AI tool, I just don't think that that's going to be the way this works. [00:06:58] Geoff Scott: I think we're gonna have multiple, which, which makes the enterprise architect's role that much more challenging because they're gonna have to figure out how to integrate these tools, when these tools are best used, how they're used, and how do we as as organizations, get value from them. [00:07:15] Mustansir Saifuddin: Absolutely. And if you take this a step further, right? The hype around Agentic AI, everybody's talking about agents. What are you seeing in the marketplace? What, what is your take? 
[00:07:25] Mustansir Saifuddin: How are SAP users benefitting from Agentic AI within their organizations? [00:07:31] Geoff Scott: As it relates specifically to the SAP ecosystem, my. My perception, maybe right or wrong, probably more wrong than right, is that many of them are investigating and researching. I haven't necessarily seen any specific in production, customer running, agentic AI using SAP dot dot yet. Is it coming? [00:08:00] I think it's coming. [00:08:01] Geoff Scott: Has everyone figured this out yet? No certainly SAP's talking about it. I saw some presentations from the AI team at SAP led by Philip Herzig where they're demonstrating a lot of this. And I think it's gonna be very interesting to watch how agentic, you know, agent-based AI starts to manage tasks. [00:08:19] Geoff Scott: And I'm very keen to see how this works. [00:08:24] Mustansir Saifuddin: It's still very early on in, in this space where a lot of SAP customers are thinking about using it. But [00:08:32] Mustansir Saifuddin: how [00:08:32] Mustansir Saifuddin: do we really find a use case that is really beneficial to the organization at least from a investment standpoint, the time standpoint , and the value add you get as a, as a result of this application basically. [00:08:47] Geoff Scott: And I think the, the potential challenge with agentic AI is it also has to be reasonable from a, you know, a what is this agent, what is this agent's tasks? One of the things that we all know about the SAP ecosystem is we exist here because our businesses are complicated. Someone used to say to me, if, if you didn't need to run SAP, you wouldn't. [00:09:11] Geoff Scott: Right. So you know, most of the organizations that run SAP are of a, a sufficient size and scale and complexity, whether that be that they're multiple businesses running, they have international components, the business makes a complicated product that has a lot of configuration to it, right? 
There's reasons why these organizations are running SAP. [00:09:32] Geoff Scott: So that kind of then begets the next point, which is, an agent based AI. It's going to have to be fairly complicated in order to handle all of those different, particulars of a business. So I, I think it's gonna be interesting to watch how organizations slice that down to make it so that they can demonstrate some success early days without making the agents so complicated that they basically can't function. [00:09:58] Geoff Scott: You know, even some of this agent AI we talk about that seems like really simple. Like, Hey, I want to go out to eat at a restaurant tonight. Have agentic AI make a reservation. When you break that down. How does it do that? what type of food do you want? [00:10:13] Geoff Scott: I don't know. Maybe Italian, maybe French, maybe American. What about what time do you want to eat? How far away do you want to go? And so much of that is, is left to our brains to just on a whim, we make these decisions. How do you have that conversation with AG Agentic AI, right? Where it says, Hey, you know, here's a reservation at Italian restaurant at six 30. [00:10:32] Geoff Scott: Nah, well, 6 45, nah. Well, what do you want? Not Italian. Well, what do you want instead? I don't know French. No. You want a burger? Nah, I don't feel like a burger tonight. I mean, oh my God. I mean, it's exhausting. [00:10:47] Mustansir Saifuddin: Let's take a step up, right? Let's, let's talk about from SAP customers, you know. Everybody's getting on this [00:10:55] Mustansir Saifuddin: What word of advice would you have for SAP customers when they get further into the journey with AI? Like, what are the things that they should be looking at? [00:11:03] Geoff Scott: First and foremost, take the time to experiment, right? I mean, if you're not using these AI tools every day start. 
And this has taken me a little bit of time to warm up to, I'm finding now that, I have enough, road underneath my tires that it's hard for me to do new things, [00:11:22] Geoff Scott: 'cause I'm fairly, you know, set in my ways. But if I don't, use these tools to do things, I'm just not, I'm not learning. And so I. As an example, I'm recording a podcast tomorrow with a couple of fellow ASUG board members, and last night I needed to get them some prep materials. [00:11:40] Geoff Scott: I uploaded three or four documents into Claude and I said, please look at these three documents and I need to brief the podcast participants on what they say. And it looked at all three documents and it coughed up a pretty darn good summary. [00:11:55] Geoff Scott: Perfect? No. Pretty good. Yes. Was it [00:12:00] easier that I didn't, I didn't have to go and look at each document and figure out what to say. I could take a look at its summarization and determine if that was something that I wanted, that I thought was accurate and something that I thought we could share. And the answer was it was pretty good. [00:12:15] Geoff Scott: That was a great experiment. And then I said, okay, now create the podcast questions. And it did it. Now, are we using all of them? No. Did it give me at least a starting point? A hundred percent. And by the way, for the people out there was like, oh my God. He put that into, he put that into Claude. Oh my God. What about the security things? We own a subscription to Claude. So it was in a subscription. It was, it was in our protected space. It was public information. So, you know, but you gotta think about those things, right? [00:12:42] Mustansir Saifuddin: . [00:12:42] Mustansir Saifuddin: Absolutely. And I think the one thing that you hit upon is time to value, right? When you look at these tools, these technology aspects of how it can make things faster, better . 
But it brings up another point, like when, when you look at these, these use cases, everything is about data. What you feeding into the model. [00:13:07] Mustansir Saifuddin: So, you know, from a data perspective, I know a lot of customers doesn't matter, SAP or other technologies, and especially in SAP you know, either struggle with clean governed data and kind of makes it very difficult. So what, what's your take on that in that space? You know, especially when they are ready to go to the AI [00:13:32] Mustansir Saifuddin: journey, but they have some work to do. [00:13:34] Geoff Scott: I think there's a tremendous amount of work to do on this, and this kind of comes back to a part of our earlier dialogue that I think that data has to be right. Right. If, if we're gonna succeed in this future AI enabled world, the data that is being accessed, from your SAP systems, whether through some sort of rag or wherever you're doing, it has to be accurate. [00:13:57] Geoff Scott: So the archiving perspective of this has to be right. And you know also what has to be right is your ability to get master data correct. So if you have the same customer in your SAP system, this is an easy example, five times. Well, you now have increased by factors, the likelihood that the answer that pops back is wrong. [00:14:18] Geoff Scott: So, you know, we've been talking about this for a long time, that your SAP data has to be accurate, has to be right, and SAP data is very accurate at the time that it was entered. I think this is one of the brilliant things about SAP. And where we as SAP, you know, professionals spend so much time is getting the data into the system correctly from the get go. [00:14:41] Geoff Scott: The problem is it doesn't age so well, right? It's not like a fine wine. It can sometimes get a little stale and old and if we're not also getting it broomed out. The challenge we run into is it could be part of a , hallucination that we're not aware of. 
And if all of a sudden people are looking at this data and making broader based decisions on it, and the decision process was flawed and the data's flawed, we could be making a lot of really bad decisions. [00:15:12] Mustansir Saifuddin: Yeah, absolutely. Data and analytics is very near and dear to me. So I, I know that whole conversation about getting the data clean, having that value around data, right. Which drives a lot of those results out of the tools that we [00:15:28] Mustansir Saifuddin: want to apply. Especially. [00:15:30] Geoff Scott: It's all gonna come down to data at the end of the day, right? The data wins and the accuracy of the data wins. And the more that we're gonna use these tools to summarize and roll up, the higher the risk that that summary is inaccurate because the data underneath it isn't right. [00:15:49] Geoff Scott: We had this conversation in an ASUG executive exchange forum last week. And I think most people are starting to recognize that, if you have been [00:16:00] deferring your archiving routines, now might be a good time to get some of that back under control. [00:16:07] Mustansir Saifuddin: Yeah, [00:16:08] Geoff Scott: Most of the models right now, the LLMs, right, are based on data that, you know, books fueling, [00:16:15] Geoff Scott: research reports fueling these LLMs, that data has been around for a long time and has stood the test of time. Most of our SAP data, you know, has to be thought of through a very specific lens. But I, I think it's critical, a hundred percent critical. [00:16:33] Mustansir Saifuddin: Yeah. So let, so let's take it down a, a notch, right? From an ASUG perspective, how have you seen ASUG members approaching real-time data analytics moving to the cloud? I know ASUG does a lot of research on this. What have you seen? What, what do you see in this year? 
[00:16:49] Geoff Scott: So I think, you know, almost everyone is having cloud conversations, which is the beginning of this, because I don't think you can innovate at scale if you're not thinking about moving into the cloud. You know, the other thing is that most of these solutions, if you think about the innovation curve, most solutions are gonna appear first in the latest editions of your software. [00:17:08] Geoff Scott: So if you can't start innovating at a faster and faster cycle, move out of the stair step you and I discussed earlier, moving to a constant innovation framework, you're gonna find yourself further and further behind because if you want to take advantage of innovation at scale at the time it's released or near to the time it's released you need to be on the latest versions of software. [00:17:27] Geoff Scott: The hard reality of most of our ecosystem is we are not. And if we are not, that's where this stuff is gonna appear first. Will it make it down to other versions of the software? Yes. Is it gonna be on SAP's first order priority to do that? No. They're gonna want to make sure they get it out [00:17:44] Geoff Scott: to market fast and they're gonna look at their latest versions of the software to do that, where they're the most comfortable. You know, there's this question, why can't I run AI in my on-prem data center? Well, you could, but you're gonna have to do all of that lift by yourself. And that becomes a very costly exercise that unless you're the bigs of the bigs, is probably outside of your budget to do that. [00:18:08] Geoff Scott: So if you want to do this with some degree of economy, you have to be in the cloud, you have to de-customize. You have to think about your SAP implementation as a SaaS service, push accountability and responsibility for code and business process back to SAP, right? I mean, I, I think that, you know, what has AI told me, loud and clear at a volume level of 11? 
[00:18:30] Geoff Scott: We as SAP customers now more than ever, need to stop customizing and moving responsibility for code back to SAP, 'cause if we don't, we are never gonna be able to keep up. .In, in addition to that, that many of us over these years have outsourced our application maintenance services. We rely on consultants to do most of the work we need done, right, so we're not even in control of the productive resources necessary to make this stuff a reality. [00:19:05] Geoff Scott: We are project managers. We are business analysts, right? We don't necessarily know how to write code to do this, and if we're gonna have to rely on outside resources every time we make one of these moves, that's gonna be super costly and super slow. [00:19:22] Mustansir Saifuddin: Yeah. I hear you. [00:19:23] Mustansir Saifuddin: and I know the ASUG community hears that [00:19:26] Geoff Scott: But we have a lot in our ASUG community, right, who have been around for a long time that says, well, you know, my job is an ABAP programmer. What do you want me to do next? Or I'm a basis person and I don't like this. And I'm like, you are some of the people that are in , the best position to retool and relearn. [00:19:42] Geoff Scott: We're all gonna have to relearn. And, you know, is your business's, joy in life to have you produce more ABAP code or figure out how to get that ABAP code out, move it to SAP and say, congratulations, SAP, you're now responsible for this. Here's what I need this business process to do. Right. [00:20:00] And using your, using ASUG to help you influence that business process, instead of you saying, well, I'm gonna just take it and twist it to my own needs. [00:20:08] Geoff Scott: Even with me saying that, I still think that there's a lot of distance that SAP has to travel, by the way, I don't think they have this figured out. I don't think that they'll look at this and they go, yep, we got this. You just, you know, trust us. 
No, I think in certain areas they have this well done. [00:20:23] Geoff Scott: In other areas they do not. So what's the best thing we can do? Help them get there faster, influence them, participate in your ASUG chapter meetings, have a voice, talk about where you're hitting challenges. How do we need SAP to make better business processes? How are we gonna use the, you know, the tools that they have, like Lean IX and Signavio to help drive some of this? [00:20:48] Geoff Scott: That's to me where this is gonna need to happen. I would much prefer to have SAP struggle to keep up with business process than have. 10,000, 15,000, 20,000 customers do it on their own. It doesn't scale. [00:21:03] Mustansir Saifuddin: No, it doesn't. And I think, and that's a fair point, right? And this is where the value of ASUG comes in. And, and I mean the journey is long, but the, the path is there for us to follow. [00:21:14] Geoff Scott: I, I, yeah. [00:21:14] Mustansir Saifuddin: Right. And that's the, [00:21:15] Geoff Scott: I think the journey is long and the journey is more important than ever. It's time to get off the couch and go out and start walking, and then when you can walk, you can run, then, you know, then you can sprint. And I think , that's kind of the, the message that we're giving as ASUG is this isn't gonna slow down for you, you're gonna have to catch up to it. [00:21:32] Mustansir Saifuddin: No, I think, and that's the message. A lot of people are hearing loud and clear now, especially 2025 has brought in that that whole concept of either you go along with it or you're gonna be left behind. [00:21:44] Geoff Scott: Or, or, or at some point you're going to have to catch up, and the question is, is how much lifting are you gonna have to do to get there? I, again, I don't think this is easy. I, I don't think that there's , a magic pill we can swallow, you know, that that cleans us all up and we're all perfect. [00:22:01] Mustansir Saifuddin: No. No, for sure. 
And I think I, I know we talked about a lot of things today and we can keep on talking and the journey keeps on you know, is it's a [00:22:11] Geoff Scott: It's journey. [00:22:11] Mustansir Saifuddin: it's, [00:22:12] Geoff Scott: Yeah. [00:22:12] Mustansir Saifuddin: ending, but what, what is the one key takeaway that you want to leave with the listeners [00:22:18] Geoff Scott: One key takeaway [00:22:18] Mustansir Saifuddin: as we wrap up? [00:22:19] Geoff Scott: it. [00:22:20] Mustansir Saifuddin: Yep. [00:22:21] Geoff Scott: Spend time experimenting and learning this stuff. Get comfortable being uncomfortable with these tools. Use them. Think about how your business can benefit from them. Spend some time, you know, in BTP learning how to access these LLMs through your BTP interface. If you're having a challenge getting a business case written to move from your ECC environment to S/4. [00:22:46] Geoff Scott: Talk to us at ASUG, we will help you with that. Go to a chapter meeting and ask others how they made that investment work. Spend some time, you know, if you don't have a, a license for Copilot where you and I started this afternoon, ask your IT counterparts to have access to Copilot, use it. [00:23:04] Geoff Scott: Ask it questions, engage in iterative prompts. These are things I think the, the faster we get comfortable with these technologies, the better off we as technologists will have light bulbs go off and say, oh, I, now I get how I can really put agent AI to work. Right. And I'm not gonna listen to just, you know, Microsoft, you know, talk about it or SAP talk about it. [00:23:24] Geoff Scott: I actually have some ideas. And these are good ideas and I'm excited, I'm excited to share 'em. Get out of the stands and on the field. [00:23:32] Mustansir Saifuddin: And who better do it? I mean, I think I, I love your closing, right? Especially when you are looking at your own business, your own technology, and your way of doing things. 
Who better can come up with a solution, or see the applications of these Copilot, Gemini, no matter what, I mean, type of tools you can use. [00:23:51] Mustansir Saifuddin: But these are the ways you can innovate, right, by looking at the processes. [00:23:56] Geoff Scott: Yes. Someone told me that they set up two agentic AI bots [00:24:00] and the two of them constructed a podcast and it was pretty good. So, Mustansir, I'm worried that next time you and I meet, it's not gonna be you or I, it's gonna be our agentic AI counterparts, some version of us. [00:24:14] Mustansir Saifuddin: and yeah, I'm looking forward to it. I think it is here. It's going to be here at some point, so might as well embrace it. [00:24:22] Geoff Scott: Yeah. Absolutely. [00:24:23] Mustansir Saifuddin: Thanks for listening to Tech-Driven Business brought to you by Innovative Solution Partners. Embracing innovation is no longer an option, but really a necessity for enterprise success. Geoff's key takeaway? Proactive experimentation with AI is crucial for SAP users to discover its business benefits. Engage with tools like Copilot and Joule, participate in ASUG, and push for cloud migration to stay ahead of the rapid technological changes. We would love to hear from you. Continue the conversation by connecting with me on LinkedIn or X. Learn more about Innovative Solution Partners and schedule a free consultation by visiting isolutionpartners.com. Never miss a podcast by subscribing to our YouTube channel. Information is in the show notes.

Paul Thomas presents UV Radio
Paul Thomas presents UV Radio 387 - Including 30-min guest mix from Fuenka

Paul Thomas presents UV Radio

Play Episode Listen Later Mar 28, 2025 87:39


Tracklist:

Paul Thomas played:
1. M.O.S., Ilias Katelanos, Plecta, Alexandra Savvidi - When We're Alone (Extended Mix) [Melody Of the Soul]
2. Karyendasoul - B27 (Dosem Extended Edit) [Anjunadeep]
3. Gaston Ponte – Deliverance (Ruben Karapetyan Remix) [Strange Town]
4. Paul Thomas & Dylhen – Cosmos (DJ Ruby Remix) [Pattern]
5. Das Pharaoh – Tempus Fugit [UV]
6. Paul Thomas – Lights Out (Simos Tagias Remix) [Pattern]
7. ID – ID [Pattern]
8. Jeff Ozmits & Miguel Ante - Show Me Something Real (Extended Mix) [UV]
9. Der Dritte Raum – Hale Bopp (Anunnakis Remix) [Promo]
10. Tiefstone – Annatar [UV Noir]
11. Blake Jarrell – Twenty Miami's Ago (Cendryma Remix) [UV]
12. Zankee Gulati – Mind Opener (Marsh Remix) [Meanwhile]
13. ID – ID [UV]

Fuenka played:
1. ID - ID
2. Rockka - Amnesia (Fuenka Remix) [Mango Alley]
3. Maze 28 - Great Attractor (Ruben Karapetyan Remix) [UV]
4. Máximo Lasso - Nimio (Fuenka Remix) [UV]
5. Jeff Ozmits & Miguel Ante - Dance Of Eternity (Fuenka Extended Mix) [UV]
6. Nick Muir, Bedrock, John Digweed - Heaven Scent (Nick Muir, Fallen Angel Remix) [Bedrock]

Listen to the UV Radio show on YouTube: https://www.youtube.com/@officialuvmusic

AWS for Software Companies Podcast
Ep087: The Multi-Agent Advantage: How Sumo Logic Leverages AI for Observability

AWS for Software Companies Podcast

Play Episode Listen Later Mar 25, 2025 23:25


CEO Joe Kim shares how Sumo Logic has implemented generative AI to democratize data analytics, leveraging AWS Bedrock's multi-agent capabilities to dramatically improve accuracy.

Topics Include:
- Introduction of Joe Kim, CEO of Sumo Logic
- Question: Overview of Sumo Logic's products and customers?
- Sumo Logic specializes in observability and security markets
- Company leverages industry-leading log management and analytics capabilities
- Question: How has generative AI entered this space?
- Kim's background is in product, strategy and engineering
- Non-experts struggle to extract value from complex telemetry data
- Generative AI provides easier interface for interacting with data
- Question: How do you measure success of AI initiatives?
- Focus on customer problems, not retrofitting AI everywhere
- Launched "Mo, the co-pilot" at AWS re:Invent
- Mo enables natural language queries of complex data
- Mo suggests visualizations and follow-up questions during incidents
- Question: What challenges did you face implementing AI?
- Team knew competitors would eventually implement similar capabilities
- Single model approach topped out at 80% accuracy
- Multi-agent approach with AWS Bedrock achieved mid-90% accuracy
- Bedrock offered security benefits and multiple model capabilities
- Question: How was working with the AWS team?
- Partnered with Bedrock team and tribe.ai for implementation
- Partners helped avoid pitfalls from thousands of prior projects
- Question: What advice for other software leaders?
- Don't implement AI just to satisfy board pressure
- Identify problems without mentioning generative AI first
- Innovation should come from listening to customers
- Question: Future plans with AWS partnership?
- Moving toward automated remediation beyond just analysis
- Question: Has Sumo Logic monetized generative AI?
- Changed pricing from data ingestion to data usage
- New model encourages more data sharing without cost barriers

Participants:
Joe Kim – Chief Executive Officer, Sumo Logic

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
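The multi-agent routing idea discussed in the episode can be sketched with boto3's Bedrock Agents runtime API. This is a minimal illustration, not Sumo Logic's implementation: the agent IDs, alias IDs, and the keyword-based router are hypothetical placeholders (a production system would typically let a supervisor agent do the routing).

```python
# Sketch: routing a natural-language observability question to one of several
# specialized Bedrock agents. All IDs and the routing rule are hypothetical.

AGENTS = {
    "logs":    {"agentId": "LOG_AGENT_ID",    "agentAliasId": "LOG_ALIAS_ID"},
    "metrics": {"agentId": "METRIC_AGENT_ID", "agentAliasId": "METRIC_ALIAS_ID"},
}

def pick_agent(question: str) -> str:
    """Naive keyword router; a real system would use a supervisor agent."""
    return "metrics" if "latency" in question.lower() else "logs"

def build_invocation(question: str, session_id: str) -> dict:
    """Compose keyword arguments for bedrock-agent-runtime's invoke_agent."""
    agent = AGENTS[pick_agent(question)]
    return {
        "agentId": agent["agentId"],
        "agentAliasId": agent["agentAliasId"],
        "sessionId": session_id,
        "inputText": question,
    }

def ask(question: str, session_id: str = "demo-session") -> str:
    """Invoke the chosen agent and join the streamed completion chunks.
    Requires AWS credentials and deployed agents at run time."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(**build_invocation(question, session_id))
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

if __name__ == "__main__":
    print(build_invocation("Why did checkout latency spike at 2am?", "s1"))
```

The split between `build_invocation` (pure request construction) and `ask` (the actual AWS call) keeps the routing logic testable without credentials.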

AWS Podcast
#713: AWS News: Meet the Next Generation of Amazon SageMaker, Multi-Agent Collaboration on Bedrock

AWS Podcast

Play Episode Listen Later Mar 24, 2025 24:27


New game-changing AI developments are here, from SageMaker Unified Studio to Bedrock's new multi-agent capabilities. Join your hosts Simon and Jillian for the latest updates from AWS.

00:00:00 - Intro
00:00:49 - Top Stories
00:02:31 - Amazon Bedrock
00:05:35 - Analytics
00:06:08 - Application Integration
00:06:41 - AWS Step Function Workflow Studio
00:06:59 - Amazon Bedrock
00:07:26 - GraphRAG
00:09:08 - Amazon Nova Pro Foundation Model
00:09:32 - Amazon S3 Tables and SageMaker Lakehouse
00:12:00 - Compute
00:13:30 - Customer Engagement
00:14:39 - Databases
00:15:09 - Developer Tools
00:17:09 - End User Computing
00:17:25 - Front-End Web and Mobile
00:18:08 - Games / Internet of Things
00:20:12 - Management and Governance
00:20:31 - Networking and Content Delivery
00:20:41 - AWS Application Load Balancer
00:21:06 - Security, Identity and Compliance
00:22:32 - Storage
00:23:47 - Wrap up

BEDROCK PODCAST
BEDROCK THOUGHTS: NARRATION

BEDROCK PODCAST

Play Episode Listen Later Mar 21, 2025 22:42


I talk about narration in movies 

AWS Bites
141. Step Functions with JSONata and Variables

AWS Bites

Play Episode Listen Later Mar 21, 2025 15:43


In this episode, we provide an overview of AWS Step Functions and dive deep into the powerful new JSONata and variables features. We explain how JSONata allows complex JSON transformations without custom Lambda functions, enabling more serverless workflows. The variables feature also helps avoid the previous 256KB state size limit. We share examples from real projects showing how these features simplify workflows, reduce costs and enable new use cases.

AWS Bites is brought to you in association with fourTheorem. If you need a friendly partner to support you and work with you to de-risk any AWS migration or development project, check them out at fourtheorem.com

In this episode, we mentioned the following resources:
JSONata and variables official launch post: https://aws.amazon.com/blogs/compute/simplifying-developer-experience-with-variables-and-jsonata-in-aws-step-functions/
JSONata exerciser: https://try.jsonata.org/
Stedi JSONata playground: https://www.stedi.com/jsonata/playground
Episode 103: Building GenAI Features with Bedrock: https://awsbites.com/103-building-genai-features-with-bedrock/
Episode 63: How to automate transcripts with Amazon Transcribe and OpenAI Whisper: https://awsbites.com/63-how-to-automate-transcripts-with-amazon-transcribe-and-openai-whisper/

Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on X/Twitter, BlueSky or LinkedIn:
https://twitter.com/eoins | https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/
https://twitter.com/loige | https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/
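To make the episode's two features concrete, here is a minimal sketch of an Amazon States Language definition (expressed as a Python dict) that opts into the JSONata query language, transforms its input inline instead of calling a Lambda function, and stashes a value in a workflow variable via `Assign`. The state names, the JSONata expressions, and the variable name are illustrative, not taken from the episode.

```python
# Sketch: an ASL definition using JSONata and variables. The "{% ... %}"
# strings are JSONata expressions evaluated by Step Functions at run time.
import json

definition = {
    "QueryLanguage": "JSONata",  # opt the whole state machine into JSONata
    "StartAt": "SummarizeOrder",
    "States": {
        "SummarizeOrder": {
            "Type": "Pass",
            # Inline JSONata transformation -- no custom Lambda needed.
            "Output": "{% {'orderId': $states.input.id, "
                      "'total': $sum($states.input.items.price)} %}",
            # 'Assign' stores the total in a workflow variable, so later
            # states can read $orderTotal without it riding along in every
            # inter-state payload.
            "Assign": {"orderTotal": "{% $sum($states.input.items.price) %}"},
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(definition, indent=2))
```

The printed JSON is what you would pass as the state machine definition when creating it (for example with the AWS CLI or CDK).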

Your Unity
Episode #524 with Contagious

Your Unity

Play Episode Listen Later Mar 21, 2025 116:57


Your Unity #524 with Contagious
Recorded Live in Adelaide, Australia 21/03/2025

01. Dinka - Take the Leap (Extended Mix) [Enormous Chills]
02. Rezident, Malou - It's All On You (Extended Mix) [Anjunadeep]
03. Rebel Of Sleep - Run Away (Original Mix) [Ame Records]
04. Pete K - Echoes Of Us (Original Mix) [Ame Records]
05. ANUQRAM - Safari (Extended Mix) [Anjunabeats]
06. Propellar - Welcome Home (Extended Mix) [Colorize (Enhanced)]
Spector Selector:
07. Matías Delóngaro - Astral Plains [Empty World]
08. Yotto, Eli & Fur - Somebody To Love (Extended Mix) [Odd One Out]
09. Teho - Interstice (Original Mix) [This Never Happened]
10. MOTSA, Jody Wisternoff, James Grant - Feel This Way (Extended Mix) [Anjunadeep]
11. Karyendasoul - B27 (Because of Art Extended Mix) [Anjunadeep]
12. Jack Willard, Lovlee - Wanna Know (Extended Mix) [Colorize (Enhanced)]
Prestigious Pick:
13. Icarus - Hiding (Original Mix) [Anjunadeep] 2016
14. Maty Owl - Sounds Like Yellow (Extended Mix) [Anjunadeep Explorations]
15. Luttrell - Sunshine (Extended Mix) [Anjunadeep]
16. Tinlicker, Thomas Oliver - Soon You'll Be Gone (Extended Vocal Mix) [Anjunadeep]
17. Sasha, Artche - Hold On (Artche Mix) [Last Night On Earth]
18. Quivver - Floating on the Surface (Extended Mix) [Anjunadeep Explorations]
Premium Pick:
19. ANUQRAM - Hide & Seek (Extended Mix) [Anjunabeats]
20. GVN, Elliot Vast - I Found You (Extended Respray) [Anjunabeats]
21. Yotto, Franky Wah - Just Over (Franky Wah Extended Remix) [Odd One Out]
22. Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Extended Remix) [Bedrock Records]
23. Jon Gurd, Reset Robot - Canblaster (Extended Mix) [Anjunadeep]
24. Karen Overton - Your Loving Arms (BLR Extended Remix) [A State of Trance]

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
AI Explorer Series (Part 3: Anthropic, Hugging Face, Cohere)

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

Play Episode Listen Later Mar 19, 2025 78:26


In this conversation, Krish Palaniappan delves into the AWS AI series, focusing on Amazon Bedrock and its foundational models. He discusses the differences between serverless models and the Bedrock marketplace, the importance of selecting the right model for specific use cases, and the training and inference processes in AI. The conversation also compares AWS Bedrock with Azure's offerings and emphasizes the complexities of AI architecture in modern development. He then explores the complexities of selecting AI models and platforms, particularly Bedrock and Hugging Face, discussing the challenges startups face in comparing options, the importance of initial architecture in software development, and the evolving landscape of AI tools. The conversation emphasizes the need for a strategic approach to model selection, deployment, and understanding pricing structures, while also highlighting the significance of community engagement in the AI space.

Snowpal Products:
Backends as Services on AWS Marketplace
Mobile Apps on App Store and Play Store
Web App
Education Platform for Learners and Course Creators
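For the serverless foundation models discussed here, a hedged sketch of what invocation via boto3's Converse API looks like; the model ID is just an example (swapping it is the point of Bedrock's model-choice story), and actually running the call requires AWS credentials and Bedrock model access:

```python
# Sketch, not from the episode: calling a Bedrock serverless foundation model
# through the Converse API. Only build_messages() is exercised below, since the
# network call needs AWS credentials and granted model access.

def build_messages(prompt: str) -> list[dict]:
    # Converse API message shape: a role plus a list of content blocks.
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_bedrock(prompt: str,
                model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    import boto3  # assumed installed: pip install boto3
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 256},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(build_messages("Compare serverless models with the Bedrock marketplace."))
```

Because every serverless model sits behind the same Converse interface, comparing candidates for a use case mostly means changing `model_id`, which is the evaluation workflow the episode describes.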

Balance Selections Podcast
Balance Selections 318: Spencer Brown

Balance Selections Podcast

Play Episode Listen Later Mar 13, 2025 62:41


Rooted in the vibrant cultural mosaic of San Francisco, Spencer Brown has emerged as a bright beacon within progressive house and techno. While navigating his academic path at Duke University, Spencer's musical talents caught the ears of artists like Avicii and Above & Beyond, catapulting him onto a trajectory that would see his albums, Illusion of Perfection (2018) and Stream of Consciousness (2020), soar to the apex of the progressive house charts. In 2022, Spencer added his own label, diviine, alongside prestigious imprints like Bedrock, Anjunadeep, and Global Underground to his eclectic release portfolio. Spencer's intuitive connection with the dance floor is evident through his kinetic, ever-evolving DJ sets, which are infused with a resonant message of love and positivity. On this Balance Selections mix, the American showcases his emotional versatility to guide listeners through a spectrum of moods and textures. Featuring tracks and collaborations from Henry Saiz, John Digweed, Max Graham and more, this is the work of a master craftsman.

AWS for Software Companies Podcast
Ep083: Navigating the AWS Bedrock Journey: Planview's AI Evolution

AWS for Software Companies Podcast

Play Episode Listen Later Mar 13, 2025 32:24


Richard Sonnenblick and Lee Rehwinkel of Planview discuss their transition to Amazon Bedrock for a multi-agent AI system while sharing valuable implementation and user experience lessons.

Topics Include:
Introduction to Planview's 18-month journey creating an AI co-pilot.
Planview builds solutions for strategic portfolio and agile planning.
5,000+ companies with millions of users leverage Planview solutions.
Co-pilot vision: AI assistant sidebar across multiple applications.
RAG used to ingest customer success center documents.
Tracking product data, screens, charts, and tables.
Incorporating industry best practices and methodologies.
Can ingest customer-specific documents to understand company terminology.
Key benefit: Making every user a power user.
Key benefit: Saving time on tedious and redundant tasks.
Key benefit: De-risking initiatives through early risk identification.
Cost challenges: GPT-4 initially cost $60 per million tokens.
Cost now only $1.20 per million tokens.
Market evolution: AI features becoming table stakes.
Performance rubrics created for different personas and applications.
Multi-agent architecture provides technical and organizational scalability.
Initial implementation used Azure and GPT-4 models.
Migration to AWS Bedrock brought model choice benefits.
Bedrock allowed optimization across cost, benchmarking, and speed dimensions.
Added AWS guardrails and knowledge base capabilities.
Lesson #1: Users hate typing; provide clickable options.
Lesson #2: Users don't like waiting; optimize for speed.
Lesson #3: Users take time to trust AI; provide auditable answers.
Question about role-based access control and permissions.
Co-pilot uses user authentication to access application data.
Question about subscription pricing for AI features.
Need to educate customers about AI's value proposition.
Question about reasoning modes and timing expectations.
Showing users the work process makes waiting more tolerable.

Participants:
Richard Sonnenblick - Chief Data Scientist, Planview
Lee Rehwinkel - Principal Data Scientist, Planview

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
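The price collapse mentioned in the episode ($60 down to $1.20 per million tokens) is easy to make concrete with back-of-the-envelope math; the monthly token volume below is an invented figure for illustration, not Planview's:

```python
# Token cost arithmetic for the price drop cited in the episode
# ($60 vs $1.20 per million tokens). TOKENS_PER_MONTH is hypothetical.

def monthly_cost(tokens: int, price_per_million: float) -> float:
    return tokens / 1_000_000 * price_per_million

TOKENS_PER_MONTH = 500_000_000  # made-up co-pilot usage for illustration

old = monthly_cost(TOKENS_PER_MONTH, 60.00)
new = monthly_cost(TOKENS_PER_MONTH, 1.20)
print(f"old: ${old:,.2f}/mo, new: ${new:,.2f}/mo, ratio: {old / new:.0f}x")
# old: $30,000.00/mo, new: $600.00/mo, ratio: 50x
```

A 50x cost reduction at constant volume is what turns an AI co-pilot from a premium add-on into the "table stakes" feature the episode describes.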

Joie de Vivre - Podcast
Joie de Vivre - Episode 532

Joie de Vivre - Podcast

Play Episode Listen Later Mar 12, 2025 58:04


1) Jan Blomqvist - Muted Mind (Extended Mix)
2) Maiu - Entropy State (Extended Mix)
3) Tinlicker - Never Let Me Go (Extended Mix)
4) Fahlberg - Make You Feel (Original Mix)
5) Amonita - Whisper (Original Mix)
6) Eelke Kleijn, Emily Roberts - Watching Over Me feat. Emily Roberts (Extended Mix)
7) Andrey Exx, TuraniQa, DeDeXgrande - Alone In The Dark (Hiwater Remix)
8) BLANDO - Aquatic (Extended Mix)
9) Mind Against, CAY (DE) - CANT U HEAR ME (Extended Mix)
10) Ross Quinn - Hard To Breathe (Extended Mix)
11) Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Remix)

Nora En Pure - Purified Radio
Purified Radio 446

Nora En Pure - Purified Radio

Play Episode Listen Later Mar 10, 2025 59:26


Tracklisting:
01. Brigadow Crew, Cler Letiv - Here I Am (Original Mix)
02. MaMan - Winds OF Change (Original Mix)
03. Brigadow Crew, Cler Letiv - Kettegat (Original Mix)
04. Delta Vaults - Edge of tomorrow (Extended mix) **
05. Solink - Luminous (Original Mix)
06. Carlos Pires - Dark Days (Modern Brothers Remix)
07. BLANDO - Aquatic (Extended Mix)
08. Outbeat - Echo Chamber (Original Mix)
09. Innerverse, Oliver Cricket - Arrow (Extended Mix)
10. Oblivium - Aura (Extended)
11. HYPNOZA - My Life (Instrumental Mix)
12. Andrey Exx, TuraniQa, DeDeXgrande - Alone In The Dark (Hiwater Remix)
13. Nick Muir, Bedrock, John Digweed - Heaven Scent (Marsh Remix) *
* Listeners Choice  ** Pure Discovery

Coldwired Podcast. Trance and Progressive.
March 2025 Selection (featuring Paul van Dyk, Hoopoe, Bedrock, Framewerk, Albion, The Thrillseekers, Nomas, Marsh, Tim French, Marsh, Jamie Baggotts, Ormus, Marc West, Enigma State, Torsten Stenzel, Mahe', Gerwin Van Engelenburg, Darude, Monuloku, Brann

Coldwired Podcast. Trance and Progressive.

Play Episode Listen Later Mar 6, 2025 90:02


The Miseducation of David and Gary

Send Us an Email to Chat! This week we are totally going back to Bedrock with 2000's The Flintstones: Viva Rock Vegas! A bunch of Z-listers come together to show the world the weddings of the Rubbles and the Flintstones!! We are having a blast this week! The Flintstones: Viva Rock Vegas is available to watch on rental sites!
Follow us on Instagram:
@Gaspatchojones
@Homewreckingwhore
@Mullhollanddaze
@The_Miseducation_of_DandG_Pod
Go Support Our Loves Beth and Justin Going Through a Hard Time by Donating Anything You Can Here
Check Out Our Website
If you love the show check out our Teepublic shop! Right Here Yo!

Making Marketing
How Shinola is emphasizing its American design and manufacturing roots

Making Marketing

Play Episode Listen Later Feb 20, 2025 32:08


Detroit-based luxury design brand Shinola sells everything from jewelry to bikes to journals. In 2019, it even opened a hotel in downtown Detroit. But the brand is currently laser-focused on refining the answer to the question, “What's the first thing you think of when you think of Shinola?” And it wants that to be watches. “We lost that [focus on watches] for a little while,” said Kevin Wertz, CMO at Bedrock, the platform company that owns Shinola. Bedrock also owns the outerwear brand Filson. Shinola, founded in 2011, quickly gained a following because it was bringing manufacturing jobs back to Detroit. In 2012, the brand opened a 12,000-square-foot watch factory in the city. Over the next few years, it used its expertise in design and craftsmanship to expand into new categories. But in 2016, Shinola ran into a hurdle when the FTC ruled that the company could not use the tagline “Built in Detroit.” Even though Shinola has a watch factory in the U.S., its watches — like all watch brands — largely rely on imported parts. Now, Shinola's watches say “Built in Detroit with Swiss and imported parts.” "We're going back to the idea that we are designing and assembling watches in downtown Detroit,” Wertz said. Despite this, Shinola has found that the best way to tell its story is to do more showing, rather than telling. Wertz said the content that has performed best for Shinola is raw photos and videos from its factories showing how its watches are made. "People say, 'I don't know what watches being made actually looks like,'” he said, regarding the interest. Wertz joined the Modern Retail podcast this week to talk about how Shinola is refining its brand story.

Selador Sessions
Selador Sessions 302 | Four Candles

Selador Sessions

Play Episode Listen Later Feb 20, 2025 58:04


A very welcome return to label friend Four Candles (or Kris as he is known to his mum). With releases on Bedrock and diviine recently, and his own Keep Thinking imprint, Kris has been pushing things forward at a solid pace. On top of this, his collaborations with Selador's own Steve Parry, 'Mysko' and 'Soul Repeat', have just been released on our imprint, and both feature here in the mix too! Let's go! Enjoy!
Tracklist
Lusine – Two Dots ft. Vilja Larjosto (Four Candles Edit) [NoL]
Gian Luka – Mindtaker [Keep Thinking]
Patch Park – Control [Meanwhile]
Sean Harvey – Up & Down [Keep Thinking]
Four Candles & Steve Parry – Mysko [Selador]
Four Candles & Steve Parry – Soul Repeat [Selador]
Four Candles – ID
Drunky Daniels – Rise [Not Too Fancy]
Four Candles & Jon Towell – If We (…Just Did) [Keep Thinking]
Rampue – Inside [A Tribe Called Kotori]
This show is syndicated & distributed exclusively by Syndicast. If you are a radio station interested in airing the show or would like to distribute your podcast / radio show please register here: https://syndicast.co.uk/distribution/registration

Squawk Pod
OpenAI's Future & the Price of Tariffs 2/13/25

Squawk Pod

Play Episode Listen Later Feb 13, 2025 22:42


Elon Musk will withdraw his $97.4 billion bid for OpenAI's nonprofit arm if the ChatGPT maker stops its conversion into a for-profit entity, according to a court filing. Geoff Lewis, Bedrock founder and an investor in OpenAI, discusses AI competition, Elon Musk, and more. President Trump announced new 25% tariffs on all steel and aluminum imports into the U.S., on top of existing metals duties, in another major escalation of his trade policy overhaul. Jeff Currie, Carlyle chief strategy officer of energy pathways, discusses the impact of President Trump's tariffs on commodities, producer inflation, and the state of U.S. oil production. Plus, Alibaba says it will be Apple's AI partner in China, and President Trump announced his call with Russian leader Vladimir Putin, during which they discussed negotiating to end the war in Ukraine.

Geoff Lewis - 8:46
Jeff Currie - 17:20

In this episode:
Geoff Lewis, @GeoffLewisOrg
Joe Kernen, @JoeSquawk
Becky Quick, @BeckyQuick
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY

Share The Wealth Show
P2 - The Real Estate Secret You're NOT Using: Canvassing for Success

Share The Wealth Show

Play Episode Listen Later Feb 12, 2025 32:34


EP 118 - In this episode of the Share the Wealth Show, we welcome back Beth Azor, a powerhouse in commercial real estate investing, to share her experiences, lessons, and mission to empower more women in the industry. Beth takes us through her journey of turning a dismissed opportunity into a high-return investment, navigating market challenges, and advocating for gender diversity in real estate.  Key points include:  

AWS Podcast
#707: AWS News: DeepSeek R1 Models on BedRock, Improve Data Discoverability with S3 Metadata

AWS Podcast

Play Episode Listen Later Feb 10, 2025 23:39


Join our hosts for a great discussion of the new and interesting on AWS! Chapters: 00:09 Intro 00:32 Deep Seek 05:26 S3 Meta Data 08:52 AWS Market Place 09:32 Analytics updates 10:28 Application integration 11:49 AI Updates 13:42 Compute 15:09 Customer Engagement 16:12 Database Updates 17:53 Developer Tools 19:31 Front end web and mobile 20:36 Management and governance 22:22 Migration and modernization updates 23:01 Networking and content delivery 23:12 Wrap up Shownotes: https://d29iemol7wxagg.cloudfront.net/707ExtendedShownotes.html

Share The Wealth Show
The Real Estate Secret You're NOT Using: Canvassing for Success

Share The Wealth Show

Play Episode Listen Later Feb 6, 2025 31:17


EP 117 - In this episode of the Share the Wealth Show, we welcome Beth Azor, a seasoned expert in retail leasing and property investment, to dive deeper into her strategies for achieving long-term success in the real estate industry. Beth shares invaluable tips on how persistence, relationships, and strategic thinking can elevate your career and investment opportunities. Key points include:

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPi packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project to a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.Logfire: bringing OTEL to AIOpenTelemetry recently merged Semantic Conventions for LLM workloads which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platform got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. 
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?
* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters
* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes
* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation.
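The coercion-versus-strict behaviour Samuel describes can be sketched as a toy: type hints define the schema, lenient mode coerces "123" into 123, strict mode rejects it. This is a stdlib illustration of the concept, not Pydantic's actual implementation:

```python
# Toy reimplementation of the idea described above, NOT Pydantic's code:
# type hints define the schema; lenient mode coerces, strict mode rejects.
from typing import get_type_hints

class ValidationError(ValueError):
    pass

def validate(cls: type, data: dict, strict: bool = False) -> object:
    obj = cls()
    for field, expected in get_type_hints(cls).items():
        value = data[field]
        if not isinstance(value, expected):
            if strict:
                raise ValidationError(f"{field}: expected {expected.__name__}")
            value = expected(value)  # lenient coercion, e.g. int("123") -> 123
        setattr(obj, field, value)
    return obj

class User:
    id: int
    name: str

u = validate(User, {"id": "123", "name": "Ada"})
print(u.id)  # 123 (coerced from the string "123")

try:
    validate(User, {"id": "123", "name": "Ada"}, strict=True)
except ValidationError as e:
    print("strict mode:", e)  # strict mode: id: expected int
```

The real library layers JSON schema generation, serialization, and a Rust core on top of this basic contract, as the conversation goes on to explain.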
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structure output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structure output console in open source that people were talking about or was it just a random?Samuel [00:02:26]: No, very much not. So I originally. Didn't implement JSON schema inside Pydantic and then Sebastian, Sebastian Ramirez, FastAPI came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him or 50 like emails as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it will it kind of can be one source of source of truth for structured outputs and tools.Swyx [00:03:09]: Before we dive in further on the on the AI side of things, something I'm mildly curious about, obviously, there's Zod in JavaScript land. 
Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner.Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLAMP, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release open-source-free Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model... The metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it didn't, it would have never have made financial sense in most companies. 
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. 
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
But then we have the problem that by default, they're not type safe because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking. Ironically using Pydantic to try and make up for the fact that like fundamentally that graphs are not typed type safe. Well, I like Pydantic, but it did, that's not a real solution to have to go and run the code to see if it's safe. There's a reason that starting type checking is so powerful. And so we kind of, from a lot of iteration eventually came up with a system of using normally data classes to define nodes where you return the next node you want to call and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is. Yeah. Inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have interact with gen AI, right? It's going to be like web. There's no longer be like a web department in a company is that there's just like all the developers are building for web building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent, a container that contains a system prompt function. Tools, structure, result, dependency type model, and then model settings. Are the graphs in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. 
That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower-level tool that allows you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my "yeah, but you could do that in standard flow control in Python" became a less and less compelling argument to me, because I've maintained those systems that end up with spaghetti code. And I could see the appeal of this structured way of defining the workflow of my code. And it's really neat that just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it. I think the question always is, and I have gone back and forth: I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean.
And that's what we do with the quote-unquote magic that the graph construction does.Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other bit that's really valuable is across time. Because it's all very well if you look at lots of the graph examples that, say, Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of the workflow for, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calling another function, and some of those lines are "wait six days for the customer to print their piece of paper and put it in the post." And if you're writing your demo project or your proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now how do we manage the concept of basically being able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph and it continues to run.
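[Editor's note: the run loop Samuel describes, "call a node, get a node back, until you get an End", plus resume-from-anywhere, fits in a few lines. Illustrative only, not Pydantic AI's real API; the node names are invented for the sketch.]

```python
from dataclasses import dataclass


@dataclass
class End:
    value: str


@dataclass
class SendReminder:
    customer: str

    def run(self) -> End:
        return End(f"reminded {self.customer}")


@dataclass
class AwaitPaperwork:
    customer: str

    def run(self) -> "SendReminder":
        # In real life this edge might mean "wait six days". Because the node
        # is plain data, it can be serialized, stored, and revived later.
        return SendReminder(self.customer)


def run_graph(node) -> End:
    """Call a node, get a node back, repeat until End."""
    while not isinstance(node, End):
        node = node.run()
    return node


# Start from the beginning...
print(run_graph(AwaitPaperwork("alice")).value)   # reminded alice
# ...or resume mid-graph, e.g. after rehydrating state from a database:
print(run_graph(SendReminder("bob")).value)       # reminded bob
```

Resuming is just "pass a later node as the start point", which is exactly the property that makes pausing a workflow for six days tractable.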
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said, or is this just a theoretical thing we can get to someday?Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But the rest of it is basically there.Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into sort of the more orchestrated things like Airflow, Prefect, Dagster, those guys.Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that, at least right now. We're, you know, we're just building a Python library. And what's crazy about our graph implementation is, sure, there's a bit of magic in introspecting the return type, you know, extracting things from unions, stuff like that. But the actual call loop, as I say, is literally call a function, get back a thing, and call that. It's incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find out, not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think, just generally, also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. The one thing that I would say is that yours, as a library, doesn't have that much control over the infrastructure. I do like the idea that each new agent, or whatever your unit of work is, should spin up inside its own sort of isolated boundary. Whereas in yours, I think, everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in, in theory, as long as you can serialize the calls to the next node, all of the different containers basically just have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now, because I'm super excited about that as a compute layer for some of this stuff, where, exactly what you're saying, basically, you can run everything as an individual worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once.
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you to the front of the line.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python fully supported? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser via WebAssembly, is supported now by Cloudflare. They're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want a separate copy for each; you basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing: working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare Workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. And you're doing exactly that, right? You're using Rust to compile to WebAssembly and then you're calling that shared library from Python.
And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents. There's single agents, there's agent delegation, there's programmatic agent handoff. That seems to be what OpenAI Swarm would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. In fact, it was specifically asking, how can we give people the same feeling that they were getting from Swarm, that led us to go and implement graphs. Because my "just call the next agent with Python code" was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that, and that's what led us to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is, I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not.
yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents, and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But it's only been a couple of weeks. And part of the point is that, because they're relatively unopinionated about what you can go and do with them, you can do lots of things with them, but they don't have the structure to have specific names, as much as perhaps some other systems do. What our agents are, which have a name that I can't remember, is basically this system of: decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit. That's one form of graph, which, as I say, our agents are effectively one implementation of, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of it or care. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument that instead of putting everything in one model, you have more control and maybe more observability if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they don't, even if you have the observability through Logfire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But what you need to be able to do is effectively iterate through these runs so that you can have your own control flow, where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is: you can basically call next in a loop rather than just running the full graph, and therefore you have this opportunity to break out of it. But yeah, basically, it's the same point, which is if you have too big a unit of work, to some extent whether or not it involves gen AI (though obviously it's particularly problematic in gen AI), you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.Swyx [00:25:39]: One last thing on this.
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs, and you could manually orchestrate a nice little graph that does, like: reflect, think about whether you need more inference-time compute, you know, that's the hot term now, and then think again and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it isn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes which they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is the Generative Language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: we run, on every commit, at least every commit to main, tests against the live models. Not lots of tests, but a handful of them. Oh, okay. And we had a point last week where GLA was failing every single run; one of their tests would fail. And I think we might even have commented that one out at the moment. So all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and then, you know, this can be agree-to-disagree, this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM or, what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?Samuel [00:29:49]: I think if a company who were well known, who were respected by everyone, had come along and done this at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. The truth is I've heard varying reports of LiteLLM. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI. OpenAI's API is the one to do. So DeepSeek supports that. Groq supports that. Ollama also does it.
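[Editor's note: "everyone centralizes on OpenAI's API" in practice means the same `/chat/completions` wire format served from different base URLs. The sketch below just builds the request, no network call; the endpoints listed are examples of the pattern, so check each provider's docs before relying on them.]

```python
# Example OpenAI-compatible endpoints; verify against each provider's docs.
OPENAI_COMPATIBLE = {
    "openai":   "https://api.openai.com/v1",
    "deepseek": "https://api.deepseek.com",      # serves /chat/completions too
    "ollama":   "http://localhost:11434/v1",     # local models, same wire format
}


def chat_request(provider: str, model: str, user_msg: str) -> tuple[str, dict]:
    """Build the (url, payload) pair; the JSON body is identical everywhere."""
    url = f"{OPENAI_COMPATIBLE[provider]}/chat/completions"
    payload = {"model": model, "messages": [{"role": "user", "content": user_msg}]}
    return url, payload


url, body = chat_request("ollama", "llama3.2", "hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

This is why the official OpenAI SDK, pointed at a different `base_url`, often works unchanged against these providers.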
I mean, if there is that library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type-checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing models and picking models is evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. But you have this TestModel, where, just through Python, you try to figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get.
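[Editor's note: the TestModel/FunctionModel idea generalizes to any codebase: take the model as a dependency, then swap in a plain function in tests. This sketch shows the pattern only; Pydantic AI's actual `TestModel` and `FunctionModel` APIs differ, and all names here are invented.]

```python
from typing import Callable

ModelFn = Callable[[str], str]   # prompt in, completion out


def summarize(text: str, model: ModelFn) -> str:
    """Production code takes the model as a dependency, so tests can swap it."""
    return model(f"Summarize in one line: {text}")


# A FunctionModel-style stub: custom logic decides the canned response,
# and it can also assert on the prompt the code actually built.
def stub_model(prompt: str) -> str:
    assert prompt.startswith("Summarize in one line:")
    return "a one-line summary"


print(summarize("long document ...", stub_model))  # a one-line summary
```

The VCR-style record-and-replay approach Alessio mentions solves the same problem at the HTTP layer instead of the function layer.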
On the evals, I think watch this space. I think it's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe check. So you have evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And yeah, as people think about evals, even, what are the right things to measure? What is the right number of samples that you need to actually start making decisions?Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact, like, how many examples do you need?
For example, that's a much harder question to answer, because it's deep within how models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, where we allow you to write SQL directly against your data and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of testing of what's possible by basically writing SQL directly against LogFire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project. A lot of it's still being added at the moment. But basically the idea is that they unify how both SDKs and/or agent frameworks send observability data to any OpenTelemetry endpoint. And so having that unification allows us to go and basically compare different libraries, compare different models much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect, Pydantic AI being the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow. And, you know, they're even further away from standardization.Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into the AI workflows? There's kind of the question of what is a trace and what is a span. Is an LLM call a span? Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?Samuel [00:36:06]: Yeah, so they have a PR, that I think may have now been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's actually by any means the common use case. But I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability, traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of data is going to be sent. And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud-hosted, but want self-hosting for this observability stuff with GenAI.Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context; you're just storing everything. And then you're going to offer kind of like self-hosting for the platform, basically. Yeah. Yeah.Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see "password" as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance depends on a third party. You know, if you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that.
Here you're going to have spans that maybe take a long time to perform because the GLA API is not working, or because OpenAI is kind of overwhelmed. Do you do anything there, since the provider is almost the same across customers? You know, are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build? Why does everybody want to build?
They want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTel, and we might help with it. But we're a tiny team; we don't have time to go and do all of that work. So OpenLLMetry is an interesting project, but I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens to the agent frameworks, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But, and I guess this is kind of semi-public, I was on the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of LangChain, where it's not natively implemented. And obviously they're having quite a tough time. And I hadn't really realized this before, but I was realizing how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.

Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.

Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. And that has moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
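The semantic conventions under discussion are, at bottom, agreed names for plain key-value attributes on a span, so any backend can aggregate token usage without caring which SDK produced it. A hedged sketch of building such an attribute set (the `gen_ai.*` names follow the draft conventions as discussed here and may have churned since; the reasoning-tokens key is hypothetical, illustrating that you can send attributes the spec has not agreed on yet):

```python
def genai_span_attributes(model: str, prompt_tokens: int,
                          completion_tokens: int, top_p: float) -> dict:
    """Build span attributes in the gen_ai.* naming style of the draft conventions."""
    return {
        "gen_ai.request.model": model,
        "gen_ai.request.top_p": top_p,
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
        # OTel happily carries attributes the spec hasn't standardized,
        # e.g. a hypothetical reasoning-token count:
        "gen_ai.usage.reasoning_tokens": 0,
    }

attrs = genai_span_attributes("gpt-4o", prompt_tokens=812,
                              completion_tokens=64, top_p=1.0)
print(sorted(attrs))
```

This is the flexibility Samuel points to below: an unstandardized attribute costs nothing to send, but only the agreed-on names are portable across backends.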
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation of. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.

Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens, and obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future.

Samuel [00:42:54]: Yeah. I mean, that's what's neat about OTel: you can always go and send another attribute, and that's fine. It's just that there are a bunch that are agreed on. But I would say, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.

Swyx [00:43:14]: Or you fall behind because you've got other things going on.

Samuel [00:43:17]: Yeah, yeah. That's fair.

Swyx [00:43:19]: Any other observations about building LogFire? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company; I remember some amount of confusion when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, any learnings building LogFire?
So the classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?

Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension for analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building on DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized sooner that Timescale wasn't right; ClickHouse and Timescale both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.

Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to double-click on this, right? ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect, because Timescale is an extension on top of Postgres, not super meant for high-volume logging. But yeah, tell us about those decisions.

Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built proper JSON support. But back when we were trying to use it, a year ago or a bit more, everything had to be a map, and maps are a pain when you're trying to look up JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff: you can choose to make them top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. By default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because it compared intervals just by the number, not the unit. I complained about that a lot, and then they changed it to raise an error saying you have to have the same unit. Then I complained a bit more, and as I understand it now, they convert between units. But stuff like that was really painful when a lot of what you're doing is comparing the durations of spans. Also things like you can't subtract two datetimes to get an interval; you have to use the date-sub function. But the fundamental thing is, because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your own developers write the SQL, and once it's written and working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with ClickHouse and, in fact, Timescale is that the ultimate architecture, the Snowflake-style architecture of binary data in object storage queried with some kind of cache nearby, they both have it, but it's closed source, and you only get it if you use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, they would want to be taking their 80% margin, and then we would want our own margin on top, which would basically leave us less room. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it.
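The interval bug Samuel describes disappears if comparisons normalize to a common unit before looking at the numbers. A small plain-Python illustration of that fix (not ClickHouse code):

```python
# Normalize durations to nanoseconds before comparing, so that
# 2 ns vs 1 s is decided by magnitude, not by the raw numbers 2 vs 1.
UNIT_TO_NS = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}

def duration_lt(a_value: float, a_unit: str, b_value: float, b_unit: str) -> bool:
    """True if duration a is strictly shorter than duration b."""
    return a_value * UNIT_TO_NS[a_unit] < b_value * UNIT_TO_NS[b_unit]

# Comparing just the numbers would say 2 > 1; normalizing gets it right.
print(duration_lt(2, "ns", 1, "s"))  # → True: two nanoseconds are shorter than one second
```

This is exactly the behavior you want when most of your queries compare span durations recorded in mixed units.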
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string-contains. And it's just Rust code, so I could go and rewrite the string comparison kernel to be faster. Or, for example, when we started using DataFusion, it didn't have JSON support, which, as I've said, is something we needed. I was able to go and implement that in a weekend using the JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that would be much harder in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's relatively modern C++, but as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency over these projects. They're kind of like, oh, I should use this or I should use that. They're not really asking, what should I pick so that I contribute the most back to it? But I think you obviously have an open-source-first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better startup, faster moving and just headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do pydantic.run and all the other stuff.
But, you know, my last question on LogFire is really, at some point you run out of community goodwill, just because like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you really going to spike here? What's the differentiator?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud first. The same is going to happen to GenAI. And so whether you're trying to compete with Datadog or with Arize and LangSmith, you've got to do general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms, because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't had. For all I'm a fan of Datadog and what they've done, if you search Datadog logging Python, and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed; you don't have any rolling license like Sentry has, where only the one-year-old version is open source. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire for now is completely closed source. And in fact, the struggles Sentry have had with licensing, and the weird pushback the community gives when you take something that's open source and make it source-available, just meant that we avoided that whole subject matter. I think the other way to look at it is, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python; that and now Pydantic AI are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not sort of trying to walk a line where it's nominally open source but really we make it hard to deploy so you'll pay us. We're trying to be straight that it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first one: I saw this new, I don't know if it's a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code-sandbox-as-a-service for remote execution. Yeah.
What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. It doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera. And we'll have some kind of limit per day on what you can spend on it. The other thing we wanted to b

The Will Clarke Podcast
Sinca - 5 Pillars To Build A Career In Music

Play Episode Listen Later Feb 4, 2025 67:08


Podcast Overview: In this conversation, Will Clarke and Gabriela (Sinca) discuss the evolving landscape of the music industry, particularly the impact of AI on music creation, the role of artists, and the challenges posed by social media. They explore the uncertainty artists face in the current environment, the importance of authenticity, and the need for artists to adapt to new business models while building a community and an emotional connection with their fans. They also delve into fan engagement, the role of luck, and the necessity of building a strong network and brand, touching on the changing image of DJs, the significance of quality music, the hard work required to succeed, the future of streaming, and the need for artists to adapt their business models to thrive in a competitive environment.

Who Is Sinca: Hailing from Montréal, Sinca brings a unique blend of her French-Canadian and Peruvian roots into her music, crafting an experience that resonates globally. Establishing herself as a rising star, she made her mark on the international circuit through recognition from influential labels such as Bedrock, Anjunadeep, All Day I Dream, Days Like Nights, and more. With a distinctive style nestled between house, progressive, and melodic techno, Sinca's sets create unforgettable moments in diverse settings, both intimate and expansive. Her musical journey has left an indelible mark on iconic venues and festivals, including Coachella (Do Lab), Brooklyn Mirage, Stereo, Electric Island, Burning Man, Papaya Playa Project, Do Not Sit, and Anjunadeep Explorations. Sharing stages with industry titans like John Digweed, Lee Burridge, Sasha, Kölsch, Patrice Baümel, and Eli & Fur, Sinca is solidifying her presence in the global music scene.
Selected as one of the top 20 artists to watch in 2024 by Dancing Astronaut, Sinca's journey is a testament to her creativity and musical skill. As she explores new musical territories, keep a keen eye on her, for she is destined to leave a lasting impact.

Join for updates: https://laylo.com/willclarke

⏲ Follow Will Clarke ⏱
https://djwillclarke.com/
https://open.spotify.com/artist/1OmOdgwIzub8DYPxQYbbbi?si=hEx8GCJAR3mhhhWd_iSuew
https://www.instagram.com/djwillclarke
https://www.facebook.com/willclarkedj
https://twitter.com/djwillclarke
https://www.tiktok.com/@djwillclarke

Hosted on Acast. See acast.com/privacy for more information.