Podcasts about Kube

  • 253 PODCASTS
  • 1,000 EPISODES
  • 52m AVG DURATION
  • 1 WEEKLY EPISODE
  • May 27, 2025 LATEST

POPULARITY

(Popularity chart, 2017 to 2024)


Latest podcast episodes about Kube

The FrogPants Studios Ultra Feed!
Word on The Street 17: Let's See Some Art!

The FrogPants Studios Ultra Feed!

Play Episode Listen Later May 27, 2025 68:48


Greg and Scott are joined today by Kube and BirD, and we go wild with the art drop! Everything from concept sketches to animated models in engine, and lots in-between. An episode you won't want to miss! Hosted on Acast. See acast.com/privacy for more information.

Chachi Loves Everybody
Ep. 67 Chet Buchanan

Chachi Loves Everybody

Play Episode Listen Later May 7, 2025 86:45


EPISODE SUMMARY: Chet Buchanan, Emmy-winning host of KLUC's Chet Buchanan Show, discusses the many steps in his radio journey, how he serves his community through charity work, and the advice he's picked up along the way. On this episode of Chachi Loves Everybody, Chachi talks to Chet Buchanan about:
  • Getting his start volunteering to wash cars at KRKO in Everett, Washington, before debuting on air at 16
  • His hustle mentality and how it's helped him stand out
  • Bouncing between Z100 in Portland, The KUBE in Seattle, and Phoenix
  • Becoming a programmer and a morning show host in Salt Lake City
  • The on-air mistake that made the station change the locks on him and forced him to quit
  • Getting involved in pro sports, making a mix for the Seahawks before becoming a PA announcer for the Vegas Aces and Seattle Kraken
  • The importance of apologizing and owning up to your mistakes
  • His work starting the world's largest single-location toy drive in Las Vegas
  • His thoughts on the future of radio and advice for young people
  • And more!
ABOUT THIS EPISODE'S GUEST: Chet Buchanan is the host and creator of 98.5 KLUC's The Chet Buchanan Show! Since its inception in 1999, it has consistently been one of the highest-rated and most beloved radio shows in Las Vegas. In addition to anchoring the morning show, for which he was named "Best of Las Vegas" by the Las Vegas Review-Journal, Chet can be seen just about everywhere around town. Many Las Vegans know Chet from his work on the court at UNLV Runnin' Rebel basketball games, appearing as a frequent contributor on Fox 5, or holding down a packed schedule hosting multiple corporate, civic, and charity events. Chet is also the brains behind the annual Chet Buchanan Show Toy Drive, where he lives and broadcasts live on top of 30-foot scaffolding for 12 days straight to raise toys for thousands of Las Vegas children. Since its inception, the 98.5 KLUC Chet Buchanan Show Toy Drive has grown to be recognized as "The World's Largest Single Location Toy Drive."
ABOUT THE PODCAST: Chachi Loves Everybody is brought to you by Benztown and hosted by the President of Benztown, Dave "Chachi" Denes. Get a behind-the-scenes look at the myths and legends of the radio industry.
PEOPLE MENTIONED: Scott Herman, John Maynard, Pam Thompson, Robert O'Brien, Bob Case, Sean Lynch, Jeff King, Ken Benson, Gary Bryant, Johnny Edwards, Rich Patterson, Bill Lee, Rick Dees, Rachel Donahue, Commander Chuck, Paul Anderson, Bob Rivers, Tom Huttler, Bruce Kelly, Kevin Rider, Gene Baxter, Jerry Clifton, Colleen Cassidy, John Murphy, Billy Hayes, Scott Thrower, Dan Clark, Stacy Lynn, Eric Edwards, Tom Shane, Chuck Field, JB King, Joel Denver, Mike Preston, Lori Bradley, Kent Allen, The T-Man, Cat Thomas, Jack Evans, Dave Ryan, Ryan Seacrest, Mojo, Charlie Tuna, Chris Abbott, Michael Hayes, Ebro, Tony Coles
ABOUT BENZTOWN: Benztown is a leading international audio imaging, production library, voiceover, programming, podcasting, and jingle production company with over 3,000 affiliations on six different continents. Benztown provides audio brands and radio stations of all formats with end-to-end imaging and production, making high-quality sound and world-class audio branding a reality for radio stations of all market sizes and budgets. Benztown was named to the prestigious Inc. 5000 by Inc. magazine for five consecutive years as one of America's Fastest-Growing Privately Held Companies.
With studios in Los Angeles and Stuttgart, Benztown offers the highest quality audio imaging work parts for 23 libraries across 14 music and spoken word formats including AC, Hot AC, CHR, Country, Hip Hop and R&B, Rhythmic, Classic Hits, Rock, News/Talk, Sports, and JACK. Benztown's Audio Architecture is one of the only commercial libraries that is built exclusively for radio spots to provide the right music for radio commercials. Benztown provides custom VO and imaging across all formats, including commercial VO and copywriting in partnership with Yamanair Creative. Benztown Radio Networks produces, markets, and distributes high-quality programming and services to radio stations around the world, including: The Rick Dees Weekly Top 40 Countdown, The Todd-N-Tyler Radio Empire, Hot Mix, Sunday Night Slow Jams with R Dub!, Flashback, Top 10 Now & Then, Hey, Morton, StudioTexter, The Rooster Show Prep, and AmeriCountry. Benztown + McVay Media Podcast Networks produces and markets premium podcasts including IEX: Boxes and Lines and Molecular Moments.
Web: benztown.com | Facebook: facebook.com/benztownradio | Twitter: @benztownradio | LinkedIn: linkedin.com/company/benztown | Instagram: instagram.com/benztownradio
Enjoyed this episode of Chachi Loves Everybody? Let us know by leaving a review!

Diet your Brain
#85 Struktur statt Stress – Wie Projektmanagement deinen Alltag als Ernährungsfachkraft erleichtern kann

Diet your Brain

Play Episode Listen Later May 6, 2025 50:28


In this episode, Sören Kube explains how simple project management principles can make your day-to-day work as a nutrition professional more structured and relaxed. With the right tools and techniques, you keep an overview and stay able to act, even when things get stressful. Among the questions we cover: Which project management principles transfer to the work of nutrition professionals? Which tools and methods help you prioritize and structure your tasks? How do you approach new projects strategically without getting bogged down? You'll also get tips for effective time management, ideas for small changes with a big impact, and my favorite dishes for stressful days. As always, the episode ends with a quick, healthy recipe tip for stressful days. Want to try the pea flavor for free? Just send an email with your name and address to fortimel@danone.com. Connect with me on Instagram: @dietyourbrain. Questions or feedback? Write to me at dietyourbrain@web.de. If you enjoyed the episode, I'd appreciate a rating on Spotify or Apple Podcasts, and please forward the link to colleagues.

Cloud Security Podcast
Scaling Container Security Without Slowing Developers

Cloud Security Podcast

Play Episode Listen Later Apr 17, 2025 28:13


Are you struggling to implement robust container security at scale without creating friction with your development teams? In this episode, host Ashish Rajan sits down with Cailyn Edwards, Co-Chair of Kubernetes SIG Security and Senior Security Engineer, for a masterclass in practical container security. This episode was recorded LIVE at KubeCon EU, London 2025.
In this episode, you'll learn about:
  • Automating Security Effectively: Moving beyond basic vulnerability scanning to implement comprehensive automation
  • Bridging the Security-Developer Gap: Strategies for educating developers, building trust, fostering collaboration, and understanding developer use cases instead of just imposing rules
  • The "Shift Down" Philosophy: Why simply "Shifting Left" isn't enough, and how security teams can proactively provide secure foundations, essentially "Shifting Down"
  • Leveraging Open Source Tools: Practical discussion around tools like Trivy, Kubeaudit, Dependabot, RenovateBot, TruffleHog, Kube-bench, OPA, and more
  • The Power of Immutable Infrastructure: Exploring the benefits of using minimal, immutable images to drastically reduce patching efforts and enhance security posture
  • Understanding Real Risks: Discussing the dangers lurking in default configurations and easily exposed APIs/ports in container environments
  • Getting Leadership Buy-In: The importance of aligning security initiatives with business goals and securing support from leadership
Guest Socials: Cailyn's LinkedIn
Podcast Twitter: @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - Youtube | Cloud Security Newsletter | Cloud Security BootCamp
If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast
Questions asked:
(00:00) Intro: Container Security at Scale
(01:56) Meet Cailyn Edwards: Kubernetes SIG Security Co-Chair
(03:34) Why Container Security Matters: Risks & Exposures Explained
(06:21) Automating Container Security: From Scans to Admission Controls
(12:19) Essential Container Security Tools (Trivy, OPA, Chainguard & More)
(19:35) Overcoming DevSecOps Challenges: Working with Developers
(21:31) Proactive Security: Shifting Down, Not Just Left
(25:24) Fun Questions with Cailyn
Resources spoken about during the interview: Cailyn's talk at KubeCon EU 2025

Podcast FCBarca.com
Un Toc de La Rambla #197 - Kubeł zimnej wody | Podcast FCBarca.com

Podcast FCBarca.com

Play Episode Listen Later Apr 16, 2025 80:24


DON'T MISS UPCOMING EPISODES AND SUBSCRIBE TO THE CHANNEL

IFTTD - If This Then Dev
#315.src - Apprentissage visuel: Dessine moi un Kube avec Aurélie Vache

IFTTD - If This Then Dev

Play Episode Listen Later Mar 26, 2025 51:25


"Tu te crées ta boîte à outils" La D.E.V. de la semaine est Aurélie Vache, Developer Advocate @ OVHcloud. La discussion tourne autour des méthodes d'apprentissage en programmation, particulièrement l'apprentissage visuel dans des domaines complexes comme Kubernetes. Aurélie, autodidacte depuis son enfance et très curieuse, vante l'impact positif des schémas pour une meilleure compréhension et communication des concepts abstraits. Elle insiste sur l'itération nécessaire pour réussir des diagrammes et critique les ouvrages techniques trop chargés en texte. Aurélie valorise son expérience de conférencière qui non seulement contribue à aider autrui mais aussi enrichit sa propre compréhension. Elle encourage enfin l'expérimentation visuelle pour compléter la programmation.Chapitrages00:00:55 : Introduction à l'apprentissage visuel00:06:14 : La présentation d'Aurélie00:09:22 : Évolution de l'apprentissage autodidacte00:14:10 : L'importance des visuels dans l'apprentissage00:19:59 : Découverte de l'apprentissage visuel00:22:39 : Complexité de Kubernetes et simplification00:27:09 : Limites des livres techniques00:33:29 : Pratique et théorie dans l'apprentissage00:44:43 : L'impact des conférences sur le partage00:48:51 : Conclusion et recommandations de contenu Liens évoqués pendant l'émission ExcalidrawUnderstanding Kubernetes in a visual way: Learn and discover Kubernetes in sketchnotesUnderstanding Docker in a visual way: Learn and discover Docker in sketchnotes **Recrutez les meilleurs développeurs grâce à Indeed !** "Trouver des développeurs compétents et passionnés, comme les auditeurs d'If This Then Dev, peut être un vrai défi. Avec Indeed, connectez-vous rapidement avec des candidats qualifiés qui sauront s'épanouir dans votre entreprise. Profitez dès maintenant d'un crédit de 100 euros pour sponsoriser votre offre d'emploi : go.indeed.com/IFTTD."🎙️ Soutenez le podcast If This Then Dev ! 🎙️ Chaque contribution aide à maintenir et améliorer nos épisodes. Cliquez ici pour nous soutenir sur Tipeee 🙏Archives | Site | Boutique | TikTok | Discord | Twitter | LinkedIn | Instagram | Youtube | Twitch | Job Board |Distribué par Audiomeans. Visitez audiomeans.fr/politique-de-confidentialite pour plus d'informations.

The Connector.
The Connector Podcast - DFS Digital Finance Summit - Isabel Ecosystem: 30 Years of Fintech Innovation

The Connector.

Play Episode Listen Later Mar 20, 2025 15:19 Transcription Available


Three decades of financial innovation have positioned Isabel as a cornerstone of Belgium's banking landscape. Starting as a multi-banking payment platform in 1995, this joint venture of Belgium's four major banks has evolved into something far more ambitious: creating financial ecosystems that connect institutions, businesses, and technology providers. Sylvie van der Velde, who leads business development at Isabel, reveals how their latest platform "Kube" is transforming corporate data exchange. With 300,000 companies already in the system, Kube is a digital passport allowing banks to share verified business information, eliminating redundant KYB processes. "When you're a client at two banks, you don't need to do the KYC process twice," Sylvie explains. This invisible benefit streamlines verification while maintaining security. What's particularly fascinating is Isabel's pragmatic approach to technology. Though Kube began as a blockchain experiment during the technology's peak hype around 2018, the team ultimately pivoted away from it. "We turned it around," Sylvie notes. "Our objective is having a better client experience - which technology do we need?" This refreshing focus on outcomes rather than trending technologies demonstrates the maturity that comes with 30 years in the industry. Looking ahead, Isabel plans to expand Kube beyond identity verification to include ESG data, allowing SMEs to voluntarily showcase their sustainability credentials. This will give businesses a competitive advantage in tenders and recruitment, as younger talent increasingly prioritizes employer values. With interest from Luxembourg and the Netherlands, Kube's potential extends beyond Belgium's borders. Visit Isabel's website to learn more about their community events and webinars. There, you can connect with others and explore how their ecosystem approach might benefit your business. After three decades of innovation, Isabel continues to prove that sometimes the most valuable financial solutions come not from disruption but from thoughtful collaboration. Thank you for tuning into our podcast about global trends in the FinTech industry. Check out our podcast channel. Learn more about The Connector. Follow us on LinkedIn. Cheers, Koen Vanderhoydonk (koen.vanderhoydonk@jointheconnector.com) #FinTech #RegTech #Scaleup #WealthTech

HC Audio Stories
'A Hopeful Dark'

HC Audio Stories

Play Episode Listen Later Feb 21, 2025 3:52


Beacon artist depicts Earth under siege. Zac Skinner walks the talk. Concerned about environmental degradation, he takes his young sons to remove trash along the banks of the Hudson River and donates a portion of his art sales to the nonprofit Earthjustice. No surprise, then, that his symbolic paintings are saturated with stark reminders that the Earth is under siege. In two of them, oil pipelines guide the eye. In "Pop-up Farm with Vortex," a maelstrom threatens a ziggurat. "I'm going for post-industrial landscape," Skinner says. "They can be dark, but I intend them to be a hopeful dark." Skinner, 43, is one of three artists featured in a group show, Home is Where the Heart Is, on display at the Garrison Art Center through March 9. He will also participate in an artist talk with Amy Cheng, Erik Schoonebeck and Greg Slick at the art center at 2 p.m. on Saturday (Feb. 22). A practicing Buddhist whose work reflects his travels in Asia, Skinner enjoys camping, and many of his pictures depict structures in the wilderness, like a pyramid, temple, monastery or wooden meditation hut. "For the smaller ones, I like to feature a prominent entryway to make them more inviting and inhabitable," he says. "They provide a sense of hope and a safe space as shelter from the storm." Hailing from the Syracuse area, Skinner earned an MFA from The School at the Art Institute of Chicago. Since moving to Beacon in 2014, he's used the area as a launch pad to show works in Texas, California and Korea. In addition to exhibiting in group shows at Kube Art Center and the former Theo Ganz Studio, he has mounted solo shows at the BAU Gallery and the now-closed Matteawan Gallery, all in Beacon, as well as the Garrison Art Center. A solo show at No. 3 Reading Room in Beacon led to a limited-edition book, Atlas Trap, published by Traffic Street Press. Owner Paulette Myers-Rich paired Skinner's relief prints of endangered species with poetry by Greg Delanty in a 40-copy print run. As a painter, Skinner works with many media, including tempera, egg-based paints used widely until the Renaissance. Some of the bleaker works are created with special charcoal, like "Cliff Shelter No. 1 with Storm Clouds," on view at Garrison Art Center. (Pictured works: "Abandoned Hut by Dried Steam Bed," "Atmosphere Bubble and Ruins in a Dead Landscape," "Cliff Dwellers with Aloe Vera Garden," "Cliff Shelter No. 1 with Storm Clouds," "Pop-up Farm with Vortex," "Survival Camp with Water Collector, Kale and Oil Pipeline.") His approach also hops around. "The alchemist in me likes to experiment with materials and depictions," he says. "I don't have a style, really, I just keep inventing my way through the images." Some pieces feature charred landscapes, barren trees and lots of stumps. Clouds are often ominous. The ones gathering in "Abandoned Hut by Dried Stream Edge" (on view in Garrison) and "Survival Camp with Water Collectors, Kale and Oil Pipeline" evoke Van Gogh's swirling brush style. The large dabs that make up the majestic purple mountains in "Atmosphere Bubble and Ruins in a Dead Landscape," which hangs in his studio at Kube, also channel the Dutch master. The painting's pillars could represent Stonehenge or the detritus of an abandoned highway overpass. "The goal with the overt message is to prevent indifference over time," he says. "I am compelled to represent myself, and my convictions, to inspire inner strength." The Garrison Art Center, at 23 Garrison's Landing, is open daily from 10 a.m. to 5 p.m., except Monday.

Kolektiv znanja sa Anisom Šerak
#72 Alija Kamber: Umjetnik bez karijere

Kolektiv znanja sa Anisom Šerak

Play Episode Listen Later Feb 20, 2025 97:48


The guest in this episode is a man who prefers not to call himself a multimedia artist but an artist without a career. He is a photographer, videographer, illustrator, and collector of art, and above all an adventurer and travel addict: Alija Kamber. As Jusuf Hadžifejzović, the bard of contemporary art in Bosnia and Herzegovina and the region, called him while opening his latest exhibition, he is an Evliya Çelebi of the modern age. You'll hear all about Alija's travels in the Caribbean, his months-long life in Havana, Cuba, his discovery of the Silk Road in Samarkand and Afghanistan, and why he considers India his inexhaustible inspiration. Alija has shown his art installations, photographs, and artifacts from his various travels in his traveling gallery Grundilo, from Bihać, Mostar, Zenica, Sarajevo, Tuzla, and other Bosnian cities to Cuba, New York, and India, and he has also exhibited in galleries in Berlin. He also took part as one of the performers in Selma Selman's latest exhibition "Sleeping Guards" on January 30, 2025, at the Stedelijk Museum in Amsterdam. https://www.alijakamber.com/page/4 IG: https://www.instagram.com/furaun/
In this episode we talked about:
00:00:00 Teaser and intro
00:05:11 My travels begin in Bosnia
00:09:02 How did I get into photography?
00:15:50 The "Random" exhibition at Charlama in Skenderija in December '24
00:19:49 My Caribbean experience and discovering the world outside the "diamond zone"
00:24:30 Working as a photographer on a cruise ship and discovering the magic of the Caribbean
00:26:50 I've been crazy about Cuba since childhood and dreamed of living in Havana
00:30:08 Living with a Cuban family, I got to know every corner of Havana
00:39:21 My adventure in Samarkand
00:45:20 Why does everyone in Uzbekistan drive American Chevrolets?
00:52:08 India is a wondrous country with no end to its surprises
00:58:30 My experience of flea markets around the world
01:01:02 Exchanging art artifacts from Sicily and Cuba
01:06:46 For years I've been collecting fragments of the ruins of Bogdan Bogdanović's Partisan Memorial Cemetery in Mostar
01:12:09 Being part of the performance at Selma Selman's exhibition at the Stedelijk Museum in Amsterdam
01:17:09 How did I come to exhibit in galleries in Berlin?
01:21:44 It's not easy to make podcasts with friends
01:27:38 I'm an artist without a career or a CV, and I love my traveling gallery
01:30:42 We opened the Raskosh gallery in Sarajevo
01:35:00 I worry about who I'll leave my "museum" Instagram profile to

Sports Maniac - Digitale Trends und Innovationen im Sport
FC Bayern: Wie die globale Content-Strategie Rekorde bricht - mit Nikolai Kube | #484

Sports Maniac - Digitale Trends und Innovationen im Sport

Play Episode Listen Later Feb 19, 2025 56:32


Around 200 pieces of content per day on roughly 20 club media channels, in ten languages, for almost 200 million followers on social media: FC Bayern is breaking records in the content business too. The record champions have long since become their own media and production house, with more than 50 employees in the Club Media department. The goal: get people excited about FC Bayern. But on which platform does that work best? "Today it's no longer enough to just produce content. You have to make money with content." Which channels have been shut down, why does Twitch play no role, and where is AI being used? How did WhatsApp become the most relevant social media channel for merchandising? And to what extent is the players' own reach being used directly? In the new Sports Maniac podcast we learn what makes the "ThoMats Challenge" an absolute best case, what gigantic numbers Harry Kane has delivered, and what monetary value sponsors' digital activations really have. Our guest: Nikolai Kube, Head of Club Media & Content at FC Bayern München. Our topics: from editor to Head of Content: Nikolai's path at FCB; insights into the top channels: Instagram, TikTok, WhatsApp; organization and recruiting of the Club Media department; working with external media; how the players' reach is used; the gigantic Harry Kane effect; 90% international fans: what that means for the strategy; the monetary value of social media in sponsoring; next big thing: content highlights in 2025. Blog article: https://sportsmaniac.de/episode484 Our recommendations: Harry Kane impact: https://www.linkedin.com/posts/fcbayern_kane-fcbayern-kane-activity-7101118633749147649-_REF The most successful FC Bayern YouTube clip = branded content: https://www.youtube.com/shorts/wUvQeGKM4LM Nik in front of the camera on FCBayern.TV: https://www.youtube.com/watch?v=bItUJioCCRY Subscribe to the WU: https://sportsmaniac.de/wu Our partner (ad), GIPEDO: Want to advance and automate your sports marketing? You can do that with the GIPEDO Workspace, a digital, data-driven operating system. For more info, contact Trisha Jürgens (Senior Manager Business Development at GIPEDO) directly at trisha@gipedo.io! The special part: there is an exclusive offer for you as a podcast listener. Anyone who mentions the podcast when contacting Trisha can try the GIPEDO Workspace and the partner portal free for two months. Our contact: follow Sports Maniac on LinkedIn, Twitter, and Facebook; follow Daniel Sprügel on LinkedIn, Twitter, and Instagram; email: daniel@sportsmaniac.de. If you like what you hear, subscribe and recommend us. The Sports Maniac Podcast is a production of our podcast agency Maniac Studios.

Ocene
Branko Gradišnik: Tisoč in nobena noč

Ocene

Play Episode Listen Later Feb 17, 2025 6:27


Written by Miša Gams, read by Sanja Rejc and Igor Velše. After adventurous journeys through Ireland, Sicily, Corsica, Portugal, the Transcaucasus, the Half-Wild West, the Lake District, and Provence, the writer Branko Gradišnik set off for Cuba, where back in December 2014 he and his family toured all the major cities as well as the beaches from Havana to Guardalavaca, just short of the American military base at Guantanamo. Gradišnik, who began his writing career with science fiction and short fantasy prose and then wrote a number of humorous, crime, and satirical novels, has in recent years "found himself" in the genre of inventive travelogues, in which he humorously describes adventures, anecdotes, philosophical reflections, and amusing escapades from travels around a world that, viewed through the lens of globalism, keeps getting smaller. The title Tisoč in nobena noč (A Thousand and No Nights) immediately recalls the collection of Arabian fairy tales, which is no coincidence: the writer wants to carry us into the mysterious world of Cuba through a series of humoresques, episodes, and allegories that arise partly spontaneously from the collision of two different cultural paradigms and partly from the writer's own enthusiasm and childlike craving for exciting events. He writes at some length about his technique in at least one place in the travelogue: "... for many years now I have been a mere recorder. I write about things that, to my knowledge, really happened, but so that someone might read them (for who would read everyday banalities?) I color them a little, say with a humorous lens (which suits my cheerful nature), then with an open-hearted sincerity that does not shy away even from unpleasant truths, with unexpected turns of phrase (typical of a blurter like me), and of course with the method perfected by Dostoevsky, of not presenting events in sequence but temporarily withholding certain key events and serving them up later as true tabloid sensations." And indeed the reader often fully grasps an episode, joke, or humoresque only with a delay, when additional details reveal a much larger picture of the event and the "bricks" in one's head assemble into an entirely different algorithm. The language of the narrative is simple, measured, clear, and realistic, at times approaching the language of crime stories, detective fiction, and satirical burlesque. Although he travels to Cuba with his family, his wife and half-grown twins, he spends most of his time in the company of a fellow Slovenian, Deni, who teaches him how to win the trust of Cuban men and women, as well as the optimal ways of obtaining their residence visas, real estate, marriages and annulments, and more. With him the author develops a special relationship that constantly oscillates between patronage and admiration and that, as the journey goes on, becomes ever more trusting and competitive. The traveling group also includes pretty girls, on whom the two try out various, mostly failed, flirting techniques, revealing their life philosophies in the process. Gradišnik compares flirting to a pressure cooker that "keeps the soup from boiling over. And if it starts to boil over anyway, you can take the pot off the heat, step back a little, and it will settle." On another occasion he points out the importance of a man listening and bringing humor into the conversation: "... humor proves that you are well constructed at the neurological level, that you can bear life's troubles. Well, once a woman has exchanged enough psychological content with you, she will feel so close, so intertwined with you, that she will dare to exchange inner bodily content as well, the juices." Hand on heart, we learn very little about Cuba itself. Those looking for either tourist descriptions or travel advice will be disappointed very quickly.
The book does mention, here and there, the most beautiful beaches, nightclubs, and restaurants with good food, but reaching their locations requires a detective's instinct and plenty of imagination. On the other hand, the book offers hints on how to develop a strategy for finding toilet paper, which is in very short supply in Cuba, how to haggle effectively with vendors and resellers, how to kill several flies with one simple gesture... or how to recognize, amid a rural arrangement of plastic fruit, a dildo painted green. At first glance Gradišnik's adventures and tips may not seem terribly practical, but they are true to life and, above all, enormously witty. Although he does not weave politics into his notes, he cannot avoid the differences between Americans and Cubans, which, he writes at the end of the book, spring from an ethical foundation: the former approach challenges with competitiveness and individualism, the latter with solidarity for mutual benefit: "why Americans are miserable while Cubans are obviously in good spirits: happiness is not brought by money, entertainment, or power, but by the opportunity to be useful to others and to have others be useful to you. That is all we need, this sense of mutual usefulness that sustains a community regardless of external pressures and troubles." The travelogue Tisoč in nobena noč is adorned with amusing illustrations by Romeo Štrakl, the author of three comic albums and Gradišnik's collaborator on the Strogo zaupno series, which he has illustrated from the third book onward. Besides the pictures, which work like summaries of photographs of individual events, the book opens with a hand-drawn map of Cuba showing all the towns where the group stops, and each chapter, or excursion, is introduced by a description of the goals it sets. The magic, however, does not come from achieving those goals but precisely from the moments when nothing goes according to plan. Excerpts from Tisoč in nobena noč echo in our heads long afterwards, inviting us on an adventure that will turn our lives upside down and shake us to the foundations, and beyond.

CPQ Podcast
Interview with Daniel Kube, CEO of servicePath, a CPQ vendor specializing in solutions for technology vendors and managed service providers.

CPQ Podcast

Play Episode Listen Later Feb 9, 2025 31:13


Daniel shares his unique career path and how his background informs his approach to business. He discusses how servicePath helps businesses manage complex product configurations, pricing, and deployments. In the interview, you'll learn about: How servicePath evolved from addressing the challenges of Managed Service Providers (MSPs) to a broader solution for various industries. The platform's unique features, including detailed financial analysis tools and the ability to handle complex pricing scenarios. How servicePath integrates with other systems and their commitment to continuous innovation. The potential and challenges of integrating AI into CPQ systems, including ensuring accuracy and a user-friendly experience. This episode is a must-listen for anyone interested in CPQ solutions, the future of AI in sales, and insights from a successful CEO. servicePath contact information: Email: daniel.kube@servicepath.co  Website: https://servicepath.co/  LinkedIn: https://www.linkedin.com/in/danielkube/     

Na ceste_FM
Kuba - Michal Škrek (8.1.2025 15:10)

Na ceste_FM

Play Episode Listen Later Jan 8, 2025 12:26


Physician, poet, traveler, and guide Michal Škrek tells us about his most recent trip to Cuba. We visit the tobacco-plantation region of Viñales, the historic town of Trinidad, and parts of the rarely visited countryside. Michal describes the Cuban temperament and reveals what complications he ran into in his work as a guide in Cuba.

Drugi pogled
Geraldo Ramirez, Kuba

Drugi pogled

Play Episode Listen Later Jan 7, 2025 10:16


In the first Drugi pogled of 2025 we head for warmer parts, namely Cuba. Geraldo Ramirez moved from Cuba to Slovenia 16 years ago, choosing the Karst, more precisely Sežana, as his home. What, or rather who, brought him from the sunny Caribbean island to the Karst bora wind, you'll find out in this edition of Drugi pogled.

Meditation Mama
Mom Brain Reset Meditation Ft. Callie Kube

Meditation Mama

Play Episode Listen Later Dec 10, 2024 15:49


In this guided meditation, Callie Kube, a meditation teacher, mom of two, and Yoga For You meditation teacher training graduate, leads us through a practice to help clear the mental chatter and reset our mom brains when they feel scattered. Learn more about Callie. Practice with Callie. Follow Callie on Instagram. More Meditation Mama: Order You Are Not Your Thoughts: An 8-Week Anxiety Guided Meditation Journal (order on Amazon or at other bookstores). Order Meditation For The Modern Family. Meditation TT: the 40-Hour Meditation Teacher Training is now open for enrollment; learn more and enroll here. Let's Connect: Email Kelly your questions at info@yogaforyouonline.com. Follow Kelly on Instagram @yogaforyouonline. Please rate, subscribe and review (it helps more than you know!) Learn more about your ad choices. Visit megaphone.fm/adchoices

The Mo'Kelly Show
Tech Thursday with Marsha Collier & Fullerton Alum Michelle Kube Kelly

The Mo'Kelly Show

Play Episode Listen Later Dec 6, 2024 33:40 Transcription Available


ICYMI: Hour Two of ‘Later, with Mo'Kelly' Presents – Thoughts on the FBI's warning to iOS and Android users to avoid ‘texting' AND a way to see how much Google's AI is obtaining from your photos on ‘Tech Thursday' with regular guest contributor (author, podcast host, and technology pundit) Marsha Collier…PLUS – A look at Fullerton College's new “Drone Flying” bachelor's degree program with one of the Hornets' most famous alums, and KFI's very own ‘Producer Extraordinaire,' Michelle Kube Kelly - on KFI AM 640…Live everywhere on the iHeartRadio app

Startitup.sk
Jopo Poláček: Ivan Gašparovič vystúpil ako hlavná hviezda na našom Disco festivale [Pod Tlakom]

Startitup.sk

Play Episode Listen Later Dec 1, 2024 24:47


Ecclesia Podcast CZ
E106 | Jiří Kubeš: „Lepším poznáním těla můžeme lépe pochopit Boha.“

Ecclesia Podcast CZ

Play Episode Listen Later Nov 20, 2024 34:51


Today's guest is health and movement specialist Jiří Kubeš. As co-founder and instructor of the Enrapha courses, he works on connecting healthy movement and biomechanics with Christian spirituality. In the conversation we discuss his coaching practice (including training Orlando Bloom!) and the significance of the Enrapha exercises.

The Best of Breakfast with Bongani Bingwa
Buhlebendalo Presents Makube Chosi Kube Hele Live

The Best of Breakfast with Bongani Bingwa

Play Episode Listen Later Nov 18, 2024 7:11


Bongani Bingwa speaks with singer-songwriter, Buhlebendalo, on what fans can anticipate from "Makube Chosi Kube Hele Live," at Gibson Kente Theatre at Soweto Theatre, where her journey from Chosi to Hele unfolds in raw, musical brilliance.See omnystudio.com/listener for privacy information.

Startitup.sk
Títo Slováci masívne investujú na Kube. Ich kakao je možnosťou aj pre teba / CASHFLOW

Startitup.sk

Play Episode Listen Later Oct 23, 2024 43:18


In this episode of Cashflow we talked with Pavol Kožík, owner of the investment platform Proxenta, who explains how to invest in Cuba, what the company is doing there, and what investment opportunity it offers Slovaks. Why is Cuba worth investing in, and how do they plan to produce their cocoa? All that in the new episode of Cashflow. In cooperation with Proxenta.

Fiirabigmusig
Die 100 schönsten Kubeš-Titel

Fiirabigmusig

Play Episode Listen Later Oct 14, 2024 54:24


To mark the 100th birthday of the famous Bohemian composer Ladislav Kubeš (Senior), a box set with four CDs has been released. Ladislav Kubeš (Senior) would have turned 100 in 2024. For the anniversary, Ladislav Kubeš (Junior) has released a four-CD box set with the 100 most beautiful compositions by his father, including hits such as the «Südböhmische Polka» and the waltz «Ein schönes Fleckchen Erde». For «Fiirabigmusig», Ladislav Kubeš (Junior) picked four personal favorites: Borkovická Polka, Lottchen-Polka, Du musst bleiben, and Morgenpolka. «The choice wasn't easy, I like all of my father's compositions,» he says with a smile in conversation with SRF Musikwelle. Ladislav Kubeš (Junior) also talks about his father's life: «Those were difficult times for him, when the communist party was in power.» The restrictions in what was then Czechoslovakia were severe. Ladislav Kubeš (Senior) received support from sheet-music publishers abroad, including in Switzerland. During the «Fiirabigmusig» broadcast, two «100 Jahre meine böhmische Heimat» box sets will be raffled off.

Le Podcast Kube
Épisode 10 - La dystopie est-elle indispensable ?

Le Podcast Kube

Play Episode Listen Later Oct 6, 2024 42:15


Hradec Králové
Radioporadna: I pleť ženy ovlivní antikoncepce. Jsou typy působící cíleně proti akné, říká gynekoložka Kubečková

Hradec Králové

Play Episode Listen Later Sep 26, 2024 19:55


Today's guest in the radio advice hour is gynecologist MUDr. Alena Kubečková. September 26 is International Contraception Day, so we focus mainly on contraception, to help keep the public well informed and to support prevention. We also debunk some myths and superstitions.

The Lawfare Podcast
Chatter: Rocky Mountain High with Courtney Kube and Gordon Lubold

The Lawfare Podcast

Play Episode Listen Later Jul 23, 2024 49:39


This week, we're at the Aspen Security Forum, the annual gathering of national security and foreign policy heavyweights. The conference regularly draws senior government and military officials from the United States and around the world to chew over the big issues of the day, and this time we had a full plate. It's not exactly hardship duty escaping to a glamorous mountain paradise. But the real world hardly felt far away. Questions linger about the November elections and the security failure that led to the assassination attempt on Donald Trump while two wars grind on with no clear sign of stopping. Shane Harris sat down with his colleagues Courtney Kube of NBC News and Gordon Lubold of The Wall Street Journal to talk about the highlights of the conference and what people discussed on the sidelines, where the real action often happens.Watch recordings of the security forum panels. https://www.aspensecurityforum.org/ Read more from our guests. Courtney Kube: https://www.nbcnews.com/author/courtney-kube-ncpn3621 Gordon Lubold: https://www.wsj.com/news/author/gordon-lubold Chatter is a production of Lawfare and Goat Rodeo. This episode was produced and edited by Noam Osband of Goat Rodeo. Podcast theme by David Priess, featuring music created using Groovepad.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Chatter
Rocky Mountain High with Courtney Kube and Gordon Lubold

Chatter

Play Episode Listen Later Jul 23, 2024 49:39


This week, we're at the Aspen Security Forum, the annual gathering of national security and foreign policy heavyweights. The conference regularly draws senior government and military officials from the United States and around the world to chew over the big issues of the day, and this time we had a full plate. It's not exactly hardship duty escaping to a glamorous mountain paradise. But the real world hardly felt far away. Questions linger about the November elections and the security failure that led to the assassination attempt on Donald Trump while two wars grind on with no clear sign of stopping. Shane Harris sat down with his colleagues Courtney Kube of NBC News and Gordon Lubold of The Wall Street Journal to talk about the highlights of the conference and what people discussed on the sidelines, where the real action often happens.Watch recordings of the security forum panels. https://www.aspensecurityforum.org/ Read more from our guests. Courtney Kube: https://www.nbcnews.com/author/courtney-kube-ncpn3621 Gordon Lubold: https://www.wsj.com/news/author/gordon-lubold Chatter is a production of Lawfare and Goat Rodeo. This episode was produced and edited by Noam Osband of Goat Rodeo. Podcast theme by David Priess, featuring music created using Groovepad. Hosted on Acast. See acast.com/privacy for more information.

TREND.sk
Proxenta rozširuje investície na Kube do výrobne kakaa. Investori si môžu kúpiť akcie

TREND.sk

Play Episode Listen Later Apr 29, 2024 30:13


Pavol Kožík, CEO of the Proxenta group, talks about why the company is investing in cocoa production.

Union Radio
#Puntoscardinales incidencias de bajar peso con fármacos con la nutricionista Valentina Kube

Union Radio

Play Episode Listen Later Mar 18, 2024 7:50


Radio Wave
Kompot: Český lev? Nechte ženy domluvit, říkají podstatné věci

Radio Wave

Play Episode Listen Later Mar 12, 2024 49:10


Besides a new host, Marek Eben, the staging of Saturday's Czech Lion awards introduced a new element: smoke signaling that the time limit for acceptance speeches had run out. Why was it a huge shame and a sign of disrespect that it cut off producer Pavla Janoušková Kubečková and director Daria Kashcheeva? And why should Czech hosts educate themselves about microaggressions? Listen to Kompot.

Kompot
Český lev? Nechte ženy domluvit, říkají podstatné věci

Kompot

Play Episode Listen Later Mar 12, 2024 49:10


Besides a new host, Marek Eben, the staging of Saturday's Czech Lion awards introduced a new element: smoke signaling that the time limit for acceptance speeches had run out. Why was it a huge shame and a sign of disrespect that it cut off producer Pavla Janoušková Kubečková and director Daria Kashcheeva? And why should Czech hosts educate themselves about microaggressions? Listen to Kompot. All episodes of the Kompot podcast can be conveniently streamed in the mujRozhlas mobile app for Android and iOS or at mujRozhlas.cz.

Oldschooler's
Bohdana Kubešová | Nikdy není pozdě roztáhnout křídla a nikdy to nevzdám.

Oldschooler's

Play Episode Listen Later Jan 18, 2024 50:00


Customer Engagement manager for diabetology and Content Lead for the Central European cluster at Sanofi. A trained physiotherapist, she has nevertheless worked in the pharmaceutical business for 25 years, moving through many sales and marketing positions and roles, from medical representative to Business Unit Head, with parental leave somewhere in the middle. She loves running, skiing, sunshine, the forest, and sparkling conversation. She is a collector of information, the mother of a teenager, a traveler, a LinkedIn enthusiast, and above all she enjoys living.

NBC Meet the Press
Post Game with Courtney Kube: Zelenskyy fails to convince lawmakers to unlock Ukraine aid

NBC Meet the Press

Play Episode Listen Later Dec 17, 2023 12:23


NBC News National Security and Pentagon Correspondent Courtney Kube joins Kristen Welker to talk about Ukrainian President Volodymyr Zelenskyy's visit to the U.S. this week, and says he "didn't turn a lot of minds" in his attempt to get Republican lawmakers to support continued aid for his country. Plus, Courtney's rare access into the U.S. space arms race with China.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Catch us at Modular's ModCon next week with Chris Lattner, and join our community!

Due to Bryan's very wide-ranging experience in data science and AI across Blue Bottle (!), Stitch Fix, Weights & Biases, and now Hex Magic, this episode can be considered a two-parter.

Notebooks = Chat++

We've talked a lot about AI UX (in our meetups, writeups, and guest posts), and today we're excited to dive into a new old player in AI interfaces: notebooks! Depending on your background, you either Don't Like or you Like notebooks. They are the most popular example of Knuth's Literate Programming concept: basically a collection of cells, where each cell can execute code, display it, and share its state with all the other cells in a notebook. They can also simply be Markdown cells that add commentary to the analysis. Notebooks have a long history but most recently became popular as iPython evolved into Project Jupyter, and a wave of notebook-based startups from Observable to DeepNote and Databricks sprung up for the modern data stack.

The first wave of AI applications has been very chat focused (ChatGPT, Character.ai, Perplexity, etc.). Chat as a user interface has a few shortcomings, the major one being the inability to edit previous messages. We enjoyed Bryan's takes on why notebooks feel like "Chat++" and how they are building Hex Magic:

* Atomic actions vs. stream of consciousness: in a chat interface, you make corrections by adding more messages to a conversation (i.e. "Can you try again by doing X instead?" or "I actually meant XYZ"). The context can easily get messy and confusing for models (and humans!) to follow. Notebooks' cell structure, on the other hand, allows users to go back to any previous cell and make edits without having to add new ones at the bottom.
* "Airlocks" for repeatability: one of the ideas they came up with at Hex is "airlocks", a collection of cells that depend on each other and keep each other in sync. If you have a task like "Create a summary of my customers' recent purchases", there are many sub-tasks to be done (look up the data, sum the amounts, write the text, etc.). Each sub-task lives in its own cell, and the airlock keeps them all in sync together.
* Technical + non-technical users: previously you had to use Python / R / Julia to write notebook code, but with models like GPT-4, natural language is usually enough. Hex is also working on lowering the barrier of entry for non-technical users into notebooks, similar to how Code Interpreter is doing the same in ChatGPT.

Obviously notebooks aren't new for developers (OpenAI Cookbooks are a good example), but they haven't had much adoption in less technical spheres. The shortcomings of chat UIs, plus LLMs lowering the barrier of entry to creating code cells, might make them a much more popular UX going forward.

RAG = RecSys!

We also talked about the LLMOps landscape and why it's an "iron mine" rather than a "gold rush":

I'll shamelessly steal [this] from a friend, Adam Azzam from Prefect. He says that [LLMOps] is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. Don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this resource to something valuable is significant.

Some of my favorite takeaways:

* RAG as RecSys for LLMs: at its core, the goal of a RAG pipeline is finding the most relevant documents based on a task. This isn't very different from traditional recommendation system products that surface things for users. How can we apply old lessons to this new problem? Bryan cites fellow AIE Summit speaker and Latent Space Paper Club host Eugene Yan in decomposing the retrieval problem into retrieval, filtering, and scoring/ranking/ordering (a minimal sketch of this decomposition appears after the transcript excerpt below). As AI Engineers increasingly find that long context has tradeoffs, they will also have to relearn age-old lessons that vector search is NOT all you need and a good systems-not-models approach is essential to scalable/debuggable RAG. Good thing Bryan has just written the first O'Reilly book about modern RecSys, eh?
* Narrowing down evaluation: while "hallucination" is an easy term to throw around, the reality is more nuanced. A lot of the time, model errors can be automatically fixed: is this JSON valid? If not, why? Is it just missing a closing brace? These smaller issues can be checked and fixed before returning the response to the user, which is easier than fixing the model.
* Fine-tuning isn't all you need: when they first started building Magic, one of the discussions was around fine-tuning a model. In our episode with Jeremy Howard we talked about how fine-tuning leads to loss of capabilities as well. In notebooks, you are often dealing with domain-specific data (i.e. purchases, orders, wardrobe composition, household items, etc.); the fact that the model understands that "items" are probably part of an "order" is really helpful. They have found that GPT-4 + 3.5-turbo were everything they needed to ship a great product rather than having to fine-tune on notebooks specifically.

Definitely recommend listening to this one if you are interested in getting a better understanding of how to think about AI, data, and how we can use traditional machine learning lessons in large language models.

The AI Pivot

For more Bryan, don't miss his fireside chat at the AI Engineer Summit:

Show Notes
* Hex Magic
* Bryan's new book: Building Recommendation Systems in Python and JAX
* Bryan's whitepaper about MLOps
* "Kitbashing in ML", slides from his talk on building on top of foundation models
* "Bayesian Statistics The Fun Way" by Will Kurt
* Bryan's Twitter
* "Berkeley man determined to walk every street in his city"
* People: Adam Azzam, Graham Neubig, Eugene Yan, Even Oldridge

Timestamps
* [00:00:00] Bryan's background
* [00:02:34] Overview of Hex and the Magic product
* [00:05:57] How Magic handles the complex notebook format to integrate cleanly with Hex
* [00:08:37] Discussion of whether to build vs buy models - why Hex uses GPT-4 vs fine-tuning
* [00:13:06] UX design for Magic with Hex's notebook format (aka "Chat++")
* [00:18:37] Expanding notebooks to less technical users
* [00:23:46] The "Memex" as an exciting underexplored area - personal knowledge graph and memory augmentation
* [00:27:02] What makes for good LLMops vs MLOps
* [00:34:53] Building rigorous evaluators for Magic and best practices
* [00:36:52] Different types of metrics for LLM evaluation beyond just end task accuracy
* [00:39:19] Evaluation strategy when you don't own the core model that's being evaluated
* [00:41:49] All the places you can make improvements outside of retraining the core LLM
* [00:45:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO-in-Residence of Decibel Partners, and today I'm joined by Bryan Bischof. [00:00:15]

Bryan: Hey, nice to meet you.
[00:00:17]Alessio: So Bryan has one of the most thorough and impressive backgrounds we had on the show so far. Lead software engineer at Blue Bottle Coffee, which if you live in San Francisco, you know a lot about. And maybe you'll tell us 30 seconds on what that actually means. You worked as a data scientist at Stitch Fix, which used to be one of the premier data science teams out there. [00:00:38]Bryan: It used to be. Ouch. [00:00:39]Alessio: Well, no, no. Well, you left, you know, so how good can it still be? Then head of data science at Weights and Biases. You're also a professor at Rutgers and you're just wrapping up a new O'Reilly book as well. So a lot, a lot going on. Yeah. [00:00:52]Bryan: And currently head of AI at Hex. [00:00:54]Alessio: Let's do the Blue Bottle thing because I definitely want to hear what's the, what's that like? [00:00:58]Bryan: So I was leading data at Blue Bottle. I was the first data hire. I came in to kind of get the data warehouse in order and then see what we could build on top of it. But ultimately I mostly focused on demand forecasting, a little bit of recsys, a little bit of sort of like website optimization and analytics. But ultimately anything that you could imagine sort of like a retail company needing to do with their data, we had to do. I sort of like led that team, hired a few people, expanded it out. One interesting thing was I was part of the Nestle acquisition. So there was a period of time where we were sort of preparing for that and didn't know, which was a really interesting dynamic. Being acquired is a very not necessarily fun experience for the data team. [00:01:37]Alessio: I build a lot of internal tools for sourcing at the firm and we have a small VCs and data community of like other people doing it. And I feel like if you had a data feed into like the Blue Bottle in South Park, the Blue Bottle at the Hanahaus in Palo Alto, you can get a lot of secondhand information on the state of VC funding. [00:01:54]Bryan: Oh yeah. I feel like the real source of alpha is just bugging a Blue Bottle. [00:01:58]Alessio: Exactly. And what's your latest book about? [00:02:02]Bryan: I just wrapped up a book with a coauthor Hector Yee called Building Production Recommendation Systems. I'll give you the rest of the title because it's fun. It's in Python and JAX. And so for those of you that are like eagerly awaiting the first O'Reilly book that focuses on JAX, here you go. [00:02:17]Alessio: Awesome. And we'll chat about that later on. But let's maybe talk about Hex and Magic before. I've known Hex for a while, I've used it as a notebook provider and you've been working on a lot of amazing AI enabled experiences. So maybe run us through that. [00:02:34]Bryan: So I too, before I sort of like joined Hex, saw it as this like really incredible notebook platform, sort of a great place to do data science workflows, quite complicated, quite ad hoc interactive ones. And before I joined, I thought it was the best place to do data science workflows. And so when I heard about the possibility of building AI tools on top of that platform, that seemed like a huge opportunity. In particular, I lead the product called Magic. Magic is really like a suite of sort of capabilities as opposed to its own independent product. What I mean by that is they are sort of AI enhancements to the existing product. And that's a really important difference from sort of building something totally new that just uses AI. 
It's really important to us to enhance the already incredible platform with AI capabilities. So these are things like the sort of obvious like co-pilot-esque vibes, but also more interesting and dynamic ways of integrating AI into the product. And ultimately the goal is just to make people even more effective with the platform. [00:03:38]Alessio: How do you think about the evolution of the product and the AI component? You know, even if you think about 10 months ago, some of these models were not really good on very math based tasks. Now they're getting a lot better. I'm guessing a lot of your workloads and use cases is data analysis and whatnot. [00:03:53]Bryan: When I joined, it was pre 4 and it was pre the sort of like new chat API and all that. But when I joined, it was already clear that GPT was pretty good at writing code. And so when I joined, they had already executed on the vision of what if we allowed the user to ask a natural language prompt to an AI and have the AI assist them with writing code. So what that looked like when I first joined was it had some capability of writing SQL and it had some capability of writing Python and it had the ability to explain and describe code that was already written. Those very, what feel like now primitive capabilities, believe it or not, were already quite cool. It's easy to look back and think, oh, it's like kind of like Stone Age in these timelines. But to be clear, when you're building on such an incredible platform, adding a little bit of these capabilities feels really effective. And so almost immediately I started noticing how it affected my own workflow because ultimately as sort of like an engineering lead and a lot of my responsibility is to be doing analytics to make data driven decisions about what products we build. And so I'm actually using Hex quite a bit in the process of like iterating on our product. When I'm using Hex to do that, I'm using Magic all the time. And even in those early days, the amount that it sped me up, that it enabled me to very quickly like execute was really impressive. And so even though the models weren't that good at certain things back then, that capability was not to be underestimated. But to your point, the models have evolved between 3.5 Turbo and 4. We've actually seen quite a big enhancement in the kinds of tasks that we can ask Magic and even more so with things like function calling and understanding a little bit more of the landscape of agent workflows, we've been able to really accelerate. [00:05:57]Alessio: You know, I tried using some of the early models in notebooks and it actually didn't like the IPyNB formatting, kind of like a JSON plus XML plus all these weird things. How have you kind of tackled that? Do you have some magic behind the scenes to make it easier for models? Like, are you still using completely off the shelf models? Do you have some proprietary ones? [00:06:19]Bryan: We are using at the moment in production 3.5 Turbo and GPT-4. I would say for a large number of our applications, GPT-4 is pretty much required. To your question about, does it understand the structure of the notebook? And does it understand all of this somewhat complicated wrappers around the content that you want to show? We do our very best to abstract that away from the model and make sure that the model doesn't have to think about what the cell wrapper code looks like. Or for our Magic charts, it doesn't have to speak the language of Vega. 
These are things that we put a lot of work in on the engineering side, to the AI engineer profile. This is the AI engineering work to get all of that out of the way so that the model can speak in the languages that it's best at. The model is quite good at SQL. So let's ensure that it's speaking the language of SQL and that we are doing the engineering work to get the output of that model, the generations, into our notebook format. So too for other cell types that we support, including charts, and just in general, understanding the flow of different cells, understanding what a notebook is, all of that is hard work that we've done to ensure that the model doesn't have to learn anything like that. I remember early on, people asked the question, are you going to fine tune a model to understand Hex cells? And almost immediately, my answer was no. No we're not. Using fine-tuned models in 2022, I was already aware that there are some limitations of that approach and frankly, even using GPT-3 and GPT-2 back in the day in Stitch Fix, I had already seen a lot of instances where putting more effort into pre- and post-processing can avoid some of these larger lifts. [00:08:14]Alessio: You mentioned Stitch Fix and GPT-2. How has the balance between build versus buy, so to speak, evolved? So GPT-2 was a model that was not super advanced, so for a lot of use cases it was worth building your own thing. Is with GPT-4 and the likes, is there a reason to still build your own models for a lot of this stuff? Or should most people be fine-tuning? How do you think about that? [00:08:37]Bryan: Sometimes people ask, why are you using GPT-4 and why aren't you going down the avenue of fine-tuning today? I can get into fine-tuning specifically, but I do want to talk a little bit about the good old days of GPT-2. Shout out to Reza. Reza introduced me to GPT-2. I still remember him explaining the difference between general transformers and GPT. I remember one of the tasks that we wanted to solve with transformer-based generative models at Stitch Fix were writing descriptions of clothing. You might think, ooh, that's a multi-modal problem. The answer is, not necessarily. We actually have a lot of features about the clothes that are almost already enough to generate some reasonable text. I remember at that time, that was one of the first applications that we had considered. There was a really great team of NLP scientists at Stitch Fix who worked on a lot of applications like this. I still remember being exposed to the GPT endpoint back in the days of 2. If I'm not mistaken, and feel free to fact check this, I'm pretty sure Stitch Fix was the first OpenAI customer, unlike their true enterprise application. Long story short, I ultimately think that depending on your task, using the most cutting-edge general model has some advantages. If those are advantages that you can reap, then go for it. So at Hex, why GPT-4? Why do we need such a general model for writing code, writing SQL, doing data analysis? Shouldn't a fine-tuned model just on Kaggle notebooks be good enough? I'd argue no. And ultimately, because we don't have one specific sphere of data that we need to write great data analysis workbooks for, we actually want to provide a platform for anyone to do data analysis about their business. To do that, you actually need to entertain an extremely general universe of concepts. So as an example, if you work at Hex and you want to do data analysis, our projects are called Hexes. That's relatively straightforward to teach it. 
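A rough sketch of the kind of post-processing described above: the model is asked only for SQL, and everything notebook-specific is attached in code afterward. The SqlCell structure, its field names, and the wrap_sql_generation helper are hypothetical illustrations, not Hex's actual cell format or API.

    from dataclasses import dataclass

    @dataclass
    class SqlCell:
        # Hypothetical notebook-cell wrapper; the model never sees this structure.
        label: str
        source: str
        output_variable: str

    def wrap_sql_generation(question: str, generated_sql: str) -> SqlCell:
        # The LLM speaks plain SQL; labels, output variable names, and any
        # cell metadata are added here, in ordinary engineering code.
        return SqlCell(
            label=question[:60],
            source=generated_sql.strip().rstrip(";"),
            output_variable="df_result",
        )

    cell = wrap_sql_generation(
        "How are my customers trending over time?",
        "SELECT date_trunc('month', created_at) AS month, count(*) FROM customers GROUP BY 1;",
    )
    print(cell)

The same pattern works in reverse for prompting: the cell wrapper is stripped away before the cell's contents are shown to the model.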
There's a concept of a notebook. These are data science notebooks, and you want to ask analytics questions about notebooks. Maybe if you trained on notebooks, you could answer those questions, but let's come back to Blue Bottle. If I'm at Blue Bottle and I have data science work to do, I have to ask it questions about coffee. I have to ask it questions about pastries, doing demand forecasting. And so very quickly, you can see that just by serving just those two customers, a model purely fine-tuned on like Kaggle competitions may not actually fit the bill. And so the more and more that you want to build a platform that is sufficiently general for your customer base, the more I think that these large general models really pack a lot of additional opportunity in. [00:11:21]Alessio: With a lot of our companies, we talked about stuff that you used to have to extract features for, now you have out of the box. So say you're a travel company, you want to do a query, like show me all the hotels and places that are warm during spring break. It would be just literally like impossible to do before these models, you know? But now the model knows, okay, spring break is like usually these dates and like these locations are usually warm. So you get so much out of it for free. And in terms of Magic integrating into Hex, I think AI UX is one of our favorite topics and how do you actually make that seamless. In traditional code editors, the line of code is like kind of the atomic unit and HEX, you have the code, but then you have the cell also. [00:12:04]Bryan: I think the first time I saw Copilot and really like fell in love with Copilot, I thought finally, fancy auto-complete. And that felt so good. It felt so elegant. It felt so right sized for the task. But as a data scientist, a lot of the work that you do previous to the ML engineering part of the house, you're working in these cells and these cells are atomic. They're expressing one idea. And so ultimately, if you want to make the transition from something like this code, where you've got like a large amount of code and there's a large amount of files and they kind of need to have awareness of one another, and that's a long story and we can talk about that. But in this atomic, somewhat linear flow through the notebook, what you ultimately want to do is you want to reason with the agent at the level of these individual thoughts, these atomic ideas. Usually it's good practice in say Jupyter notebook to not let your cells get too big. If your cell doesn't fit on one page, that's like kind of a code smell, like why is it so damn big? What are you doing in this cell? That also lends some hints as to what the UI should feel like. I want to ask questions about this one atomic thing. So you ask the agent, take this data frame and strip out this prefix from all the strings in this column. That's an atomic task. It's probably about two lines of pandas. I can write it, but it's actually very natural to ask magic to do that for me. And what I promise you is that it is faster to ask magic to do that for me. At this point, that kind of code, I never write. And so then you ask the next question, which is what should the UI be to do chains, to do multiple cells that work together? Because ultimately a notebook is a chain of cells and actually it's a first class citizen for Hex. So we have a DAG and the DAG is the execution DAG for the individual cells. This is one of the reasons that Hex is reactive and kind of dynamic in that way. 
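The two-lines-of-pandas task mentioned above really is that small; a minimal sketch with a made-up column name and prefix:

    import pandas as pd

    df = pd.DataFrame({"sku": ["SKU-1001", "SKU-1002", "SKU-1003"]})
    # Strip the "SKU-" prefix from every string in the column (pandas >= 1.4).
    df["sku"] = df["sku"].str.removeprefix("SKU-")
    print(df)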
And so the very next question is, what is the sort of like AI UI for these collections of cells? And back in June and July, we thought really hard about what does it feel like to ask Magic a question and get a short chain of cells back that execute on that task. And so we've thought a lot about sort of like how that breaks down into individual atomic units and how those are tied together. We introduced something which is kind of an internal name, but it's called the airlock. And the airlock is exactly a sequence of cells that refer to one another, understand one another, use things that are happening in other cells. And it gives you a chance to sort of preview what Magic has generated for you. Then you can accept or reject as an entire group. And that's one of the reasons we call it an airlock, because at any time you can sort of eject the airlock and see it in the space. But to come back to your question about how the AI UX fits into this notebook, ultimately a notebook is very conversational in its structure. I've got a series of thoughts that I'm going to express as a series of cells. And sometimes if I'm a kind data scientist, I'll put some text in between them too, explaining what on earth I'm doing. And that feels, in my opinion, and I think this is quite shared amongst folks at Hex, that feels like a really nice refinement of the chat UI. I've been saying for several months now, like, please stop building chat UIs. There is some irony because I think what the notebook allows is like chat plus plus. [00:15:36]Alessio: Yeah, I think the first wave of everything was like chat with X. So it was like chat with your data, chat with your documents and all of this. But people want to code, you know, at the end of the day. And I think that goes into the end user. I think most people that use notebooks are software engineers, data scientists. I think the cool thing about these models is like people that are not traditionally technical can do a lot of very advanced things. And that's why people like Code Interpreter and ChatGPT. How do you think about the evolution of that persona? Do you see a lot of non-technical people also now coming to Hex to like collaborate with like their technical folks? [00:16:13]Bryan: Yeah, I would say there might even be more enthusiasm than we're prepared for. We're obviously like very excited to bring what we call the like low floor user into this world and give more people the opportunity to self-serve on their data. We wanted to start by focusing on users who are already familiar with Hex and really make Magic fantastic for them. One of the sort of like internal, I would say almost North Stars is our team's charter: to make Hex feel more magical. That is true for all of our users, but that's easiest to do on users that are already able to use Hex in a great way. What we're hearing from some customers in particular is sort of like, I'm excited for some of my less technical stakeholders to get in there and start asking questions. And so that raises a lot of really deep questions. If you immediately enable self-service for data, which is almost like a joke over the last like maybe like eight years, if you immediately enabled self-service, what challenges does that bring with it? What risks does that bring with it? And so it has given us the opportunity to think about things like governance and to think about things like alignment with the data team and making sure that the data team has clear visibility into what the self-service looks like.
Having been leading a data team, trying to provide answers for stakeholders and hearing that they really want to self-serve, a question that we often found ourselves asking is, what is the easiest way that we can keep them on the rails? What is the easiest way that we can set up the data warehouse and set up our tools such that they can ask and answer their own questions without coming away with like false answers? Because that is such a priority for data teams, it becomes an important focus of my team, which is, okay, magic may be an enabler. And if it is, what do we also have to respect? We recently introduced the data manager and the data manager is an auxiliary sort of like tool on the Hex platform to allow people to write more like relevant metadata about their data warehouse to make sure that magic has access to the best information. And there are some things coming to kind of even further that story around governance and understanding. [00:18:37]Alessio: You know, you mentioned self-serve data. And when I was like a joke, you know, the whole rush to the modern data stack was something to behold. Do you think AI is like in a similar space where it's like a bit of a gold rush? [00:18:51]Bryan: I have like sort of two comments here. One I'll shamelessly steal from a friend, Adam Azzam from Prefect. He says that this is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. And that's the first one is I think, don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this like gold to, or this resource to something valuable is significant. I think people have gotten a little carried away with the old maxim of like, don't go pan for gold, sell pickaxes and shovels. It's a much stronger business model. At this point, I feel like I look around and I see more pickaxe salesmen and shovel salesmen than I do prospectors. And that scares me a little bit. Metagame where people are starting to think about how they can build tools for people building tools for AI. And that starts to give me a little bit of like pause in terms of like, how confident are we that we can even extract this resource into something valuable? I got a text message from a VC earlier today, and I won't name the VC or the fund, but the question was, what are some medium or large size companies that have integrated AI into their platform in a way that you're really impressed by? And I looked at the text message for a few minutes and I was finding myself thinking and thinking, and I responded, maybe only co-pilot. It's been a couple hours now, and I don't think I've thought of another one. And I think that's where I reflect again on this, like iron versus gold. If it was really gold, I feel like I'd be more blown away by other AI integrations. And I'm not yet. [00:20:40]Alessio: I feel like all the people finding gold are the ones building things that traditionally we didn't focus on. So like mid-journey. I've talked to a company yesterday, which I'm not going to name, but they do agents for some use case, let's call it. They are 11 months old. They're making like 8 million a month in revenue, but in a space that you wouldn't even think about selling to. If you were like a shovel builder, you wouldn't even go sell to those people. And Swix talks about this a bunch, about like actually trying to go application first for some things. 
Let's actually see what people want to use and what works. What do you think are the most maybe underexplored areas in AI? Is there anything that you wish people were actually trying to shovel? [00:21:23]Bryan: I've been saying for a couple of months now, if I had unlimited resources and I was just sort of like truly like, you know, on my own building whatever I wanted, I think the thing that I'd be most excited about is building sort of like the personal Memex. The Memex is something that I've wanted since I was a kid. And are you familiar with the Memex? It's the memory extender. And it's this idea that sort of like human memory is quite weak. And so if we can extend that, then that's a big opportunity. So I think one of the things that I've always found to be one of the limiting cases here is access. How do you access that data? Even if you did build that data like out, how would you quickly access it? And one of the things I think there's a constellation of technologies that have come together in the last couple of years that now make this quite feasible. Like information retrieval has really improved and we have a lot more simple systems for getting started with information retrieval to natural language is ultimately the interface that you'd really like these systems to work on, both in terms of sort of like structuring the data and preparing the data, but also on the retrieval side. So what keys off the query for retrieval, probably ultimately natural language. And third, if you really want to go into like the purely futuristic aspect of this, it is latent voice to text. And that is also something that has quite recently become possible. I did talk to a company recently called gather, which seems to have some cool ideas in this direction, but I haven't seen yet what I, what I really want, which is I want something that is sort of like every time I listen to a podcast or I watch a movie or I read a book, it sort of like has a great vector index built on top of all that information that's contained within. And then when I'm having my next conversation and I can't quite remember the name of this person who did this amazing thing, for example, if we're talking about the Memex, it'd be really nice to have Vannevar Bush like pop up on my, you know, on my Memex display, because I always forget Vannevar Bush's name. This is one time that I didn't, but I often do. This is something that I think is only recently enabled and maybe we're still five years out before it can be good, but I think it's one of the most exciting projects that has become possible in the last three years that I think generally wasn't possible before. [00:23:46]Alessio: Would you wear one of those AI pendants that record everything? [00:23:50]Bryan: I think I'm just going to do it because I just like support the idea. I'm also admittedly someone who, when Google Glass first came out, thought that seems awesome. I know that there's like a lot of like challenges about the privacy aspect of it, but it is something that I did feel was like a disappointment to lose some of that technology. Fun fact, one of the early Google Glass developers was this MIT computer scientist who basically built the first wearable computer while he was at MIT. And he like took notes about all of his conversations in real time on his wearable and then he would have real time access to them. Ended up being kind of a scandal because he wanted to use a computer during his defense and they like tried to prevent him from doing it. 
So pretty interesting story. [00:24:35]Alessio: I don't know but the future is going to be weird. I can tell you that much. Talking about pickaxes, what do you think about the pickaxes that people built before? Like all the whole MLOps space, which has its own like startup graveyard in there. How are those products evolving? You know, you were at Weights and Biases before, which is now doing a big AI push as well. [00:24:57]Bryan: If you really want to like sort of like rub my face in it, you can go look at my white paper on MLOps from 2022. It's interesting. I don't think there's many things in that that I would these days think are like wrong or even sort of like naive. But what I would say is there are both a lot of analogies between MLOps and LLMOps, but there are also a lot of like key differences. So like leading an engineering team at the moment, I think a lot more about good engineering practices than I do about good ML practices. That being said, it's been very convenient to be able to see around corners in a few of the like ML places. One of the first things I did at Hex was work on evals. This was in February. I hadn't yet been overwhelmed by people talking about evals until about May. And the reason that I was able to be a couple of months early on that is because I've been building evals for ML systems for years. I don't know how else to build an ML system other than start with the evals. I teach my students at Rutgers like objective framing is one of the most important steps in starting a new data science project. If you can't clearly state what your objective function is and you can't clearly state how that relates to the problem framing, you've got no hope. And I think that is a very shared reality with LLM applications. Coming back to one thing you mentioned from earlier about sort of like the applications of these LLMs. To that end, the pickaxes I think are still very valuable are the ones for understanding systems that are inherently less predictable, that are inherently sort of experimental. On my engineering team, we have an experimentalist. So one of the AI engineers, his focus is experiments. That's something that you wouldn't normally expect to see on an engineering team. But it's important on an AI engineering team to have one person whose entire focus is just experimenting, trying, okay, this is a hypothesis that we have about how the model will behave. Or this is a hypothesis we have about how we can improve the model's performance on this. And then going in, running experiments, augmenting our evals to test it, et cetera. What I really respect are pickaxes that recognize the hybrid nature of the sort of engineering tasks. They are ultimately engineering tasks with a flavor of ML. And so when systems respect that, I tend to have a very high opinion. One thing that I was very, very aligned with Weights and Biases on is sort of composability. These systems like ML systems need to be extremely composable to make them much more iterative. If you don't build these systems in composable ways, then your integration hell is just magnified. When you're trying to iterate as fast as people need to be iterating these days, I think integration hell is a tax not worth paying. [00:27:51]Alessio: Let's talk about some of the LLM native pickaxes, so to speak. So RAG is one. One thing is doing RAG on text data. One thing is doing RAG on tabular data. We're releasing tomorrow our episode with Kube, the semantic layer company. Curious to hear your thoughts on it.
How are you doing RAG, pros, cons? [00:28:11]Bryan: It became pretty obvious to me almost immediately that RAG was going to be important. Because ultimately, you never expect your model to have access to all of the things necessary to respond to a user's request. So as an example, Magic users would like to write SQL that's relevant to their business. And it's important then to have the right data objects that they need to query. We can't expect any LLM to understand our user's data warehouse topology. So what we can expect is that we can build a RAG system that is data warehouse aware, data topology aware, and use that to provide really great information to the model. If you ask the model, how are my customers trending over time? And you ask it to write SQL to do that. What is it going to do? Well, ultimately, it's going to hallucinate the structure of that data warehouse that it needs to write a general query. Most likely what it's going to do is it's going to look in its sort of memory of Stack Overflow responses to customer queries, and it's going to say, oh, it's probably a customers table and we're in the age of dbt, so it might be even called, you know, dim_customers or something like that. And what's interesting is, and I encourage you to try, ChatGPT will do an okay job of like hallucinating up some tables. It might even hallucinate up some columns. But what it won't do is it won't understand the joins in that data warehouse that it needs, and it won't understand the data caveats or the sort of where clauses that need to be there. And so how do you get it to understand those things? Well, this is textbook RAG. This is the exact kind of thing that you expect RAG to be good at augmenting. But I think for people who have done a lot of thinking about RAG for the document case, they think of it as chunking and sort of like the MapReduce and the sort of like these approaches. But I think people haven't followed this train of thought quite far enough yet. Jerry Liu was on the show and he talked a little bit about thinking of this as like information retrieval. And I would push that even further. And I would say that ultimately RAG is just RecSys for LLM. As I kind of already mentioned, I'm a little bit recommendation systems heavy. And so from the beginning, RAG has always felt like RecSys to me. It has always felt like you're building a recommendation system. And what are you trying to recommend? The best possible resources for the LLM to execute on a task. And so most of my approach to RAG and the way that we've improved Magic via retrieval is by building a recommendation system. [00:30:49]Alessio: It's funny, as you mentioned that you spent three years writing the book, the O'Reilly book. Things must have changed as you wrote the book. I don't want to bring out any nightmares from there, but what are the tips for people who want to stay on top of this stuff? Do you have any other favorite newsletters, like Twitter accounts that you follow, communities you spend time in? [00:31:10]Bryan: I am sort of an aggressive reader of technical books. I think I'm almost never disappointed by time that I've invested in reading technical manuscripts. I find that most people write O'Reilly or similar books because they've sort of got this itch that they need to scratch, which is that I have some ideas, I have some understanding that was hard won, I need to tell other people. And there's something that, from my experience, correlates between that itch and sort of like useful information.
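To make the "RAG is just RecSys for LLMs" framing above concrete, here is a minimal sketch of warehouse-aware retrieval: rank a small schema catalog against the user's question and put the winners into the prompt before asking for SQL. The catalog, the stand-in embedding function, and the prompt wording are all illustrative assumptions, not Hex's implementation.

    import numpy as np

    # Toy schema catalog; in practice this comes from warehouse metadata.
    CATALOG = {
        "dim_customers": "dim_customers(customer_id, signup_date, region)",
        "fct_orders": "fct_orders(order_id, customer_id, order_date, amount); join on customer_id",
        "dim_products": "dim_products(product_id, category, price)",
    }

    def embed(text: str) -> np.ndarray:
        # Stand-in embedding (seeded random, so similarity is meaningless);
        # swap in a real embedding model for actual retrieval quality.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(128)

    def retrieve_tables(question: str, k: int = 2) -> list[str]:
        # Classic retrieval: rank catalog entries by cosine similarity to the question.
        q = embed(question)
        scores = {}
        for name, text in CATALOG.items():
            v = embed(text)
            scores[name] = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        return sorted(scores, key=scores.get, reverse=True)[:k]

    def build_prompt(question: str) -> str:
        schemas = "\n".join(CATALOG[t] for t in retrieve_tables(question))
        return f"Using only these tables:\n{schemas}\n\nWrite SQL to answer: {question}"

    print(build_prompt("How are my customers trending over time?"))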
As an example, one of the people on my team, his name is Will Kurt, he wrote a book sort of Bayesian statistics the fun way. I knew some Bayesian statistics, but I read his book anyway. And the reason was because I was like, if someone feels motivated to write a book called Bayesian statistics the fun way, they've got something to say about Bayesian statistics. I learned so much from that book. That book is like technically like targeted at someone with less knowledge and experience than me. And boy, did it humble me about my understanding of Bayesian statistics. And so I think this is a very boring answer, but ultimately like I read a lot of books and I think that they're a really valuable way to learn these things. I also regrettably still read a lot of Twitter. There is plenty of noise in that signal, but ultimately it is still usually like one of the first directions to get sort of an instinct for what's valuable. The other comment that I want to make is we are in this age of sort of like archive is becoming more of like an ad platform. I think that's a little challenging right now to kind of use it the way that I used to use it, which is for like higher signal. I've chatted a lot with a CMU professor, Graham Neubig, and he's been doing LLM evaluation and LLM enhancements for about five years and know that I didn't misspeak. And I think talking to him has provided me a lot of like directionality for more believable sources. Trying to cut through the hype. I know that there's a lot of other things that I could mention in terms of like just channels, but ultimately right now I think there's almost an abundance of channels and I'm a little bit more keen on high signal. [00:33:18]Alessio: The other side of it is like, I see so many people say, Oh, I just wrote a paper on X and it's like an article. And I'm like, an article is not a paper, but it's just funny how I know we were kind of chatting before about terms being reinvented and like people that are not from this space kind of getting into AI engineering now. [00:33:36]Bryan: I also don't want to be gatekeepy. Actually I used to say a lot to people, don't be shy about putting your ideas down on paper. I think it's okay to just like kind of go for it. And I, I myself have something on archive that is like comically naive. It's intentionally naive. Right now I'm less concerned by more naive approaches to things than I am by the purely like advertising approach to sort of writing these short notes and articles. I think blogging still has a good place. And I remember getting feedback during my PhD thesis that like my thesis sounded more like a long blog post. And I now feel like that curmudgeonly professor who's also like, yeah, maybe just keep this to the blogs. That's funny.Alessio: Uh, yeah, I think one of the things that Swyx said when he was opening the AI engineer summit a couple of weeks ago was like, look, most people here don't know much about the space because it's so new and like being open and welcoming. I think it's one of the goals. And that's why we try and keep every episode at a level that it's like, you know, the experts can understand and learn something, but also the novices can kind of like follow along. You mentioned evals before. I think that's one of the hottest topics obviously out there right now. What are evals? How do we know if they work? Yeah. What are some of the fun learnings from building them into X? 
[00:34:53]Bryan: I said something at the AI engineer summit that I think a few people have already called out, which is like, if you can't get your evals to be sort of like objective, then you're not trying hard enough. I stand by that statement. I'm not going to, I'm not going to walk it back. I know that that doesn't feel super good because people, people want to think that like their unique snowflake of a problem is too nuanced. But I think this is actually one area where, you know, in this dichotomy of like, who can do AI engineering? And the answer is kind of everybody. Software engineering can become AI engineering and ML engineering can become AI engineering. One thing that I think the more data science minded folk have an advantage here is we've gotten more practice in taking very vague notions and trying to put a like objective function around that. And so ultimately I would just encourage everybody who wants to build evals, just work incredibly hard on codifying what is good and bad in terms of these objective metrics. As far as like how you go about turning those into evals, I think it's kind of like sweat equity. Unfortunately, I told the CEO of gantry several months ago, I think it's been like six months now that I was sort of like looking at every single internal Hex request to magic by hand with my eyes and sort of like thinking, how can I turn this into an eval? Is there a way that I can take this real request during this dog foodie, not very developed stage? How can I make that into an evaluation? That was a lot of sweat equity that I put in a lot of like boring evenings, but I do think ultimately it gave me a lot of understanding for the way that the model was misbehaving. Another thing is how can you start to understand these misbehaviors as like auxiliary evaluation metrics? So there's not just one evaluation that you want to do for every request. It's easy to say like, did this work? Did this not work? Did the response satisfy the task? But there's a lot of other metrics that you can pull off these questions. And so like, let me give you an example. If it writes SQL that doesn't reference a table in the database that it's supposed to be querying against, we would think of that as a hallucination. You could separately consider, is it a hallucination as a valuable metric? You could separately consider, does it get the right answer? The right answer is this sort of like all in one shot, like evaluation that I think people jump to. But these intermediary steps are really important. I remember hearing that GitHub had thousands of lines of post-processing code around Copilot to make sure that their responses were sort of correct or in the right place. And that kind of sort of defensive programming against bad responses is the kind of thing that you can build by looking at many different types of evaluation metrics. Because you can say like, oh, you know, the Copilot completion here is mostly right, but it doesn't close the brace. Well, that's the thing you can check for. Or, oh, this completion is quite good, but it defines a variable that was like already defined in the file. Like that's going to have a problem. That's an evaluation that you could check separately. And so this is where I think it's easy to convince yourself that all that matters is does it get the right answer? But the more that you think about production use cases of these things, the more you find a lot of this kind of stuff. 
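To make the idea of auxiliary evaluation metrics concrete, here is a minimal sketch of the kind of cheap, separable checks described above, run over one generated SQL string. The table list, function name, and checks are illustrative, not Hex's actual eval suite.

    import re

    KNOWN_TABLES = {"dim_customers", "fct_orders"}  # illustrative warehouse catalog

    def eval_sql_generation(sql: str) -> dict:
        # Several independent metrics per generation, not just a single pass/fail.
        referenced = set(re.findall(r"(?:from|join)\s+([a-zA-Z_][\w.]*)", sql, flags=re.I))
        return {
            # Hallucination check: did it reference tables that don't exist?
            "hallucinated_tables": sorted(referenced - KNOWN_TABLES),
            # Cheap well-formedness check: balanced parentheses.
            "balanced_parens": sql.count("(") == sql.count(")"),
            # A "did it get the right answer" check would run the query and
            # compare against a known-good result; omitted here.
        }

    print(eval_sql_generation(
        "SELECT count(*) FROM customers JOIN dim_customers USING (customer_id)"
    ))

Checks like these are also exactly the kind of defensive post-processing that can run at serving time, not only in offline evals.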
One simple example is like sometimes the model names the output of a cell, a variable that's already in scope. Okay. Like we can just detect that and like we can just fix that. And this is the kind of thing that like evaluations over time and as you build these evaluations over time, you really can expand the robustness in which you trust these models. And for a company like Hex, who we need to put this stuff in GA, we can't just sort of like get to demo stage or even like private beta stage. We're really hunting GA on all of these capabilities. Did it get the right answer on some cases is not good enough. [00:38:57]Alessio: I think the follow up question to that is in your past roles, you own the model that you're evaluating against. Here you don't actually have control over how the model evolves. How do you think about whether the model will just need to improve, or you'll use another model, versus like we can build kind of like engineering post-processing on top of it. How do you make the choice? [00:39:19]Bryan: So I want to say two things here. One like Jerry Liu talked a little bit about in his episode, he talked a little bit about sort of like you don't always want to retrain the weights to serve certain use cases. RAG is another tool that you can use to kind of like soft tune. I think that's right. And I want to go back to my favorite analogy here, which is like recommendation systems. When you build a recommendation system, you build the objective function. You think about like what kind of recs you want to provide, what kind of features you're allowed to use, et cetera, et cetera. But there's always another step. There's this really wonderful collection of blog posts from Eugene Yan and then ultimately like Even Oldridge kind of like iterated on that for the Merlin project where there's this multi-stage recommender. And the multi-stage recommender says the first step is to do great retrieval. Once you've done great retrieval, you then need to do great ranking. Once you've done great ranking, you need to then do a good job serving. And so what's the analogy here? RAG is retrieval. You can build different embedding models to encode different features in your latent space to ensure that your ranking model has the best opportunity. Now you might say, oh, well, my ranking model is something that I've got a lot of capability to adjust. I've got full access to my ranking model. I'm going to retrain it. And that's great. And you should. And over time you will. But there's one more step and that's downstream and that's the serving. Serving often sounds like I just show the s**t to the user, but ultimately serving is things like, did I provide diverse recommendations? Going back to Stitch Fix days, I can't just recommend them five shirts of the same silhouette and cut. I need to serve them a diversity of recommendations. Have I respected their requirements? They clicked on something that got them to this place. Are the recommendations relevant to that query? Are there any hard rules? Do we maybe not have this in stock? These are all things that you put downstream. And so much like the recommendations use case, there's a lot of knobs to pull outside of retraining the model. And even in recommendation systems, when do you retrain your model for ranking? Not nearly as much as you do other s**t. And even this like embedding model, you might fiddle with more often than the true ranking model.
And so I think the only piece of the puzzle that you don't have access to in the LLM case is that sort of like middle step. That's okay. We've got plenty of other work to do. So right now I feel pretty enabled. [00:41:56]Alessio: That's great. You obviously wrote a book on RecSys. What are some of the key concepts that maybe people that don't have a data science background, ML background should keep in mind as they work in this area? [00:42:07]Bryan: It's easy to first think these models are stochastic. They're unpredictable. Oh, well, what are we going to do? I think of this almost like gaseous type question of like, if you've got this entropy, where can you put the entropy? Where can you let it be entropic and where can you constrain it? And so what I want to say here is think about the cases where you need it to be really tightly constrained. So why are people so excited about function calling? Because function calling feels like a way to constrict it. Where can you let it be more gaseous? Well, maybe in the way that it talks about what it wants to do. Maybe for planning, if you're building agents and you want to do sort of something chain of thoughty. Well, that's a place where the entropy can happily live. When you're building applications of these models, I think it's really important as part of the problem framing to be super clear upfront. These are the things that can be entropic. These are the things that cannot be. These are the things that need to be super rigid and really, really aligned to a particular schema. We've had a lot of success in making specific the parts that need to be precise and tightly schemified, and that has really paid dividends. And so other analogies from data science that I think are very valuable is there's the sort of like human in the loop analogy, which has been around for quite a while. And I have gone on record a couple of times saying that like, I don't really love human in the loop. One of the things that I think we can learn from human in the loop is that the user is the best judge of what is good. And the user is pretty motivated to sort of like interact and give you kind of like additional nudges in the direction that you want. I think what I'd like to flip though, is instead of human in the loop, I'd like it to be AI in the loop. I'd rather center the user. I'd rather keep the user as the like core item at the center of this universe. And the AI is a tool. By switching that analogy a little bit, what it allows you to do is think about where are the places in which the user can reach for this as a tool, execute some task with this tool, and then go back to doing their workflow. It still gets this back and forth between things that computers are good at and things that humans are good at, which has been valuable in the human loop paradigm. But it allows us to be a little bit more, I would say, like the designers talk about like user-centered. And I think that's really powerful for AI applications. And it's one of the things that I've been trying really hard with Magic to make that feel like the workflow as the AI is right there. It's right where you're doing your work. It's ready for you anytime you need it. But ultimately you're in charge at all times and your workflow is what we care the most about. [00:44:56]Alessio: Awesome. Let's jump into lightning round. What's something that is not on your LinkedIn that you're passionate about or, you know, what's something you would give a TED talk on that is not work related? 
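One way to read the earlier point about deciding where the entropy can live: pin down the parts that must be exact with a schema (function calling or structured output) and leave the open-ended reasoning in free text. A hedged sketch with a made-up tool definition; the field names and validation logic are assumptions for illustration only.

    import json

    # The rigid part: a made-up tool schema the model must fill exactly.
    ADD_SQL_CELL_TOOL = {
        "name": "add_sql_cell",
        "description": "Add a SQL cell to the project",
        "parameters": {
            "type": "object",
            "properties": {
                "sql": {"type": "string"},
                "output_variable": {"type": "string"},
            },
            "required": ["sql", "output_variable"],
        },
    }

    def validate_tool_call(arguments_json: str) -> dict:
        # The model's plan and prose stay free-form (that is where the entropy
        # lives); the arguments of this call are checked rigidly before acting.
        args = json.loads(arguments_json)
        missing = [k for k in ADD_SQL_CELL_TOOL["parameters"]["required"] if k not in args]
        if missing:
            raise ValueError(f"model omitted required fields: {missing}")
        return args

    print(validate_tool_call('{"sql": "SELECT 1", "output_variable": "df"}'))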
[00:45:05]Bryan: So I walk a lot. [00:45:07]Bryan: I have walked every road in Berkeley. And I mean like every part of every road even, not just like the binary question of, have you been on this road? I have this little app that I use called Wanderer, which just lets me like kind of keep track of everywhere I've been. And so I'm like a little bit obsessed. My wife would say a lot a bit obsessed with like what I call new roads. I'm actually more motivated by trails even than roads, but like I'm a maximalist. So kind of like everything and anything. Yeah. Believe it or not, I was even like in the like local Berkeley paper just talking about walking every road. So yeah, that's something that I'm like surprisingly passionate about. [00:45:45]Alessio: Is there a most underrated road in Berkeley? [00:45:49]Bryan: What I would say is like underrated is Kensington. So Kensington is like a little town just a teeny bit north of Berkeley, but still in the Berkeley hills. And Kensington is so quirky and beautiful. And it's a really like, you know, don't sleep on Kensington. That being said, one of my original motivations for doing all this walking was people always tell me like, Berkeley's so quirky. And I was like, how quirky is Berkeley? Turn it out. It's quite, quite quirky. It's also hard to say quirky and Berkeley in the same sentence I've learned as of now. [00:46:20]Alessio: That's a, that's a good podcast warmup for our next guests. All right. The actual lightning ground. So we usually have three questions, acceleration, exploration, then a takeaway acceleration. What's, what's something that's already here today that you thought would take much longer to arrive in AI and machine learning? [00:46:39]Bryan: So I invited the CEO of Hugging Face to my seminar when I worked at Stitch Fix and his talk at the time, honestly, like really annoyed me. The talk was titled like something to the effect of like LLMs are going to be the like technology advancement of the next decade. It's on YouTube. You can find it. I don't remember exactly the title, but regardless, it was something like LLMs for the next decade. And I was like, okay, they're like one modality of model, like whatever. His talk was fine. Like, I don't think it was like particularly amazing or particularly poor, but what I will say is damn, he was right. Like I, I don't think I quite was on board during that talk where I was like, ah, maybe, you know, like there's a lot of other modalities that are like moving pretty quick. I thought things like RL were going to be the like real like breakout success. And there's a little pun with Atari and breakout there, but yeah, like I, man, I was sleeping on LLMs and I feel a little embarrassed. I, yeah. [00:47:44]Alessio: Yeah. No, I mean, that's a good point. It's like sometimes the, we just had Jeremy Howard on the podcast and he was saying when he was talking about fine tuning, everybody thought it was dumb, you know, and then later people realize, and there's something to be said about messaging, especially like in technical audiences where there's kind of like the metagame, you know, which is like, oh, these are like the cool ideas people are exploring. I don't know where I want to align myself yet, you know, or whatnot. So it's cool exploration. So it's kind of like the opposite of that. You mentioned RL, right? That's something that was kind of like up and up and up. And then now it's people are like, oh, I don't know. 
Are there any other areas if you weren't working on, on magic that you want to go work on? [00:48:25]Bryan: Well, I did mention that, like, I think this like Memex product is just like incredibly exciting to me. And I think it's really opportunistic. I think it's very, very feasible, but I would maybe even extend that a little bit, which is I don't see enough people getting really enthusiastic about hardware with advanced AI built in. You're hearing whispering of it here and there, put on the whisper, but like you're starting to see people putting whisper into pieces of hardware and making that really powerful. I joked with, I can't think of her name. Oh, Sasha, who I know is a friend of the pod. Like I joked with Sasha that I wanted to make the big mouth Billy Bass as a babble fish, because at this point it's pretty easy to connect that up to whisper and talk to it in one language and have it talk in the other language. And I was like, this is the kind of s**t I want people building is like silly integrations between hardware and these new capabilities. And as much as I'm starting to hear whisperings here and there, it's not enough. I think I want to see more people going down this track because I think ultimately like these things need to be in our like physical space. And even though the margins are good on software, I want to see more like integration into my daily life. Awesome. [00:49:47]Alessio: And then, yeah, a takeaway, what's one message idea you want everyone to remember and think about? [00:49:54]Bryan: Even though earlier I was talking about sort of like, maybe like not reinventing things and being respectful of the sort of like ML and data science, like ideas. I do want to say that I think everybody should be experimenting with these tools as much as they possibly can. I've heard a lot of professors, frankly, express concern about their students using GPT to do their homework. And I took a completely opposite approach, which is in the first 15 minutes of the first class of my semester this year, I brought up GPT on screen and we talked about what GPT was good at. And we talked about like how the students can sort of like use it. I showed them an example of it doing data analysis work quite well. And then I showed them an example of it doing quite poorly. I think however much you're integrating with these tools or interacting with these tools, and this audience is probably going to be pretty high on that distribution. I would really encourage you to sort of like push this into the other people in your life. My wife is very technical. She's a product manager and she's using chat GPT almost every day for communication or for understanding concepts that are like outside of her sphere of excellence. And recently my mom and my sister have been sort of like onboarded onto the chat GPT train. And so ultimately I just, I think that like it is our duty to help other people see like how much of a paradigm shift this is. We should really be preparing people for what life is going to be like when these are everywhere. [00:51:25]Alessio: Awesome. Thank you so much for coming on, Bryan. This was fun. [00:51:29]Bryan: Yeah. Thanks for having me. And use Hex magic. [00:51:31] Get full access to Latent Space at www.latent.space/subscribe

B4 the podcast
Malke - B4 The Podcast 119

B4 the podcast

Play Episode Listen Later Nov 29, 2023 60:44


He has enjoyed an impressive career so far which includes hot releases on renowned labels like: Audio Code, Technopride, Kube, and Carmarage Records. Malke has also graced the stages of major festivals and clubs such as: Monegros Desert Festival, Clash Club, Fabrik, and Florida 135. Make sure to check out his latest tracks, including 'Bring it Back,' 'Future of Now,' 'Let it Go,' 'Pain Rage,' and his free download edit of Prodigy's 'We Live Forever.' Also, for any new stuff, stay tuned to his socials for updates on his upcoming releases, including 'Elements,' 'No Mercy,' and 'Karnage,' along with releases with Rebekah and Lenny Dee. It's been a fantastic year for this talented artist, so we're bringing him to you for an exclusive mix to close off 2023 in style, courtesy of B4 Bookings Worldwide Agency. Thanks for tuning in to 'B4 The Podcast' - and brace yourselves for the incredible sounds of the one and only Malke!

Heartland Daily Podcast
The Domino Effect of Medicare Hospital Reimbursements on Soaring Health Costs

Heartland Daily Podcast

Play Episode Listen Later Nov 16, 2023 30:59


For years, Medicare has paid hospitals and their affiliates more for services than it has to others. The reasons are complex, but this policy significantly incentivizes hospitals to absorb independent practices, creating “monopolies” that reduce competition and increase prices for everyone. Dr. Richard Kube, M.D., founder and CEO of the Prairie Spine and Pain Institute—an independent practice in Illinois—experiences first-hand how this policy and other top-down government regulations work against patients. Kube, an advocate for “site-neutral” payment, recently discussed this topic in Newsweek. “Site-neutral payment would end the unfair policies promoting consolidation and encouraging higher prices,” Kube writes. “Such proposals have bipartisan support in Congress. Several congressional committees are currently debating a health reform package, including provisions to establish site neutrality under limited circumstances. This would be an essential first step, one that physicians nationwide hope will soon extend to other services. After all, reimbursing providers equally for the same service is only fair.” In the podcast, Kube discusses: - The reasons why Medicare pays more money to hospitals for the same service- Examples of the differences in costs- How this policy leads to increased consolidation in the healthcare industry and raises costs for everyone- The track record of Congress and the administration, including under Trump and Biden, in promoting site-neutral payments- The influence of the hospital lobby and the feasibility of implementing site-neutral payments- Actions the public can take to support the advancement of more free-market policies

Health Care News Podcast
The Domino Effect of Medicare Hospital Reimbursements on Soaring Health Costs

Health Care News Podcast

Play Episode Listen Later Nov 16, 2023 30:59


For years, Medicare has paid hospitals and their affiliates more for services than it has to others. The reasons are complex, but this policy significantly incentivizes hospitals to absorb independent practices, creating “monopolies” that reduce competition and increase prices for everyone. Dr. Richard Kube, M.D., founder and CEO of the Prairie Spine and Pain Institute—an independent practice in Illinois—experiences first-hand how this policy and other top-down government regulations work against patients. Kube, an advocate for “site-neutral” payment, recently discussed this topic in Newsweek. “Site-neutral payment would end the unfair policies promoting consolidation and encouraging higher prices,” Kube writes. “Such proposals have bipartisan support in Congress. Several congressional committees are currently debating a health reform package, including provisions to establish site neutrality under limited circumstances. This would be an essential first step, one that physicians nationwide hope will soon extend to other services. After all, reimbursing providers equally for the same service is only fair.” In the podcast, Kube discusses: - The reasons why Medicare pays more money to hospitals for the same service- Examples of the differences in costs- How this policy leads to increased consolidation in the healthcare industry and raises costs for everyone- The track record of Congress and the administration, including under Trump and Biden, in promoting site-neutral payments- The influence of the hospital lobby and the feasibility of implementing site-neutral payments- Actions the public can take to support the advancement of more free-market policies

The Dawn Stensland Show
Dr Richard Kube: Healthcare Monopolies and Big Government...

The Dawn Stensland Show

Play Episode Listen Later Nov 6, 2023 21:20


DR RICHARD KUBE JOINS DAWN BREAKING DOWN HEALTHCARE MONOPOLIES AND THE GOVERNMENT - EXPANDS ON HIS RECENT NEWSWEEK COLUMN...  Dr. Richard Kube: “Rising health care costs are a major concern for most Americans. A Pew poll from earlier this year found 64 percent of Americans consider health care affordability a "very big problem in our country today." That includes majorities of both parties—54 percent of Republicans and 73 percent of Democrats. Our country may struggle with division and polarization, but it's clear that health care is an area where bipartisan reform is possible….One of the key drivers for rising prices is consolidation in the health care market—when a smaller number of providers control a greater share of the overall market. Consolidation leads to less competition and therefore higher prices for consumers.” Dr. Richard Kube: “The rise of consolidation and decline of small private practices did not happen by chance. It was the unintended consequence of government policies, such as Medicare's reimbursement rate, which pays large hospitals and hospital systems more than small doctor offices for the same care. This payment differential benefits large hospital systems, which buy up small doctor practices and charge consumers higher prices by simply affixing the company name to an existing facility's front door….A report from the American Enterprise Institute found that "for a 30-minute office visit, the physician fee schedule payment rate for calendar year 2017 was $109.46 for a new patient, while if delivered in a hospital setting, the total would be 68.5 percent higher or $184.44." This massive difference in reimbursement for the same service not only distorts the market, but, according to the researchers, it is "harmful to patients, who experience higher [Medicare] Part B coinsurance amounts due to bigger bills for the same clinical service."” Dr. Richard Kube: “All of this can be resolved by ending two-tier Medicare reimbursement policies. Instead, the program should implement site-neutral reimbursement payments based on the care provided, not facility location or ownership….Site-neutral payment would put an end to the unfair policies that promote consolidation and encourage higher prices. Proposals to do so in Congress have support among Republicans and Democrats. Several congressional committees are now debating a health reform package that includes provisions to establish site neutrality in limited circumstances. That would be a vital first step, one which physicians across the country hope will soon expand to other services. After all, what could be more fair than reimbursing providers the same amount for the same service?” Tune in 10 AM - 12 PM EST weekdays on Talk Radio 1210 WPHT; or on the Audacy app!

Ward Scott Files Podcast
November 3, 2023 ~ Dr. Richard Kube, CEO and founder of Prairie Spine & Pain Institute

Ward Scott Files Podcast

Play Episode Listen Later Nov 3, 2023


Today on The Ward Scott Files Podcast, Professor Emeritus will be discussing various topics with ~Dr. Richard Kube, CEO and founder of Prairie Spine & Pain Institute! Along with this, he will discuss local Alachua County news, weather, and more! LIVE 9 A.M. every weekday!

X3
72 - Localize Festival Live with Daniel Heinz & Vivian Kube - De-traumatizing public authorities

X3

Play Episode Listen Later Oct 19, 2023 70:46


Over the last decade, many new initiatives and networks have emerged in the field of civic engagement and culture that engage with migrant identities. Social media has helped strengthen the migrant empowerment movement and make social injustices visible. But we are still at the beginning of genuine social participation. This episode of our podcast was recorded live at the Localize Festival in Potsdam. With our guests Dr. Vivian Kube, lawyer at FragDenStaat, and Daniel Heinz, political educator and political scientist, we discuss the hurdles migrants face in German bureaucracy, the progress of recent years, and visions for combating discrimination. Show notes: Localize Festival: https://localize.cargo.site/ FragDenStaat: https://fragdenstaat.de/ Bildungsstätte Anne Frank: https://www.bs-anne-frank.de/

Union Radio
Is it better to consume honey or papelón rather than sugar? | Valentina Kube | #buenastardes

Union Radio

Play Episode Listen Later Sep 27, 2023 8:43


linkmeup. A podcast about IT and about people

In episode 121 of telecom we took our first dive into networking in Kubernetes - and we liked it. So we strap on the scuba gear and go deeper: today we talk about the implementation of one of the CNIs in Kubernetes - Cilium - notable for using not only the Linux kernel network stack, but also eBPF together with XDP. What it covers: Introduction to eBPF and Cilium: a brief explanation of the eBPF technology and its role in the future of network security and networking. Why we switched to Cilium: the reasons behind the migration. Cilium features and use with Kubernetes: an overview of Cilium's key capabilities and advantages, including XDP load balancing, kube-proxy replacement, and kernel version requirements. A discussion of Hubble Observability and its role in monitoring and debugging the network. How to run Cilium without kube-proxy and with minimal kernel requirements. First steps with Cilium: configuring Network Policy, including allowlists and egress traffic filtering. Understanding Cilium Entities and their role in network security. A look at Local Redirect Policy and the changes related to node-local-dns traffic. Routing and load balancing in Cilium: a discussion of the various parameters and options available for routing and balancing. Configuring the Cilium hostFirewall: the specifics of setting up and using hostFirewall in Cilium. Debugging in Cilium: tools and strategies for debugging, including network policy audit and the use of debug flags. Using a sidecar with cilium monitor for centralized log collection. A review of bugs found and fixed in Cilium: the specific problems and bugs we ran into and how we solved them. Working with Cilium: specific operational scenarios and compatibility with Kubernetes. An analysis of potential problems and bugs that may come up when using Cilium. Non-obvious Cilium capabilities: an overview of Cilium features we have not yet had time to explore. Conclusion and a discussion of plans for future use of Cilium and eBPF. The post telecom №125. K8s Cilium appeared first on linkmeup.

Le Podcast Kube
Episode 6 - Discovering new words

Le Podcast Kube

Play Episode Listen Later Jul 6, 2023 30:51


Jutranja kronika
The end of the school year for pupils and secondary-school students

Jutranja kronika

Play Episode Listen Later Jun 23, 2023 21:34


Around 200,000 primary-school pupils and 80,000 secondary-school students will sit in their classrooms today for the last time this school year, and they will also receive their end-of-year reports. They will then forget about school for roughly two months and enjoy a carefree holiday. The long-awaited farewell to the classroom for 10 weeks brings more free time, and numerous organizations have prepared holiday workshops, summer schools, excursions, and camps. Other topics: - Most of the groundwork for the new pension legislation is ready and goes before the social partners in the first week of July. - The wreckage of the missing tourist submersible has been found on the floor of the Atlantic Ocean. - Slovenia's volleyball players recorded another win in the Nations League, beating Cuba 3:1 in France.

Tim Conway Jr. on Demand
Hour 3 | Bellio's Dial Of Destiny @ConwayShow

Tim Conway Jr. on Demand

Play Episode Listen Later Jun 15, 2023 33:21


Chicken or egg first? We now know the answer / Heal the Bay's worst beaches // Dinner's on Kube! // Harrison Ford Indiana Jones premiere / Dial of Destiny // Emails / Mo Kelly

Kubernetes Bytes
Unleashing the power of KubeVirt - Running Containers and VMs on Kubernetes

Kubernetes Bytes

Play Episode Listen Later May 23, 2023 74:39


In this episode of Kubernetes Bytes, Ryan and Bhavin sit down with Sachin Mullick and Peter Lauterbach, the product management team at Red Hat focused on Red Hat OpenShift Virtualization and the open-source KubeVirt project, and talk about how users can run containers and virtual machines side by side on the same Kubernetes cluster. They discuss the benefits of having a unified control plane for all your applications and the features that enable users to run their applications in production. They also talk about some customers that have implemented this technology in production. Listen to learn more about how you can get started with KubeVirt and run your VMs alongside your Kubernetes pods on your Kubernetes or OpenShift clusters. 03:27 - News Segment 13:54 - KubeVirt Interview 01:06:12 - Takeaways. The Motley Fool: Save $110 off the full list price of Stock Advisor for your first year, go to http://www.fool.com/kubernetesbytes and start your investing journey today! *$110 discount off of $199 per year list price. Membership will renew annually at the then-current list price. Show Notes: 1. Kube by Example - https://kubebyexample.com/ 2. Ask An OpenShift Admin - https://youtube.com/playlist?list=PLaR6Rq6Z4IqdsG6b09q4QIv_Yq5fNL7zh 3. https://kubevirt.io/ 4. https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization Cloud-Native News: 1. New security startup - Stacklok - https://techcrunch.com/2023/05/17/kubernetes-and-sigstore-founders-raise-17-5m-to-launch-software-supply-chain-startup-stacklok/ 2. Traefik Labs announces Traefik Hub and raises $11M - https://techcrunch.com/2023/05/17/traefik-labs-launches-traefik-hub-a-kubernetes-native-api-management-service/ 3. KSOC releases the KBOM standard - https://tech.einnews.com/pr_news/629861155/ksoc-releases-the-first-kubernetes-bill-of-materials-kbom-standard 4. Upbound announces managed Crossplane service - https://www.infoq.com/news/2023/05/upbound-managed-control-plane/ 5. Kubernetes 1.27 StatefulSet auto-deletion for PVCs moves to beta - https://kubernetes.io/blog/2023/05/04/kubernetes-1-27-statefulset-pvc-auto-deletion-beta/ 6. CAST AI adds support for reducing compute costs when running generative AI models on k8s - https://siliconangle.com/2023/05/18/kubernetes-firm-cast-ai-adds-support-reducing-generative-ai-deployment-costs/ 7. Vault secrets operator - https://thenewstack.io/hashicorp-vault-operator-manages-kubernetes-secrets/ 8. Managed Kafka or run it yourself? https://thenewstack.io/kafka-on-kubernetes-should-you-adopt-a-managed-solution/ 9. Cool use case - edge k8s - robots picking fruit - https://thenewstack.io/fruit-picking-robots-powered-by-kubernetes-on-the-edge/ 10. Knative 1.10 release - https://knative.dev/blog/releases/announcing-knative-v1-10-release/ (4-25 missed it)
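For readers curious what "VMs alongside pods" looks like in practice, here is a rough, hypothetical sketch, not taken from the episode, that defines a KubeVirt VirtualMachine through the same Kubernetes API machinery used for any custom resource; the namespace, VM name, and container-disk image below are assumptions, and it presumes KubeVirt is already installed on the cluster.

# Minimal sketch: declare a small VM next to ordinary pods on the same cluster.
# Assumptions: KubeVirt installed, kubeconfig at the default location,
# hypothetical namespace "demo" and VM name "demo-vm".
from kubernetes import client, config

def create_demo_vm():
    config.load_kube_config()
    api = client.CustomObjectsApi()

    vm = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "demo-vm", "namespace": "demo"},
        "spec": {
            "running": True,
            "template": {
                "spec": {
                    "domain": {
                        "devices": {
                            "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                        },
                        "resources": {"requests": {"memory": "1Gi"}},
                    },
                    "volumes": [
                        {
                            "name": "rootdisk",
                            # containerDisk boots the VM from an image shipped as a container
                            "containerDisk": {
                                "image": "quay.io/kubevirt/cirros-container-disk-demo"
                            },
                        }
                    ],
                }
            },
        },
    }

    api.create_namespaced_custom_object(
        group="kubevirt.io",
        version="v1",
        namespace="demo",
        plural="virtualmachines",
        body=vm,
    )

if __name__ == "__main__":
    create_demo_vm()

The point mirrored from the episode is the unified control plane: the VM is just another declarative object, so the same scheduling, RBAC, and tooling that manage pods also manage it.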

127 Fit Podcast
IFBB Pro Kube Cielen

127 Fit Podcast

Play Episode Listen Later Nov 7, 2022 48:46


Please subscribe to "BTM" on YouTube, thank you! Connect with Kuba: https://www.instagram.com/kuba_sylvester_cielen/ Connect with "BTM": https://www.instagram.com/behindthemusclepodcast1/ "Remember, behind the muscle, there's always a story!"

Agelast podcast
Podcast 136: Denis Solis, Fernando Almeida, Omar Tores

Agelast podcast

Play Episode Listen Later Jul 11, 2022 301:59


Donations on Patreon: https://www.patreon.com/agelast One-time donations to the channel: https://www.paypal.me/agelastpodcast Crypto donations: BTC: 1BdrToPVPRbMtzPkdX8z3wviTHZZyzqD7w ETH: 0xe189975f215102DD2e2442B060D00b524a608167 FB: https://www.facebook.com/galebnikacevic Instagram: https://www.instagram.com/agelast_/ Twitter: https://twitter.com/GalebNikacevic A1: https://a1.rs/privatni Legend WW: https://www.legend.rs/ The guests are Cuban political dissidents who are in the process of obtaining asylum because on 11 July 2021 they took part in organizing the largest protest in Cuba's history, when millions of people on the streets of every town in the country nearly brought an end to the rule of the Castro family and toppled the myth of the guardians of the Revolution. Cuba, which had lived on the very edge of survival throughout the entire post-revolutionary period since 1959 and which rests on a system in which all aspects of society, from education, health care, and the economy to the army and the police, are under the absolute control of the Castro family, suffered a complete collapse with the coronavirus pandemic. Poverty and hunger reached proportions that are hard to measure. At this moment, people are dying of hunger in the streets, without food, basic medical care, or other elementary necessities. During the pandemic, and still today, the most basic medicines such as antibiotics could not be obtained, and with additional sanctions and without the money from the diaspora that made up a large part of Cuba's economy, the regime squeezed the people further with fees and taxes, and society began to come apart at the seams. Individuals in different parts of Cuba joined protests and raised their voices, and were subsequently persecuted, arrested, and convicted in politically staged trials. Today, more than 1,300 people are held in Cuban prisons as political prisoners. This was a protest larger than the famous "Maleconazo" protest of 1994, which followed the "fall of the Iron Curtain" and the collapse of the Soviet Union, the so-called "período especial". That crisis caused such severe economic hardship and inflation that it led to famine and the complete breakdown of Cuban society, which was already living on the edge of existence. After those protests came what was until then the largest exodus of the Cuban population. The guests are: Denis Solis, Fernando Almeida, Omar Tores. Introductory remarks: Nikola Kovačević, human rights lawyer and the guests' representative in the process of obtaining political asylum. Relevant links: 1. https://www.rollingstone.com/music/music-features/cuba-san-isidro-denis-solis-russia-rappers-prison-1322445/ 2. https://havanatimes.org/features/the-faces-behind-cubas-archipelago-group/ 3. https://havanatimes.org/opinion/a-million-signatures-for-cuban-prisoner-maykel-osorbo/ 4. https://www.bbc.com/news/world-latin-america-55098876 5. https://www.youtube.com/watch?v=xTwqbPL5qHY 6. https://www.youtube.com/watch?v=pP9Bto5lOEQ PR & Organization: Sandra Planojević Hasci-Jare Audio: Marko Ignjatović Instagram: Galeb Nikačević Hasci-Jare: https://www.instagram.com/agelast_/ Sandra Planojević: https://www.instagram.com/run_lola_run_7/ Marko Ignjatović: no Instagram.