Podcasts about Elu

3rd century BCE Sri Lankan language; ancestor of Sinhalese and Dhivehi

  • 183 PODCASTS
  • 427 EPISODES
  • 48m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 3, 2025 LATEST

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about Elu


Latest podcast episodes about Elu

Emotsionaalsed Mehed podcast
#335 Guest Merit Raju: "What actually helps when anxiety, stress, and your relationship are tearing apart."

May 3, 2025 · 180:00


Merit Raju is a yoga teacher, writer, and dream-life mentor with over 25 years of experience on the spiritual path. In this deep conversation we talk about conscious partnership, burnout, balancing the nervous system, sleep, breathing, and the links between mental and physical health. Merit speaks candidly about the life lessons and insights that became the foundation of her tenth book, „Püha armastus" ("Holy Love"), and of her work with people.

L’Heure du Monde
Trump's first hundred days: where does the war on migrants stand?

Apr 30, 2025 · 21:19


Elected partly on the promise of deporting millions of undocumented people, Donald Trump has, in the hundred days since his inauguration, been trying to mount the largest deportation campaign in United States history. The American president is hunting down migrants who are in the country illegally, but also legally, such as students and holders of temporary work visas.

The results remain far short of the millions of deportations promised during the campaign, notably because of the limited resources available to the federal police. But fear has spread through all foreign-born populations, who worry about being arbitrarily arrested on the way to work or while running errands.

How is this campaign of arrests organized? Where are foreigners detained and deported to? What can the courts do about the often contested methods the Trump administration uses to expel these people? In this episode of the podcast "L'Heure du Monde", Le Monde's San Francisco correspondent, Corine Lesnes, analyzes Donald Trump's migration policy.

An episode by Garance Muñoz and Cyrielle Bedu. Production: Quentin Bresson. Presentation and editing: Claire Leys. In this episode: excerpts from speeches by Donald Trump and from an Agence France-Presse report of April 1, 2025. This episode was published on April 30, 2025.

To support "L'Heure du Monde" and our newsroom, subscribe at abopodcast.lemonde.fr. What do you think of Le Monde's podcasts? Share your opinion by answering this survey. Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.

EMISSIONS SPECIALES - AZUR FM
The death of Pope Francis

Apr 21, 2025 · 1:41


On this Easter Monday, the day celebrating Christ's resurrection, the death of Pope Francis has caused real upheaval among the faithful. Elected in 2013, the Holy Father was praised for his humility and his openness to the world. Philippe Gomis looks at what the Pope's death means for the Christian community.

Full article: https://azur-fm.radio-website.com/news/cooperation-radiophonique-un-weekend-de-paques-marque-par-le-deuil-2449
The interviews can also be found on Spotify, Deezer, Apple Podcasts, Podcast Addict, and Amazon Music.
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Culture en direct
What should we remember of Mario Vargas Llosa's work?

Apr 15, 2025 · 10:46


Duration: 00:10:46 - Les Midis de Culture - presented by Marie Labory. The Spanish-Peruvian writer Mario Vargas Llosa died on Sunday, April 13, 2025, at the age of 89. Elected to the Académie française in 2021 and awarded the Nobel Prize in Literature in 2010, he will remain one of the most important figures of Spanish-American literature. Production: Laurence Malonda. Guests: Pierre Ducrozet, novelist; Élise Lépine, literary journalist at Le Point; François Angelier, producer of the show "Mauvais Genres" on France Culture and specialist in popular literature.

Emotsionaalsed Mehed podcast
#332 Kaire Parve: How to train mental resilience and take charge of your thoughts

Apr 15, 2025 · 111:35


In episode #332 our guest is the inspiring leadership coach, yoga therapist, and HR manager Kaire Parve, with whom we dive into deep topics such as mental resilience, managing one's thoughts, discovering potential, and inner balance. We discuss why negative thoughts are often stronger than positive ones, how to cope with change, what effect yoga therapy has, and why true joy and peace must be sought within. Kaire shares practical advice, sharp insights, and real-life experiences touching on both leadership and personal growth. Listen to the episode to find out why "givers give" and how "learning really begins with not knowing." More information: www.kaireparve.com

20 Divin, le Podcast du Vin
20 Divin #78: Jean-Charles Boisset, the Dandy of California

Apr 8, 2025 · 18:05


Named 2024 Personality of the Year by the American magazine Wine Enthusiast, Jean-Charles Boisset, a Burgundian by origin, discovered California with his grandparents at the age of 11 while touring the state's Catholic missions, where he visited the region's oldest winery. Introduced to wine at a young age by his winegrowing parents in Gevrey-Chambertin, he was astounded by Californian wines and would later return to study there and conquer America.

Nuestro flamenco
Nuestro flamenco - La mujer y el cante - 12/03/25

Mar 12, 2025 · 56:43


A recovery of the 1983 album "La mujer y el cante", with Mariana Cornejo, Carmen de la Jara, Encarna Anillo, Elu de Jerez, and Macarena de Jerez.

Multiverse 5D
Eluña - Teachings of True Power - Channeling The Order Of The Magdalenes

Mar 10, 2025 · 34:39


Eluña - Teachings of True Power - Channeling The Order Of The Magdalenes

Intiimselt Eraelust
#145 Periods of Life, Vol. 1

Dec 10, 2024 · 59:02


„Elu perioodid vol1" (Periods of Life, Vol. 1). In this episode we reflect on life in general and slowly begin looking back at the past year, going deep as we do so. We touch on sensitive health topics, including mental health, which in turn covers insecurity and anxiety: dealing with them, overcoming yourself, comparison. Topics that can truly touch you… Recorded on December 10, 2024.

Kroonika podcast
Merilin Mälk was a victim of harassment: yes, that person has apologized

Dec 5, 2024 · 43:15


Merilin first came into the public eye in 2018 on "Eesti otsib superstaari" (Estonian Idol), where she finished third. She has worked steadily on the music scene as a pop artist ever since, releasing several hits, and her song "Miljon sammu" will no doubt keep haunting listeners for hours. Life puts challenges in her path too, and she speaks about them openly and bravely. She also constantly analyzes her inner world in order to be a better person and live a happier life. The guest is Merilin Mälk; the host is Kerli Kivistu.

Le Nouvel Esprit Public
Can the PS break free? / The situation in Ukraine

Dec 1, 2024 · 66:43


Do you enjoy our podcast? Support us! https://www.lenouvelespritpublic.fr/abonnement

A show by Philippe Meyer, recorded at the Arrière-boutique studio on November 29, 2024. With this week: Nicolas Baverez, essayist and lawyer; Jean-Louis Bourlanges, essayist; Lionel Zinsou, former Prime Minister of Benin and president of the Terra Nova foundation.

CAN THE PS BREAK FREE?

The next congress of the Parti socialiste (PS) should take place in spring 2025. First secretary since 2018, Olivier Faure will put his mandate on the line. His leadership is contested by those who accuse him of following the lead and the moods of Jean-Luc Mélenchon and of reducing the PS to an annex of La France insoumise (LFI), instead of using the strength gained in the European and even the legislative elections to bring forward a socialist candidate credible for the presidency. The supporters of the Seine-et-Marne deputy argue for preserving the union of the left at all costs, while waging a simmering battle against LFI in the Assembly for leadership of the left. That fight has been made possible by the socialists' return in force to the Assembly: the troops of the head of the socialist group in the Assembly, Boris Vallaud, number 66 parliamentarians, against 71 for Mélenchon's camp.

The crisis between the PS and LFI broke into the open after Boris Vallaud floated a "non-censure" pact last Sunday on France Inter, proposing to "all the group presidents of the Senate and the Assembly within the republican arc to raise the question of the conditions for non-censure". He also said he wanted to "pick up the thread" of what "the groups of the Nouveau Front populaire" in the Assembly and Senate had "begun to do in mid-August by saying: we are ready for compromises text by text, we are ready to discuss budget-policy priorities". "The PS is looking for allies. But it will be without LFI," retorted Jean-Luc Mélenchon, accusing the PS of "extending a hand" beyond the left.

The open discord between LFI and the PS also surfaced over a bill tabled on November 19, at the initiative of the "insoumis" deputy for the Nord, Ugo Bernalicis, seeking to "remove the offense of condoning terrorism (apologie du terrorisme) from the penal code". That offense, created by a 2014 law, consists of "presenting or commenting favorably on either terrorism in general or terrorist acts already committed". Bernard Cazeneuve had defended that text as "necessary" in the face of "the media strategy" of jihadist groups and the fact that "the Internet gives the most extreme theses a vastly amplified echo chamber". According to the president of the LFI group in the National Assembly, Mathilde Panot, the goal is to confine condoning terrorism once again to press law in order to guarantee "freedom of expression". The proposal drew a wave of criticism. On the left, PS first secretary Olivier Faure judged that it would suffice to refine "the definition" of the offense "to avoid its abuses". More bluntly, the head of the PS deputies, Boris Vallaud, said he did not support "LFI's proposal".

THE SITUATION IN UKRAINE

After months of refusal, on November 17 the United States gave Ukraine the green light to strike deep into Russia with its surface-to-surface ballistic missiles, which have a range of up to 300 kilometers. Washington justified the authorization by the recent deployment of North Korean soldiers in the Russian border region of Kursk. As the conflict passed the 1,000-day mark on November 19, Kyiv struck a Russian command post in the Kursk region.

In response, the Russian president announced the adoption of a new nuclear doctrine that broadens the possibility of resorting to atomic weapons in the event of a "massive" attack by a non-nuclear country backed by a nuclear power: a clear reference to Ukraine and the United States. The United States, the United Kingdom, and the European Union denounced "irresponsible rhetoric" on Russia's part.

As Russia presses its advantage on the front line, seizing territory in the east of the country at an unprecedented pace, the United States announced on November 20 that, to help Ukraine slow the Russian advance, it would supply Kyiv with "non-persistent anti-personnel mines", that is, mines equipped with a self-destruction or self-deactivation mechanism. The measure was denounced not only by Russia but also judged "disastrous" by the International Campaign to Ban Landmines, an organization awarded the Nobel Peace Prize in 1997. Ukraine is today the most heavily mined country in the world, with 23% of its territory contaminated by landmines and unexploded ordnance, the United Nations Development Programme (UNDP) reported in October.

On November 21, Vladimir Putin declared that Moscow "had launched a new intermediate-range ballistic missile at Ukraine, in response to that country's recent use of American and British weapons to strike deeper" into Russian territory. He specified that the device was a new type of hypersonic ballistic missile named "Oreshnik" ("hazel tree" in Russian), in its "non-nuclear configuration". The strike targeted a "site of the Ukrainian military-industrial complex" in the city of Dnipro, he added. It is a first in the history of military nuclear arms. The missile was not armed, hence the absence of an explosion on the ground, but with such a launch the Russians took a step up the escalation ladder with the West. Given the serious risk of misinterpretation, and therefore of retaliation and nuclear escalation, Russia said it had warned the United States of the launch, an announcement confirmed by Washington.

Faced with new threats from the Russian president, who warns that he could now strike them directly, the Western countries are hesitating, with respect to Ukraine, between reiterated but limited support (Joe Biden), verbal promises (NATO, France, the United Kingdom, and Sweden), and "caution" (Germany).

Each week, Philippe Meyer hosts a reasoned and courteous conversation of political analysis on national and international topics in the news. To learn more: www.lenouvelespritpublic.fr

Nädala raamat
Antonio Damasio, "Asjade kummastav kord. Elu, tunded ja kultuuride sünd" (The Strange Order of Things: Life, Feeling, and the Making of Cultures), from the Postimees publishing house

Nov 22, 2024


Antonio Damasio's "Asjade kummastav kord. Elu, tunded ja kultuuride sünd" (The Strange Order of Things: Life, Feeling, and the Making of Cultures), from the Postimees publishing house. Presented by Marek Strandberg. This week's book, "The Strange Order of Things", is a groundbreaking study of homeostasis: keeping the parameters of human physiology within the range that makes possible not merely the preservation of life but its flourishing.

Järjejutt
John Dos Passos, "USA triloogia. Suur raha" (the U.S.A. trilogy: The Big Money), published by Koolibri. Read by Rando Tammik.

Nov 8, 2024


In this week's serial-story minutes on Kuku Raadio we hear excerpts from the final volume of John Dos Passos's U.S.A. trilogy. The economic boom of the 1920s brings stormy growth. Life picks up ever greater speed, yet everything is heading toward an inexorable crash.

C dans l'air
Trump: his triumph, our worries

Nov 6, 2024 · 63:39


C dans l'air, November 6 - Trump: his triumph, our worries.

A comeback in the form of revenge. Donald Trump has been elected president of the United States after crossing the fateful bar of 270 electoral votes. Four years after losing to Joe Biden, he has achieved what only one president before him had managed: returning to the White House for a second term. Without waiting for the official results, the man who will become the 47th president of the United States next January hailed "a political victory never seen in our country". "Americans have given us unprecedented power, an incredible mandate," he rejoiced.

While for weeks the polls had announced one of the most uncertain votes in the country's history, in the end a red wave swept over the swing states. Elected president of the United States, Donald Trump also won the popular vote. The Republicans have captured the Senate and the House of Representatives as well: a clear, unappealable victory that is sending a shockwave through the country and around the world.

"This is the greatest comeback in the history of the United States," rejoiced his running mate and future vice-president J.D. Vance. "We are going to have the greatest economic comeback under Trump's leadership," he declared, after a very aggressive campaign focused on economic issues and immigration.

In 2024, as in 2016, Donald Trump managed to convince Americans that he understood their everyday difficulties better than his opponent did. The Democratic candidate, Vice-President Kamala Harris, had to run a lightning campaign after Joe Biden's spectacular withdrawal and failed to mobilize sufficiently against her rival's diatribes on immigration and inflation. Those two themes were central in this highly gendered campaign, at a time when the cost of living and soaring housing prices have been squeezing the middle classes for months. We went to meet Americans who can no longer afford housing and must turn to mobile homes to live.

What lessons should be drawn from this election? What drove Donald Trump's victory? Were inflation and soaring real-estate prices among the causes of this red wave? Is the "gender gap" another key to explaining Kamala Harris's defeat? Throughout the campaign, Donald Trump and Kamala Harris fought over the votes of women, young people, and ethnic minorities. The first exit polls confirm that Kamala Harris won the women's vote and Donald Trump the men's. The Republican also came out ahead among white voters and made inroads with African-American and Hispanic voters, which may have tipped the election.

And now? Beyond winning the presidential election, the Republicans will take control of the Senate and the House of Representatives. What does that imply for the country's future? Why does this second term promise to be explosive? What role will Elon Musk play in the next administration? Why does Donald Trump's victory worry Europe?

The experts:
- Laurence Haïm, journalist, "L'Heure américaine", author of the documentary "Trump, Dieu et les siens", streaming on the France Télévisions site
- Loïc de la Mornais, senior reporter, Envoyé spécial, France 2, former Washington correspondent
- Vincent Jolly, senior reporter, Le Figaro Magazine
- Soufian Alsabbagh, political scientist specializing in the Republicans
- John Bolton (remote), former national security adviser to Donald Trump
- David Thomson (remote from Florida), correspondent, RFI

Multiverse 5D
Eluña - The Magic We Lost

Oct 30, 2024 · 19:58


In this transmission, Eluña enters the pyramid of Meidoom in Egypt and channels the energy of a queen from the time of Khem. Queen Nefertari, as she calls herself, offers insights into the nature of pyramids and a glimpse into her life, both tragic and divine.

Join The Circle (Exclusive Channeled Videos, Live Q&As, Meditations & Early Access to Book a 1:1 Akashic Reading): https://www.elunanoelle.com/the-circle
Join the next live meditation and live Q&A: https://www.elunanoelle.com/live
Book an Akashic Reading: https://www.elunanoelle.com/services/
Instagram: @elunanoelle
Website: https://www.elunanoelle.com

Just Tap In with Emilio Ortiz
#113 Eluña - Mysteries of Ancient Egypt: Alien Contact, Initiations, Sacred Sites

Oct 17, 2024 · 137:01


Eluña is a student of the Universe, an Akashic channeler, and a psycho-spiritual healer. She is devoted to the esoteric teachings of the ancient mystery schools, walking and guiding as an initiate and conduit of the Divine. Eluña began by accessing the Akashic Records for herself, close friends, and then privately for clients. In 2023, Eluña was shown that soon she would begin channeling for the public. While she will forever be a humble student, Eluña is serving the Creator and walking her path as a healer-teacher. She now has had decades of practice to hone her psychic abilities, enabling her to use her gifts in service. Eluña Noelle receives information visually, auditorily, energetically, and somatically when accessing higher realms. This allows for highly accurate readings to occur, and for several different beings to speak through her. Eluña also channels the Three Guides, contemporaries of Thoth in Ancient Khem towards the middle of the episode. ___________________ PODCAST CHAPTERS 00:00 - Eluña Intro 01:21 - Spiritual Rebirth in Egypt 05:24 - Shedding the Ego 07:23 - Intuitive Hits and Synchronicities 12:31 - Holding Space for Others 15:18 - Entering Sacred Spaces with Reverence 28:26 - Healing and Energetic Activation 31:33 - Communication with Star Beings 36:17 - Revealing Hidden Chambers 38:12 - Preparation for Alien Contact 42:34 - Global Sacred Sites and Activation 48:24 - Reconciliation and Collective Evolution 01:02:43 - Hathor's Temple and Soul Mission 01:12:26 - Reflecting on Soul's Mission 01:13:15 - Tuning into the Energy of the Sphinx 01:18:47 - Materializing the Initiatic Journey 01:24:27 - Trust and Mystery in the Journey 01:30:13 - Channeling Session Begins 01:32:41 - Deeper Purpose of Emilio's Activation in Egypt 01:35:35 - Specific Spaces Opened for Exploration 01:38:06 - Reconciling Past Decisions 01:40:18 - Soul Connection with Community 01:42:22 - Interpreting Energies as Beings 01:45:04 - Questions for First Contact 01:46:23 - Thoth's 
Path to Masterhood 01:50:51 - How to Access Thoth's Energy 01:51:46 - Describing the All 01:52:17 - Fondest Memory of Thoth 01:54:32 - Has Thoth Reincarnated? 01:55:30 - Guides' Connection to the Great Pyramid 01:57:21 - Reconciling for Humanity 01:57:48 - Greater Purpose of the Sphinx 01:59:27 - Secrets Beneath the Sphinx 02:00:13 - Parting Message for Humanity 02:02:34 - Eluña's Experience in the Great Pyramid 02:06:22 - Mystical and Activating Conversation 02:07:33 - The Final Trio ___________________ Guest: Eluña, Akashic Records Channeler Website | https://elunanoelle.com/ YouTube Channel |  @elunanoelle  1:1 Channeled Readings | https://elunanoelle.com/services/ Eluña's Live Events & Exclusive Content | https://elunanoelle.com/the-circle/ Live Q&A's | https://elunanoelle.com/live/ Join Eluña on An Initiatic Journey to Egypt | https://elunanoelle.com/journeys/ Host: Emilio Ortiz Instagram | https://www.instagram.com/iamemilioortiz/ Subscribe to YouTube Channel |  @EmilioOrtiz  Watch Emilio's latest series on 4biddenknowledge TV l https://bit.ly/AwakenThe6thSense Shop Our Clothing Collection l https://www.unlockedmovement.com/collections/justtapin ___________________ Special Offerings to Support the Show: ✦ Make a One-Time or Recurring Donation on PayPal

Google Cloud Cast
How is Gemini being used in practice by devs? [Summit BR Special]

Oct 16, 2024 · 27:03


In our first episode recorded on location, directly from the Innovators Hive at Google Cloud Summit Brasil 2024, our hosts led a panel on the impact of Gemini and artificial intelligence on work and within the dev community. Carol Carneiro and Luana Costa spoke with Rafa Mores, CEO of ELU and Mov Tech 2030 ambassador, and Brenda Xavier, data engineer at SPC Brasil and GDG Santos lead, who shared great insights on how Gemini can be an ally in developers' day-to-day work. Connect with Carol Carneiro on LinkedIn. Connect with Luana Costa on LinkedIn. Connect with Rafa Mores on LinkedIn. Connect with Brenda Xavier on LinkedIn.

Emotsionaalsed Mehed podcast
#312 Liina Vettik: How to break money beliefs and create inner wealth and happiness

Oct 12, 2024 · 179:40


Liina is a coach and trainer whose programs are designed for women who want to create wealth in their lives while staying healthy and happy.

Emotsionaalsed Mehed podcast
#310 Kaire Vacker: Human Design and preventing burnout: how to find your inner power?

Oct 2, 2024 · 102:47


This episode's guest is Kaire Vacker, 37, a lawyer by training. She studied law for five years and has worked in a law firm as well as for five years at the Tax Board as a court lawyer. Her life changed when, in her final years at the Tax Board, she experienced a serious anxiety disorder; Kaire herself admits she has burned out several times in her life. From that point on she began walking a more conscious path, exploring and experimenting. When her child was born, about eight years ago, she closed the chapter on law, and essential oils and photography came into her life. The birth of her child led Kaire to search for her place, for what she has truly come into this world to do. From the moment she had a Human Design (HD) chart reading done, she was enchanted. She says she understood herself so much better afterwards. Her whole body responded, she began studying HD, and about a month later she had compiled an HD book for a friend. Word of mouth took off, and Kaire immersed herself ever deeper in the HD world. On her website today she offers various HD bites (recorded videos) and materials on different topics, as well as the option to book a consultation with her, with or without a personal book. You can find more about Kaire at www.kairevacker.com

NB! Do you like to speak? Do you feel comfortable when speaking? Are you heard? Do you know what you have come into this world to express? It is so important to know what and how you express yourself, because our speech connects our inner world with the outer one. The throat also plays an important role in making our dreams come true! Kaire has just published a fresh e-book on this subject. You can find it here: https://kairevacker.com/product/korikeskus/

GOOD THOUGHTS: "My burnout woke me up." "In HD terminology, our centers can be either colored, i.e. defined, or uncolored, i.e. undefined. The colored one is who we are by nature and by birth." "If you strain to be someone else, then you are straining to be someone else; that is not who you really are." "To expand, I cannot sit in my comfort zone." "By adulthood we carry so much outside influence that we may no longer understand who we really are." "We can all take responsibility in our lives and start living life the way we want to." "We think we are born a blank slate, but in fact we carry a very large load with us." "HD rests on our strategy: who initiates, who waits for a recognizing invitation, who acts by responding to life, and so on." "We are not taught from childhood to listen to our inner voice." "When your eyes shine, people want to buy that service from you." "Do things when you enjoy them; then you create from an entirely different place."

FOLLOW KAIRE VACKER HERE: https://kairevacker.com/ https://www.facebook.com/kaire.vacker https://www.instagram.com/kairevacker/

SPONSORS OF THE CHRIS KALA PODCAST: https://www.million.ee https://www.garden.ee https://www.ruthterras.eu https://www.ruumum.com

FOLLOW ME HERE: https://www.youtube.com/@chriskalapodcast https://www.facebook.com/chriskkala https://www.facebook.com/chriskalapodcast https://www.instagram.com/chriskkala/ https://soundcloud.com/chris-kala-podcast/tracks https://open.spotify.com/show/6Ohs4fCrk5UsLt49PVLyRI

On refait le match avec Denis Balbir
THE DAILY - LFP: Does Linette stand a chance of unseating Labrune?

Sep 4, 2024 · 23:19


Cyril Linette will, after all, be able to run for the presidency of the Ligue de football professionnel (LFP) against the incumbent, Vincent Labrune, on September 10. Elected president of the Ligue in 2020, Labrune has been heavily criticized since the turmoil over the award of Ligue 1 TV rights to DAZN and beIN Sports for 500 million euros a year, after the failure of last year's call for tenders and the broken promise of reaching one billion euros. Can he really be beaten by Linette? Philippe Sanfourche dissects the saga with Florian Gazan in this podcast.

The Paranormal UFO Consciousness Podcast
A Discussion about the book "The Beings: A Compilation of Personal Encounters and Experiences With Extra-Dimensional Beings"

Aug 15, 2024 · 88:13


Ameera May and Doug Auld join Grant Cameron to discuss a new book on NHI contact. Embark on a journey beyond the known realms of existence with "The Beings", a captivating compilation of personal experiences shared by individuals who have encountered extra-dimensional beings and ETs.

In this fascinating compilation, each chapter offers a unique and personal narrative of encounters that defy conventional understanding. From awe-inspiring visions to profound exchanges with entities from distant realms, these stories illuminate the mysteries of the cosmos and challenge our perceptions of reality.

Through heartfelt accounts and candid reflections, the co-authors of "The Beings" invite readers to explore the edges of consciousness and witness the profound transformations that occur when worlds collide. Whether you're a skeptic or a seeker, this book promises to ignite your imagination and expand your understanding of what lies beyond the veil.

Prepare to be captivated, inspired, and forever changed by the extraordinary encounters documented within the pages of "The Beings".

Foreword by: Grant Cameron
Authors: Philip Kinsella, Doug Auld, Vincent Cassius Cain, Jamie Ong, Vanessa Lisle Widergren, Laura Van Tyne, Reema Owens, El Serumaga, Harold Hoenow, Alexandra Steiner, Michelle Carpenter, Rachel Chamness, Eluña Noelle, Heidi Hooper, Jade Lore, Shekina Rose
Compiled by: Ameera May
Published by: The Near-Death Institute
https://www.amazon.com/Beings-Compilation-Encounters-Experiences-Extra-Dimensional-ebook/dp/B0D6ZCJYJW

Next Level Soul with Alex Ferrari: A Spirituality & Personal Growth Podcast
NLS 480: BRACE YOURSELF: Channel REVEALS MAJOR Shifts HAPPENING to HUMANITY in 2024-2030! with Eluña

Aug 10, 2024 · 104:58


On today's episode, we welcome Eluña, a profound channeler who guides us through the transformative shifts humanity is currently navigating. As we sat down to explore the vastness of the universe and the mysteries of our collective consciousness, Eluña brought forth deep insights that resonate on a spiritual level, challenging us to see beyond the illusions of our current reality.

The shift, as Eluña explains, is not an abrupt change but a gradual process. We are collectively moving towards a significant transformation in our consciousness, with major shifts predicted around 2030. This evolution is not just a matter of time; it's a choice we make as a collective. "The shift is gradual. The shift is a process. And those who choose to see that humanity is indeed on a path, this is when the shift can happen," Eluña shared. These words echo the profound truth that our destiny is not predetermined but crafted by our collective awareness and choices.

As we delved into the intricacies of past lives, Eluña offered a fascinating perspective. Unlike the linear perception of time we often hold, Eluña views lives as parallel experiences, all occurring simultaneously. This understanding challenges our conventional views, inviting us to see life as a fractal, constantly folding in on itself. This realization opens up a new way of understanding our past, present, and future, not as distinct segments but as interconnected experiences that shape who we are.

One of the most striking aspects of our conversation was the exploration of Eluña's past lives, particularly during the times of Lemuria and Atlantis. These ancient civilizations, often shrouded in mystery, were described by Eluña as societies with different focuses: Lemuria on healing and spiritual growth, and Atlantis on technological advancement. The downfall of Atlantis, as she recounts, was rooted in their shift towards a more forceful, will-driven approach, which ultimately led to their demise. This serves as a poignant reminder of the importance of balancing spiritual wisdom with technological power.

SPIRITUAL TAKEAWAYS
Understanding the Gradual Shift: The transformation of humanity's consciousness is a gradual process, not a sudden event. It requires a collective choice to embrace change and evolution.
Parallel Lives Perspective: Our past, present, and future lives are happening simultaneously, interconnected like a fractal, offering a new way to perceive our spiritual journey.
Balance Between Spiritual and Technological Growth: The story of Atlantis serves as a reminder of the dangers of prioritizing technological advancement over spiritual wisdom.

As we continue to navigate the complexities of our current world, Eluña's insights remind us that the path forward is one of collective choice and spiritual evolution. We are at a crossroads where our decisions will shape not only our future but the future of humanity as a whole. It's a time to reflect, to choose the higher path, and to embrace the transformative power that lies within each of us.

Please enjoy my conversation with Eluña.

Become a supporter of this podcast: https://www.spreaker.com/podcast/next-level-soul-podcast-with-alex-ferrari--4858435/support.

Multiverse 5D
How I Learned To Channel - Eluña

Multiverse 5D

Play Episode Listen Later Jul 14, 2024 28:11


How I Learned To Channel - Eluña

So many of you have been asking me to share my story of how I began channeling, and wanting to learn how to connect with your own guides. There is so much to my story and to the process of channeling that I could share, but I chose to share the most important aspects that started me on this journey, with the intention of supporting you. I share a bit about how things started for me as a child, some of the lessons my guides brought to me, and what I do today to further my practice of channeling.

Join The Circle (Channeled Videos, Guided Meditations, Live Events & More): https://www.elunanoelle.com/the-circle
Schedule your personal reading with Eluña: https://elunanoelle.com/services/
Follow me on Instagram: @elunanoelle
Website: https://www.elunanoelle.com

The Positive Head Podcast
2204: Soul-Share with Akashic Channeler, Eluña Noelle

The Positive Head Podcast

Play Episode Listen Later Jul 4, 2024 75:09


Eluña is an Akashic Records channeler, healer, and intuitive who is committed to helping people uncover helpful insights from their past lives, as well as connect with their angels and guides. In this episode, she shares the story of her opening as a channel and gives Brandon a powerful reading with a delicious synchronicity baked in.

Connect with Eluña at elunanoelle.com
Hear the industree music project synchronicity story at industr.ee

Care to play a game with the youniverse? Ask the universe which episode you would most benefit from hearing next and click positivehead.com/game.
Download The Golden Key audio or e-book at GoldenKey.Gift with the code: POSITIVEHEAD

Just Tap In with Emilio Ortiz
#93 Eluña Noella - Akashic Records: Ancient Egypt, Heart Wisdom, Galactic Connections

Just Tap In with Emilio Ortiz

Play Episode Listen Later Jun 14, 2024 109:13


Eluña joins the podcast to share her profound experience of channeling messages from the Akashic Records and insights regarding the collective consciousness awakening on the planet. Eluña is a student of the Universe, an Akashic channeler, and a psycho-spiritual healer. She is devoted to the esoteric teachings of the ancient mystery schools, walking and guiding as an initiate and conduit of the Divine. Eluña began by accessing the Akashic Records for herself, close friends, and then privately for clients. While she will forever be a humble student, Eluña serves the Creator and walks her path as a healer-teacher. She now has had decades of practice to hone her psychic abilities, enabling her to use her gifts in service. Eluña Noelle receives information visually, auditorily, energetically, and somatically when accessing higher realms. This allows for highly accurate readings to occur, and for several different beings to speak through her. Some of the beings she channels are Pleiadian Priestesses, the Arcturian Council, and a group who call themselves the Three Guides, Hermetic scholars from Atlantis and contemporaries of Thoth the Atlantean. 
___________________
PODCAST CHAPTERS
00:00 - Eluña Trailer
01:05 - Eluña Opens Up With a Prayer
02:28 - Initiatic Pilgrimage to Egypt & Contemplation
05:50 - Inner Excavation / Exploring the Darkness
08:40 - The Meaning of Eluña's Name & Working with Children
13:21 - Connecting with Pleiadian Priestesses
15:40 - Accessing the Akashic Records
19:20 - Uncovering Lost Civilizations: Pyramids, Atlantis
22:20 - Eluña Shares a Past Lifetime in Lemuria
25:21 - The Activation of the Heart Wisdom
30:48 - The Process of Channeling the Three Guides
37:00 - Channeling Session Begins
38:17 - The Purpose of the Ancient City of Khem
42:50 - Utilizing Megalithic Structures for Spiritual Evolution
46:15 - The Re-emergence of Feminine Energy
48:50 - Thoth the Atlantean's Teachings & Where Humanity Is Headed
51:50 - Influence of Solar Flares & Planetary Alignments
55:55 - Humanity Becoming a Galactic Civilization
58:37 - Emotions, Grief, and Spiritual Awakenings
01:02:40 - The New Human Emerging
01:04:35 - New Systems Coming Online
01:06:30 - Channeled Message for the Young Leaders
01:08:38 - Highest Timeline For Humanity
01:11:00 - Favorite Memory From Atlantis
01:13:00 - Eluña Closes the Akashic Records
01:15:20 - Using Dr. Joe Dispenza Meditations
01:19:00 - Atlantean School System & Technology
01:27:37 - The Split in Consciousness: Lemuria and Atlantis
01:33:55 - Meeting Different Beings in the Universe
01:37:45 - Vision for the Future
01:41:10 - The Final Trio
___________________
Guest: Eluña, Akashic Records Channeler
Website | https://elunanoelle.com/
YouTube Channel | @elunanoelle
Channeled Readings | https://elunanoelle.com/services/
Guided Meditations | https://elunanoelle.com/kuanyinmeditation/
Live Q&A's | https://elunanoelle.com/qa/
Join Eluña on An Initiatic Journey to Egypt | https://elunanoelle.com/journeys/
Host: Emilio Ortiz
Instagram | https://www.instagram.com/iamemilioortiz/
Subscribe to YouTube Channel | @EmilioOrtiz
Watch Emilio's latest series on 4biddenknowledge TV | https://bit.ly/AwakenThe6thSense
Shop Our Clothing Collection | https://www.unlockedmovement.com/collections/justtapin
___________________
Special Offerings to Support the Show:
✦ Make a One-Time or Recurring Donation on PayPal

Multiverse 5D
Eluña: Awaken to the Cosmic Consciousness // Channeled Message from The Three Guides

Multiverse 5D

Play Episode Listen Later May 27, 2024 21:03


Eluña: Awaken to the Cosmic Consciousness // Channeled Message from The Three Guides - 2024.05.27

Eluña is an Akashic channeler, psycho-spiritual healer, and energy intuitive. Eluña receives intuitive information visually, auditorily, somatically, and energetically, and is able to channel many different beings, as well as tap into the Akashic Records.

Who are The Three Guides? The Three Guides are scholars from the time of Atlantis and contemporaries of Thoth. They are coming through at this time to assist humanity with raising consciousness, to help bring about unity and empowerment with the teachings of Hermetic wisdom.

This week, the Three Guides went into the art and process of expansion. They describe what is needed for this process and how to let it happen without distortion. They also describe how our bodies, our chakra centers, and our minds will be affected by the incoming energies from the cosmos, as well as how the Collective will be affected. Group consciousness is the next part of humanity's evolution, and it is increasing. This increase in expansion is happening within and without, above and below, and cannot be stopped. They advise us on how to move with the energies, how to rest, and how to integrate it all.

Multiverse 5D
The Great Pyramid, First Contact, & My Journey to Egypt // Channeled Message by Eluña

Multiverse 5D

Play Episode Listen Later May 27, 2024 20:35


Eluña: The Great Pyramid, First Contact, & My Journey to Egypt // Channeled Message on 2024.04.20 In this video, the Three Guides reveal two functions of the Great Pyramid of Giza in Egypt (also known as Khem, or Khemet), which I was completely unfamiliar with. They spoke of the "Coming dawn of First Contact" and said the more people who are familiar with the energy of the Great Pyramid, the more prepared we will be as a whole. This feels like a secondary revealing of my purpose in going to Egypt: to be better prepared for First Contact! I share the details of my trip, including the magically synchronistic events that led me to this particular guide and pilgrimage. I am so looking forward to sharing more with you as everything unfolds in real time!

Multiverse 5D
Eluña's Mystical Journey Through Egypt // A New Timeline for Humanity | Eluña

Multiverse 5D

Play Episode Listen Later May 24, 2024 35:53


Eluña's Mystical Journey Through Egypt // A New Timeline for Humanity | Eluña

Being invited into the deepest and most profound activation of myself and of Gaia herself was how my journey in Egypt began. I feel so humbled and honored to be writing these words and to know how I am being called into service at this time. In this video, I share three of the most profound activations and experiences from my visit to Egypt in April.

First, I was invited to activate the Great Pyramid during a significant conjunction of Jupiter and Uranus. I saw priests and priestesses assisting the group in this activation and was guided by a priestess. She activated light language in me, guided me into the womb space to harmonize the masculine and feminine in myself, then oversaw my energy work for the planet. I was given my soul's mission by Hathor and told that I am meant to help restore the Sacred in all beings and in the Earth through activating sacred sites with groups.

On the last night of my journey, I was brought to the ancient Temple of Isis. Here, Isis gave me the mission of reactivating her temple. Through Her teaching, I learned how to bring in energy, anchor it in myself and at the site, and give birth to it at the new site of her temple to "restore the heart of Isis."

These experiences are beyond words, but I did my best to convey them in this story format. I hope you can feel the beauty that this entire experience has created in me. I am so honored to share this with you, family.

Eluña is an Akashic channeler, psycho-spiritual healer, and energy intuitive. Eluña receives intuitive information visually, auditorily, somatically, and energetically, and is able to channel many different beings, as well as tap into the Akashic Records.

Entertainment Law Update
ELU 165: The Battle for Art and Rights

Entertainment Law Update

Play Episode Listen Later Feb 29, 2024 78:55 Transcription Available


This episode of Entertainment Law Update is sponsored by JD Supra – a leading platform in professional services content marketing – helping lawyers to turn their expertise into networking opportunities, media visibility, and new business. JD Supra publishes and distributes … Read the rest The post ELU 165: The Battle for Art and Rights appeared first on Entertainment Law Update.

Filmikägu
Filmikägu: #351

Filmikägu

Play Episode Listen Later Feb 16, 2024


FK351! Why does A.H. Tammsaare's novel "Elu ja armastus" need a new film adaptation? The film's director and screenwriter Helen Takkin and co-screenwriter Martin Algus know the answer. There is plenty of news this week: the Marvel Cinematic Universe has gained an addition in the form of "The Fantastic Four", Will Smith is back, we have a new Karate Kid, and more. Among the new releases, this time we focus at greater length on "Elu ja armastus" and "Bob Marley: One Love". Hosted by Lauri Kaare and Kristjan Gold.

Filmikägu Uncut
FK351! Külas on Helen Takkin ja Martin Algus

Filmikägu Uncut

Play Episode Listen Later Feb 16, 2024


FK351! Why does A.H. Tammsaare's novel "Elu ja armastus" need a new film adaptation? The film's director and screenwriter Helen Takkin and co-screenwriter Martin Algus know the answer. And the answer got so interesting that we strayed into spoiler territory. So be warned: it's worth watching the film first and then listening to Helen and Martin's fascinating conversation. There is plenty of news this week: the Marvel Cinematic Universe has gained an addition in the form of "The Fantastic Four", Will Smith is back, we have a new Karate Kid, and more. Among the new releases, this time we focus at greater length on "Elu ja armastus" and "Bob Marley: One Love".

Contents:
02:26 Letters
20:36 Chart
26:54 News
44:14 Interview: Helen Takkin and Martin Algus
2:03:48 Elu ja armastus (Estonia)
2:09:39 Emma ja must jaaguar (France)
2:10:54 Väikesed superlennukid: täiskiirusel. Film
2:13:17 Bob Marley: One Love
2:26:48 TV guide

Filmikägu
Filmikägu: #350

Filmikägu

Play Episode Listen Later Feb 9, 2024


FK350! This time we have actor Karolin Jürise and cinematographer Alvar Kõue in the studio. Although both are involved with the film "Elu ja armastus", we talk above all about the relationship between an actor and a cinematographer on set. The film, and leeches, come up too. In the news: new films from Taika Waititi, Celine Song, Tim Burton, and others. Among the new releases, we cover "Meisterkokk Mary", "Priscilla", and "Kõik me võõrad". Hosted by Lauri Kaare and Kristjan Gold.

Filmikägu Uncut
FK350! Külas on Karolin Jürise ja Alvar Kõue

Filmikägu Uncut

Play Episode Listen Later Feb 9, 2024


FK350! This time we have actor Karolin Jürise and cinematographer Alvar Kõue in the studio. Although both are involved with the film "Elu ja armastus", we talk above all about the relationship between an actor and a cinematographer on set. The film, and leeches, come up too. In the news: new films from Taika Waititi, Celine Song, Tim Burton, and others. Coming to cinemas are "Meisterkokk Mary", "Kaunis pulm", "Femme", "Vau!", "Armastuse olemus", "Igavesti", "Priscilla", and "Kõik me võõrad".

Contents:
01:31 Letter
11:26 Chart
21:39 News
33:12 Interview: Karolin Jürise and Alvar Kõue
1:12:56 Meisterkokk Mary
1:17:04 Kaunis pulm
1:18:41 Femme
1:20:28 Vau! (France)
1:23:16 Armastuse olemus (Canada)
1:27:50 Igavesti (Denmark)
1:32:00 Priscilla
1:36:56 Kõik me võõrad
1:42:39 TV guide

Võrkpall24
Kuldne geim | 2023. aasta Eesti võrkpalli TOP 10, erikülaliseks president Pevkur!

Võrkpall24

Play Episode Listen Later Dec 22, 2023 157:51


The volleyball podcast "Kuldne geim" goes on air before Christmas with its 148th episode, a year-in-review of Estonian volleyball in 2023. Mihkel Uiboleht, Taavi Nõmmistu, and Karl Rinaldo rank the ten most important moments and events of 2023 and analyze the recent cup finals, joined by a special guest: Hanno Pevkur, president of the Estonian Volleyball Federation and Estonia's Minister of Defence. While a few years ago Pevkur had fallen out of the Reform Party's ministerial posts and instead ran for the leadership of the European Volleyball Confederation, today he once again plays an important role in Estonian politics. "One of my principles has been 'never say never'. Life is complicated. Life is a spiral in one sense, and full of twists and turns in another. If someone comes into politics and starts ironing their tie, expecting to become a minister, that is the most wrong-headed thing of all."

What the ten most important events and moments of this volleyball year were for the makers of "Kuldne geim", you can hear in the episode itself. Among other things, the episode covers:
*What does Hanno give up to find time for the posts of Minister of Defence and federation president?
*Will the decline in European volleyball governance continue even after next year's CEV elections?
*Which topical cultural experience features both beach and indoor volleyball?
*What question marks did this year's selection of the best volleyball players raise?
*Will a new arena for an audience of 3,000 be built in Tallinn?
*Two Estonian youth national teams delivered a fine year;
*What might the Estonian women's national team look like next summer?
*The "super five" and their deeds so far;
*A question for Hanno: will the federation get out of the red, and when?
*The rise and fall of the Estonian men's national team;
*How to assess the women's Silver League victory, and what did the home European Championship final tournament offer more broadly?
*The joys and pains of the 2023 cup finals.

Täitsa Pekkis Podcast
Milline Naine #06 - Kadri Walsberg: valu, mis pani mind aastaga kaotama -25kg

Täitsa Pekkis Podcast

Play Episode Listen Later Nov 27, 2023 128:12


How to draw inspiration from envy, achieve a major physical transformation in a year, and create lasting change through small steps. Kadri Walsberg is a mother of three daughters and a food photographer; together with her fiancé Illimar Pilt, she organizes forest birthday parties, and they chronicle their family's story on the blog Piltsberg. Kadri has previously studied physical education and nutrition counselling.

November is transformation and conscious-change month on Täitsa Pekkis Saade. The Täitsa Pekkis team has put together a "Transformation package for women", which includes:
18 video trainings
10 practical tools
the knowledge and experience of 24 experts
12 months of access to all content
a simple, convenient platform for consuming the content
Learn more at https://taitsapekkis.ee and use the discount code "taitsapekkis" at checkout, which brings the package from €54.90 down to €34.90. NB! The package is on sale only until 10.12.23.
The discount code "taitsapekkis20" gives a 20% discount on the first month at mokafit.com. The offer is valid until 01.02.2024.

IN THIS EPISODE:
Three pregnancies and drowning in the housewife role
Anxiety disorders, shame, and seeking others' approval
How to lose 25 kg in a year
A relationship crisis and overcoming it
How much pain you have to feel before stepping into change

SHOWNOTES
00:00 - Introduction
00:01:38 - Transformation package for women
00:04:30 - Who is Kadri Walsberg
00:07:11 - Where Kadri's self-development journey began
00:10:06 - How the first pregnancy and birth went
00:12:29 - What changes the role of mother brought
00:14:53 - When problems with body weight appeared
00:19:30 - Living in her brother's shadow, and her dreams when she was younger
00:32:18 - Becoming aware of anxiety disorders
00:25:11 - Difficulties during the third pregnancy
00:40:54 - Life after anxiety disorders
00:44:05 - Relationship crisis
00:52:52 - Self-criticism stemming from body weight
00:58:25 - What helped her reach self-acceptance
01:04:39 - Overcoming shame
01:16:46 - The need to be accepted by others
01:23:39 - Small steps are what bring big change
01:28:18 - Discount code
01:33:22 - Women in the sauna, and the ritual performed there
01:41:25 - Giving love to yourself
01:43:22 - How she feels today
01:49:35 - Advice for women who want to create change but don't dare to
02:01:44 - Stop looking for excuses

Sources of inspiration mentioned in the episode:
Person: Mona Kattel
Website: https://mokafit.com/
KADRI WALSBERG
Instagram: https://www.instagram.com/katu_mokafit/
Facebook: https://www.facebook.com/kadri.valsberg
Website: https://piltsberg.com/
TÄITSA PEKKIS SAADE
Website: https://taitsapekkis.ee/
Instagram: https://www.instagram.com/taitsapekkissaade/
Facebook: https://www.facebook.com/taitsapekkissaade
Patreon "La Famila" bonus episodes: https://www.patreon.com/taitsapekkis/
--- Send in a voice message: https://podcasters.spotify.com/pod/show/taitsapekkissaade/message

Jutupiigade Podcast
Ep.63 - Elust, filtrita

Jutupiigade Podcast

Play Episode Listen Later Nov 24, 2023 68:54


Come ride the roller coaster of everyday life with us! This time we giggle, read out our listeners' letters, and talk about life's biggest drama and the uncertainty of what to do with this life. Life can sometimes be very hot'n'cold, and that can be pretty scary... You can find our anonymous Google Forms link for dropping in your own story HERE: https://tinyurl.com/2p8cddkc

Filmikägu Uncut
FK338! Külas on Rainer Sarnet

Filmikägu Uncut

Play Episode Listen Later Nov 17, 2023


FK338! Kung fu, Orthodox Christianity, and "Nu, pogodi!" meet in Rainer Sarnet's new film "Nähtamatu võitlus". How, he explains in the episode. In the uncut version, the second treat is an interview with Hollywood super-producer and "mother of the Terminator" Gale Anne Hurd. The topics range from the darker side of the internet to Arsenal Football Club. Back in the home studio, the news corner looks at new projects that have gone into production after the end of the strike. PÖFF is still running, but arriving in general release are the animated film "Ühe liblika lugu" and "Kombitsatüdruk", as well as "Härra Blake teie teenistuses", "Näljamängud: laululindude ja madude ballaad", and "Elu maitsed".

Contents:
11:17 Letters
26:00 Chart
34:11 News
45:36 Interview: Rainer Sarnet
1:39:27 Ühe liblika lugu
1:40:11 Kombitsatüdruk
1:41:24 Härra Blake teie teenistuses (France)
1:45:01 Näljamängud: laululindude ja madude ballaad
1:49:20 Elu maitsed (France)
1:56:35 Interview: Gale Anne Hurd
2:14:31 TV guide

More Money
S4, Ep 397: Using Law of Attraction After Getting Fired

More Money

Play Episode Listen Later Nov 1, 2023 23:12


A few months ago, your nanny job ended. What happened?
I was nannying and had been kind of struggling with it for about a year at that point. It just got to a point where I couldn't make it work. I did a lot of coaching with you about navigating and manifesting my way through those things, but none of it worked.

Why wasn't it like "Let me get a new job"?
I had been force-telling myself that I could manifest it to be better. I could force a bad situation to be good instead of just exiting the situation into a better situation. And because a similar situation had happened in my last nanny job, I thought that I was the common denominator, so I should fix myself within the job. But really, I needed just to exit the field of nannying.

There were parallels between your last relationship and your relationship with that boss. Tell us about that.
My boss lacked compassion for me and my experiences within that job. Similar things happened with my ex-boyfriend all the time. He had no compassion or flexibility in any part of the world or his life, especially with me. So I experienced similar reactions from both my boss and my ex: when I asked for things I needed, I would be met with resistance and anger.

If I had coached you and told you, "Why don't you just quit this job?", what would you have said?
I probably would have said, "No, I can fix this. This is a pattern and I'm determined to break this pattern where I am." I felt the only way a pattern could be fixed was within the situation itself. But I had grown so much that I had become a square peg, so I was no longer going to fit into a round hole, no matter how much self-work I did. I had become another person, and really she needed me to leave for her to break into the person she was becoming too. My leaving left her the space to step into that easier life she was creating too.

Tell us about the break that happened.
I was on vacation with them, and things got difficult. She had a sit-down conversation with me and asked me right out, "Are you happy working for us anymore?" and there was no other answer I could've given her except the truth: "No." It was a very uncomfortable and rocky experience, but I still felt a huge sense of relief that I no longer had to try to fix this situation. I was uncomfortable with how it ended, but I felt free for the first time in years.

What about the money?
It was all ok. I had spent 5 years in ELU learning how to save and building up my savings, and thanks to that, I was able to continue living the same lifestyle I had before. I had enough savings to cover myself for the space I had between jobs. Thankfully, I have written such a great new money story that it supported me beautifully between jobs during a time when I wasn't getting any direct income. It was a little uncomfortable in all of that space. Cassie gave me coaching that getting comfortable in this space that I had created was what was going to manifest what I wanted. Sure enough, once I leaned into getting comfortable in that space, I started feeling better, I manifested the perfect job for me at that time, and I started taking clients right after.

What does "holding the space for the best-fit job" mean?
For me, that meant not allowing myself to spend hours scrolling Indeed for jobs. To stay out of the panic and worry that come with those hours of rabbit-hole job searching. If I had filled up the space between jobs with that hyper-fixation on getting a new job, that would not have gotten me the best-fit job for me. And it wouldn't have felt good; it would have felt yucky. So I did a lot of consciously choosing: "I am not going to fill up my space right now. I am choosing to be present in my house. What do I want to do right now?"

Next episode, we're going to talk about the "best fit" vs. "perfect" job. We'll give you some more concrete tips.
Vicki: Instagram @vickinotvicky, TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu, TikTok @enchantedlifeu

More Money
S4, Ep 395: Manifesting the Life You Want

More Money

Play Episode Listen Later Oct 18, 2023 24:56


Cassie asks if Vicki's life looks exactly the way she wanted it to 5 years ago.
Vicki shares: NO. This life I am living was kind of an inspired ah-ha about 2 or 3 years ago. I realized, kind of all at once, that the life I was pushing to create was no longer feeling good. I no longer felt the pull toward being a movie or Broadway star. I wanted something totally different and foreign to me.

They talk about why there is a difference…
There's a difference because this is true to me. The dream I tried to force into being was the one that I wanted when I was 17, and I just kept clinging to it and pushing it until I turned 30. I started working with you a few years before this epiphany, and I think it's the work I did with you in ELU that taught me how to open up to what I truly, deeply, and honestly wanted. I learned how to become the person who was worthy of getting everything she ever wanted, and by learning that worthiness, I was open and available for the truth to come in. The truth is: "You want something else." It was getting general, not more specific, that opened me up to the ah-ha. By owning that what I REALLY wanted was a loving, fun, abundant life, and letting go of the specifics of what I thought SHOULD make that up, my brain was available to accept the new life I am living now. How fun to think I spent the first 2.5 years with you opening up to more, and it only took another 2.5 years to get everything I decided I wanted.

What were three LOA mistakes you were making 5 years ago?
Micromanaging my manifesting, AKA not trusting, and working in the "how"
Using money to numb, instead of feeling my feelings
Not applying my manifesting techniques to my everyday life

What are three keys you would tell listeners today that you know make the law of attraction work to manifest money and your dreams?
Celebrate everything you can. What you focus on, you get more of. Well, celebration is like a BIG focus. So find more stuff to celebrate! Especially when you see or hear about someone else getting what you want.
Awareness is key to change. By bringing your awareness to your everyday thoughts, actions, and words, you automatically bring control and power to them. You can notice a thought or a story and stop it immediately, or decide afterwards that you are writing a new story.
Feel your feelings. Learn how to feel your feelings without words and judgment. Setting a scheduled time to feel them has been so helpful for me. Back when I was leaving a nanny job, I knew I'd need lots of space to feel my feelings, so I had a standing daily appointment with myself to feel my feelings after work.

What is the most surprising thing you have learned over the last 5 years?
How powerful I truly am. Seeing things magically appear for me when I want or need them.

Anything else you want to share?
I have more money now, unemployed, than I did when I first started with you and had over 3 jobs.

Vicki: Instagram @vickinotvicky, TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu, TikTok @enchantedlifeu

Juggalo Judgment
Schmeev's AXEcellent Adventure - Alla Xul Elu, The Almighty

Juggalo Judgment

Play Episode Listen Later Oct 15, 2023 69:36


We've arrived at an album that I've at least given one prior listen! On VINYL, no less! We finally get an introduction to Elu after 3 albums of him being in the name, and who knows how the dynamic will change? In 2018, Monoxide of Twiztid said that at that year's Gathering Of The Juggalos, Majik Ninja Entertainment would be represented. Speculation ran rampant as, to the knowledge of fans, the "beef" between Psychopathic and MNE was still going strong. Mere weeks after The Gathering ended, it was revealed that Alla Xul Elu, who had performed at GOTJ, were on Majik Ninja and would be releasing their newest LP in September. This time around, the group added new member Lee Carver into the fray, and the trio were set to "re-debut" to a whole new audience of the underground with MNE's backing. How does this fare for the group? How does Schmeev feel about it? Find out today as he does his deep dive into... The Almighty. Hit us up on our socials here: https://linktr.ee/JuggaloJudgment, or get at one of us directly on Twitter @MikeSpohnTheSEJ and @Schmeev, OR hit up Mike on Instagram @StraightEdgeJuggalo. We'd love to hear your feedback, let us know what you think!

More Money
S4, Ep 394: Blame and Law of Attraction

More Money

Play Episode Listen Later Oct 11, 2023 14:15


The interview starts with a celebration of the dedication Vicki has made to herself and her Future Self over the past 5 years.

Cassie asks: what did that dedication look like for you?
Vicki shares: Consistency, and showing up even when I didn't want to. 20 minutes a day, 5 days a week, showing up. Not only that, I showed up the rest of every day the way I wanted to.

Cassie asks: what did using the law of attraction look like for you 5-6 years ago?
Vicki shares: Lots of "to-do's": mantras, vision boards, journaling, specifics, meditating, stress, self-blame.

Cassie asks: you used to struggle with depression in the past. How much of that do you think came from the forcing of "good" feelings and forcing all of your "to-do's"?
Vicki shares: Because I was pushing so much of the "good" feelings, I was pushing away the sadness and anger that would come up throughout my life. Then the bubble would burst, and I would have to feel ALL of those big "bad" feelings all at once. That would take me down, totally incapacitated. I would allow the big, sad feelings to take over my life for however long it was. A week, a month, whatever it was. It affected every aspect of my life for that period of time. Because I hadn't been allowing myself to feel them in those little doses throughout my life, I'd just push them down. Then they'd come out like one big ugly monster. When you feel it all the time, it doesn't grow into a monster. When you push it down, it gains momentum and power to become this huge monster, rather than you just feeling it in those smaller, more frequent doses and not giving it the juice to become that big monster.

Cassie asks Vicki if she used to do a lot of self-blame when she did LoA. What did that look like, how was that, and how did that play into depression?
Vicki tells us: I'd constantly look at my life and say, "You weren't able to manifest xyz. It's your fault because you tried to manifest it, and it didn't work. And the only reason you don't get something when you're manifesting is because you're not doing it right, or because you did something wrong." I spent 6 years thinking that. It's sad that some people come into LoA with this mindset, trying to make their lives better, when in actuality it's making their lives worse. I believed it and saw it work a couple of times, so when something went "wrong," or I didn't get what I wanted, I looked back and saw that the common denominator was me. So I was the problem. Now it's so freeing finding ELU and discovering you could just turn the page and start new. Go from exactly where you are and move forward without having to look back and analyze and accuse yourself of things you did in the past. You can just choose to move forward.

Cassie asks: what is the biggest difference in using the law of attraction now vs. then?
Vicki shares: I feel confident and calm using LoA. I spend less time "doing" and more time living the life of my dreams. I am successful at it.

They wrap up by telling us what next week's episode will be about: why LoA is working now when it wasn't working before, when on paper it seemed like Vicki was doing "all" of the things that should've worked and had worked for other people before. They will also discuss whether the life she's living now is the one she wanted back then, and how she has more money now without a job compared to when she was working 3+ jobs and had LESS money.

Vicki: Instagram @vickinotvicky, TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu, TikTok @enchantedlifeu

DJ Ricky Sixx: In The Mix
Episode 24: In The Mix (Show 497)

DJ Ricky Sixx: In The Mix

Play Episode Listen Later Oct 5, 2023 61:03


Mixshows every weekend on DanceMixUSA.com, NoFearRadio.com, CARDIO Dance Radio, Mix93FM.com, KCIE 90.5, WDA1.com, hitmixxradio.com, and RadioFreeNashville.org.

1. Juan Dileju, Fabian Hernandes DFH ft. Tello - Agosto
2. Nils Van Zandt - Move On Baby
3. Steve Angello, Saturday-Monday ft. Julia Spada - The Ocean (Still Young & BRØMANCE Remix)
4. Nicole Markson - Set The World On Fire
5. Wave Wave - Overdrive
6. Joel Fletcher - Rumble
7. Pretty Poison/Jade Starling ft. Lee Dagger - Place In The Sun (Luca Debonaire & Mike Ferullo Remix)
8. Diplo vs. Darude vs. Sikdope vs. Lil Jon - Be Right There Sandstorm Snake (DJ Prophet Festival Banger)
9. Tenille Arts, LeAnn Rimes - Jealous Of Myself (Dave Audé Remix)
10. FABLO & Trevon - Believe
11. Dannic - Feel Your Energy
12. Da Hool x Wildchild - Meet Master (Kolya Funk Remix)
13. The Nightshifters - Feel The Love
14. Wh0, James Hurr - Boogie Down
15. Knock2, Dillon Francis - buttons!
16. Elu & Fur, Sam Luck - Take Me High

More Money
S4, Ep 392: 5 Years of Manifesting Success

More Money

Play Episode Listen Later Sep 27, 2023 24:53


The interview starts with a celebration. Cassie explains how she used to think of it as "graduation," but it's more like continuation, because the work keeps going - you never graduate from this work (in the best way possible).

Cassie asks Vicki her thoughts on whether she would just graduate and be done with the work.
Vicki shares:
- Absolutely not, this is too fun and too juicy
- Isn't it human nature to always desire growth and depth further than what you've experienced?

Cassie asks Vicki her Money Manifested total.
Vicki shares: $473,302.88!

Cassie asks Vicki what her former Money Story was and to describe how it was playing out in her life.
Vicki shares: My former story was Survival - a vicious cycle of panic, getting above water, overspending/overdoing/overgiving, then panic again.
- Money: having enough, overspending, overdrafting my account or getting a huge bill, absolute meltdown/panic, back to square one of just enough, and repeat.
- Not money related: performing in a million shows back to back to back, then burning out and hating it.
- Relationships: being ok in a relationship, then big big drama, then ok, just pushing through, then big big drama, over and over again.

Cassie asks Vicki what her current money story is and how that shows up in her life.
Vicki shares: Powerful Creator. When I say powerful, it means I am intentionally creating things - money, as well as an amazing life.
- Money: I have spent the last 5 years creating this powerful momentum of creation. I'm on a roll, welcoming in more and more money, more and more easily.
- Relationships: I powerfully created and stepped into this relationship of my dreams! Powerfully, easily creating the wedding of my dreams too.
- Life: I created this life of ease, so less drama and bs.

Cassie asks her to share the three ways money showed up that surprised her the most.
Vicki answers:
1. Getting hired to direct Winnie the Pooh this winter. The job was good money, but they gave me a pay range, and I was open to any of it because it was all manifested. When I sat down for our first meeting, the artistic director was very adamant about paying me the highest end of the range.
2. First year with ELU - getting inspired to do my taxes, expecting to owe a BUNCH of money, but then getting PAID the amount I thought I would owe.
3. A huge tip - working at a serving job, I was going about my thing when someone pulled me aside to tell me I had the jackpot family at my table. They always tip BIG, and sure enough, they tipped HUGE - more from that one table than the rest of the tips that night combined! This was my first week on the job, too!

Cassie asks: 5 years ago, what did you think it meant to "manifest money"?
Vicki shares:
- Checks in the mail
- People handing me money
- Lots of work and focus
- Luck

Check back next week to hear more about Vicki's 5 years of manifesting success with EnchantedLifeU and everything she's learned over that time!

Vicki: Instagram @vickinotvicky | TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu | TikTok @enchantedlifeu

More Money
S4, Ep. 388: Manifest More Without a To-Do List

More Money

Play Episode Listen Later Aug 30, 2023 23:45


Vicki and Cassie do a deep dive explaining "What is the how?" and why we don't use it to manifest what we want - we follow inspiration instead. They give anecdotes about why the how hasn't worked for people in the past, stories of how inspiration worked better, faster, and felt better, and they explain the 3 different levels of "the how."

The how is closing doors to possibilities; waiting for inspiration is holding open all doors to possibility, and that's how the easiest, sweetest paths show up - paths to what you want to manifest, and sometimes paths to things you didn't know you wanted to manifest until you get there.

To-do lists are a "how," and we don't use them in ELU. We practice inspiration so much that things that would've been on the to-do list end up getting done easily and naturally, in perfect timing.

Vicki: Instagram @vickinotvicky | TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu | TikTok @enchantedlifeu

More Money
S4, Ep. 384: Manifesting and Feeling Guilty

More Money

Play Episode Listen Later Aug 2, 2023 22:54


We are back with Vicki, and she is celebrating manifesting $135,469.57! She and Cassie joke about how she had to look up how to say such a big number out loud.

Vicki tells us about her experience of having to put down her beloved dog, Otis. She felt guilty about him dying because he was the last piece of her former relationship. She and her ex had been co-parenting the dog, and it didn't feel great to continue communicating with him. Vicki worried she had manifested Otis' decline in health so she could get out of communication with her ex faster. She asked Cassie, "Did I manifest Otis' passing in order to complete my past relationship?" Cassie asked her, "What if Otis manifested you?" - meaning, what if Otis was always on this path with his health no matter what, so he manifested Vicki in order to have the best final years he could possibly have? This coaching allowed Vicki a lot of space for healing and closure.

We also hear about how she was supporting and helping her boyfriend find his next home. She's gotten practice in the past when she needed to switch jobs, or auditions, or decide which kids' theater camp to teach at each summer. She's learned how to trust her instincts along the way to accomplish what needed to get done, when it needed to get done. All of those experiences led to her being able to flow through house hunting. Vicki had to learn to trust her boyfriend's inspirations and his way of leveraging the Law of Attraction. She's learning to trust, to let go, and to trust the person she loves - learning on a new level how to allow other people to live their own lives, and stopping herself from telling others how to live and how to "do it right."

She's starting her 5th year in Enchanted Life University, and Vicki feels so great to be part of something that's grown and that is so important to her and who she is. She realizes she's stuck with it for 4 whole years so far, and celebrates how consistent she's been and the growth she's experienced.

We hear Vicki tell us how the money showed up every month for her to pay her monthly investment in ELU, and how that constant stepping into trust was so helpful for her whole journey. She's most excited about the bigger and bigger growth that's coming. So much has shifted, and she's made so much space for bigger things, that she's excited for even bigger things coming in.

Check out next week to hear where Vicki's abundance journey is taking her!

Vicki: Instagram @vickinotvicky | TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu | TikTok @enchantedlifeu

More Money
S4, Ep. 383: How Survival Money Story Affects Manifesting Money

More Money

Play Episode Listen Later Jul 26, 2023 20:02


We're back with Vicki for another step in her life's journey!

What has been your biggest growth as a coach, and stepping into that?
Owning her worthiness - trusting and owning that she is worthy of being there, worthy of being a coach, and accepting of the life she is creating. Her life is becoming what she always wanted it to be, but it is also so different from what she always wanted, and it's perfect.

What was her old money story?
Survival. She is so happy she hasn't needed support or guidance around money from her coach in a very long time. But she's seeing her survival money story come up in other areas of her life that don't have to do with money.

Her life before joining ELU was full of drama. She was working so many hours and not getting paid enough. She'd sometimes get a big tip or paycheck and just go spend it on something right away. She'd have ants in her pants because she was uncomfortable seeing and having all of that money in her accounts. She'd overdraft her account, or realize she'd be short on her bills that month, and have a full-on panic attack/meltdown and not look at her accounts for a couple of weeks. Vicki would take a couple of days off work to be depressed at home, lose out on all the money from those shifts, and get even deeper in the hole. She was stuck: making money, spending it, running out of it, panicking, getting more money, and repeating the cycle.

Now, she loves money and feels good when she thinks about it! She feels good and at ease while keeping track of her money. Vicki has not one, not two, but three savings accounts!! Now she has more than enough money to invest in the things that she wants.

How has owning where you came from contributed to your growth?
It's helped her realize the ink is dry, and all there is to do now is go forward. Owning the level she's at now is also owning her worthiness to support other people.

Vicki: Instagram @vickinotvicky | TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu | TikTok @enchantedlifeu

Le téléphone sonne
C'est quoi être un bon patron ?

Le téléphone sonne

Play Episode Listen Later Jul 17, 2023 42:57


duration: 00:42:57 - Le téléphone sonne - Elected on July 6 to head the Medef, Patrick Martin succeeds Geoffroy Roux de Bézieux. The keys to the employers' organization are officially handed over to him today. This new role puts the spotlight on the place of the boss within the company and in society.

More Money
S4, Ep. 373: How to Manifest The Perfect Job

More Money

Play Episode Listen Later May 17, 2023 26:16


Vicki manifested her dog by following inspiration. Since we last spoke with her, Vicki has manifested her dog! The dog is exactly what she pictured in her scripts. She followed inspiration, and it led her to the dog she'd been scripting about for only a few months!

How should you count manifested money? She realized she could count her new job's salary as money manifested! She followed inspiration to take this job, and now it's an amazing fit, leaving her with a new total of $66,854.78!

Can you trust your gut to manifest the perfect job? Vicki learned how to follow and trust her gut. By trusting her gut feeling, she ended up with the perfect job for her in that moment, even though the details didn't make sense. The details didn't matter, because the details changed.

How saying "no" helped her manifest what she really wanted: Vicki was offered a kids' theater job that she normally would have jumped at. But this time around, her gut gave a resounding "no." Confused, she turned down the job, even though it didn't seem to make logical sense. Since then, she's realized that by saying no, so much space and fun has opened up in her life to allow for a brilliant summer ahead of her!

How Vicki has manifested enough money for the things she wants: she created a budget for attraction that allowed space for more money to show up for the things she wants! Before Enchanted Life U, she would have wanted things but had no money for them. By getting clear on what each percentage of savings was for, and by putting at least $1 into each portion every month, she now always has enough money for the things she wants!

If someone had told Vicki as she started ELU that it would take 2 years to get to this place, would she have thought it was a long time? No. She would've thought it was such a short time, she wouldn't have believed it.

Vicki: Instagram @vickinotvicky | TikTok @vickinotvicky10k
Enchanted Life U: Instagram @enchantedlifeu | TikTok @enchantedlifeu

Mindcrack Podcast
S2E131 - Road Trip

Mindcrack Podcast

Play Episode Listen Later Apr 24, 2023 61:23


Guude and Matt chat about Disney annual passes, Orlando trip, ELU, SpaceX Starship flight test, Disney, Guude's road trip back, electric vehicles,  and much more. Submit Questions: https://mindcracklp.com/mindcrack-podcast Hey, want to get an entirely new episode every week just like this one? Check out our patreon at https://www.patreon.com/mindcrack Guude: https://twitter.com/GuudeLP | https://www.instagram.com/GuudeFood Matt: https://twitter.com/Sevadus | https://www.instagram.com/sevadus

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

OpenAI just rocked the AI world yet again yesterday — while releasing the long awaited ChatGPT API, they also priced it at $2 per million tokens generated, which is 90% cheaper than the text-davinci-003 pricing of the "GPT3.5" family. Their blogpost on how they did it is vague: "Through a series of system-wide optimizations, we've achieved 90% cost reduction for ChatGPT since December; we're now passing through those savings to API users." We were fortunate enough to record Episode 2 of our podcast with someone who routinely creates 90%+ improvements for their customers, and in fact has started productizing their own infra skills with Codeium, the rapidly growing free-forever Copilot alternative (see What Building "Copilot for X" Really Takes). Varun Mohan is CEO of Exafunction/Codeium, and he indulged us in diving deep into AI infrastructure, compute-optimal training vs inference tradeoffs, and why he loves suffering.Recorded in-person at the beautiful StudioPod studios in San Francisco.Full transcript is below the fold. 
Timestamps* 00:00: Intro to Varun and Exafunction* 03:06: GPU Efficiency, Model Flop Utilization, Dynamic Multiplexing* 05:30: Should companies own their ML infrastructure?* 07:00: The two kinds of LLM Applications* 08:30: Codeium* 14:50: “Our growth is 4-5% day over day”* 16:30: Latency, Quality, and Correctability* 20:30: Acceleration mode vs Exploration mode* 22:00: Copilot for X - Harvey AI's deal with Allen & Overy* 25:00: Scaling Laws (Chinchilla)* 28:45: “The compute-optimal model might not be easy to serve”* 30:00: Smaller models* 32:30: Deepmind Retro can retrieve external information* 34:30: Implications for embedding databases* 37:10: LLMOps - Eval, Data Cleaning* 39:45: Testing/User feedback* 41:00: “Users Is All You Need”* 42:45: General Intelligence + Domain Specific Dataset* 43:15: The God Nvidia computer* 46:00: Lightning roundShow notes* Varun Mohan Linkedin* Exafunction* Blogpost: Are GPUs Worth it for ML* Codeium* Copilot statistics* Eleuther's The Pile and The Stack* What Building “Copilot for X” Really Takes* Copilot for X* Harvey, Copilot for Law - deal with Allen & Overy* Scaling Laws* Training Compute-Optimal Large Language Models - arXiv (Chinchilla paper)* chinchilla's wild implications (LessWrong)* UL2 20B: An Open Source Unified Language Learner (20B)* Paper - Deepmind Retro* “Does it make your beer taste better”* HumanEval benchmark/dataset* Reverse Engineering Copilot internals* Quora Poe* Prasanna Sankar notes on FLOPs and Bandwidth* NVIDIA H100 specs - 3TB/s GPU memory, 900GB/s NVLink Interconnect* Optimizer state is 14x size of model - 175B params => 2.5TB to store state → needs at least 30 H100 machines with 80GB each* Connor Leahy on The Gradient PodcastLightning Rounds* Favorite AI Product: Midjourney* Favorite AI Community: Eleuther and GPT-J* One year prediction: Better models, more creative usecases* Request for Startup: Superathlete Fitness Assistant* Takeaway: Continue to tinker!Transcript[00:00:00] Alessio Fanelli: Hey 
everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my cohost, swyx, writer, editor of L Space Diaries.[00:00:20] swyx: Hey, and today we have Varun Mohan from Codeium / Exafunction on. I should introduce you a little bit because I like to get the LinkedIn background out of the way.[00:00:30] So you did CS at MIT and then you spent a few years at Nuro where you were ultimately tech lead manager for autonomy. And that's an interesting dive. Self-driving cars in AI and then you went straight into Exafunction with a few of your coworkers and that's where I met some of them and started knowing about Exafunction.[00:00:51] And then from out of nowhere you cloned GitHub Copilot. That's a lot of progress in a very short amount of time. So anyway, welcome .[00:00:59] Varun Mohan: That's high praise.[00:01:00] swyx: What's one thing about you that doesn't appear on LinkedIn that is a big part of what people should know?[00:01:05] Varun Mohan: I actually really like endurance sports actually.[00:01:09] Like I, I've done multiple triathlons. I've actually biked from San Francisco to LA. I like things that are like suffering. I like to suffer while I, while I do sports. Yeah.[00:01:19] swyx: Do you think a lot about like code and tech while you're doing those endurance sports or are you just,[00:01:24] Varun Mohan: your mind is just focused?[00:01:26] I think it's maybe a little bit of both. One of the nice things about, I guess, endurance athletics, It's one of the few things you can do where you're not thinking about, you can't really think about much beyond suffering. Like you're climbing up a hill on a bike and you see like, uh, you see how many more feet you need to climb, and at that point you're just struggling.[00:01:45] That's your only job. Mm-hmm. . Yeah. The only thing you can think of is, uh, pedaling one more pedal. So it's actually like a nice, a nice way to not think about work. 
Yeah,[00:01:53] Alessio Fanelli: yeah, yeah. Maybe for the audience, you wanna tell a bit about Exafunction, how that came to be and how Codeium came out[00:01:59] Varun Mohan: of that. So a little bit about Exafunction.[00:02:02] Before working at Exafunction, I worked at Nuro, as Sean was just saying, and at Nuro I sort of managed large scale offline deep learning infrastructure. I realized that deep learning infrastructure is really hard to build and really hard to maintain for even the most sophisticated companies, and started Exafunction to basically solve that gap, to make it so that it was much easier for companies[00:02:24] to serve deep learning workloads at scale. One of the key issues that we noticed is GPUs are extremely hard to manage, fundamentally because they work differently than CPUs. And once a company has heterogeneous hardware requirements, it's hard to make sure that you get the most outta the hardware. It's hard to make sure you can get great GPU utilization, and Exafunction was specifically built to make it so that you could get the most outta the hardware,[00:02:50] make sure your GPU was effectively virtualized and decoupled from your workload, to make it so that you could be confident that you were running at whatever scale you wanted without burning the bank.[00:03:00] swyx: Yeah. You gave me this metric about inefficiency,[00:03:03] Varun Mohan: right? Oh, okay. Like flop efficiency. Yeah. Yeah. So basically, I think it comes down to, for most people, one of the things about CPUs that's really nice is with containers, right?[00:03:13] You can end up having a single machine, you can place many containers on it, and all the containers will slowly start eating the compute. It's not really the same with GPUs. Like let's say you have a single GPU. For the most part, you'll only have one container using that GPU. And because of that, people heavily underestimate what a single container can sort of do.[00:03:33] And the GPU is left like heavily idle. 
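The contrast Varun draws here — many containers packing a CPU box versus one mostly-idle container per GPU — is the motivation for multiplexing several clients onto a single device. The sketch below is a toy caricature of that idea, with every name hypothetical and the hard parts of a real system (batching, priority, GPU memory isolation) deliberately omitted:

```python
import queue
import threading

# Toy sketch of sharing one GPU among several clients: requests from all
# clients funnel through a queue that a single worker loop drains. All
# names here are hypothetical; "pretend inference" stands in for a real
# model call.

request_q: queue.Queue = queue.Queue()
results = {}

def gpu_worker():
    # Stand-in for the one process that actually owns the GPU.
    while True:
        item = request_q.get()
        if item is None:  # shutdown sentinel
            break
        client, prompt = item
        results[client] = f"completion-for:{prompt}"  # pretend inference

worker = threading.Thread(target=gpu_worker)
worker.start()

# Three clients that would otherwise each hold a mostly-idle GPU.
for client in ("client-a", "client-b", "client-c"):
    request_q.put((client, f"{client}-prompt"))
request_q.put(None)
worker.join()

print(len(results))  # 3 — one shared device served all three clients
```

The point of the sketch is only the topology: N clients, one device, one queue. Everything that makes this safe and fast in production is exactly the infrastructure work the conversation is about.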
And I guess the common term now with a lot of LLM workloads is like the flop efficiency of these workloads. MFU, yeah. Yeah. Model flop utilization. The model flop utilization, which is basically like what fraction of the flops or compute on the hardware is actually getting used.[00:03:49] And sort of what we did at Exafunction: not only did we make it so that the model was always running, we also built compiler technology to make it so that the model was also running more efficiently. And some of these things are with tricks like operator fusion, like basically you could imagine fusing two operations together such that the time it takes to compute[00:04:07] the fused operation is lower than the time it takes for each individual operation. Oh my God. Yeah.[00:04:13] Alessio Fanelli: Yeah. And you have this technique called dynamic multiplexing, which is basically, instead of having a one-to-one relationship, you have one GPU for multiple clients. And I saw one of your customers, they went from three GPUs to just one single GPU and cut the cost by 97%. What were some of those learnings, seeing hardware usage and efficiencies, and how did that then play into what[00:04:34] Varun Mohan: you're building? Yeah, I think it basically showed that there was probably a gap with even very sophisticated teams. Making good use of the hardware is just not an easy problem. I think that was the main thing. It's not that these teams were like not good at what they were doing, it's just that they were trying to solve a completely separate problem.[00:04:50] They had a model that was trained in-house and their goal was to just run it, and that should be an easy thing to do, but surprisingly still, it's not that easy. And that problem compounds in complexity with the fact that there are more accelerators now in the cloud. 
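The MFU definition Varun gives — the fraction of the hardware's peak FLOP/s actually used — can be turned into a rough back-of-the-envelope check. The ~2·params FLOPs-per-generated-token rule of thumb and the sample numbers below are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope model flop utilization (MFU):
# achieved FLOP/s divided by the accelerator's peak FLOP/s.

def mfu(params, tokens_per_sec, peak_flops, flops_per_token_per_param=2.0):
    # ~2 FLOPs per parameter per generated token is a common inference
    # approximation (6x for training); it is an assumption, not exact.
    achieved = flops_per_token_per_param * params * tokens_per_sec
    return achieved / peak_flops

# Example: a 6B-parameter model decoding 400 tokens/s on hardware with a
# 312 TFLOP/s peak (roughly an A100's bf16 tensor-core rating):
u = mfu(params=6e9, tokens_per_sec=400, peak_flops=312e12)
print(f"{u:.1%}")  # prints 1.5% — the hardware sits mostly idle
```

Plugging in an unbatched decode rate like this yields single-digit-percent MFU, which is exactly the kind of idle hardware the conversation is about.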
There's like TPUs, Inferentia, and there's a lot of decisions, uh, that users need to make even in terms of GPU types.[00:05:10] And I guess sort of what we had was internal expertise on what the right way to run the workload was, and we were basically able to build infrastructure and make it so that companies could do that without thinking.[00:05:21] Alessio Fanelli: So most teams are underutilizing their hardware. How should they think about what to own?[00:05:26] You know, like should they own the entire architecture? Like should they use XLA to get it to production? How do you think[00:05:32] Varun Mohan: about it? So I think one thing that has proven to be true over the last year and a half is companies, for the most part, should not be trying to figure out what the optimal ML architecture or training architecture is.[00:05:45] Especially with a lot of these large language models, we have generic models and the transformer architecture that are solving a lot of distinct problems. I'll caveat that: some of our customers, which are autonomous vehicle companies, have extremely strict requirements, like they need to be able to run a model at very low latency with extremely high precision recall.[00:06:05] You know, GPT-3 is great, but the precision recall, you wouldn't trust someone's life with that, right? So because of that, they need to innovate new kinds of model architectures. For a vast majority of enterprises, they should probably be using something off the shelf, fine-tuning BERT models. If it's vision, they should be fine-tuning ResNet or using something like CLIP. Like, the less work they can do, the better.[00:06:25] And I guess that was a key turning point for us, which is like we started to build more and more infrastructure for the architectures that were the most popular, and the most popular architecture was the transformer architecture. 
We had a lot of LLM companies explicitly reach out to us and ask us, wow, our GPT-3 bill is high.[00:06:44] Is there a way to serve GPT-3 or some open source model much more cheaply? And that's sort of what we viewed as why we were maybe prepared for when we internally needed to deploy transformer models ourselves.[00:06:58] Alessio Fanelli: And so the next step was, hey, we have this amazing infrastructure. We can build kind of consumer facing products, so to speak, with much better unit economics, much better performance.[00:07:08] And that's how Codeium kind[00:07:10] Varun Mohan: of came to be. Yeah. I think maybe the play is not for us to just make a lot of consumer products. We want to make products with like clear ROI in the long term in the enterprise. Like we view Codeium as maybe one of those things. Uh, and maybe we can talk about Codeium after this.[00:07:27] We view products like Copilot as being extremely valuable and something that is generating a lot of value to professionals. We saw that there was a gap there where a lot of people probably weren't developing high intensive LLM applications because of cost, because of the inability to train models the way they want to.[00:07:44] And we thought we could do that with our own infrastructure really quickly.[00:07:48] swyx: I wanna highlight, when you say high intensive, you mean basically generate inferences on every keystroke? That's[00:07:55] Varun Mohan: right. Yeah. So I would say like, there's probably two kinds of LLM applications here.[00:07:59] There's an LLM application where, you know, it rips through a bunch of data and maybe you wait a couple minutes and then you see something, and then there's an application where the quality is not exactly what you want, but it's able to generate at low enough latency. 
It's still providing a ton of value.[00:08:16] And I will say there's like a gap there where the number of products that have hit that Copilot spot is actually not that high. Mm. A lot of them are kind of like wait-and-see, you know, just generate a lot of stuff and see what happens, because one is clearly more compute intensive than the other, basically.[00:08:31] swyx: Well, Codeium, uh, I don't know if we told the whole story yet, you were going to[00:08:35] Varun Mohan: dive into it. Yeah, so I guess the story was, I guess four or five months ago we sort of decided internally as a team, we were like very early adopters of Copilot. I'm not gonna sit here and say Copilot is not a great tool.[00:08:45] We love Copilot. It's like a fantastic tool. We all got on the beta the moment it came out. We're like a fairly small team, but we all got in, we were showing each other completions. We end up writing like a lot of CUDA and C++ inside the company. And I think there was probably a thought process within us that was like, hey, the code we write is like very high IQ.[00:09:04] You know? So like there's no way it can help. And one of the things in C++ that's like the most annoying is writing templates. Template programming is maybe one of those things no one can do without looking at anything online — maybe there's like some people in the C++ standards community that can.[00:09:19] But we struggle. We struggle writing variadic templates, and Copilot just like ripped through. Like we had a 500 line file and it was just like writing templates, and we didn't really even test it while we were writing it. We then just compiled it and it just worked. We're like, wow. Like this is actually something that's not just completing for loops, it's completing code for us[00:09:38] that is like hard for our brains to reach, but fundamentally and logically is not that complicated. 
The only reason why it's complicated is there's just a lot of rules, right? And from then we were just like, wow, that was maybe the first LLM application for us internally, because we're not like marketers that would use, uh, Jasper. We were like, wow, this is like extremely valuable.[00:09:58] This is not a toy anymore. So we wanted to take our technology to build maybe apps where these apps were not gonna be toys, right? They were not gonna be like a demo where you post it on Twitter and, you know, there's hype and then maybe like a month later no one's using it.[00:10:11] swyx: There's a report this morning, um, from Copilot where they were estimating the amount of code generated by Copilot that is then left in code repos and checked in, and it's something like 60 to 70%.[00:10:24] Varun Mohan: That's nuts, but I totally believe it given the stats we have too. There's this flip in your head once you start using products like this, where in the beginning there's like skepticism, like how valuable can it be? And suddenly now like user behavior fundamentally changes, so that now when I need to write a function, I'm like documenting my code more because I think it's prompting the model better, right?[00:10:43] So there's like this crazy thing where it's a self-fulfilling prophecy, where when you get more value from it, more of your code is generated from Copilot.[00:10:50] swyx: Just to walk through the creation process, I actually assumed that you would have grabbed your data from the Pile, which is the EleutherAI, uh, open source, uh, code dataset.[00:11:00] But apparently you scraped your own[00:11:01] Varun Mohan: stuff. Yeah. We ended up basically using a lot of open, I guess, permissively licensed code, uh, on the public internet, mainly because I think also the Pile is fairly a small subset. 
Uh, I think maybe after we started, that also came to be, but for us, we had a model for ourselves even before that, uh, was the point.[00:11:21] Ah, okay. So the timing was just a little bit off. Yeah, exactly. Exactly. But it's awesome work. It seems like there's a good amount of work that's getting done decentrally. Yeah. Which is a little bit surprising to me, because I'm like more bullish on everyone needs to get together in a room and make stuff happen.[00:11:35] Like we're all in person in Mountain View. But yeah, no, it's pretty impressive. Yeah. Eleuther in general, like everything they've done, I'm pretty impressed with it. Yeah, and we're[00:11:42] swyx: gonna talk about that. Cause I didn't know you were that involved in the community[00:11:45] Varun Mohan: that early on. I wasn't involved. It was more of like, I was watching and maybe commenting from time to time.[00:11:50] So they're a very special community for sure. Yeah,[00:11:52] swyx: yeah, yeah. That's true. That's true. My impression is a bunch of you are geniuses. You sit down together in a room and you get all your data, you train your model, like everything's very smooth sailing. Um, what's wrong with that[00:12:02] Varun Mohan: image? Yeah, so probably a lot of it is just in that a lot of our serving infrastructure was already in place, uh-huh, before then.[00:12:09] So like, hey, we were able to knock off one of these boxes that I think a lot of other people maybe struggle with. The open source serving offerings are just, I will say, not great, in that they aren't customized to transformers and these kinds of workloads, where I have high latency and I wanna batch requests, and I wanna batch requests while keeping latency low.[00:12:29] Mm-hmm, right? One of the weird things about generation models is they're like autoregressive, at least for the time being. They're autoregressive. 
So the latency for a generation is a function of the amount of tokens that you actually end up generating. Like, that's the math. And you could imagine, while you're generating the tokens though, unless you batch,[00:12:46] it's gonna end up being the case that you're not gonna get great flop utilization on the hardware. So there's like a bunch of trade-offs here, where if you end up using something completely off the shelf, like one of these serving frameworks, you're gonna end up leaving a lot of performance on the table.[00:13:00] But for us, we were already kind of prepared to sort of do that because of the infrastructure that we had already built up. And probably the other thing to note is early on we were able to leverage open source models, sort of bootstrap it internally within our company, but then to ship, we finally had some requirements like, hey, we want this model to have fill-in-the-middle capabilities and a bunch of other things.[00:13:20] And we were able to ship a model ourselves. So we were able to time it so that over the course of multiple months, different pieces were like working out properly for us. So it wasn't, you know, we started out and we were just planning the launch materials. The moment we started, there was like maybe some stuff that was already there, some stuff that we had already figured out, like how to train models at scale internally.[00:13:38] So we were able to just leverage that muscle very quickly. I think the one[00:13:41] swyx: thing that you had figured out from the beginning was that it was gonna be free forever. Yeah. Yeah, Copilot costs $10[00:13:47] Varun Mohan: a month. Copilot costs $10 a month. I would argue significantly more value than $10 a month. The important thing for us, though, was we are gonna continue to build more great products on top of code completion.[00:13:58] We think code completion is maybe day one of what the future looks like. 
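The autoregressive math Varun walks through above — latency linear in generated tokens, utilization recovered only by batching — can be sketched with made-up constants (the 20 ms step time is an illustrative assumption, not a measured figure):

```python
# Toy model of autoregressive serving: one forward pass per generated
# token, so latency grows with output length, while a batched pass emits
# one token for every sequence in the batch at roughly the same step cost.

STEP_MS = 20.0  # assumed wall-clock time of one forward pass

def generation_latency_ms(tokens_out: int) -> float:
    # Batching partners don't change a single request's latency.
    return tokens_out * STEP_MS

def throughput_tok_per_s(batch_size: int) -> float:
    # Each step emits one token per sequence in the batch.
    return batch_size * (1000.0 / STEP_MS)

print(generation_latency_ms(100))  # 2000.0 — a 100-token reply takes ~2 s
print(throughput_tok_per_s(1))     # 50.0 tokens/s serving one request
print(throughput_tok_per_s(8))     # 400.0 tokens/s with a batch of 8
```

This is why off-the-shelf servers that don't batch transformer decoding well leave so much performance on the table: the per-request latency floor is fixed, so the only lever for flop utilization is how many sequences share each step.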
And for that, clearly we can't be a product that's like, we're $10 a month and we're adding more products. We want a user base that loves using us and will continue to stay with us as we continue to layer on more products. And I'm sure we're gonna get more users from the other products that we have, but we needed some sort of a differentiator.[00:14:17] And along the way we realized, hey, we're pretty efficient at running these workloads. We could probably do this. Oh, so it wasn't,[00:14:23] swyx: it was a plan to be free from the start? You just[00:14:25] Varun Mohan: realized, yeah, we realized that if we cut and optimized heavily, we could probably do this properly. Part of the reasoning here was we were confident we could probably build a pro tier and go to the enterprise.[00:14:35] But originally when we started, we weren't like, we're just gonna go and give all pieces of software away for free. That wasn't sort of the goal there. And[00:14:43] swyx: since you mentioned, uh, adoption and, you know, traction and all that, uh, what can you disclose about user growth? Yeah, user adoption.[00:14:50] Varun Mohan: Yeah. So right now we probably have over 10,000 users and thousands of daily actives, and people come back day over day. Our growth is around, you know, four to 5% day over day right now. So all of our growth right now is sort of word of mouth, and that's fundamentally because the product is actually one of those products where,[00:15:08] even if you use Copilot and then use us, it's hard to tell the difference, actually. And a lot of our users have actually churned off of Copilot.[00:15:14] swyx: Yeah, I switched. Yeah. To support you guys, but also to try[00:15:17] Varun Mohan: it out. Yeah, exactly.
So the crazy thing is, it wasn't like, hey, we're gonna figure out a marketing motion of going to the people that have never heard of Copilot and we're gonna get a bunch of users.[00:15:27] We wanted to just get users so that in our own right we're a really great product. Uh, and we've spent a lot of engineering time, and obviously we co-wrote a blog post with you, Sean, on this, in terms of, there's a lot of engineering work, even beyond the latency, making sure that you can get your cost down to make a product like this actually work.[00:15:44] swyx: Yeah. That's a long tail of stuff that you referenced,[00:15:47] Varun Mohan: right? Yes. Yeah, exactly.[00:15:48] swyx: And you said something to the order of, um, and this maybe gets into Copilot for X, uh, which is something that everybody is keen about, cuz they see the success of Copilot. They're like, okay, well, first of all, developer tools, there's more to do here.[00:16:00] And second of all, let's take the Copilot idea and apply it to other disciplines. I don't know if you wanna, yeah.[00:16:06] Varun Mohan: There's[00:16:06] Alessio Fanelli: gonna be some key points that you touched on. Um, how to estimate inference at scale, you know, and the latency versus quality trade-offs. Building on first party, so this is free forever because you run your own models, right?[00:16:19] That's right. If you were building on OpenAI, you wouldn't be able to offer it for free in real time. You know, when I first used Codeium, it was literally the same speed as Copilot.[00:16:29] swyx: It's a little bit faster? I don't know how to quantify it,[00:16:31] Varun Mohan: but we are faster. But it's one of those things that we're not gonna market as the reason, because it's not in and of itself a reason for you to, like, I'm just gonna be open with you,[00:16:39] it's not a reason for you to suddenly turn off Copilot, where if our answers were trash, uh, but we were faster.
You know what I mean? But your focus[00:16:46] Alessio Fanelli: was there. We used the alpha, I think Prem on our Discord came to us and said, you guys should try this out. So it was really fast, even then. Prompt optimization is another big thing, and model outputs and UX, kind of how you bring them together.[00:17:00] Which ones of these things are maybe the one or two that new founders should really think about first?[00:17:07] Varun Mohan: Yeah, I think my feeling on this is, unless you are exceptional, you probably should always bootstrap on top of an existing API. Because, like, the only reason why we didn't is because we knew that this product was actually buildable.[00:17:22] Probably if we worked hard enough to train a model, we would actually be able to build a great product already. But if you're actually going out and trying to build something from scratch, unless you genuinely believe, I need to fine-tune on top of, you know, terabytes of data, terabytes is a very large amount of data, but like tens of gigabytes of data,[00:17:37] probably go out and build on top of an API, and spend most of your time to make it so that you can hit that quality latency trade-off properly. And if I were to go out and think about the three categories of an LLM product, it's probably latency, quality, and correctability. The reality is, you know, if I were to take a product like Copilot or Codeium, the latency is very low.[00:17:58] The quality, I think, is good enough for the task, but the correctability is very easy. Correctability? What is correctability? Correctability means, let's say the quality is not there. Like, consider the case where the answer is wrong. How easy is it for your user to actually go and leverage parts of the generation?[00:18:16] Maybe a concrete example.
There's a lot of things people are excited about right now where I write a comment and it generates a PR for me, and that's really awesome in theory. I think that's a really cool thing, and I'm sure at some point we will be able to get there. That will probably require an entirely new model, for what it's worth, that's trained on diffs and commits and all these other things, that looks at improvements in code and stuff.[00:18:37] It's probably not gonna be just trained on generic code. But the problem with those sorts of, I would say, applications is that, let's suppose something does change many files, makes large amounts of changes. First of all, it's guaranteed not gonna be fast, because even the idea of reviewing the change takes a long time.[00:18:54] So if the quality and the correctability is just not there, let's say you had a 10-file change and you modified, like, you know, files two and four, and those two modifications were consistent, but the other eight files were not consistent, then suddenly the correctability is really hard.[00:19:10] It's hard to correct the output of the model. And so the user interface is 100% really important. But maybe until you get the latency down, or the correctability a lot better, it's probably not gonna be shippable. And I think that's what you gotta spend your time focusing on.[00:19:26] Can you deliver a product that is actually something users want to use? And I think this is why I was talking about demos. It's very easy to handpick something that works for a demo, exceedingly hard for something that has large scope, like a PR, to work consistently. It will take a lot of engineering effort to make it work on small enough chunks so that a user is like, wow, this is value generative to me.[00:19:49] Because eroding user trust or consumer trust is very easy.
Like, it is much, much easier to erode consumer trust versus enterprise. So just be mindful of that, and I think that's probably the mantra that most of these companies need to operate under. Have you done any[00:20:05] Alessio Fanelli: analysis on what the ratio between code generated and latency is?[00:20:11] So you can generate one line, but you could also generate the whole block, you can generate, yeah, a whole class. And, you know, the more you generate, the more time it takes. Like, what's the sweet spot that you[00:20:21] Varun Mohan: found? Yeah, so I think there was a great study, and I'm not sure if it's possible to link it, but there was a great study about Copilot actually that came out.[00:20:28] Basically what they said was there were two ways that developers usually develop with a code assistant technology. They're either in what's called acceleration mode or exploration mode. And exploration mode is basically you're in the case where you don't even know what the solution space for the function is,[00:20:43] and you just wanna generate a lot of code because you don't even know what that looks like. Like, it might use some API that you've never heard of. And what you're actually doing at that point is you're writing a clean comment, just wishing and praying that, you know, the generation is long enough and gets you far enough, right?[00:20:57] Acceleration mode is basically you are doing things where you are very confident in what you're doing, and effectively code gives you that muscle so that you can basically stay in flow state, and you're not thinking about exactly what the APIs look like. But push comes to shove, you will figure out what the APIs look like. But actually, mentally, it takes a load off your head, where you're like, oh wow,[00:21:18] I can just do this. The intent to execution is just a lot lower there.
And I think effectively you want a tool that captures that a little bit. And we have heuristics in terms of capturing whether or not you're in acceleration versus exploration mode. And a good heuristic is, let's say you're inside a basic block of a piece of code.[00:21:37] Let's say you're inside a block of code or an if statement. You're probably already in acceleration mode, and you would feel really bad if I started generating the else clause. Because what happens if that else clause is really wrong? That's gonna cause mental load for you, because of the way programmers think:[00:21:51] they only want to complete the if statement first, if that makes sense. So there are things where we are mindful of how many lines we generate. If you use the product, multi-line generations happen, and we are happy to do them, but we don't want to do them when we think it's gonna increase load on developers, if that makes sense.[00:22:07] That[00:22:07] Alessio Fanelli: makes sense. So, Copilot for X: what are axes that you think are interesting for people to build[00:22:13] Varun Mohan: in? Didn't we see some tweet recently about Harvey AI, uh, a company that is trying to sell legal? It's like a legal assistant. That's pretty impressive, honestly. That's very impressive.[00:22:23] So I would really love to see what the product looks like there, because there's a lot of text there. You know, looking at Bing AI, like, I mean, it's pretty cool, but it seems like groundedness is something a lot of these products struggle with, and I assume legal, if there's one thing you want them to[00:22:39] get right, it's the groundedness. Yeah.[00:22:42] swyx: Yeah. I've made the analogy before that law and legal language is basically just another form of programming language. You have to be that precise. Yes. Definitions must be made, and you can scroll to find the definition. It's the same thing. Yes.
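The acceleration-mode heuristic described above can be sketched in a few lines. This is hypothetical logic, not Codeium's actual implementation: the idea is just that a cursor inside an unclosed block (or right after a block opener) suggests acceleration mode, so the completion should stay short rather than, say, generating the else clause too.

```python
def completion_mode(prefix: str) -> str:
    # Hypothetical heuristic: an unclosed brace or a trailing ':' suggests
    # the user is mid-block (acceleration mode), so prefer a short,
    # single-line completion; otherwise allow a multi-line, exploratory one.
    unclosed_braces = prefix.count("{") - prefix.count("}")
    inside_block = unclosed_braces > 0 or prefix.rstrip().endswith(":")
    return "single-line" if inside_block else "multi-line"
```

A real system would parse the syntax tree rather than count braces, but the shape of the decision, short generations inside blocks, longer ones at statement boundaries, is the same.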
,[00:22:55] Varun Mohan: yes. Yeah. But like, I guess there's a question of comprehensiveness.[00:22:59] So let's say the only way it generates a suggestion is it provides, you know, citations to other legal documents. You don't want it to be the case that it misses things, so you somehow need the comprehensiveness, but at the same time, you also don't want it to make conclusions that are not from the things it cites.[00:23:15] So, I don't know, that's very impressive. It's clear that they've demonstrated some amount of value, because they've been able to close a fairly sizable enterprise contract. It was like a firm with 3,500 lawyers, something nuts, honestly. Very cool. So it's clear this is gonna happen, uh, and I think people are gonna need to be clever about how they actually make it work[00:23:34] within the constraints of whatever workload they're operating in. Also, you guys[00:23:37] swyx: are so good at training stuff, why don't you try[00:23:39] Varun Mohan: cloning it? Yeah. So I think that's, uh, a preview of the roadmap. Yeah, yeah, yeah. No, no, no, I'm just kidding. I think one of the things that we genuinely believe as a startup is most startups can't really even do one thing properly.[00:23:52] Mm-hmm. Focus. Yeah. Usually doing one thing is really hard. Most companies that go public have maybe a couple big products. They don't really have, like, 10. So we're under no illusions: to give the best product experience, the amount of engineering and attention to detail to build one good product is hard.[00:24:08] So it's probably gonna be a while before we even consider leaving code. Like, that's gonna be a big step, because the amount of learning we need to do is gonna be high. We need to get users right.
We've learned so much from our users already, so, yeah, I don't think we'd go into law anytime soon.[00:24:22] swyx: 3,500 lawyers with Allen & Overy, uh, is apparently the new[00:24:27] Varun Mohan: That's actually really big.[00:24:28] Yeah. Yeah. Congrats to them.[00:24:29] swyx: Yeah, it's funny, cuz it seems like these guys are moving faster than Copilot. You know, Copilot just announced enterprise, uh, like Copilot for teams or Copilot for Enterprise. Yeah. After like two years of testing.[00:24:40] Varun Mohan: Yeah, it does seem like the Copilot team has built a very, very good product.[00:24:44] Um, so I don't wanna say anything, but I think it is the case that startups will be able to move faster. I feel like that is true. But hey, GitHub has great distribution. Whatever product they do have, they will be able to sell it really well. Shall[00:24:56] swyx: we go into model numbers and infra estimates? Our favorite[00:25:01] Varun Mohan: topics.[00:25:02] Nice small models. Nice.[00:25:04] swyx: So this is, um, relevant to basically, I'm researching a lot of scaling law stuff. You have a lot of thoughts. You host paper discussions[00:25:12] Varun Mohan: in your team. Yeah, we try to read papers that we think are really interesting and relevant to us. Recently there's just been a fire hose of papers,[00:25:21] you know, so much that someone is even just curating what papers we should read internally as a company. Yeah, I think there's so much good content[00:25:28] swyx: out there. You guys should have a podcast. I mean, I told you this before, you should have a podcast. Just put a mic near where you guys are[00:25:33] Varun Mohan: talking.[00:25:34] We gotta keep developing Codeium, though. No, but you're doing this discussion[00:25:38] swyx: anyway. You[00:25:38] Varun Mohan: might as well just, oh, put the discussion on a podcast.
I feel like some of the thoughts are raw, right? Like, they're not gonna be as nuanced. Like, we'll just say something completely stupid during our discussions.[00:25:48] I don't know, maybe that's exciting. Maybe it's kinda like a justin.tv, but for ML papers. Okay, cool. I'd watch that.[00:25:55] swyx: Okay, so Copilot is 12 billion parameters. Salesforce CodeGen is up to 16. GPT-3 is 175. GPT-4 is gonna be 100 trillion billion. Yeah. So what we landed on with you is, with Chinchilla, we now have an idea of what compute-optimal data scaling is.[00:26:14] Yeah. Which is about 20 times parameters. Is that intuitive to you? Like, what did that[00:26:18] Varun Mohan: unlock? I think basically what this shows is that bigger models are more data efficient. Like, given the same number of tokens, a bigger model trained on that same number of tokens is gonna learn more, basically.[00:26:32] But at the same time, the way you have to look at it is, there are more flops to train a bigger model on the same number of tokens. So let's say I had a 10 billion parameter model and I trained it on 1 million tokens, but then I had a 20 billion parameter model. At the end of it, that will be a better model.[00:26:47] It will have better perplexity numbers, which means the probability of the prediction for the next token is gonna be better. But at the end of it, you did burn twice the amount of compute on it, right? So Chinchilla is an interesting observation, which says, if you have a fixed compute budget and you want the best model that comes out of it, because there's a difference here, where a model that is smaller, trained on the same number of tokens, takes fewer flops,[00:27:12] there's a sweet spot of number of tokens and size of model. I will say, people probably
are talking about it more than they should, and I'll explain why, but it's a useful result, which is like, let's say I have, you know, some compute budget and I want the best model. It tells you what you should train.[00:27:31] The problem I think here is, there is a real trade-off: you do need to run this model somewhere. You need to run it on a piece of hardware. So then it comes down to how much memory does that piece of hardware have. Let's say for a fixed compute budget, you could train a 70 billion parameter model. What are you gonna put that on?[00:27:47] Yeah, maybe, could you put that on an 80 gig A100? It would be a stretch. You could do things like, you know, int8 or FP8, to reduce the amount of memory that's on the box, and do all these other things. But you have to think about that first, right? When you want to go out and train that model,[00:27:59] the worst case is you end up training that model and you cannot serve it. So actually what you end up finding is, a lot of these code completion models are actually what you would consider over-trained. So by that I mean, let's look at a model like CodeGen. It's actually trained on, I believe, and I could be wrong by, you know, a hundred billion here or there,[00:28:18] I got some data. Oh, okay. Let's look at the 3 billion parameter model. It's a 2.7, I think it's actually a 2.7 billion parameter model. It's weird because they also trained on natural language on top of code, but it's trained on hundreds of billions of tokens. If you applied that Chinchilla optimization to it, you'd be like, wow, this is a stupid use of compute.[00:28:36] Right? Because at 3 billion parameters, they should be going to around 60 billion tokens, and anything more than 60, they should have just increased the model size. But the reality is, if they had, the compute-optimal one might not be one that's easy to serve, right? It could just have more parameters.
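The Chinchilla arithmetic in that CodeGen example can be written down directly, using the usual rules of thumb of roughly 20 training tokens per parameter and roughly 6 FLOPs per parameter per training token (both standard approximations, not exact figures):

```python
def chinchilla_optimal_tokens(n_params: float) -> float:
    # Chinchilla rule of thumb: ~20 training tokens per parameter.
    return 20.0 * n_params

def train_flops(n_params: float, n_tokens: float) -> float:
    # Standard approximation: ~6 FLOPs per parameter per training token.
    return 6.0 * n_params * n_tokens

# A ~2.7B-parameter model is compute-optimal at roughly 54B tokens, so
# training it on hundreds of billions of tokens is "over-trained" in the
# Chinchilla sense -- which can still be the right call if you need a
# small model that fits your serving hardware.
optimal_tokens = chinchilla_optimal_tokens(2.7e9)
```

This is why the 60-billion-token figure comes up for a ~3B model: 20 tokens per parameter puts the compute-optimal point in that neighborhood.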
And for our case, our models that we train internally, they might not be the most compute-optimal.[00:28:56] In other words, we probably could have had a better model by making it larger, but the trade-off would've been latency. We know what the impact of having higher latency is, and on top of that, being able to fit properly within our hardware constraints would've also been a concern.[00:29:08] swyx: Isn't the classic stopping point when you see loss kind of level off?[00:29:12] Right now you're just letting Chinchilla tell you,[00:29:16] Varun Mohan: but you should just look at loss? The problem is the loss will continue to go down. It'll just continue to go down in a way that's not that pleasing. It's gonna take longer and longer. It's gonna be painful. But it's one of those things where, if you look at the perplexity difference between,[00:29:31] let's say, a model that's 70 billion versus 10 billion, it's not massive. It's not like tens of percentage points. It's very small, right? Mm. The reality is, this comes down to the IQ of these models in some sense. Small wins at the margins are massive wins in terms of IQ.[00:29:47] It's harder to get those, and they don't look as big, but they're massive wins in terms of reasoning. They can now do chain of thought, all these other things. Yeah, yeah, yeah.[00:29:55] swyx: And so that apparently unlocked around the[00:29:57] Varun Mohan: 20 billion mark. Yes. That's right. Some kind of magic. Yeah. I think that was from UL2 or maybe one of those other papers.[00:30:03] Any thoughts on why? Like, is there? I don't know. I mean, emergence of intelligence, I think. I think maybe one of the things is, we don't even know, maybe five years from now, whether what we're gonna be running are transformers. But I think it's like, we don't 100% know that that's true.
I mean, there are maybe a lot of issues with the current version of transformers, which is the way the attention layers work: the amount of compute is quadratic in the context size, because you're doing an n-squared operation on the attention blocks, basically.[00:30:30] And obviously, you know, one of the things that everyone wants right now is infinite context. They wanna shove as much prompt as possible in here. And the current version of what a transformer looks like is maybe not ideal. You might just end up burning a lot of flops on this when there are probably more efficient ways of doing it.[00:30:45] So I'm sure in the future there's gonna be tweaks to this. Yeah. Uh, but it is interesting that we found out interesting things, like, hey, bigger is pretty much always better. There are probably ways of making smaller models significantly better through better data. That is definitely true. Um, and I think one of the cool things that The Stack showed, actually, was I think they did some ablation studies where they were like, hey, what happens if we do decontamination of our data? What happens if we do de-duplication?[00:31:14] What happens if we do near-dedup of our data, and how does the model get better? And they have some compelling results that showcase data quality really matters here. But ultimately, yeah, I think it is an interesting result that at 20 billion there's something happening. But I also think some of these things in the future may look materially different than what they look like right now.[00:31:30] Hmm. Do you think[00:31:31] Alessio Fanelli: the token limitation is actually a real architectural limitation? Like, if you think about the tokens needed as kind of asymptotic, right? Like, once you have 50,000 tokens of context, 50,000 or infinite, for most use cases, it's the same.
Where do you think that number is, especially as you think about code? Like, some people have very large code bases, there's a lot.[00:31:53] Have you done any work there to figure out where the sweet[00:31:55] Varun Mohan: spot is? Yeah, look, I think what's gonna really end up happening is people will come up with a clever way, and there was some research, I believe it came out of Stanford, I think the team from the HELM group, I think, came out with some architecture that looks a little bit different than transformers, and I'm sure something like this will work in the future.[00:32:13] What I think is always gonna happen is, if you find a cheap way to embed context, people are gonna figure out a way to put as much as possible in, because LLMs so far have been virtually stateless. So the only thing that they have beyond fine-tuning is just shoveling everything you can inside.[00:32:28] And there are some interesting papers, like RETRO. Actually, there are maybe some interesting pieces of thought, like ideas, that have come out recently. Yeah, let's go through them. So one of the really interesting ideas, I think, is RETRO. It's this paper that came out of DeepMind, and the idea is, actually, let's say you send out a prompt.[00:32:44] Okay? Send out a prompt. You compute the BERT embedding of that. And then you have this massive embedding database. And by massive, I'm not talking about gigabytes, I'm talking about terabytes. Like, you have, geez, you actually have 10 times the number of tokens as what was used to train the model. So let's say you had a model that was trained on a trillion tokens: you have a 10 trillion token embedding database.[00:33:04] And obviously Google has this, because they have all the content that ever existed in humanity, and they have like the best data set, and sort of, they were able to make one of these embedding databases. But the idea here, which is really cool, is you end up
taking your prompt, computing the BERT embedding, and finding the things that were nearby.[00:33:20] So you do roughly a semantic search, or an embedding search, within that. And then you take the documents that came from those embeddings, and you shove those in the model too, in what's called chunked cross-attention. So you shove them in the model with it as well.[00:33:34] Suddenly now the model is able to take in external information, which is really exciting, actually, because suddenly now you're able to get dynamic context in, and the model in some sense is deciding what that context is. It's not deciding it completely, in this case, because the BERT model in this case was actually frozen.[00:33:50] It wasn't trained with the RETRO model as well. But the idea is you're somehow adding or augmenting context, which I think is quite exciting. There are probably two futures. Either context becomes really cheap, right now it's quadratic, maybe there's a future where it becomes linear in the size of the context. But the future might actually be the model itself dictates: hey, I have this context,[00:34:10] you have this data source, give me this. The model itself is going out into your database and being like, I want this information. And this is kind of what Bing search is looking like, right? Or Bing Chat is sort of looking like, where there's probably some model that's saying, I want this information,[00:34:27] and that is getting augmented into the context. Now the model itself knows what context it has, and it can sort of build a state machine of what it needs. And that's probably what the future of this looks like. So you[00:34:37] swyx: predict monster embedding database[00:34:39] Varun Mohan: companies? Probably monster embedding database companies, or, yeah,[00:34:43] the model in some sense will need to talk to these embedding databases.
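The retrieval step just described can be sketched as follows. The `embed` function here is a crude trigram-bucketing stand-in for the frozen BERT encoder (everything about it is illustrative); the shape of the pipeline, embed the prompt, nearest-neighbor search over stored chunks, hand the retrieved chunks to the model, is the part that matches the RETRO idea.

```python
import math

def embed(text: str, dim: int = 16) -> list[float]:
    # Stand-in for a frozen BERT encoder: bucket character trigrams into a
    # fixed-size, L2-normalized vector. Illustrative only; a real system
    # would call an actual embedding model.
    v = [0.0] * dim
    for i in range(len(text) - 2):
        v[sum(ord(c) for c in text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def retrieve(prompt: str, corpus: list[str], k: int = 2) -> list[str]:
    # RETRO-style step: embed the prompt, rank stored chunks by cosine
    # similarity, and return the top-k so they can be fed to the model
    # (RETRO itself attends to them via chunked cross-attention).
    q = embed(prompt)
    def score(doc: str) -> float:
        return sum(a * b for a, b in zip(q, embed(doc)))
    return sorted(corpus, key=score, reverse=True)[:k]
```

At the scales discussed here, the brute-force scan would be replaced by an approximate nearest-neighbor index over the terabyte-scale embedding store, but the interface is the same: prompt in, relevant chunks out.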
I'm actually not convinced that the current breed of embedding database companies are ready for what the future looks like. I'm just looking at their pricing, how much it costs per gigabyte, and it's prohibitive at the scale we're talking about. Like, let's say you actually did want to host a 10 terabyte embedding database.[00:35:03] A lot of them were created, let's say, two, three years ago, when people were like, you know, embedding databases are small, and they needed to make the cost economics work. But maybe, yeah, there's probably gonna be a big workload there. I will just say, for us, we will probably just build this in-house to start with, and that's because I think the technology probably isn't there yet.[00:35:20] And since the technology isn't there yet, waiting on point solutions to come up is a lot harder, um, than probably building it up ourselves. The way I like to think about this is, the LLM space probably looks like how the early internet days were, where I think the value accrued to probably, like, Google, and Google needed to figure out all the crazy things to make their workload work.[00:35:41] And the reason why they weren't able to outsource is, no one else was feeling the pain.[00:35:46] swyx: They're just solving their own pain points. They're just solving their own pain points. They're so far ahead of everyone else. Yes, yes. And just wait[00:35:50] Varun Mohan: for people to catch up. Yes. Yes. And that's maybe different than how things like Snowflake look, where the interface has been decided, for what SQL looks like, 50 years ago.[00:35:58] And because of that, you can go out and build the best database, and yeah, everyone's gonna be like, this doesn't make my beer taste better, and buy your database, basically. That's[00:36:08] swyx: a great reference, by the way. Yeah.
We have some friends of the pod that are working on embedding databases, so we'll try to connect you to Chroma[00:36:14] Varun Mohan: and see.[00:36:14] Yeah. Oh, I actually know Anton. I worked with him at Nuro. Oh, there you go. Yeah. Uh, well, what do you think about, I mean,[00:36:20] swyx: so Chroma's pivoting towards an embedding[00:36:22] Varun Mohan: database. I think it's an interesting idea. I wonder what the early set of workloads that they will hit are, and, you know, what the scaling requirements are. This is maybe the classic thing where the teams are great, but you need to pick a workload here that you care about the most. You could build anything. When you're an infrastructure company, you can go in, if I was selling serving infra, I could build serving for, like, linear regression.[00:36:44] I could build this, but unless you hit the right niche for the end user, it's gonna be tough. So I'm excited to see what comes out, and if they're great, then we'll use it. Yeah.[00:36:54] swyx: I also like how you slowly equated yourself to Google there. Oh, we're not, we're not Google. You're gonna be the Google of AI.[00:37:00] Varun Mohan: We're definitely not Google. But I was just saying in terms of the style of companies that came out. Yeah. You know? Absolutely. Or maybe we should live in the cutting edge in[00:37:08] swyx: the future. Yeah. I think that's the pitch.[00:37:10] Varun Mohan: Okay, thanks for pitching us.[00:37:13] Alessio Fanelli: So you just mentioned the older vector embedding solutions are kind of not made for the LLM generation of compute size.[00:37:21] What does LLMOps look like? You know, which pieces need to be drastically different? Which ones can we recycle?[00:37:27] Varun Mohan: Yeah.
One of the things that we've found in our own experience of building Codeium, that just shows how much is missing, and this is the thing where, like, I don't know how much of this you can really outsource, is that we needed to build eval infrastructure.[00:37:40] That means, how do you build a great eval for code? And there are things online like HumanEval, right? Which is the benchmark, uh, I was telling Sean about this. The idea of HumanEval is really neat for code. The idea is you provide a bunch of functions with docstrings, and the eval, instead of being, did you predict the next token,[00:37:56] is, did you generate the entire function, and does the function run correctly against a bunch of unit tests? Right. And we've built more sophisticated evals to work on many languages, to work on more of a variety of code bases. One of the issues that ends up coming up with things like HumanEval is contamination,[00:38:12] because a lot of these things that train models end up training on all of GitHub, and GitHub itself has HumanEval, so they end up training on that. And then, the numbers are tiny, though. It's gonna be tiny, right? But it doesn't matter if it's tiny, because it'll just remember it. It's not that it's that precise, but it will remember it. It's basically like mixing your training and validation set.[00:38:32] It's like, oh, yeah, yeah, yeah. But we've seen cases online where someone is like, we have a code model, we did this one thing, and HumanEval jumped a ton, and we were just like, huh, did HumanEval get into your data set? Is that really what happened there?[00:38:46] But we've needed to build all this eval. And what it's shown is, data cleaning is massive, but data cleaning looks different by vertical. Like, code data cleaning is different: what is a high quality piece of code is probably different than what's a high quality legal document. Yeah.
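The HumanEval-style functional-correctness check is simple to sketch: execute the generated function, then run assert-based unit tests against it, rather than scoring next-token accuracy. (Real harnesses sandbox the execution, e.g. in a separate process with timeouts; this toy version does not.)

```python
def functional_correctness(generated_code: str, tests: str) -> bool:
    # Execute the candidate function and its unit tests in a shared scope;
    # the sample "passes" only if everything runs without an exception.
    # WARNING: exec on untrusted model output must be sandboxed in practice.
    scope: dict = {}
    try:
        exec(generated_code, scope)   # define the generated function
        exec(tests, scope)            # run assert-based tests against it
        return True
    except Exception:
        return False
```

Aggregating this pass/fail bit over many samples per problem gives the familiar pass@k metric, and it is also why contamination matters so much: a model that memorized the benchmark solutions passes the tests without proving anything.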
And then on top of that, how do you eval this?[00:39:01] How do you also train it at scale, at whatever cost you really want to get? But those are things that the end user is either gonna need to solve or someone else is gonna need to solve for them. And I guess maybe one of the things I'm a little bearish on is, if another company comes out and solves eval properly for a bunch of different verticals, what was the company that they were selling to really?[00:39:21] What were they really doing at that point, if they themselves were not doing eval for their own workload and all these other things? I think there are cases where, let's say for code, where we probably couldn't outsource our eval, like we wouldn't be able to ship models internally if we didn't know how to eval, but it's clear that there's a lot of different things that people need to take on.[00:39:38] Like, hey, maybe there's an embedding piece. How large does this embedding database actually need to be? But hey, this does look very different than what classic ML ops probably did. Mm-hmm. How[00:39:47] Alessio Fanelli: do you compare some of these models? Like when you're thinking about model upgrading and making changes, like what does the testing piece of it look like internally?[00:39:56] Yeah.[00:39:56] Varun Mohan: For us, it's like old school A/B testing. We've built like infrastructure to be able to say, ramp up users from 1 to 10 to 50% and slowly roll things out. This is all classic software, uh, which[00:40:09] swyx: you do in-house. You don't, you don't buy any[00:40:10] Varun Mohan: services. We don't buy services for that.[00:40:13] There are good services, open source services that help, you just don't need them. Uh, yeah, I think that's just like not the most complicated thing for us. Sure. Basically. Yeah. Uh, but I think in the future, maybe, we'll, obviously we use things like Google Analytics and all this other stuff, but yeah.
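The "ramp users from 1 to 10 to 50%" rollout described here is usually done with deterministic hash bucketing, so the same user stays in the same bucket as the percentage grows. A minimal sketch (this is an illustration, not Codeium's actual infra; `in_rollout` and the experiment name are hypothetical):

```python
# Deterministic percentage rollout: hash (experiment, user) into one of
# 10,000 buckets; a user is "in" if their bucket is below the threshold.
import hashlib

def in_rollout(user_id: str, percent: float, experiment: str = "new-model") -> bool:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000  # stable 0..9999
    return bucket < percent * 100  # percent of the 10,000 buckets

# Because the bucket is fixed and the threshold only grows, ramps are
# monotonic: a user who is in at 1% stays in at 10% and 50%.
flags = [in_rollout("user-42", p) for p in (1, 10, 50)]
assert flags == sorted(flags)
```

Keying the hash on the experiment name means different experiments get independent bucketings, so the same heavy users don't always land in every treatment group.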
For things of ramping our models, finding out if they're actually better, because the eval also doesn't tell the whole story, because also for us, even before generating the prompt, we do a lot of work.[00:40:36] And the only way to know that it's really good across all the languages is that our users need to tell us that it's actually good. And, and they tell us by accepting completions. So, so GitHub[00:40:44] swyx: Copilot, uh, the extension does this thing where they, they like, they'll set a timer and then within like five minutes, 10 minutes, 20 minutes, they'll check in to see if the code is still there.[00:40:54] I thought it was a[00:40:54] Varun Mohan: pretty creative way. It's, it's a very, it's honestly a very creative way. We do do things to see, like in the long term, if people did accept or write things that are roughly so, because they could accept and then change their minds. They could accept and then change their minds. So we, we are mindful of, of things like that.[00:41:09] But for the most part, the most important metric is, at the time, did they actually, did we generate value? And we want to know if that's true. And it's, it's kind of, it's honestly really hard to get signal unless you have like a non-trivial amount of usage, non-trivial meaning you're getting, you're doing hundreds of thousands of completions, if not millions of completions.[00:41:25] That sounds like, oh wow, like, that's like a very small amount. But like it's classic. Maybe like if you look at like when I used to be an intern at Quora, like, you know, now more than seven, eight years ago. When I was there, I like shipped a change, and then Quora had like millions of daily actives, and then it looked like it was good, and then a week later it was just like way worse.[00:41:43] And how is this possible? Like in a given hour we get like hundreds of thousands of interactions, just like, no, you just need way more data.
So this is like one of those things where I think having users is like genuinely very valuable to us, basically. Users is all you need. Yeah.[00:41:59] swyx: Um, by the way, since you brought up Quora, have you tried Poe? Any, any thoughts[00:42:03] Varun Mohan: on Poe? I have not actually tried Poe. I've not actually tried. I[00:42:05] swyx: mean, it seems like a question answering website that's been around for 20 years or something would be very, would be very good at question answering. Yeah.[00:42:12] Varun Mohan: Also Adam, the CEO, is like incredibly brilliant. That guy is like insanely smart, so I'm sure they're gonna do,[00:42:18] swyx: they have accidentally built the perfect like data collection company for, for QA.[00:42:22] Varun Mohan: Yeah. It takes a certain kind of person to go and like cannibalize your original company. I mean, it was kinda stagnant for like a few years. Yeah, that's probably true. That's[00:42:31] swyx: probably true. The observation is, I feel like you have a bias towards domain specific, whereas most research is skewed towards, uh, general models, general purpose models.[00:42:40] I don't know if there's like a, a deeper insight here that you wanna go into or, or not, but like, train on all the things, get all the data, and you're like, no, no, no, everyone needs like customized per task[00:42:49] Varun Mohan: uh, data sets. Yeah. I think I'm not gonna say that general intelligence is not good. You want a base model that's still really good, and that's probably trained on normal text, like a lot of different content.[00:43:00] But I think probably one thing that old school machine learning, even though I'm like the kind of person that says a lot of old school machine learning is just gonna die, is that training on a high quality data set for your workload is, is always gonna yield better results and more, more predictable results.[00:43:15] And I think we are under no illusions that that's not the case.
Basically. And[00:43:19] swyx: then the other observation is bandwidth and connectivity, uh, which is not something that people usually think about, but apparently is a, is a big deal. Apparently training, being synchronous, needs high GPU coordination.[00:43:29] These are deleted notes from Sam Altman talking about how they think about training, and I was like, oh yeah, that's an insight. And[00:43:34] Varun Mohan: you guys have the same thing. Yeah. So I guess for, for training, you're right in that it is actually nuts to think about how insane the networks are for NVIDIA's most recent hardware.[00:43:46] For the H100 boxes, you shove eight of these H100s in a box. Between two nodes, the bandwidth is 3,200 gigabits a second, so 400 gigabytes a second between machines. That's like nuts when you just sit and think about it. That's like double the memory bandwidth of what a CPU has, but it's like between two machines.[00:44:04] On top of that, within the machine, they've created this, this fabric called NVLink that allows you to communicate at ultra low latency. That's even lower than PCIe, if you're familiar, that's like the communication protocol, yeah, between like the CPU and the other devices or other PCIe devices.[00:44:21] All of this is to make sure that reductions are fast, low latency, and you don't need to think about it. And that's because like a lot of deep learning has sort of evolved, uh, training has evolved to be synchronous. In the OG days, there was a lot of analysis in terms of how good is asynchronous training, which is like, hey, I have a node, it has a current state of the model.[00:44:39] It's gonna update that itself locally, and it'll like every once in a while go to another machine and update the weights. But I think like everyone has converged to synchronous. I'm not exactly sure. There's not a lot of good research on asynchronous training right now.
Or maybe there is and I haven't read it.[00:44:52] It's just that there isn't as much research because people are just like, oh, synchronous works. Uh, and the hardware is continually upleveled to handle[00:44:59] swyx: that. Yeah. It was just un unintuitive to me, cuz like the whole purpose of GPUs is you could train things, a lot of things, in parallel. Yes.[00:45:05] Varun Mohan: But the crazy thing is also, maybe I can, I can give some dumb math here.[00:45:09] Sure. Here, which is that, uh, let's go with, uh, GPT-3, which is like 175 billion parameters. The optimizer state, so while you're training, is 14 times the size of the model, so in this case, if it's like 175 billion parameters, it's probably, I'm not great at mental math here, but that's probably around 2.5 terabytes to just store the optimizer state.[00:45:30] That has gotta be sharded across a lot of machines. Like that is not a single GPU. Even if you take an H100 with 80 gigs, to just shard that much, that's like 40, at least 30 GPUs. So there's like something there where these things need to communicate with each other too.[00:45:44] swyx: You need to vertically scale horizontally.[00:45:46] Varun Mohan: Yeah. You gotta co-locate it, you gotta somehow feel like you have this massive, the, the ideal programming paradigm is you feel like you have this massive computer that has no communication, you know, overhead at all, but it has like infinite compute and infinite memory bandwidth.[00:45:59] swyx: That's the AI cluster. Um, okay, well, uh, we want to head to the questions.[00:46:05] Alessio Fanelli: So favorite AI product that you are not[00:46:08] Varun Mohan: building? Yeah, I'm friends with some of the folks at Midjourney and I really think the Midjourney product is super cool, especially seeing how the team is iterating and the quality of generations. It consistently gets upleveled.
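The "dumb math" from a moment earlier checks out if you read "14 times" as roughly 14 bytes of training state per parameter (a common figure for mixed-precision Adam: fp16 weights and gradients plus fp32 master weights and two moments). A quick back-of-envelope sketch, with the byte count labeled as an assumption:

```python
# Back-of-envelope sizing of GPT-3-scale optimizer state, assuming
# ~14 bytes of training state per parameter (mixed-precision Adam).
params = 175e9            # GPT-3-scale parameter count
bytes_per_param = 14      # assumption: weights + grads + optimizer moments
state_bytes = params * bytes_per_param
print(f"{state_bytes / 1e12:.2f} TB of state")       # 2.45 TB of state

h100_mem = 80e9           # bytes of HBM on one 80 GB H100
gpus_needed = state_bytes / h100_mem
print(f"~{gpus_needed:.0f} GPUs just to hold it")    # ~31 GPUs
```

Which matches the figure in the conversation: around 2.5 terabytes, so on the order of 30-plus 80 GB devices before you even think about activations, and all of them constantly exchanging gradients.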
I think it's like quite neat, and I think internally at, at Exafunction, we've been trying out Midjourney for like random content, to like generate images and stuff.[00:46:26] Does it bother[00:46:26] swyx: you that they have like a style? I don't know. It, it seems like they're hedging themselves into a particular, like, you want Midjourney art, you go there.[00:46:33] Varun Mohan: Yeah. It's a brand of art. Yeah, you're right. I think they do have a style, but it seems more predictably good for that style. Okay. So maybe that's too, so just get good at, uh, a domain specific thing.[00:46:41] Yeah. Yeah, maybe. Maybe I, maybe I'm just selling, talking my book right now. Yeah. Uh, okay.[00:46:46] swyx: Uh, next question. Uh, favorite AI people and[00:46:48] Varun Mohan: communities? Yeah, so I think I mentioned this before, but I think obviously the Open, the OpenAI folks are, are insane. Like we, we only have respect for them. But beyond that, I think Eleuther is a pretty special group.[00:46:59] Especially, it's been now probably more than a year and a half since they released like GPT-J, which was like the open source GPT-3 Curie back then, which was comparable. And it wasn't like a model where like, it wasn't good. It was like comparable in terms of perplexity to GPT-3 Curie, and it was trained by a university student actually, and it just showed that, you know, in the end, like I would say pedigree is great, but if you have people that are motivated, know how computers work, and they're willing to just get their hands dirty, you can do crazy things, and that was a crazy project that gave me more hope.[00:47:34] Decentralized training being potentially pretty massive. But I think that was like a very cool thing where a bunch of people just got on Discord and were chatting and they were able to just turn this out. Yeah.
I did[00:47:42] swyx: not know this until I looked further into Eleuther, but it was not a formal organization.[00:47:45] Wasn't a company, wasn't a startup. It's not, yeah. Bunch of guys on Discord.[00:47:48] Varun Mohan: They got a, you know, TPU research grant and they somehow just wrote some code.[00:47:52] Alessio Fanelli: Yeah. Yeah. Listened to a pod with Connor, who's the person, and basically OpenAI at the time was like, we cannot release GPT-2 because it's like too good and so bad.[00:48:01] And he was like, he actually said he was sick, so he couldn't leave home for like a, a few weeks. So it was like, what else am I gonna do? And ended up