Podcasts about Omo

  • 217 podcasts
  • 438 episodes
  • 39m average duration
  • 1 episode every other week
  • Latest: May 29, 2025

POPULARITY (2017-2024)



Latest podcast episodes about Omo

Jonesy & Amanda's JAMcast!

Jonesy & Amanda's JAMcast!

Play Episode Listen Later May 29, 2025 5:47 Transcription Available


What can Omo tell us about the Jonesy of the 80s?

Who Cares? - Dr. Who Fans Talk TV

New storytelling, ghostwriter mythos, dialogue density, clipshow climaxes, different approaches to gods...how does The Story & the Engine do Doctor Who differently? (00:00:00) New writer, Dot and Bubble comparisons, Doctor outside Doctoring (00:08:42) Hose, tie-in short story, Omo, commercialised storytelling (00:11:17) Haircuts, barbershop, African setting, density of ideas (00:18:13) Dialogue focus, place in Doctor Who, continuity, Doctor returns (00:27:47) Storytelling, Belinda, Gods, science (00:38:55) Generational trauma, reproducing exploitation, Omo betrayal (00:45:18) The Barber villain, 'ghostwriter', web, villain forgiveness (00:53:57) Pacing, 'hurt people hurt people', companion dialogue, Ncuti performance (01:01:29) Captain Poppy, clipshow climax, life stories, fan responses… Continue reading →

Susanna Beth'a™ Touching Lives with Life and Light

You will be shocked to hear some of the lies I have held on to for so long. Omo, the truth will always be the truth. Listen and share some lies you've held on to until light came to you.

Doctor Who: Tin Dog Podcast
TDP 1365: TV #DoctorWho The Story and The Engine #DisneyWho Review

Doctor Who: Tin Dog Podcast

Play Episode Listen Later May 14, 2025 22:47


Synopsis

When the Doctor and Belinda land in Lagos for another reading, the Doctor goes to a barbershop to meet an old friend. There, he discovers a mysterious barber trapping the patrons, feeding an engine with their stories.

Plot

Omo sits in a chair, getting a haircut, and tells a story about the Doctor - of his village saved by the mysterious man in the blue box. As he speaks, images splay out on the wall behind him, depicting his story to the men listening eagerly. As he finishes, they all look at a pair of lights on the wall in tense anticipation, relaxing when they switch from red to green. Omo tells the others not to worry, the Doctor always comes - and the light switches back to red, alarms blare, and the room shakes. Belinda insists that the Doctor take her home, and he suggests that they head to Lagos, Nigeria - a communications hub and a place he loves, home of his favorite barbershop. Belinda is puzzled: the TARDIS can do his hair. He explains that it's about community, about being himself, since this is the first time he's ever been a black man. Belinda understands and sends him off to enjoy himself in Lagos after he takes a reading. The Doctor winds his way through a market, greeting everyone as he passes, before he comes to his friend Omo's barbershop, finding those assembled in the middle of a story. As the door closes behind him, an alarm goes off in the TARDIS, alerting Belinda that something has gone wrong. The Doctor notices that everyone present is on the missing-person posters outside, and he watches as the storyteller's hair grows back. The light in the barbershop flashes to red, people scramble to decide who still has a story left, and someone sits down, telling a story of Yo-Yo Ma, of music and of time. As images flicker on the wall, the Doctor looks on in wonder, asks how it works, and begins testing it by throwing out words from his travels. But it has to be a story, and it has to come with a haircut. A new barber has taken over the shop: he came one day, and as if by magic the shop became his. A woman, Abby, enters the barbershop bringing food; the door closes behind her, and an alarm in the TARDIS sounds again. The Doctor recognizes her, but can't place her. The light switches to red again, and the Doctor sits down, telling the most powerful story he knows - not a grand epic, but an ordinary life: Belinda Chandra doing her job, helping someone all night long, even on her grandmother's birthday, and a simple gesture of thanks two weeks later. In another room, Abby watches a screen light up, noting that they are accelerating, as the story ends. The Barber is impressed with the power of the Doctor's stories, and tells Abby when she comes out that they need to recalibrate the engine. Omo asks if they can be set free now that the Doctor has come, his stories being so effective, his hair having grown more in the interim than any of theirs. But Abby locks the door and the pair leave. The TARDIS sounds an alarm yet again, this time showing Belinda an image of the barbershop. The Doctor is furious that Omo betrayed him and was willing to trap him here, and refuses to listen as everyone tries to tell him not to open the door. He forces it open with his sonic screwdriver, finding a vacuum on the other side - a vacuum holding only a giant web and a large spider traversing it, the barbershop riding on the spider's back. The Doctor closes the door with great effort, and the Barber emerges from the back room, explaining that the shop is in Lagos and in outer space at the same time, with only Abby and himself able to travel between the two.
Outside, Belinda finds herself lost, but is pointed towards the shop by a little girl. She enters, glad to see the Doctor. Reunited, the pair confront the Barber, calling him a coward who hides his face and has no real power. Rising to the taunt, the Barber names himself: Anansi, Sága, Dionysus, Bastet - the god of stories. The pair burst out laughing - the Doctor has met Bastet, Sága, Dionysus and Anansi. He's partied with them; Anansi even tricked him into marrying his daughter. This man isn't any of them. And so the man admits it: he's the person who did their work for them. Wherever the gods went, he took their stories, cleaned them up, refined them, wrote them down, all for humans to repeat, keeping the gods alive. Without him the gods would not exist. The web outside is his creation as well - the nexus, a web that connects cultures and ideas. He was so successful that the gods abandoned him, and now he wants vengeance. The engine winds down, so much power having been drained by the Doctor opening the door. Abby criticizes him, and the Doctor recognizes her at last: Anansi's daughter, Abena. He's sorry that he was unable to help her, but he was a different man at the time, with his own story. The light turns red, and the Barber insists the Doctor tell a story. The Doctor refuses, demanding to know what vengeance is being planned. The Barber relents: he plans to cut the gods out of memory when he reaches the center of the nexus, erasing them from existence. The Doctor is horrified - this will damage humanity, harming its ability to tell stories and to pass down information. He refuses to sit down and speak; he won't let the spider go any further. As the shop descends into chaos, everyone arguing, Abena proclaims that she will tell a story, and begins to braid the Doctor's hair. She tells a story of plantation slaves transmitting information through the braids in their hair - maps to freedom for anyone who could escape, hidden in a place the overseers would never check. As the battery stabilizes, the Doctor and Belinda run into the back room, finding themselves in a maze - a maze for which the Doctor now carries the map on his head. The pair come to a room full of artifacts from various cultures and the ship's engine, an engine that runs on stories: a heart inside a brain. The Barber enters the room behind them, having cut Abena's story short; the Doctor has disrupted the flow of power, slowing the spider down but not stopping it. The Barber insists that the Doctor has achieved nothing. So the Doctor suggests that they consider Hemingway, who is said to have written a story in six words. The Doctor's own six-word story is "I'm born. I die. I'm born." Energy begins to flow into the engine - never-ending energy - as his past lives flicker across the screens. But the Doctor has disrupted the engine; it can't process the power. He tells the Barber that now it's his choice: he can save the people in the shop by opening the door, but the engine will disintegrate. The Barber unlocks the door and Omo, Abena and the rest out front escape. The Doctor sends Belinda back as he sits with the Barber, talking to him, convincing him that he still has more to live for. The pair escape the shop at the last moment as it collapses, the engine exploding and destroying the spider it rode on. Omo apologizes to the Doctor, saying that he should have protected him - they're part of the same community. The two make up. Omo gives the Barber his shop, saying that he's retiring, and gives him a name - his father's name, Adétòkunbo.
Adétòkunbo steps back into the barbershop, now his. The Doctor and Belinda step back into the TARDIS, one step closer to home.

Worldbuilding

The Barber claims to be several gods, including Anansi, Sága, Dionysus and Bastet. In return, the Doctor relates encounters he had with those deities: winning a bet against Anansi, having a drinking contest with Dionysus, watching movies with Sága, and losing a game to Bastet. The barbershop holds a large collection of artefacts related to stories - statues, books and a helmet among them - shelved throughout the ship and gathered around the heart of the engine.

Notes

The episode has a smooth transition into the title sequence, with the titles first appearing in the shop window and the camera slowly zooming closer until the image fills the frame and the window fades away. The title of the episode was revealed on official social media. A prequel short story, also written by Ellams, was published; some of its artwork was shown in the shop window in this episode when Omo was telling his story of the Doctor. The story shares many themes and ideas with other work by Ellams. His 2017 play Barber Shop Chronicles prominently explored barbershops as places of friendship and culture, featuring many barber shops, and a version of the Yo-Yo Ma story was part of that play. His 2019 play The Half-God of Rainfall depicted a world in which the gods of all religions coexist as separate figures who interact and fight with each other; Anansi appeared, presented as a god of stories. Ellams viewed the character of Abena as echoing the title character of this play, as both are newly-invented children of gods. His 2020 poetry book The Actual had a poem about the Yo-Yo Ma story, as well as a poem comparing rapping to time travel which mentions Doctor Who. The Yo-Yo Ma anecdote is based on the musician's trip to the Kalahari, which was filmed for the 1993 documentary Distant Echoes: Yo-Yo Ma & the Kalahari Bushmen. For the UK debut, the episode was first released as an audio description version only; the standard version was released a few minutes later.
Episode writer Ellams appears on screen in a small role, marking the second time a person has both written for and acted in the same episode. Two cast members were omitted from the advance credits. The anecdote of Hemingway being challenged to write a story in six words appears to reference "For sale: baby shoes, never worn.", a story misattributed to Hemingway.

Continuity

A recurring character's cameo occurs, for the first time, in a flashback rather than in the present: in the story the Doctor recounts of Belinda saving a patient's life, she is seen walking down the hallway just before Belinda meets the patient again and is given flowers. The Doctor recognises Abby from an encounter in a previous incarnation, indicating that he now has access to some memories that had been erased. Belinda sees an apparition of a little girl just before she reaches the barber shop; when she later tells the Doctor about it, he guesses it was due to stories from the Story Engine leaking out. When the Doctor overcharges the engine with his endless story, the moment is illustrated with clips and audio from many past adventures as his previous incarnations flicker across the screens, and further clips of earlier Doctors appear in the background of the following scene.

DWBRcast
DWBRcast 274 - The Story & the Engine!

DWBRcast

Play Episode Listen Later May 10, 2025 84:26


Lagos, Nigeria. In THE STORY & THE ENGINE, the Doctor (Ncuti Gatwa) decides to visit his old friend Omo, the owner of a barbershop, but walks into an unexpected situation: another barber has taken over the place and is using people to power an engine that runs on stories. What's more, this new barber has the help of Abby, a girl whose face the Doctor remembers from somewhere, though he can't quite place where. A touching story, full of poetry and mystery, that touches on deeply human themes.

313.fm
Planet Funk Ep 586.mp3

313.fm

Play Episode Listen Later May 4, 2025 184:15


Omo & Rob DShawn Planet Funk Ep 586

The Jim Colbert Show
JCS: Faiyaz Kara with the Orlando Weekly 4/18/2025

The Jim Colbert Show

Play Episode Listen Later Apr 18, 2025 17:07


Faiyaz Kara, restaurant critic with the Orlando Weekly, talks about the Michelin Guide awarding two stars to Sorekara yesterday. Sorekara is just the second Florida restaurant to get two Michelin Guide stars. The other is in Miami. Michelin Guide also awarded a star to Omo by Jont in Winter Park. Faiyaz also shares his review of The Chapman in Winter Park with its Florida-centric fare, along with his reviews of other restaurants.

Vi streamer op ad åen
#316: A Nostalgic Commercial Journey 2.0 feat. Kirsten

Vi streamer op ad åen

Play Episode Listen Later Apr 13, 2025 57:21


It's Easter, and we're celebrating with a new YouTube commercial journey back to the '90s and '00s, where nostalgia reigns, tables turn to mahogany with time, and DK Benzin has no bonus nonsense! Anders and Peter start out in the studio together with ChatGPT-Kirsten, who stands in for Tobias until he turns up midway through the episode. There are plenty of AI mix-ups, but also a warm reunion with, among others, DSB-Harry, Jørgen from the KIMs commercials, Werther's Ægte and, not least, Lars Larsen. Hear you on the creek! For the record, we are not sponsored by a single one of the products we mention - but DK Benzin is still welcome to give us a call! By the way... We're on Twitter and Instagram: @streamaaen, and also Facebook: www.facebook.com/streamaaen. Feel free to contact us: streamaaen@gmail.com. Behind the podcast are Peter Vistisen, Tobias Iskov Thomsen and Anders Zimmer Hansen - all former child models for OMO. Additional notes: Clips from old commercials: DK Benzin (DK-Benzin A/S, OK Benzin), L'Oréal, Werther's Ægte, Riesen, McDonalds, Schulstad, OBS, DSB, Home, Risifrutti, Nybolig, Schwartzkopf, JYSK, KIMs, Gevalia, Lotto. All credit to the YouTube account RiisenDK.

The Clement Manyathela Show
SERIES: Companies that survived scandals: Siemens 

The Clement Manyathela Show

Play Episode Listen Later Mar 17, 2025 21:11


Clement Manyathela chats to Sabine Dall'Omo, Chief Executive Officer of Siemens Sub-Saharan Africa, about the company's 2008 global corruption scandal and how it affected Africa and the globe.

Agitación y Cultura
The Omo Valley in Ethiopia, in an exhibition in Plasencia

Agitación y Cultura

Play Episode Listen Later Feb 10, 2025


Michel Pedrero and Guille Sánchez have photographed the Omo Valley in Ethiopia, the place where one of the greatest concentrations of ethnic groups on the entire African continent comes together.

Well Sh*t. It really is that simple...
Episode 140 - FOMO, ROMO, SOMO and JOMO…

Well Sh*t. It really is that simple...

Play Episode Listen Later Feb 3, 2025 66:37


The terms FOMO and JOMO have become buzz words over the years and even if you aren't aware of what they mean, you may be experiencing the effects of them. Join us for today's episode where we will discuss the fear and joy of missing out, introduce the lesser known members of the OMO family and help to find the balance so you feel less anticipation and anxiety and feel more alignment and fulfillment. In this episode we cover: The cause of Serena's current FOMO and JOMO What are FOMO and JOMO Claire's Fantasy Fest JOMO The "highlight reel" What is SOMO What are the main drivers of FOMO, JOMO and SOMO What is ROMO The role the highlight reel played in Serena's Fantasy Fest experience over the years Looking at the wholeness of a situation What it means to resource yourself How to experience more JOMO and ROMO and less FOMO and SOMO The flip side of the OMOs The difference doing things differently can make The constant balancing act we perform Episode References: The episode where we talk about the emotional rollercoaster - Episode 133 - Why we sometimes resist joy Full Show notes: ⁠https://bit.ly/WellShitEpisodeGuide⁠

Makej vole!
Makej vole! Podcast #83 – Online chat: Pavel Paloncý on the Ethiopian expedition to the Omo River

Makej vole!

Play Episode Listen Later Jan 29, 2025 18:50


Full podcast version and bonus content: You can listen to the full version of this podcast on Forendors.cz. Makej vole! on Forendors - forendors.cz/trailrun.cz - 149 CZK per month (access to all episodes) or 99 CZK per episode. Your contribution helps me keep the Makej vole! podcast running, and in return you get, among other things, access to all the bonus episodes of MAKEJ VOLE! Small change for you, a big help and motivation for me. Thank you :) What is this episode about? This is a bit of a punk episode. It started as an online chat with ultra-adventurer Pavel Paloncý for Forendors subscribers, recorded over Zoom. In the end it turned into almost two hours of conversation not only about the expedition into the interior of Ethiopia, but also about a winter expedition to Denali, training, high altitude and other interesting topics. It seemed a shame not to release it for people who couldn't make the live stream, so here it is :) The Omo expedition: Pavel Paloncý, Peggy Marvanová and their crew set off for Ethiopia. The goal of the expedition was to paddle the little-explored Dinchiye River and shoot footage capturing it before it is definitively conquered by commercial tourism. On the ground, however, they found that, given the situation in that part of Ethiopia, it would not be entirely safe, so they headed for the Omo River (into which the Dinchiye flows), where it looked like things would be safer. In the end, that probably wasn't the case either. The MAKEJ VOLE! podcast is also supported by: Runsport.cz - the experts on hills. Thanks to the kind folks at the running specialist shop Runsport.cz, who supply me not only with running and ski gear and shoes but also lend me space to record the podcast - see www.runsport.cz. SALOMON - a creator of innovations and trends in the world of trail running and outdoor sports; more at salomon.com. You can also find everything on my website at www.trailrun.cz

omo
Episode 74: The Linda Chronicles

omo

Play Episode Listen Later Dec 18, 2024 52:16


Omo meets the Violin Chronicles! Special Guest: Linda Lespets.

Kuula rändajat
Kuula rändajat. Among the colourful small peoples of southern Ethiopia

Kuula rändajat

Play Episode Listen Later Dec 1, 2024 33:41


The programme introduces the small ethnic groups living in southern Ethiopia, around the Omo Valley.

聽天下:天下雜誌Podcast
[Innovation Breakthrough] The OMO innovation strategy: how to deliver precise personalisation with zero time lag? Retail e-commerce's core strategy now starts with the consumer

聽天下:天下雜誌Podcast

Play Episode Listen Later Nov 22, 2024 38:35


Shopping is part of our everyday lives, but have you noticed that the integration of online and offline retail has quietly entered a new era? What changes has the retail industry - the one closest to our daily lives - made in moving from O2O to OMO strategies, and what are the latest trends in the consumer market? For more, download and listen to this episode of "Innovation Breakthrough". Host: 胡明顗 (寶太太), Director of the CommonWealth Magazine Marketing, Membership and Commerce Center. Guest: 汪君羽 Ed Wang, Senior Vice President at 91APP. Production team: CommonWealth Integrated Communications Department and CommonWealth Lab. This episode is presented in partnership with 91APP. -- Hosting provided by SoundOn

聽天下:天下雜誌Podcast
[Uncle A-Rong Talks Tech, Ep. 42] Coupang storms into Taiwan, challenging local leader MOMO! With B2C and D2C in all-out war, how can Taiwanese e-commerce embrace online-merge-offline and build a new future for retail? ft. 何英圻

聽天下:天下雜誌Podcast

Play Episode Listen Later Nov 21, 2024 47:32


The e-commerce world's annual main event, Double 11, has just passed, so in this episode we talk about Taiwan's e-commerce wars! In the early days, Taiwan's internet industry was ahead of China's in many respects; however, with the rapid rise of China's e-commerce market and Korean giant Coupang's major push into Taiwan, the landscape for Taiwanese e-commerce now looks very different. Having moved through three stages - from C2C to B2C to D2C - why is the D2C model such an important direction for Taiwanese e-commerce's future? Under the online-merge-offline (OMO) trend, how will consumer behaviour in the Taiwanese market change? And with Coupang's entry into Taiwan opening a new B2C battle, how hard will momo be hit, and how should Taiwanese e-commerce players respond? Host: 陳良榕, Editor-at-Large, CommonWealth Magazine. Guest: 何英圻, Chairman of TiEA and Chairman of 91APP. Production team: 李洛梅, 劉駿逸. *Free one-month trial of the CommonWealth Daily App: https://bit.ly/3wQEJ4P *Subscribe to the Uncle A-Rong tech newsletter: https://bit.ly/42A6BWj *Feedback: bill@cw.com.tw -- Hosting provided by SoundOn

The Whet Palette: Miami Restaurants, Wine, and Travel
S3 E57 Miami Restaurant Reviews 112-123 PART 2

The Whet Palette: Miami Restaurants, Wine, and Travel

Play Episode Listen Later Nov 17, 2024 48:01


Send us a text. Picking up where we left off, the hubs and I review our personal experiences from the last 16 (new to us) Miami restaurants we visited. This is OUR opinion on OUR experience dining at each. We chose and paid for every restaurant featured. Receipts? Got 'em! There's much to tell, so this will be a two-part episode. PART 2: 112 Kojin, 113 Torno Subito, 114 Konro, 115 Paya, 116 Daniel's Florida Steakhouse, 117 Sparrow, 118 Amano by Oka, 119 Sunny's Steakhouse, BONUS: 120 Victoria & Albert's, 121 OMO by Jont, 122 Sushi Saint, 123 Bombay Street Kitchen. Featured in this episode: A couple of new and exciting "very Florida" steakhouses, a couple of Italian restaurants with very different offerings, the return of an improved beloved Miami concept, a trip to Palm Beach for a stellar tasting menu, back south to Brickell Key for something special, and a tropical feast in South Beach by one of our city's best restaurateur groups. Hungry? Dale! Listen here: Apple, Spotify, iHeartRadio, Amazon Music, Audible. Visit me on my other platforms: Instagram, Twitter, YouTube, TikTok, Facebook. Like what you hear? Supporting my podcast is simple. Please share, review, and/or rate to help the episodes receive more exposure. It takes seconds, and it's incredibly helpful. Want to advertise your business or event in an episode or two? Message me at thewhetpalette@gmail.com. Thank you for listening. As always, from my "palette" to yours, Cheers! Brenda #TWPmiamiPODCAST #miamipodcast #miamieats #restaurantreview #miamidining #miamichef #miamifood #miamifinedining #kojin #tornosubito #konro #payamiami #danielssteakhouse #amanobyoka #sparrow #omobyjont #sushisaint #bombaystreetkitchen #victoriaandalberts #floridafinedining #miamifinedining #finedin Support the show

Cha Cha Music Review Podcast
Cha Cha Music Review Series Season 5 Episode 18

Cha Cha Music Review Podcast

Play Episode Listen Later Nov 15, 2024 10:36


Omo mehn, this has to be the hottest week in terms of releasing new music from the continent of Africa, especially from Nigerian artists, though I would say it's expected, December is almost here everyone is trying to drop that December jam, but which of all these songs will be the ultimate song for December, well only time will tell. That said, if you are listening to me for the first time, my name is Hafeestonova, Your Musical Plug and the Creator of the Energy Force this is the Cha Cha Music Review Series on the Cha Cha Music Review Podcast, a music podcast that amplifies the African sound by bringing the best of African music into your ears So here are the songs for this week; 1. Wizkid - Kese Dance. https://open.spotify.com/track/27durTCg4qj3qAbKsSVNX4?si=823c03c66d2f48c4 2. Adekunle Gold ft. Kizz Daniel -Pano Tano https://open.spotify.com/track/4rYrmw13Viwwu3c5vM89ty?si=a37122eb869c4f03 3. Ruger ft. Tiwa Savage -Toma https://open.spotify.com/track/40t1l3IPtEIjlhAUPxn7jd?si=1be6328001d944bb 4. Mayorkun ft Fireboy DML- Innocent https://open.spotify.com/track/6al03F4hh8LUPRMcnrM3H5?si=3125d200b58f4bfa 5. Ajebo Hustlers ft. Victony -Ava Maria https://open.spotify.com/track/6al03F4hh8LUPRMcnrM3H5?si=b22878d2e8ca4359 6. Iyana ft. Nkosazana Daughter x Makhadzi -Look At You. https://open.spotify.com/track/161lJwJrfP4LJN5JctKAjS?si=3cde6ff699144aae 7. Baaba J ft. Seyyoh - Runaway https://open.spotify.com/track/7MhwICkzmhuWWdsNn082c8?si=a6a1acf0d846494a 8. DJ Mic Smith ft. Shatta Wale x Medikal -Liquor https://open.spotify.com/track/5O1XtrzDEnzuCCVCub26fb?si=94ccd3fa315946c3 9. Ric Hassani ft. Neyo x Joeboy – Love and Romance II https://open.spotify.com/track/4VcP2qNfYjYibIHxMbYyV3?si=09097aeba14d46fe

Friday Night Groove
11-01-24 Friday Night Groove feat. OMO

Friday Night Groove

Play Episode Listen Later Nov 4, 2024 60:56


11-01-24 Recording of The Friday Night Groove on 88.3 WXOU FM, Auburn Hills, MI. In this episode, I invite Ann Arbor/Ypsilanti-based artist, producer, and frequency wizard, OMO for a special on-air live set. All original patterns and productions for an hour on FM airwaves.   For more on the artist visit: https://www.instagram.com/omo_auditorysound/   For more on the program visit: www.fridaynightgroove.com

AWR Yoruba / èdèe Yorùbá
TEACH YOUR CHILDREN WHEN YOU LIE DOWN

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 25, 2024 28:59


BEING BORN AGAIN

AWR Yoruba / èdèe Yorùbá
TEACH YOUR CHILDREN WHEN YOU ARE ON THE WAY

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 24, 2024 28:59


JESUS - THE JOY OF THE WORLD

AWR Yoruba / èdèe Yorùbá
TRAINING A CHILD IN GOD'S WAY TAKES TIME

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 23, 2024 28:59


ALL THINGS WORK TOGETHER

Conversations with Musicians, with Leah Roseman
Omo Bello: Celebrating African Art Song

Conversations with Musicians, with Leah Roseman

Play Episode Listen Later Oct 21, 2024 100:42


Omo Bello is an acclaimed French-Nigerian operatic soprano, and in this episode we are focussing on her newly-released album “African Art Song” on Somm recordings with pianist Rebeca Omordia. Many of you heard my episode this past summer with pianist and curator of the African Concert Series, Rebeca Omordia, and I'll be linking that episode below for you. Omo talked to me about overcoming shyness and stage fright, her childhood and university years in Lagos, Nigeria, and some of her mentors including Grace Bumbry and Thomas Quasthoff. I was fascinated to gain insights from her life as an opera singer, and to learn about many of the composers from Africa and the African diaspora featured on this wonderful album, including Ayo Bankole, Fred Onovwerosuoke, Ishaya Yarison, Christian Onyeji and Shirley Thompson. Like all my episodes, you can watch this on my YouTube channel or listen to the podcast on all the podcast platforms, and I've also linked the transcript to my website: https://www.leahroseman.com/episodes/omo-bello Episode with Rebeca Omordia: https://www.leahroseman.com/episodes/rebeca-omordia-african-pianism African Art Song album: https://somm-recordings.com/recording/african-art-song/ Omo Bello website: http://www.omobello.com/about.html Omo Bello instagram: https://www.instagram.com/omo_bello Merch store to support this series: https://www.leahroseman.com/beautiful-shirts-and-more Buy me a coffee? https://ko-fi.com/leahroseman Newsletter sign-up: https://mailchi.mp/ebed4a237788/podcast-newsletter Catalog of Episodes: https://www.leahroseman.com/about Linktree Social Media: https://linktr.ee/leahroseman photo: Vincent Pontet Timestamps: (00:00) Intro (02:53) African Art Song album with Rebeca Omordia (09:12) Ayo Bankole (10:40) Ayo Bankole's Adura fun Alafia (Prayer for Peace) (14:22) Ayo Bankole (17:00) Omo's childhood and university years in Lagos (32:22) Fred Onovwerosuoke, cultural context to interpret this music (39:13) excerpt of “Ngulu” by Fred Onovwerosuoke (40:11) the voice as instrument (44:49) other episodes you may like, and different ways to support this series (45:33) Grace Bumbry (53:44) Shirley Thompson (58:15) excerpt from Shirley Thompson's "Psalm to Windrush” (59:44) Omo Bello Music Foundation in Nigeria (01:07:47) Ishaya Yarison (01:10:26) excerpt from Ishaya Yarison's Ku zo, mu raira waƙa (01:11:54) Christian Onyeji, African Art Song album themes (01:15:34) Giri Giri by Christian Onyeji (01:17:31) percussionist Richard Olatunde Baker on the album, transmitting oral tradition of the music (01:20:46) challenges in music education in France (01:28:17) Thomas Quasthoff, Des Knaben Wunderhorn album (01:34:21) challenges and joys of an opera singer

AWR Yoruba / èdèe Yorùbá
ESSENTIALS A PARENT WHO WANTS TO TRAIN A CHILD IN GOD'S WAY MUST FOLLOW

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 21, 2024 28:59


DECIDE WHOM YOU WILL SERVE

AWR Yoruba / èdèe Yorùbá
CARING FOR OUR CHILDREN TAKES WISDOM

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 20, 2024 28:59


CHOOSE WHOM YOU WILL SERVE

AWR Yoruba / èdèe Yorùbá
CARING FOR OUR CHILDREN IS A DUTY

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 18, 2024 29:00


THE TRUTH FOR THIS TIME

AWR Yoruba / èdèe Yorùbá
CARE FOR YOUR CHILDREN IN SPIRIT, HOME TRAINING AND BODY

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 11, 2024 28:59


REPENTANCE AND BAPTISM

AWR Yoruba / èdèe Yorùbá
OTITO NIPA ITOJU AWON OMO WA - ASE LATI ODO OLORUN NI

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 9, 2024 28:59


REPENTANCE AND REBIRTH

Creator to Creator's
Creator to Creators S6 Ep 52 E'Major

Creator to Creator's

Play Episode Listen Later Oct 8, 2024 31:30


Apple Music | Spotify | YouTube | Instagram | TikTok | X. For listeners who have not yet been introduced to the pulse-inspiring vibe in the Afrobeat fusion of Nigerian American artist E'MAJOR, his new release “Bolo” will dance his music into their consciousness. “Bolo,” dropping September 20, is an upbeat track with a low-key, repetitive keyboard melody behind forceful, changeable beats carried by a variety of instruments. It can make you dance in your chair. “‘Bolo' is danceable music that just gets you to — you know — just dance,” he said. “It gets you to the right mood, it gets into your head.” That is exactly what it's intended to do, what it does, which is get into your head. The title, “Bolo,” is Nigerian slang for the brain. “In this case, it's talking about a girl who pleases you, makes you feel good. It's really sweet words, telling someone that you love — your wife, your girl — that whatever she's doing, how she's looking or how sexy, she's making your head go crazy.” The way you do me, I no know / Dey make me dancey awilo / I know I used to be a player / But I retire like papilo / … Omo you don scatter my bolo bolo / Scatter my bolo you do scatter my bolo bolo. He says his music style has been called eclectic because he has fertilized his Afrobeat-R&B roots with pop, hip-hop, highlife, contemporary gospel and others. In “Bolo,” he includes amapiano. “It has a lot of amapiano, a feel-good vibe,” he said. “It's a feel-good type of song, a blend of Afrobeat and amapiano, and you can hear a little bit of reggaeton in that, too, because that has been married with Afrobeat and amapiano.” E'MAJOR, a singer, songwriter and instrumentalist, lives in Minneapolis. He has years of experience as a lead vocalist for traveling bands, including an a cappella group, and later as a contemporary gospel artist. In 2019, he signed with Motion Major Records as an Afrofusion/R&B artist. The Afrobeat/R&B tag does not limit him. R&B is his main thing, he says, but his “eclectic” mix includes jazz, and one of his tracks, “Aladdin,” has a touch of what sounds very much like American folk banjo played on a native string instrument. “I am a collector when it comes to music, having been blessed to have all the backgrounds with highlife and R&B and reggae and all that. You can hear a little bit of all those genres in my music.” He says his lyrics “explore themes of love, struggle, joy and success,” but understanding the lyric message is not necessary to experience a pure enjoyment of the music. E'MAJOR's musical fusion is intensely pleasurable listening purely for the sound. The lyrics, sung in his expressive, wide-ranging tenor voice, become another musical instrument. The international variety of his musical influences and roots is also a factor in his ambition. “I'm looking to go worldwide. That's always the goal for me, and the music coming out from me has an international appeal. It's carefully crafted that way to appeal to everybody.” The beats are the baseline. “You can listen to the words, or maybe some hook in a song gets you,” he said, but then he explains that the music, the lyrics and the singing all go together to create the vibe of the beats, and it may be all Afrobeat or one or the other of his multiple fusions. “It's done that way to appeal to an international perspective, because you want your music to be felt around the world.” He wants, he said, to make good music that “touches life.” “I want to make people feel good and sing with a large, live band, performing in places like the O2 Arena.
That's really where my eye is.” He has an EP out (streaming on YouTube) and is working toward an album. First will come some more singles. He has “tons” of songs to work with, he said. “Bolo” is an introduction to him and his music. “I believe people are going to be able to feel the song. There's no place you can't play it — at the clubs or anywhere. I want to be able to get into all these different demographics, have them listen to the song and ask questions. ‘Who's this guy E'MAJOR? What's he all about?' This song is great, and my goal is to get the song in people's faces and start a conversation about who E'MAJOR is.” Become a supporter of this podcast: https://www.spreaker.com/podcast/creator-to-creators-with-meosha-bean--4460322/support.

AWR Yoruba / èdèe Yorùbá
TRUTHS WE MUST KNOW ABOUT CARE - PSALM OF DAVID 11:3

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 8, 2024 28:59


IT IS COSTLY, YET IT IS FREE

AWR Yoruba / èdèe Yorùbá
PARENTS WHO SUCCEEDED IN GUIDING THEIR CHILDREN - MOSES, ABRAHAM, SARAH, RECHAB

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 7, 2024 28:59


THE CREATOR AND WHAT HE CREATED

AWR Yoruba / èdèe Yorùbá
SINCE SOME PARENTS BEFORE US SUCCEEDED IN CARING FOR THEIR CHILDREN, WE TOO WILL SUCCEED

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 6, 2024 28:58


AN ANCIENT BOOK - A POINTER TO HOPE

AWR Yoruba / èdèe Yorùbá
TRAINING OUR CHILDREN IN THE WAY OF GOD

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 4, 2024 28:59


WALKING IN THE LIGHT

AWR Yoruba / èdèe Yorùbá
FAMILY, TEACH YOUR CHILD

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Oct 2, 2024 29:00


CHRIST OUR SAVIOUR

Ça peut vous arriver
LES ? DE LA CONSO - Where does the name of the laundry detergent "OMO" come from?

Ça peut vous arriver

Play Episode Listen Later Sep 15, 2024 2:40


OMO is a flagship laundry detergent brand. But do you know why it bears that name? Find out the answer with our mass-retail specialist, Olivier Dauvers! Every day, catch the best moments of the programme "Ça peut vous arriver" as a podcast, on RTL.fr and on all your favourite platforms.

Blah Blah Comics
Odd Man Out -Episode 7 - Young Animal

Blah Blah Comics

Play Episode Listen Later Sep 14, 2024 57:44


In 2016, DC Comics announced and started releasing a pop imprint curated by Gerard Way. This imprint would go on to have a massive impact on many lives, ours included. Odd Man Out wouldn't exist without Young Animal, so it's only right that for the one-year anniversary of OMO, we also celebrate the eight-year anniversary of YA.

A Mediocre Time with Tom and Dan
789 – Smeagol Your Bunghole

A Mediocre Time with Tom and Dan

Play Episode Listen Later Sep 11, 2024 117:05


If you're new to the live stream, welcome! We're so glad you're here. It feels like our community on YouTube and Twitch keeps growing, and we really appreciate all your support. Tom's off getting a haircut to look sharp for the cruise, and I'm heading home. Andrea's already dropped Dansby off at the kennel, and I'm feeling a little sad—he's my best boy. Off we go on the 2024 Tom & Dan Cruise! - Hollabachs German Restaurant promotion in Sanford - Hollabachs experiences: shot ski, family-friendly, Oktoberfest gear, German food - Uber Keller's tapas-style German cuisine and beer garden vibes - Tom and Dan cruise announcement, plus Friday free show scheduling mix-up - Streaming on Twitch and YouTube—don't forget to like and subscribe! - Shoutout to SJ for website help and first-time website ownership - Jeff from DevOps managing the website as a favor - Chris Kattan interview mishap at West End Live, ending up on Reddit - Discussion on how bad interviews can still make great content - Josh Wolf: fantastic guest and friend of the show - Brendan O'Connor from "Bungalower and the Bus," named a top podcast by Orlando Magazine - Chris Kattan interview details, including his neck injury and memoir - Jokes about Tom's wife misspelling their son's name on the cruise documents - Teasing about leaving their son behind on the cruise - Announcement: studio building sold, potential new studio locations - Joking about a studio move to Seth's dojo or the "Triple Nipple" - Cappy's subs song to support the business and lease issues - Interview with Cappy's executive chef, and jokes about the trailer setup - Fake attack journalism idea to save Cappy's, plus a parody song - Brendan's northern Ontario trip: skinny dipping, wildlife encounters, and stargazing - Buzzard story, Sandhill crane rescue, and reflecting on good intentions - Brendan's remote Grindr experience and his pigeon art project in downtown Orlando - Mall trips with Tom's kids, the decline of mall stores, and Radio Shack nostalgia - Local bar and restaurant shoutouts: Aylstone, Current Seafood, Will's Pub, and more - Brendan's role as a travel writer and tourism board collaborations - Brendan's stalker Craig Youngworth and his scam activities - Discussion on political scam texts and elderly being targeted - Old men falling for younger women scams and awkward massages - Listener voicemails: Gothapotamus, high school insecurities, and music preferences - Awkward massage stories on cruises and travel - Fine dining trend: Omo, Soseki, and nostalgia for TGI Fridays - Upcoming events for Brendan, including puff and paddle with Green Dragon dispensary - Closing remarks: BDM show and more cruise stories next week ### **Connect & Follow:** - [Website](https://tomanddan.com/) - [Twitter](https://twitter.com/tomanddanlive) - [Facebook](https://facebook.com/amediocretime) - [Instagram](https://instagram.com/tomanddanlive) **Listen & Laugh:** - [Apple Podcasts: A Mediocre Time](https://podcasts.apple.com/us/podcast/a-mediocre-time/id334142682) - [Google Podcasts: A Mediocre Time](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2FtZWRpb2NyZXRpbWUvcG9kY2FzdC54bWw) - [TuneIn: A Mediocre Time](https://tunein.com/podcasts/Comedy/A-Mediocre-Time-p364156/) **Corporate Comedy:** - [Apple Podcasts: A Corporate Time](https://podcasts.apple.com/us/podcast/a-corporate-time/id975258990) - [Google Podcasts: A Corporate Time](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Fjb3Jwb3JhdGV0aW1lL3BvZGNhc3QueG1s) - [TuneIn: A Corporate 
Time](https://tunein.com/podcasts/Comedy/A-Corporate-Time-p1038501/) **Exclusive Content:** - [Join BDM](https://tomanddan.com/registration) **Merchandise:** - [Shop Tom & Dan](https://tomanddan.myshopify.com/)

Old Man Orange
Scarface 1932 Vintage Cinema Review - Old Man Orange Podcast 608

Old Man Orange

Play Episode Listen Later Aug 20, 2024 86:25


Today I am joined again by a classic guest from back in the podcast day, Brent from his own movie-fueled show, Home Video Hustle, to talk all about the original 1932 Scarface film. You probably know the 1983 Al Pacino, Oliver Stone, and Brian De Palma Miami gangster classic of the same name, but not many have seen the Prohibition-era original, produced by Howard Hughes and directed by Howard Hawks, starring Paul Muni with Ann Dvorak, Osgood Perkins, and Boris Karloff. We go back in time, almost a hundred years, to a pre-Hays Code film of action-packed violence, scenery-chewing acting, and a solid story that always holds up fine. We get distracted along the way talking old-school WWF and WCW, Resident Evil, couch co-op games, and a ton on just movies in general - some that relate, like the Scarface video game on Wii, PS2, and Xbox, and then some just off-the-rails tangents to no man's land. All in good fun. So come on by and join us on another adventure of the OMO Podcast. Be sure to take a listen to the Home Video Hustle Podcast from Brent. It's in a very similar vein to OMO and Via VHS, from a man who loves movies almost more than we do. - https://www.youtube.com/channel/UCfN67zqLBcbJNJw1cHI0Hlw Old Man Orange is Spencer Scott Holmes & Ryan Dunigan - 2024 - "Young Adults, Old Man Attitude. Talking retro games, classic films and comic good times with a crisp of Orange taste." - www.OldManOrange.com Our link tree has all the places one could go for our podcasts like Old Man Orange, Via VHS, and more of our radio-filled adventures. Plus Pizza Boyz Comics, the sitcom-styled, retro-fueled indie series from Spencer Scott Holmes, in physical and digital reading forms. Then it's topped out nicely with our old videos, animations, and other experiments over the years, too, for your amusement. - https://linktr.ee/OldManOrange I also have my new workout and strength motivation book, "Pull-Ups For Life", up on Amazon Kindle and included in the Unlimited Membership too. Link in the link tree above, or you can look it up on Kindle. Support the show the easy and simple way, by using one of our Amazon links to make your purchases. It doesn't cost you a penny extra but sends a little something our way. Thanks! Scarface 1932 - https://amzn.to/4fLCaDb

雪球·财经有深度
2592. Understanding this week's bond-market slump through the central bank's Q2 monetary policy report

雪球·财经有深度

Play Episode Listen Later Aug 11, 2024 6:22


Welcome to "Finance in Depth", produced by Xueqiu - China's leading integrated wealth-management platform combining investment discussion and trading, where smart investors gather. Today's piece, "Understanding this week's bond-market slump through the central bank's Q2 monetary policy report", comes from 表舅是养基大户. The "discipline storm" in the bond market continues. Since the central bank upgraded its toolkit by learning on the job, its operations have become increasingly free-handed. On one hand, through administrative means: the market is saying that after the central bank instructed the four big state banks to sell bonds, those banks asked brokers for the names of the ultimate buyers, so government-bond trading has effectively become "real-name". On the other hand, the central bank keeps squeezing the softest targets, concentrating its selling in the 5-7 year segment and dragging yields up across the curve. Today 5-7 year government bond yields rose sharply again, by 4-5.5bps, pulling 10-30 year yields up by around 2.5bps; over the week, the active 7-year bond is up 15bps from its low, the 10-year up 10bps and the 30-year up 8bps. At the same time, with the month just beginning, the central bank tightened funding appropriately, draining several hundred billion yuan net this week, and overnight money was 10bps more expensive today. With this package of measures against the bond rally, it achieved its initial aim. Beyond the obvious motive of establishing credibility, and thereby lowering the future cost of policy transmission, this round of lifting and correcting long-end rates has two deeper goals: one political (the exchange rate and policy headroom) and one about the public (risk in the bond market and in asset-management products). What is the political goal? To build policy headroom for the future. Some thought that once the yuan broke through 7.1 the currency pressure was gone and the bond market's seal was lifted; that is wishful thinking. This round of short-term yuan appreciation was driven by external factors and is not sustainable - today it is heading back towards 7.2. With the US election approaching and plenty of uncertainty, later policy room must be preserved. What about the public? People keep saying every bond sell-off is a buying opportunity, and I agree that the long-run centre of bond yields may still move lower. But if expectations adjust too slowly while yields fall too fast, is that really a good thing? If long-end yields dropped to 2.0% and retail investors piled in with a trillion yuan, and then a genuine reversal came, wouldn't they all be buried? Haven't we learned enough from the wealth-management pain of late 2022? This evening the central bank released its Q2 monetary policy report, and both points show up in it. First, the political reason for disciplining the bond market - building future policy headroom - is ultimately about relieving exchange-rate pressure. On the surface, after the yuan appreciated sharply alongside the yen and broke 7.1, it promptly fell for three straight sessions; offshore yuan is back around 7.17, 2-year and 10-year US Treasury yields have risen quickly in recent days, the China-US rate gap has widened again, and the pressure on the exchange rate has not gone away. The central bank's concern about the exchange rate is evident throughout the Q2 report. The exchange rate reflects the relative strength of two currencies, and dollar flows come from two main sources: the willingness of companies and individuals who earn dollars abroad to convert them, and foreign capital flows. 1. On conversion willingness: every month this year the settlement-and-sales balance has been in deficit. Exporters have earned plenty of dollars but are reluctant to convert, either betting the dollar will keep appreciating or fearing the yuan will keep weakening; the report mentions this repeatedly. This is one of the main sources of depreciation pressure. 2. On foreign investment: the report notes that "in the first half of the year, actual utilised foreign investment nationwide was 498.9 billion yuan, down 29.1% year on year". Even that figure may be flattered: some localities, facing hard targets for attracting foreign investment, have firms route domestic money offshore and bring it back in as "foreign" investment. This is the second source of depreciation pressure. 3. A more direct data point: northbound flows. Foreign money buying A-shares, whether via QFII or Stock Connect, effectively converts dollars into yuan to invest in yuan assets, so net selling is another drag on the exchange rate. Taken together, the pressure on the exchange rate is coming from every direction. Raising companies' and individuals' willingness to convert and attracting foreign inflows is a long process, and one the central bank cannot really drive directly; it can mostly only respond passively, and responding passively requires extra policy headroom. Keeping medium and long-term yields at a certain absolute level, so that there are enough tools if US rates turn upward again, is something the central bank has to weigh. Of course, the central bank's other role here is to keep liquidity ample so as to support growth and credit demand; only a better economy can turn around expectations and capital flows. That is also why it has anchored the policy rate to the OMO rate, cut the OMO rate, and kept liquidity ample all year. After all, when money is already plentiful and nobody borrows, that is largely out of its hands. This is the first underlying reason for the central bank pushing medium and long-term yields higher. The second reason - the public - is to stop retail investors from stumbling again in fixed-income products: by increasing the volatility of fixed-income wealth-management products, it reduces their unnatural attractiveness and weakens trend-following flows. Three taps on the brakes do less damage than one emergency stop. In sum, these are the two deeper reasons behind the central bank's bond-market "discipline": the political one, building future policy headroom to relieve exchange-rate pressure, and the public-focused one, preventing retail investors from taking another fall in fixed-income products. This phase of pressure on the bond market is designed to produce three small corrections rather than one big one like late 2022, which is better for market stability.

Clase Básica
#159 Dirty Laundry Special

Clase Básica

Play Episode Listen Later Aug 9, 2024 67:38


As the basic adage goes: if you don't wash your dirty laundry at home, the whole neighbourhood finds out. In this gossip special we catch you up on what happened at Charli XCX's birthday, the (apparently) imminent divorce of Jennifer Lopez and Ben Affleck, and the nominees and non-nominees for the upcoming VMAs. Don't get caught with stains! And laugh at everyone else in peace in this dirty-laundry special with Omo! New episode now available on Spotify and @emisorpodcasting

AWR Yoruba / èdèe Yorùbá
THE EFFECT OF A LACK OF SELF-DISCIPLINE ON CORRECTING OUR CHILDREN

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Aug 8, 2024 28:58


DO NOT LET YOUR LIGHT BECOME DARKNESS

AWR Yoruba / èdèe Yorùbá
HOW PURPOSE AND ITS FULFILMENT ARE RUINED IN CHILDREN'S LIVES

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Aug 1, 2024 28:59


DO NOT LET YOUR LIGHT BECOME DARKNESS

AWR Yoruba / èdèe Yorùbá
HONOURING GOD IN HIS HOLY HOUSE

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Jul 27, 2024 28:58


GUIDING OUR CHILDREN IN GOD'S WAY

The Jim Colbert Show
Bob is On Fryer!

The Jim Colbert Show

Play Episode Listen Later Jul 26, 2024 157:08


Friday – Bob Frier is our guest cohost while Deb is out. We talk Olympics, smells of Las Vegas, Halloween Horror Nights, and sunsets. In Prime Time Kitchen, Orlando Weekly restaurant critic Faiyaz Kara reviews Omo by Jont and covers some restaurant closings. Jack's Olympic Update. Plus, WOKE News, Sink or Sail, Embers Only, JCS Trivia & You Heard it Here First.

AWR Yoruba / èdèe Yorùbá
ENCOURAGING THE CHILDREN

AWR Yoruba / èdèe Yorùbá

Play Episode Listen Later Jul 19, 2024 28:59


BLESSED ARE THE MEEK

Arkivo de 3ZZZ Radio en Esperanto
Broadcast of 24 June

Arkivo de 3ZZZ Radio en Esperanto

Play Episode Listen Later Jun 24, 2024 59:54


Song: "La vivon varmigas la sun'" ("The sun warms life") by the group Strika Tango, from the CD Civilizacio. Reading: 1) Heather, from the magazine Esperanto: "Afriko, longa historio" ("Africa, a long history") by Mireille Grosjean. 2) The message from UEA to the United Nations on the occasion of Refugee Day. Song: "Al Durrati" from the CD ĴoMo friponas. Reading: […]

Wild with Sarah Wilson
AMA: How do you manage information overload PLUS should white women activists get out of the arena?

Wild with Sarah Wilson

Play Episode Listen Later May 2, 2024 28:54


On today's AMA, I share my hacks and my "mindset" for sifting through, retaining and managing all the data inflow (amino supplements, tilting, biting off more than I can chew as a tactic and advice gleaned from systems thinkers). And I wade into when to speak out and when not to wade into a rally and take the mic (and the responsibility that comes with being a privileged white woman who looks as tame as a mum from an OMO commercial). SHOW NOTES: Join the conversation over on Substack. Subscribe and post your own Ask Me Anything here. -- If you need to know a bit more about me… head to my "about" page. For more such conversations subscribe to my Substack newsletter, it's where I interact the most! Get your copy of my book, This One Wild and Precious Life. Let's connect on Instagram and WeAre8. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Top 5 Research Trends + OpenAI Sora, Google Gemini, Groq Math (Jan-Feb 2024 Audio Recap) + Latent Space Anniversary with Lindy.ai, RWKV, Pixee, Julius.ai, Listener Q&A!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 9, 2024 108:52


We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy, send any questions about Speaker CFPs and Sponsor Guides you have!Alessio is now hiring engineers for a new startup he is incubating at Decibel: Ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc). Reach out to him for more!Thanks for all the love on the Four Wars episode! We're excited to develop this new “swyx & Alessio rapid-fire thru a bunch of things” format with you, and feedback is welcome. Jan 2024 RecapThe first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024:Feb 2024 RecapThe second half catches you up on everything that was topical in Feb, including:* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan * Google Gemini Pro 1.5 - 1m Long Context, Video Understanding* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math)* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)* Grimes' poetic take: Art for no one, by no one* F*** you, show me the promptLatent Space AnniversaryPlease also read Alessio's longform reflections on One Year of Latent Space!We launched the podcast 1 year ago with Logan from OpenAI:and also held an incredible demo day that got covered in The Information:Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!) LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs.The winners were Pixee and RWKV (that's Eugene from our pod!):And finally, your cohosts got cake!We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:* Balázs Némethi* Sylvia Tong* RJ Honicky* Jan ZhengOur birthday wishes for the super loyal fans reading this - tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend!As always, feedback is welcome. Timestamps* [00:03:02] Top Five LLM Directions* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)* [00:23:33] Wildcards: Text Diffusion, RALM/Retro* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)* [00:28:26] Wildcard: Model Merging (mergekit)* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)* [00:33:18] OpenAI Sora and why everyone underestimated videogen* [00:36:18] Does Sora have a World Model? 
Yann LeCun vs Jim Fan* [00:42:33] Groq Math* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take* [00:58:39] F*** you, show me the prompt* [01:02:43] Send us your suggestions pls* [01:04:50] Latent Space Anniversary* [01:04:50] Lindy.ai - Agent Platform* [01:06:40] RWKV - Beyond Transformers* [01:15:00] Pixee - Automated Security* [01:19:30] Julius AI - Competing with Code Interpreter* [01:25:03] Latent Space Listeners* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)* [01:31:23] Listener 3 - RJ (Developers building Community & Content)* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)Transcript[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.[00:00:55] AI Charlie: Watch out and take care.[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO of Residence at Decibel Partners, and we're back with a monthly recap with my co host[00:01:06] swyx: Swyx. The reception was very positive for the first one, I think people have requested this and no surprise that I think they want to hear us more applying on issues and maybe drop some alpha along the way I'm not sure how much alpha we have to drop, this month in February was a very, very heavy month, we also did not do one specifically for January, so I think we're just going to do a two in one, because we're recording this on the first of March.[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the four wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state of the art LLMs. Four, five,[00:01:42] swyx: and now we have to do six, right? Yeah.[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do[00:01:52] swyx: one each.[00:01:53] swyx: So the context to this stuff. is one, I noticed that just the test of time concept from NeurIPS and just in general as a life philosophy I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, like, everyone's excited about this thing yesterday, and then now nobody's talking about it.[00:02:13] swyx: So, yeah. It's more important, or better use of time, to spend things, spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars. 
Like, what is the themes that keep coming back because they are limited resources that everybody's fighting over.[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more proMECEng than others, because there's all sorts of papers published every single day, and there's no organization. Telling you, like, this one's more important than the other one apart from, you know, Hacker News votes and Twitter likes and whatever.[00:02:51] swyx: And obviously you want to get in a little bit earlier than Something where, you know, the test of time is counted by sort of reference citations.[00:02:59] The Five Research Directions[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list in the sense that I am not the guy saying that Mamba is like the future and, and so maybe that's controversial.[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter and on in discussing The thesis that, you know, Code Interpreter is GPT 4. 5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of instead of stuffing everything in a prompt, You do like sort of multi turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a lang chain is supposed to be.[00:04:15] swyx: I do think that maybe SGLang from ElemSys is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one liner, it's very, very clean code. I highly recommend people look at that. I'm surprised it hasn't caught on more, but I think it will. It's weird that something like a DSPy is more hyped than a Shilang.[00:04:36] swyx: Because it, you know, it maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain y and long inference type approaches. But basically, the reason that the basic fundamental insight is that the only, like, there are only a few dimensions we can scale LLMs. So, let's say in like 2020, no, let's say in like 2018, 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.[00:05:03] swyx: 20, we were And we scaled that up to 175 billion parameters for GPT 3. And we did some work on scaling laws, which we also talked about in our talk. So the datasets 101 episode where we're like, okay, like we, we think like the right number is 300 billion tokens to, to train 175 billion parameters and then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, like, you know, I think we think the optimal.[00:05:28] swyx: compute optimal ratio is 20 tokens per parameter. And now, of course, with LLAMA and the sort of super LLAMA scaling laws, we have 200 times and often 2, 000 times tokens to parameters. 
So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, like, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.[00:06:20] swyx: Like, what else is left? Like, what's the low hanging fruit? And it, and it's, like, blindingly obvious that the remaining low hanging fruit is inference time. So, like, we have scaled training time. We can probably scale more, those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe, like, a good run of a large model is three months.[00:06:40] swyx: We can scale that to three years. But like, can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling. It's just, we're just like running out there. But in terms of the amount of time that we spend inferencing, like everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on what how you're taking token by token, or, you know, entire phrase.[00:07:04] swyx: But We can scale that to hours, days, months of inference and see what we get. And I think that's really proMECEng.[00:07:11] Alessio: Yeah, we'll have Mike from Broadway back on the podcast. But I tried their product and their reports take about 10 minutes to generate instead of like just in real time. I think to me the most interesting thing about long inference is like, You're shifting the cost to the customer depending on how much they care about the end result.[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of like, yeah, training this for three years, I'll still train it for three months and then I'll tell you, you know, I'll teach you how to like make it run for 10 minutes to get a better result.[00:07:52] Alessio: So you're kind of like parallelizing like the improvement of the LLM. Oh yeah, you can even[00:07:57] swyx: parallelize that, yeah, too.[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about, you know, State of the art and the absolute, you know, it's more about state of the art for my application, for my use case.[00:08:09] Alessio: And I think we're getting to the point where like most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, like people are not really doing a lot of work in that space, so yeah, excited to see more.[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as paper. So all these directions are kind of guided by what happened in January. That was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it. 
Which is which I came to regret come February 15th, but in January also, you know, there was also the alpha geometry paper, which I kind of put in this sort of long inference bucket, because it solves like, you know, more than 100 step math olympiad geometry problems at a human gold medalist level and that also involves planning, right?[00:08:59] swyx: So like, if you want to scale inference, you can't scale it blindly, because just, Autoregressive token by token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing and what everyone is doing, including maybe what we think QSTAR might be, is some form of search and planning.[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic. So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to like store these plans and then reuse them for most people.[00:09:44] Alessio: You know, like, I'm curious if there's going to be. Some paper or like some work there on like making it better because, yeah, we don't[00:09:52] swyx: really have This is your your pet topic of NPM for[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.[00:10:02] Alessio: You know I think, I mean, obviously the Voyager paper is like the most basic example where like, now their artifact is like the best planning to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful[00:10:18] swyx: tasks.[00:10:19] swyx: For plans, I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations heavy business and I could definitely, I definitely propose some version of that. And it's just, you know, hard to execute or expensive to execute.[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know, And I think that that will probably be the main hurdle for any, any sort of library or package manager for planning. But there should be a meta plan of how to plan.[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people when they have sort of these meta prompting strategies of like, I'm not prescribing you the prompt. I'm just saying that here are the like, Fill in the lines or like the mad libs of how to prompts. First you have the roleplay, then you have the intention, then you have like do something, then you have the don't something and then you have the my grandmother is dying, please do this.[00:11:19] swyx: So the meta plan you could, you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the, the prompting libraries. You know, both 9chain and Llama Index have, like, hubs that you can sort of pull off the shelf. 
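The "mad libs of how to prompt" idea sketched above, roleplay, then intention, then a do, a don't, and an emotional appeal, is easy to make concrete. A minimal sketch, with slot names and example values invented for illustration rather than taken from any particular prompting library:

```python
# A "meta plan" prompt template: fill in the slots, then test several variants at once.
META_PROMPT = (
    "{roleplay}\n"
    "{intention}\n"
    "Do: {do}\n"
    "Don't: {dont}\n"
    "{appeal}\n\n"
    "Task: {task}"
)

variants = [
    dict(roleplay="You are a senior data engineer.",
         intention="Your goal is to produce a correct, minimal answer.",
         do="show your reasoning step by step",
         dont="invent columns that are not in the schema",
         appeal="This is very important to my career."),
    dict(roleplay="You are a terse code reviewer.",
         intention="Your goal is to find the one critical bug.",
         do="quote the offending line",
         dont="rewrite the whole file",
         appeal="Take a deep breath and work through this carefully."),
]

prompts = [META_PROMPT.format(task="Explain this SQL query.", **v) for v in variants]
for p in prompts:
    print(p, "\n---")
```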
I don't think they're very successful because people like to write their own.[00:11:36] swyx: Yeah,[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, the way, you know, I think I, I feel like I should do one of these memes where it's like, Oh, like I used to call it, you know, R L A I F, and now I call it synthetic data, and then people are interested.[00:11:54] swyx: But there's gotta be older versions of what synthetic data really is because I'm sure, you know if you've been in this field long enough, There's just different buzzwords that the industry condenses on. Anyway, the insight that I think is relatively new that why people are excited about it now and why it's proMECEng now is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.[00:12:22] swyx: For all of 2023, when people say synthetic data, they really kind of mean generate a whole bunch of data from GPT 4 and then train an open source model on it. Hello to our friends at News Research. That's what News Harmony says. They're very, very open about that. I think they have said that they're trying to migrate away from that.[00:12:40] swyx: But it is explicitly against OpenAI Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for, for doing exactly that. So so, so synthetic data that is not a form of model distillation is the hot thing right now, that you can bootstrap better LLM performance from the same LLM, which is very interesting.[00:13:03] swyx: A variant of this is RLAIF, where you have a, where you have a sort of a constitutional model, or, you know, some, some kind of judge model That is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model. Or a good sort of inductive bias or whatever that, you know, term of art is. And that is strongest in math and science math and code, where you can verify what's right and what's wrong.[00:13:44] swyx: And so the REST EM paper from DeepMind explored that. Very well, it's just the most obvious thing like and then and then once you get out of that domain of like things where you can generate You can arbitrarily generate like a whole bunch of stuff and verify if they're correct and therefore they're they're correct synthetic data to train on Once you get into more sort of fuzzy topics, then it's then it's a bit less clear So I think that the the papers that drove this understanding There are two big ones and then one smaller one One was wrap like rephrasing the web from from Apple where they basically rephrased all of the C4 data set with Mistral and it be trained on that instead of C4.[00:14:23] swyx: And so new C4 trained much faster and cheaper than old C, than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing data sets and just do that because that seems like a pure win. Obviously we have to study, like, what the trade offs are.[00:14:42] swyx: I, I imagine there are trade offs. So I was just thinking about this last night. 
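The WRAP recipe just described, rewrite a raw corpus with a model and then train on the rewrite, is mechanically simple. A minimal sketch, where the prompt wording is invented and the `rephrase` callable is a placeholder for whatever LLM you have access to, not the paper's code:

```python
# WRAP-style synthetic data: rewrite raw web text into cleaner prose, keep both copies.
from typing import Callable, Iterable

REPHRASE_PROMPT = (
    "Rewrite the following web text as clear, well-written prose, "
    "preserving all factual content:\n\n{doc}"
)

def rephrase_corpus(docs: Iterable[str], rephrase: Callable[[str], str]) -> list[dict]:
    """Return records pairing each raw document with its model rephrasing."""
    records = []
    for doc in docs:
        records.append({
            "raw": doc,  # keep the original alongside the rewrite, typos and all
            "synthetic": rephrase(REPHRASE_PROMPT.format(doc=doc)),
        })
    return records

if __name__ == "__main__":
    # Stand-in for a real model call, just to show the shape of the pipeline.
    fake_llm = lambda prompt: prompt.split("\n\n", 1)[1].capitalize()
    print(rephrase_corpus(["teh quick brown fox jumpd over the lazy dog"], fake_llm))
```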
If you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore you'll be like, once the model that's trained on synthetic data encounters the first typo, they'll be like, what is this?[00:15:01] swyx: I've never seen this before. So they have no association or correction as to like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That's really remains to be seen, I think. I don't think that the Apple people export[00:15:15] Alessio: that. Yeah, isn't that the whole, Mode collapse thing, if we do more and more of this at the end of the day.[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think this is a meta paper on self rewarding language models. That everyone is very interested in. Another paper was also SPIN. These are all things we covered in the the Latent Space Paper Club.[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's any much else in terms, so and then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have, like, everyone is OpenAI is paying Reddit 60 million dollars a year for their user generated data.[00:15:56] swyx: Google, right?[00:15:57] Alessio: Not OpenAI.[00:15:59] swyx: Is it Google? I don't[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's[00:16:04] swyx: for sure. Yes, that is, yeah, yeah, and then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting. Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.[00:16:21] Alessio: Not enough to get the data,[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's not a doubt that it doesn't work. I think it's a doubt that there's, what the ceiling is, which is the mode collapse thing.[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by like, I don't know, 30 50 percent good, but not game[00:16:51] Alessio: changing. And most of the synthetic data stuff, it's reinforcement learning on a pre trained model. People are not really doing pre training on fully synthetic data, like, large enough scale.[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre trained synthetic data, pre trained scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard, so all of these, like smaller Directions,[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, like, Let's say, you have pre, you have, You've scraped all the data on the internet that you think is useful.[00:17:25] swyx: Seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral, Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data maybe, but maybe that only gets you to like 30, 40 trillion. 
Like where, where is the extra alpha?[00:17:43] swyx: And maybe extra alpha is just train more on the same tokens. Which is exactly what Omo did, like Nathan Lambert, AI2, After, just after he did the interview with us, they released Omo. So, it's unfortunate that we didn't get to talk much about it. But Omo actually started doing 1. 5 epochs on every, on all data.[00:18:00] swyx: And the data ablation paper that I covered in Europe's says that, you know, you don't like, don't really start to tap out of like, the alpha or the sort of improved loss that you get from data all the way until four epochs. And so I'm just like, okay, like, why do we all agree that one epoch is all you need?[00:18:17] swyx: It seems like to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, you know, For improvement in results that we really like, we're fine on overtraining on things intentionally. So, I think that's an interesting direction that I don't see people exploring enough.[00:18:36] swyx: And the more I see papers coming out Stretching beyond the one epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?[00:18:51] swyx: Like, that's not a valid reason, that's not science. I[00:18:54] Alessio: wonder if, you know, Matt is going to do it.[00:18:57] Alessio: I heard LamaTree, they want to do a 100 billion parameters model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.[00:19:14] swyx: Yeah, and so the updates that we got on Lambda 3 so far is apparently that because of the Gemini news that we'll talk about later they're pushing it back on the release.[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing. Politics testing.[00:19:28] Alessio: Well, our episode with Sumit will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)[00:19:38] Alessio: Alternative architectures. Well, shout out to our WKV who won one of the prizes at our Final Frontiers event last week.[00:19:47] Alessio: We talked about Mamba and Strapain on the Together episode. A lot of, yeah, monarch mixers. I feel like Together, It's like the strong Stanford Hazy Research Partnership, because Chris Ray is one of the co founders. So they kind of have a, I feel like they're going to be the ones that have one of the state of the art models alongside maybe RWKB.[00:20:08] Alessio: I haven't seen as many independent. People working on this thing, like Monarch Mixer, yeah, Manbuster, Payena, all of these are together related. Nobody understands the math. They got all the gigabrains, they got 3DAO, they got all these folks in there, like, working on all of this.[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes like, yeah, we don't need it. Yeah.[00:20:44] Alessio: No, that's the risk. So, yeah. 
I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as like in the alt architectures, just because of Zora.[00:20:55] swyx: One thing, yeah, so, so, you know, this came from the Jan recap, which, and diffusion transformers were not really a discussion, and then, obviously, they blow up in February. Yeah. I don't think they're, it's a mixed architecture in the same way that Stripe Tiena is mixed there's just different layers taking different approaches.[00:21:13] swyx: Also I think another one that I maybe didn't call out here, I think because it happened in February, was hourglass diffusion from stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that, I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale, it seems like diffusion transformers is going to be good for anything generative, you know, multi modal.[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wild card for this category. Yeah, I mean, I think I still hold out hope for let's just call it sub quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept that People always say, oh, like, transformers don't scale because attention is quadratic in the sequence length.[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you multiply, when you, when you, when you jump up in terms of the, the model size in GPT 4 from like, you know, 38k to like 32k, you don't also get like a 16 times increase in your, in your performance.[00:22:23] swyx: And this is also why you don't get like a million times increase in your, in your latency when you throw a million tokens into Gemini. Like people have figured out tricks around it or it's just not that significant as a term, as a part of the overall compute. So there's a lot of challenges to this thing working.[00:22:43] swyx: It's really interesting how like, how hyped people are about this versus I don't know if it works. You know, it's exactly gonna, gonna work. And then there's also this, this idea of retention over long context. Like, even though you have context utilization, like, the amount of, the amount you can remember is interesting.[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of, like, RNN ish in the sense that they have, like, a hidden memory and sort of limited hidden memory that they will forget things. So, for all these reasons, Gemini 1. 5, which we still haven't covered, is very interesting because Gemini magically has fixed all these problems with perfect haystack recall and reasonable latency and cost.[00:23:29] Wildcards: Text Diffusion, RALM/Retro[00:23:29] swyx: So that's super interesting. So the wildcard I put in here if you want to go to that. I put two actually. One is text diffusion. I think I'm still very influenced by my meeting with a mid journey person who said they were working on text diffusion. I think it would be a very, very different paradigm for, for text generation, reasoning, plan generation if we can get diffusion to work.[00:23:51] swyx: For text. 
And then the second one is Dowie Aquila's contextual AI, which is working on retrieval augmented language models, where it kind of puts RAG inside of the language model instead of outside.[00:24:02] Alessio: Yeah, there's a paper called Retro that covers some of this. I think that's an interesting thing. I think the The challenge, well not the challenge, what they need to figure out is like how do you keep the rag piece always up to date constantly, you know, I feel like the models, you put all this work into pre training them, but then at least you have a fixed artifact.[00:24:22] Alessio: These architectures are like constant work needs to be done on them and they can drift even just based on the rag data instead of the model itself. Yeah,[00:24:30] swyx: I was in a panel with one of the investors in contextual and the guy, the way that guy pitched it, I didn't agree with. He was like, this will solve hallucination.[00:24:38] Alessio: That's what everybody says. We solve[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, author, architectures, then we got mixture of experts. I think we covered a lot of, a lot of times.[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?[00:25:00] swyx: DeepSeq MOE, which was released in January. Everyone who is interested in MOEs should read that paper, because it's significant for two reasons. One three reasons. One, it had, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT 4 for Mixtral, you know, that seems to be the favorite architecture, but these guys pushed it to 64 experts, and each of them smaller than the other.[00:25:26] swyx: But then they also had the second idea, which is that it is They had two, one to two always on experts for common knowledge and that's like a very compelling concept that you would not route to all the experts all the time and make them, you know, switch to everything. You would have some always on experts.[00:25:41] swyx: I think that's interesting on both the inference side and the training side for for memory retention. And yeah, they, they, they, the, the, the, the results that they published, which actually excluded, Mixed draw, which is interesting. The results that they published showed a significant performance jump versus all the other sort of open source models at the same parameter count.[00:26:01] swyx: So like this may be a better way to do MOEs that are, that is about to get picked up. And so that, that is interesting for the third reason, which is this is the first time a new idea from China. has infiltrated the West. It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.[00:26:18] swyx: Maybe in the embedding space. But the I think DCM we, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund is somehow, you know, doing groundbreaking research on MOEs. So, so, I classified this as a medium potential because I think that it is a sort of like a one off benefit.[00:26:37] swyx: You can Add to any, any base model to like make the MOE version of it, you get a bump and then that's it. 
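A toy version of the DeepSeekMoE pattern described above, many small routed experts plus one or two always-on shared experts, fits in a short PyTorch module. This is only an illustration of the routing idea with made-up sizes, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoEWithSharedExperts(nn.Module):
    """Toy MoE layer: top-k routing over many small experts plus always-on shared experts."""
    def __init__(self, d_model=256, d_hidden=128, n_routed=64, n_shared=2, top_k=6):
        super().__init__()
        make_expert = lambda: nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
        self.routed = nn.ModuleList(make_expert() for _ in range(n_routed))
        self.shared = nn.ModuleList(make_expert() for _ in range(n_shared))
        self.router = nn.Linear(d_model, n_routed)
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, d_model)
        shared_out = sum(e(x) for e in self.shared)  # shared experts see every token
        scores = self.router(x)                      # (tokens, n_routed)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        routed_rows = []
        for t in range(x.size(0)):                   # naive per-token loop; fine for a toy
            row = sum(w * self.routed[e](x[t])
                      for w, e in zip(weights[t], idx[t].tolist()))
            routed_rows.append(row)
        return shared_out + torch.stack(routed_rows)

x = torch.randn(4, 256)
print(TinyMoEWithSharedExperts()(x).shape)   # torch.Size([4, 256])
```

The shared experts run on every token while the router picks only `top_k` of the 64 small experts per token, which mirrors the fine-grained-plus-always-on split described in the conversation above.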
So, yeah,[00:26:45] Alessio: I saw Samba Nova, which is like another inference company. They released this MOE model called Samba 1, which is like a 1 trillion parameters. But they're actually MOE auto open source models.[00:26:56] Alessio: So it's like, they just, they just clustered them all together. So I think people. Sometimes I think MOE is like you just train a bunch of small models or like smaller models and put them together. But there's also people just taking, you know, Mistral plus Clip plus, you know, Deepcoder and like put them all together.[00:27:15] Alessio: And then you have a MOE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state of the art, you know, Clip, state of the art text generation. And then you have a MOE architecture that brings them all together.[00:27:31] swyx: I'm thrown off by your addition of the word clip in there. Is that what? Yeah, that's[00:27:35] Alessio: what they said. Yeah, yeah. Okay. That's what they I just saw it yesterday. I was also like[00:27:40] swyx: scratching my head. And they did not use the word adapter. No. Because usually what people mean when they say, Oh, I add clip to a language model is adapter.[00:27:48] swyx: Let me look up the Which is what Lava did.[00:27:50] Alessio: The announcement again.[00:27:51] swyx: Stable diffusion. That's what they do. Yeah, it[00:27:54] Alessio: says among the models that are part of Samba 1 are Lama2, Mistral, DeepSigCoder, Falcon, Dplot, Clip, Lava. So they're just taking all these models and putting them in a MOE. Okay,[00:28:05] swyx: so a routing layer and then not jointly trained as much as a normal MOE would be.[00:28:12] swyx: Which is okay.[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article, but I'm interested to see how[00:28:20] Wildcard: Model Merging (mergekit)[00:28:20] swyx: it works. Yeah, so so the wildcard for this section, the MOE section is model merges, which has also come up as, as a very interesting phenomenon. The last time I talked to Jeremy Howard at the Olama meetup we called it model grafting or model stacking.[00:28:35] swyx: But I think the, the, the term that people are liking these days, the model merging, They're all, there's all different variations of merging. Merge types, and some of them are stacking, some of them are, are grafting. And, and so like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which have their specific, Plus and minuses, and we will merge them together in the hope that the, you know, the sum of the parts will, will be better than others.[00:28:58] swyx: And it seems like it seems like it's working. I don't really understand why it works apart from, like, I think it's a form of regularization. That if you merge weights together in like a smart strategy you, you, you get a, you get a, you get a less overfitting and more generalization, which is good for benchmarks, if you, if you're honest about your benchmarks.[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of like the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the Chinatalk podcast, like the guest podcast that we did with Chinatalk. 
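The "just adding weights together and dividing" form of merging that comes up next really is that simple in its uniform-averaging form. A sketch, assuming the checkpoints share an architecture; the file names are hypothetical, and real tools such as mergekit offer far fancier merge strategies:

```python
import torch

def average_checkpoints(paths):
    """Uniformly average the parameters of several same-architecture checkpoints."""
    state_dicts = [torch.load(p, map_location="cpu") for p in paths]
    merged = {}
    for key in state_dicts[0]:
        # Plain elementwise mean; no GPU needed, just tensor arithmetic on CPU.
        merged[key] = sum(sd[key].float() for sd in state_dicts) / len(state_dicts)
    return merged

# Hypothetical usage:
# merged = average_checkpoints(["model_a.pt", "model_b.pt", "model_c.pt"])
# torch.save(merged, "merged.pt")
```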
And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing like simple math, which is really interesting for the GPU ports.[00:29:42] Alessio: There's a lot of them.[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,[00:29:48] swyx: I think that I ki I had to feature this because the, one of the top news of January was that Gemini Pro beat GPT-4 turbo on LM sis for the number two slot to GPT-4. And everyone was very surprised. Like, how does Gemini do that?[00:30:06] swyx: Surprise, surprise, they added Google search. Mm-hmm to the results. So it became an online quote unquote online LLM and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table stakes features after you pre train something.[00:30:21] swyx: So after you pre train something, you should have the chat tuned version of it, or the instruct tuned version of it, however you choose to call it. You should have the JSON and function calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all like table stakes variants, that you should do when you offer a base LLM, or you train a base LLM.[00:30:44] swyx: And I think online is just like, There, it's important. I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to offer that search needs. And it's kind of like, they're just necessary parts of a system. When you have RAG for internal knowledge, and then you have, you know, Online search for external knowledge, like things that you don't know yet?[00:31:06] swyx: Mm-Hmm. . And it seems like it's, it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I, I think it has some, some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has a, has had online LMS for three months now and it performs, doesn't perform great.[00:31:25] swyx: Mm-Hmm. on, on lms, it's like number 30 or something. So it's like, okay. You know, like. It's, it's, it helps, but it doesn't give you a giant, giant boost. I[00:31:34] Alessio: feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to like state of the art, right? It's like state of the art for who and for what.[00:31:45] Alessio: It's really, I think online LLMs are going to be, State of the art for, you know, news related activity that you need to do. Like, you're like, you know, social media, right? It's like, you want to have all the latest stuff, but coding, science,[00:32:01] swyx: Yeah, but I think. Sometimes you don't know what is news, what is news affecting.[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making that might affect your results. Like, what if, like, just putting things on, being connected online means that you get to invalidate your knowledge. And when you're just using offline LLM, like it's never invalidated.[00:32:27] swyx: I[00:32:28] Alessio: agree, but I think going back to your point of like the standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe AI research direction, you know, and it's like, all the recent news are about this thing. 
So the LLM like focus on answering, bring it up, you know, these things.[00:32:50] swyx: Yeah, so yeah, I think, I think it's interesting, but I don't know if I can, I bet heavily on this.[00:32:56] Alessio: Cool. Was there one that you forgot to put, or, or like a, a new direction? Yeah,[00:33:01] swyx: so, so this brings us into sort of February. ish.[00:33:05] OpenAI Sora and why everyone underestimated videogen[00:33:05] swyx: So like I published this in like 15 came with Sora. And so like the one thing I did not mention here was anything about multimodality.[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle. And, and my cop out is that I focused this piece or this research direction piece on LLMs because LLMs are the source of like AGI, quote unquote AGI. Everything else is kind of like. You know, related to that, like, generative, like, just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.[00:33:49] swyx: And so I was just kind of like trying to focus on like what is going to get us like superhuman reasoning that we can rely on to build agents that automate our lives and blah, blah, blah, you know, give us this utopian future. But I do think that I, everybody underestimated the, the sheer importance and cultural human impact of Sora.[00:34:10] swyx: And you know, really actually good text to video. Yeah. Yeah.[00:34:14] Alessio: And I saw Jim Fan at a, at a very good tweet about why it's so impressive. And I think when you have somebody leading the embodied research at NVIDIA and he said that something is impressive, you should probably listen. So yeah, there's basically like, I think you, you mentioned like impacting the world, you know, that we live in.[00:34:33] Alessio: I think that's kind of like the key, right? It's like the LLMs don't have, a world model and Jan Lekon. He can come on the podcast and talk all about what he thinks of that. But I think SORA was like the first time where people like, Oh, okay, you're not statically putting pixels of water on the screen, which you can kind of like, you know, project without understanding the physics of it.[00:34:57] Alessio: Now you're like, you have to understand how the water splashes when you have things. And even if you just learned it by watching video and not by actually studying the physics, You still know it, you know, so I, I think that's like a direction that yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating, I think it always starts with generating, right?[00:35:19] Alessio: But like the interesting part is like understanding it. You know, it's like if you gave it, you know, there's the video of like the, the ship in the water that they generated with SORA, like if you gave it the video back and now it could tell you why the ship is like too rocky or like it could tell you why the ship is sinking, then that's like, you know, AGI for like all your rig deployments and like all this stuff, you know, so, but there's none, there's none of that yet, so.[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.[00:35:49] swyx: Yeah who knows, who knows. I'm talking with them about Dev Day as well. 
So I would say, like, the phrasing that Jim used, which resonated with me, he kind of called it a data driven world model. I somewhat agree with that.[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan[00:36:04] swyx: I am on more of a Yann LeCun side than I am on Jim's side, in the sense that I think that is the vision or the hope that these things can build world models. But you know, clearly even at the current SORA size, they don't have the idea of, you know, They don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear and chairs will appear and disappear.[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models in the sense of You know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails, that like, does not match reality, and therefore fails to generalize well.[00:36:50] swyx: And like, what scale of data do we need in order to world, learn world models from video? A lot. Yeah. So, so I, I And cautious about taking this interpretation too literally, obviously, you know, like, I get what he's going for, and he's like, obviously partially right, obviously, like, transformers and, and, you know, these, like, these sort of these, these neural networks are universal function approximators, theoretically could figure out world models, it's just like, how good are they, and how tolerant are we of hallucinations, we're not very tolerant, like, yeah, so It's, it's, it's gonna prior, it's gonna bias us for creating like very convincing things, but then not create like the, the, the useful role models that we want.[00:37:37] swyx: At the same time, what you just said, I think made me reflect a little bit like we just got done saying how important synthetic data is for Mm-Hmm. for training lms. And so like, if this is a way of, of synthetic, you know, vi video data for improving our video understanding. Then sure, by all means. Which we actually know, like, GPT 4, Vision, and Dolly were trained, kind of, co trained together.[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back, you have Sora, you go back in time, and Newton didn't figure out gravity yet. Would Sora help you figure it out?[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, Apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can like pick up things, like humans have a lot of intuition, but if you ask the average person, like the physics of like a fluid in a boat, they couldn't be able to tell you the physics, but they can like observe it, but humans can only observe this much, you know, versus like now you have these models to observe everything and then They generalize these things and maybe we can learn new things through the generalization that they pick up.[00:38:55] swyx: But again, And it might be more observant than us in some respects. In some ways we can scale it up a lot more than the number of physicists that we have available at Newton's time. So like, yeah, absolutely possible. That, that this can discover new science. 
I think we have a lot of work to do to formalize the science.[00:39:11] swyx: And then, I, I think the last part is you know, How much, how much do we cheat by gen, by generating data from Unreal Engine 5? Mm hmm. which is what a lot of people are speculating with very, very limited evidence that OpenAI did that. The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults.[00:39:37] swyx: of like, walking speed, and like, character choice, like, character creation choice. And I was like, okay, like, that's actually pretty convincing that they actually use Unreal Engine to bootstrap some synthetic data for this training set. Yeah,[00:39:52] Alessio: could very well be.[00:39:54] swyx: Because then you get the labels and the training side by side.[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of like video generation and space time transformer that also involves probably a lot of synthetic data as well. And so like, this is of a kind in the sense of like, oh, like, you know, really good generative video is here and It is not just like the one, two second clips that we saw from like other, other people and like, you know, Pika and all the other Runway are, are, are, you know, run Cristobal Valenzuela from Runway was like game on which like, okay, but like, let's see your response because we've heard a lot about Gen 1 and 2, but like, it's nothing on this level of Sora So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good.. It was like a request for startups. It was so good in like spelling out. Here are the individual industries that this can impact.[00:41:00] swyx: And anyone who, anyone who's like interested in generative video should look at that. But also be mindful that probably when OpenAI releases a Soa API, right? The you, the in these ways you can interact with it are very limited. Just like the ways you can interact with Dahlia very limited and someone is gonna have to make open SOA to[00:41:19] swyx: Mm-Hmm to, to, for you to create comfy UI pipelines.[00:41:24] Alessio: The stability folks said they wanna build an open. For a competitor, but yeah, stability. Their demo video, their demo video was like so underwhelming. It was just like two people sitting on the beach[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I, I think what is confusing a lot of people about stability is like they're, they're, they're pushing a lot of things in stable codes, stable l and stable video diffusion. But like, how much money do they have left? How many people do they have left?[00:41:51] swyx: Yeah. I have had like a really, Ima Imad spent two hours with me. Reassuring me things are great. And, and I'm like, I, I do, like, I do believe that they have really, really quality people. But it's just like, I, I also have a lot of very smart people on the other side telling me, like, Hey man, like, you know, don't don't put too much faith in this, in this thing.[00:42:11] swyx: So I don't know who to believe. Yeah.[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can. 
Yeah, Groq.[00:42:19] Groq Math[00:42:19] Alessio: We can[00:42:19] swyx: do a bit of Groq prep. We're, we're about to go to talk to Dylan Patel. Maybe, maybe it's the audio in here. I don't know. It depends what, what we get up to later. What, how, what do you as an investor think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It's actually, it was one of his, like, 20 percent projects. It's like, he was just on the side, dooby doo, created the TPU.[00:42:46] Alessio: But yeah, basically, Groq, they had this demo that went viral, where they were running Mistral at, like, 500 tokens a second, which is like, Fastest at anything that you have out there. The question, you know, it's all like, The memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what GRUK has built as far as the hardware goes.[00:43:11] Alessio: We're gonna, we're gonna put some of the notes from, from Dylan in here, but Basically the cost of the Groq system is like 30 times the cost of, of H100 equivalent. So, so[00:43:23] swyx: let me, I put some numbers because me and Dylan were like, I think the two people actually tried to do Groq math. Spreadsheet doors.[00:43:30] swyx: Spreadsheet doors. So, one that's, okay, oh boy so, so, equivalent H100 for Lama 2 is 300, 000. For a system of 8 cards. And for Groq it's 2. 3 million. Because you have to buy 576 Groq cards. So yeah, that, that just gives people an idea. So like if you deprecate both over a five year lifespan, per year you're deprecating 460K for Groq, and 60K a year for H100.[00:43:59] swyx: So like, Groqs are just way more expensive per model that you're, that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to[00:44:08] Alessio: cover that. I think one of the promises of Groq is like super high parallel inference on the same thing. So you're basically saying, okay, I'm putting on this upfront investment on the hardware, but then I get much better scaling once I have it installed.[00:44:24] Alessio: I think the big question is how much can you sustain the parallelism? You know, like if you get, if you're going to get 100% Utilization rate at all times on Groq, like, it's just much better, you know, because like at the end of the day, the tokens per second costs that you're getting is better than with the H100s, but if you get to like 50 percent utilization rate, you will be much better off running on NVIDIA.[00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably open AI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Dubai, in Qatar. He just gave a talk there yesterday. That I haven't listened to yet.[00:45:09] Alessio: I, I tweeted that he should come on the pod. He liked it. And then rock followed me on Twitter. I don't know if that means that they're interested, but[00:45:16] swyx: hopefully rock social media person is just very friendly. They, yeah. Hopefully[00:45:20] Alessio: we can get them. Yeah, we, we gonna get him. 
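The depreciation figures in the spreadsheet math above follow directly from the two system prices. A quick reproduction, using the rough estimates quoted in the conversation rather than any vendor pricing:

```python
# Rough capex comparison from the discussion above: 8x H100 vs 576 Groq cards for Llama 2.
h100_system_cost = 300_000       # ~$300K for an 8-card H100 system
groq_system_cost = 2_300_000     # ~$2.3M for the 576-card Groq deployment
years = 5                        # five-year depreciation horizon used above

h100_per_year = h100_system_cost / years   # 60_000
groq_per_year = groq_system_cost / years   # 460_000

print(f"H100: ${h100_per_year:,.0f}/yr, Groq: ${groq_per_year:,.0f}/yr "
      f"({groq_per_year / h100_per_year:.1f}x)")   # ~7.7x more per hosted model
```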
We[00:45:22] swyx: just call him out and, and so basically the, the key question is like, how sustainable is this and how much.[00:45:27] swyx: This is a loss leader the entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of 0. 27 per million tokens. This is the lowest that anyone has offered tokens as far as Mixtral or Lama2. This matches deep infra and, you know, I think, I think that's, that's, that's about it in terms of that, that, that low.[00:45:47] swyx: And we think the pro the break even for H100s is 50 cents. At a, at a normal utilization rate. To make this work, so in my spreadsheet I made this, made this work. You have to have like a parallelism of 500 requests all simultaneously. And you have, you have model bandwidth utilization of 80%.[00:46:06] swyx: Which is way high. I just gave them high marks for everything. Groq has two fundamental tech innovations that they hinge their hats on in terms of like, why we are better than everyone. You know, even though, like, it remains to be independently replicated. But one you know, they have this sort of the entire model on the chip idea, which is like, Okay, get rid of HBM.[00:46:30] swyx: And, like, put everything in SREM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, then you just save on that time and that's why they're faster. So, a lot of people buy that as, like, that's the reason that you're faster.[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, Speculative routing magic using compilers that they also attribute towards their higher utilization. So I give them 80 percent for that. And so that all that works out to like, okay, base costs, I think you can get down to like, maybe like 20 something cents per million tokens.[00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of fearful assumptions for this to work.[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.[00:47:16] swyx: So he was like completely opposite of me. He's like, they're just burning money. Which is great.[00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars[00:47:22] Alessio: Gemini, want to do a quick run through since this touches on all the four words.[00:47:28] swyx: Yeah, and I think this is the mark of a useful framework, that when a new thing comes along, you can break it down in terms of the four words and sort of slot it in or analyze it in those four frameworks, and have nothing left.[00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks. So, what is Gemini 1. 5 Pro? It is the newest model that came out one week after Gemini 1. 0. Which is very interesting.[00:48:01] swyx: They have not really commented on why. 
They released this the headline feature is that it has a 1 million token context window that is multi modal which means that you can put all sorts of video and audio And PDFs natively in there alongside of text and, you know, it's, it's at least 10 times longer than anything that OpenAI offers which is interesting.[00:48:20] swyx: So it's great for prototyping and it has interesting discussions on whether it kills RAG.[00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, Long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context. And RAG is better economics. But I think it all comes down to like how the price curves change, right?[00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today. In two years, three years, it's just gonna cost a lot less, you know?[00:49:02] Alessio: So now it's like, why would I use RAG and like go through all of that? It's interesting. I think RAG is better cutting edge economics for LLMs. I think large context will be better long tail economics when you factor in the build cost of like managing a RAG pipeline. But yeah, the recall was like the most interesting thing because we've seen the, you know, You know, in the haystack things in the past, but apparently they have 100 percent recall on anything across the context window.[00:49:28] Alessio: At least they say nobody has used it. No, people[00:49:30] swyx: have. Yeah so as far as, so, so what this needle in a haystack thing for people who aren't following as closely as us is that someone, I forget his name now someone created this needle in a haystack problem where you feed in a whole bunch of generated junk not junk, but just like, Generate a data and ask it to specifically retrieve something in that data, like one line in like a hundred thousand lines where it like has a specific fact and if it, if you get it, you're, you're good.[00:49:57] swyx: And then he moves the needle around, like, you know, does it, does, does your ability to retrieve that vary if I put it at the start versus put it in the middle, put it at the end? And then you generate this like really nice chart. That, that kind of shows like it's recallability of a model. And he did that for GPT and, and Anthropic and showed that Anthropic did really, really poorly.[00:50:15] swyx: And then Anthropic came back and said it was a skill issue, just add this like four, four magic words, and then, then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was, was that, yeah, we, we reproduced their, you know, haystack issue you know, test for Gemini, and it's good across all, all languages.[00:50:30] swyx: All the one million token window, which is very interesting because usually for typical context extension methods like rope or yarn or, you know, anything like that, or alibi, it's lossy like by design it's lossy, usually for conversations that's fine because we are lossy when we talk to people but for superhuman intelligence, perfect memory across Very, very long context.[00:50:51] swyx: It's very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this. 
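The needle-in-a-haystack test being described is easy to reproduce for any model you can call. A minimal sketch of just the prompt construction, with filler text, needle, and question invented for illustration; the model call and the answer checking are left out:

```python
FILLER = "The sky was a pleasant shade of blue and the grass was green. "
NEEDLE = "The secret passphrase for the vault is 'tangerine-42'. "
QUESTION = "What is the secret passphrase for the vault?"

def build_haystack(total_sentences: int, depth: float) -> str:
    """Place the needle at a fractional depth (0.0 = start, 1.0 = end) of the filler."""
    sentences = [FILLER] * total_sentences
    position = int(depth * total_sentences)
    sentences.insert(position, NEEDLE)
    return "".join(sentences)

# Sweep the needle position as in the chart described above; for a real test you would
# send each prompt to the model and record whether the passphrase comes back.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = build_haystack(total_sentences=2000, depth=depth) + "\n\n" + QUESTION
    print(depth, len(prompt), "characters")
```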
So what you do is you upload, let's say, all of Harry Potter and you change one fact in one sentence, somewhere in there, and you ask it to pick it up, and it does. So this is legit.[00:51:08] swyx: We don't super know how, because this is, like, because it doesn't, yes, it's slow to inference, but it's not slow enough that it's, like, running. Five different systems in the background without telling you. Right. So it's something, it's something interesting that they haven't fully disclosed yet. The open source community has centered on this ring attention paper, which is created by your friend Matei Zaharia, and a couple other people.[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand, like, why, you know, doing, calculating, like, the fee for networking and attention. In block wise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring of Tension is really focused on is basically infinite context.[00:51:59] swyx: They said it was good for like 10 to 100 million tokens. Which is, it's just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now, yes. Or, we still care as much about RAG, but like, now it's it's not important in prototyping.[00:52:21] swyx: And then, for data war I guess this is just part of the overall training dataset, but Google made a 60 million deal with Reddit and presumably they have deals with other companies. For the multi modality war, we can talk about the image generation, Crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section.[00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf. And it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.[00:53:04] swyx: Actually ties into the conversation that we had with David Luan from Adept. In a sense of like, okay what if video was the main modality instead of text as the input? What if, what if everything was video in, because that's how we work. We, our eyes don't actually read, don't actually like get input, our brains don't get inputs as characters.[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is kind of doing, which is driving by vision model, instead of driving by raw text understanding of the DOM. And, and I, I, in that, that episode, which we haven't released I made the analogy to like self-driving by lidar versus self-driving by camera.[00:53:52] swyx: Mm-Hmm. , right? Like, it's like, I think it, what Gemini and any other super long context that model that is multimodal unlocks is what if you just drive everything by video. Which is[00:54:03] Alessio: cool. Yeah, and that's Joseph from Roboflow. It's like anything that can be seen can be programmable with these models.[00:54:12] Alessio: You mean[00:54:12] swyx: the computer vision guy is bullish on computer vision?[00:54:18] Alessio: It's like the rag people. The rag people are bullish on rag and not a lot of context. I'm very surprised. The, the fine tuning people love fine tuning instead of few shot. Yeah. Yeah. The, yeah, the, that's that. 
Yeah, the, I, I think the ring attention thing, and it's how they did it, we don't know. And then they released the Gemma models, which are like a 2 billion and 7 billion open.[00:54:41] Alessio: Models, which people said are not, are not good based on my Twitter experience, which are the, the GPU poor crumbs. It's like, Hey, we did all this work for us because we're GPU rich and we're just going to run this whole thing. And

omo
Episode 64: 5 years in review

omo

Play Episode Listen Later Feb 14, 2024 56:36


Jerry and Rozie are joined by fellow Omo founder Chris Jacoby to talk about 5 years of Omo. Later, Gerard KilBride joins Brandon Godman to talk about bridges. Special Guests: Christopher Jacoby and Gerard KilBride.

Have A Sip
NSND Bạch Tuyết: Know these 2 things and you will never suffer - Have A Sip #157

Have A Sip

Play Episode Listen Later Jan 19, 2024 78:39


Although NSND Bạch Tuyết has already had some fascinating conversations across two of Vietcetera's podcast series, "Trăm Năm Sân Khấu" and "EduStation", this is the first time "Have A Sip" has had the chance to sit down with her. As one of the great elders of Vietnam's performing arts, Bạch Tuyết is not only a bright star of the Vietnamese stage but also a steady bridge between generations. Every time audiences meet her, they see someone youthful, fresh, and open-minded.

How do you stay open, keep learning, and hold on to hope and optimism? What are the laws of life? Why do we always feel that life is hard? Find out with NSND Bạch Tuyết and host Thùy Minh.

Don't forget that you can watch the video version of this podcast on YouTube, and read more interesting articles on the Vietcetera website.

—

Thank you to Omo for accompanying Have A Sip. No matter how difficult the past year has been, as long as we can keep rolling up our sleeves and getting our hands dirty through honest work, we still have the chance to change our circumstances and realize our dreams. Hold on to hope for the future! OMO - getting dirty sows hope.

#OMO #Lambangieohyvong #Tethyvong

See more at:
- Facebook: https://www.facebook.com/OMOVietnam
- Youtube: https://www.youtube.com/@OMOvietnam

---

If you have any feedback, suggestions, or would like to collaborate, you can email team@vietcetera.com

If you enjoy this episode, you can donate to Have A Sip at:
● Patreon: https://www.patreon.com/vietcetera
● Buy me a coffee: https://www.buymeacoffee.com/vietcetera