Podcasts about matryoshka

Russian nested doll

  • 153 podcasts
  • 693 episodes
  • 1h 5m average episode duration
  • 1 new episode per month
  • Latest episode: Jan 24, 2025

Latest podcast episodes about matryoshka

Villas Grace Church
Extraordinary Prayer pt.3 - John 17:20-26 - Matthew Niemier

Jan 24, 2025 · 27:46


Christians are like Russian Matryoshka dolls. What's a Matryoshka doll? It's a set of nested dolls in graduated sizes, each one fitting inside the next larger one. In our sermon "Extraordinary Prayer pt.3" we'll witness Jesus pray for us to be in Him, as He is in the Father. We'll also witness Jesus pray for our unity in Him, which reveals God's love to the unbelieving world.

ASMR by GentleWhispering
♥(*^.^) Russian Doll 'Matryoshka' Does Your MakeUp Best♥ ASMR Role Play / Soft Spoken / Ear-to-Ear

Nov 8, 2024 · 44:36


Hello :) Today Matryona "Matryoshka" will help you become a real Doll :D Hope you enjoy her sweet nature and slight accent as she applies her favorite cosmetics on you :) Thank you. I shall stop talking about myself in the 3rd person haha :) ♥ Please keep an open mind with this one - it's just a playful role play. I had so much fun making it, and even though it might be too much for some of you, I truly hope some could still find it relaxing in a way. Thank you for watching :* ♥ It's hard to give you an exact directory of this video as it has a lot of steps in it. :) sorry. #ASMR #GentleWhispering #relax 6/29/15 --- Support this podcast: https://podcasters.spotify.com/pod/show/maria-gentlewhispering/support

Doctor Who: Tin Dog Podcast
TDP 1318:13B. Doctor Who: The Fourth Doctor Adventures Series 13: Metamorphosis

Oct 29, 2024 · 11:32


From the recent past to the far future, the Doctor, Harry and Naomi find themselves battling foes old and new. From the malevolence of the Master to the traps of the Toymaker, danger lurks round every corner... and sometimes in the last shape you'd expect...

13.3 Matryoshka by Aurora Fearnley (2 parts)
When a strange force drags the TARDIS to Earth, it's clear that the Doctor, Harry and Naomi are up against an incredibly powerful being. And when they encounter Lord Pearson, inventor of games and toys, searching for his vanished daughter... it becomes clear exactly who that being might be. Hide and seek is one of the simplest games devised by man... but in the hands of the Toymaker... it may also become the deadliest.

13.4 The Caged Assassin by Matthew Sweet (2 parts)
It's very unusual to find a tiger in the TARDIS. It's even more unusual to find one heavily dosed in radiation. But this is far from the most unusual occurrence the Doctor, Naomi and Harry are going to encounter today. Because they are about to meet Charles Jamrach, supplier of exotic animals to the rich and royal, who is unaware that his famous menagerie conceals a deadly terror. A strange creature out for blood...

13.5 Metamorphosis by Lisa McMullin (2 parts)
The people of the planet Jaxus are vanishing. When the Doctor and his friends land, Naomi is snatched away by a mysterious fog and taken to a prison run by a very old foe of the Doctor's. The Master is here and he has, as ever, a sinister scheme underway... but neither Naomi nor Harry know who he is... Can the Doctor stop his plans and rescue the prisoners? Or will his companions inadvertently aid his enemy's plans?

**Please note: the collector's edition CD box set is strictly limited to 1,500 copies.**

Matryoshka of Lies
Matryoshka of Lies. Episode 6: Ending Empire feat. Casey Michel, Marci Shore and Oleksiy Radynski

Oct 18, 2024 · 20:05


This episode of Matryoshka of Lies uncovers Russia's brutal colonial history in Alaska, marked by massacres, enslavement, and resource extraction from Indigenous populations, as highlighted by journalist Casey Michel. Filmmaker Oleksiy Radynski urges Ukrainians to confront their complicity in Moscow's colonial past, emphasizing that dismantling the Russian Federation is crucial to ending this legacy. Radynski also highlights global dependence on fossil fuels from Russia's colonized territories, which fuels both empire and the climate crisis. Historian Marci Shore examines the role of Russian people in failing to resist expansionist aggression, while drawing hope from Ukraine's student-led movement to confront the past and build a better future.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika and Vlada Toporkova, produced by Alina Poliakova, mixed and sound design by Anastasiia Fedoskina, and co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

Scottish Poetry Library Podcast
Nothing But The Poem - Kathryn Bevis

Oct 15, 2024 · 13:20


“To make art out of something painful, uncertain or damaging is an act of real empowerment,” wrote Kathryn Bevis, who died in May 2024. Her first full-length poetry collection, The Butterfly House, was published two months earlier and tells the story of a life before and after a late-stage cancer diagnosis. The poems examine both life and death, encompassing experiences terrible and sublime. Her publishers Seren wrote in her obituary that she was "perhaps one of the finest poets of her generation... (who) captured hearts and minds with her innovative use of form, language and metaphor to describe everyday life, experiences of women and terminal illness. She had a skill for finding light in the dark, celebration in sadness, and joy in the smallest moments." Don Paterson described her as "a poet of real wisdom, compassion, and fearlessness." Sam Tongue took an immersive dive into two Kathryn Bevis poems, "My Cancer as a Ring-Tailed Lemur" and "Matryoshka". Find out what Sam - and the Friends of the SPL group - took from these poems in our Nothing But The Poem podcast.

Red Game Table
Matryoshka 4.6 - The Burn, pt. I

Sep 25, 2024 · 70:08


Division 3 works with counterparts in the DPRK to investigate sightings of what look like people made of ashes. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage, find Johnny's show Subversive History on his Linktree, and listen to Jeremy here. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone.

Matryoshka of Lies
Matryoshka of Lies. Episode 5: Russophobia feat. Diana T. Kudaibergen and Viktoriia Grivina

Sep 13, 2024 · 21:33


This episode explores the concept of "Russophobia" and how it is used to silence criticism and maintain control. Painting the victims as Russophobic has always been part of the crime, as vividly illustrated by Professor Timothy Snyder's 2023 testimony to the United Nations Security Council. Snyder, a leading historian on Eastern Europe, sheds light on the intersection of Russian genocidal rhetoric and claims of Russophobia.

Featuring Diana T. Kudaibergen, a prominent Qazaq researcher of colonialism and a sociologist at the University of Cambridge, who tackles the commemoration of the genocide in Qazaqstan and underscores the need to elevate indigenous voices in these discussions, and Viktoriia Grivina, a Ukrainian writer and cultural researcher from Kharkiv, who exposes Russia's colonial rhetoric, her home city being the victim of it.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika and Vlada Toporkova, produced by Alina Poliakova, mixed and sound design by Anastasiia Fedoskina, and co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

Jean & Mike Do The New York Times Crossword
Saturday, August 10, 2024 - MATRYOSHKA, hard to spell, harder to work into casual conversation

Aug 11, 2024 · 19:29


Today's crossword was Spencer Leach's fifth, and it is a gem. The clues are challenging, novel (there are 7 debuts) and delightfully deceptive. Definitely worth 5 squares on the JAMCR scale, for reasons ... that are thrashed out in today's episode. We are also proud to announce the winner of this week's JAMCOTWA (Jean And Mike Crossword Of The Week Award), so check that out as well!

Show note imagery: A person, in a BOLERO, throwing a bolo

Matryoshka of Lies
Matryoshka of Lies. Episode 4: Imperial Innocence feat. Botakoz Kassymbekova

Aug 9, 2024 · 20:02


This episode explores the concept of "Imperial Innocence" with Dr. Botakoz Kassymbekova, a prominent Qazaq thinker. We delve into how Russia perpetuates the image of a victimized nation to justify its history of brutal invasions and ongoing colonialism.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika, produced by Alina Poliakova, mixed and sound design by Anastasiia Fedoskina, co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

Red Game Table
Matryoshka 4.5 - The Sphere

Jul 22, 2024 · 104:04


Katya and Yahyo investigate multiple sightings of an unidentified flying object. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage and find Johnny's show Subversive History on his Linktree. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but can't pay anything, you are just as welcome to the book as anyone.

unbillable hours - a podcast about better professional services marketing

Q: "Hey Ash, Flo - would love to implement some of the value proposition stuff you keep talking about. But we're a 15-practice firm! Where should I even start?" A: "You start by creating multiple propositions. Here, let me show you..." *proceeds to pull out pins, red string, and a Matryoshka doll* Voices, production, etc. by Ash and Flo. Creative and design advice by @calmar.creativ Into, outro voiceover by @iamthedakota Music also by @iamthedakota Please find the shownotes to this episode on unbillable-hrs.com. Oh, and: Register for the Deltek webinar with Flo (July 18th, 8pm CET) using this link right here. Thanks!

Profiling Criminal Minds
Millennium 313-314: Antipas, Matryoshka

Jul 9, 2024 · 68:51


We feel gaslit by Millennium! It is now just an "Edgar suit".

Darker Shades of Black
Darker Shades of Black Ep46 (Troika Review)

Jul 9, 2024 · 100:41


In this review we learn about an alien ship that has emerged from a wormhole into our solar system. A group of cosmonauts are sent to investigate this mysterious vessel and discover what lies within the Matryoshka. We examine Alastair Reynolds' science fiction story Troika.

Super Chats
A-chan Left, Kizuna Ai Might Return, and Doki's Got New Duds - Super Chats Ep. 73

Jul 5, 2024 · 140:17


Lots of stuff to talk about this week; some of it great, some of it far from it. Kizuna AI is teasing a comeback, Doki's a bounty hunter, Mint's a gremlin, and all that's great! Meanwhile A-chan left Hololive and Brave Group had a data leak. Basically, there's lots of ups and downs in the world of Vtubing, so let's talk our way through all of it and celebrate what we can! Each week we aim to bring together the biggest events in Vtubing and talk about what's been going on. Stop by, hang out, and catch up with us!

Join this discord: https://discord.gg/wFMcTGHWGJ
Follow here for updates: https://twitter.com/SuperChatsPod
Shorts over here: https://www.tiktok.com/@superchatspod

00:00:00 Intro
00:01:48 Dokibird's New Skinsuit https://youtu.be/bPnnaM-Liz4
00:12:28 Mint Gone Gremlin Mode https://youtu.be/voYDEg85ksg
00:16:40 Globie Monetizations!
00:18:49 The Brave Group Leak https://bravegroup.co.jp/en/news/6479/
00:27:27 Vtuber Summer Slam https://store.gamersoutreach.org/
00:33:09 VSMP Minecraft Server
00:38:37 Bri's Donothon https://youtu.be/yXkWFPdIiJA
00:46:35 A-chan Left Hololive https://youtu.be/V-4PiHDX2Mg
00:54:48 Kizuna AI ...Returns? https://youtu.be/EEmgm61cQjE
01:00:01 MinikoMew https://www.twitch.tv/minikomew
01:05:43 Noel's New Outfit https://youtu.be/i7Zcu0FUI3I
01:08:02 Airi On Break https://www.youtube.com/@ChisakaAiri
01:09:18 Mio's Back! https://youtu.be/TRCwCJVeeMs
01:11:24 Suicide Squad Isekai Anime Has Started!
01:14:30 ENdless Visits America!
01:17:23 Muu Hit 30K Subs https://youtu.be/PN8ulV3yHDg
01:18:27 Punkalopi Gamersupps Merch https://gamersupps.gg/collections/punkalopi-collection
01:22:12 Omocat Advent Merch Released https://www.omocat-shop.com/collections/omocat-x-hololive-en
01:25:44 Ironmouse Hit 2 Million Followers https://www.twitch.tv/ironmouse
01:28:25 Shondo Returns on the 13th https://x.com/fallenshadow_YT
01:32:53 Give Up by Gawr Gura https://youtu.be/YjzQm_34aVw
01:34:54 Sora and Azki covered Matryoshka https://youtu.be/uxbspjmmIYA
01:36:09 Phase Origins covered Adabana Necromancy https://youtu.be/U7uJ3O8B9H8
01:38:38 Gigi Played Project Sekai https://youtu.be/lvS1AeFDho0
01:43:24 Gigi and Cecilia played A Way Out https://youtu.be/Qrs-TbHuoaA
01:47:30 Cecilia's Music Production Stream https://youtu.be/WZHEp917lyU
01:51:46 Raora Panthera Likes Drawing https://youtu.be/VfxI99eZIz4
01:52:51 Aura's Jackbox Collab https://www.twitch.tv/videos/2184847041
01:55:31 Mono Monet's DJ Set https://www.youtube.com/@MonoMonet
01:59:21 Phase Connect Clipping Contest Results Stream https://youtu.be/ebpVoTItXuM
02:02:46 Clara Karaoke'd with her Dad https://youtu.be/XnYkTaSYMi4
02:06:08 Shiina Made Katsudon https://youtu.be/YswbLe66tt8
02:08:33 Community and Shilling
02:18:26 Birfdays

Matryoshka of Lies
Matryoshka of Lies. Episode 3: Infiltration feat. Rory Finnin, Vitaly Chernetsky, Natalia Antelava and Romeo Kokriatski

Jul 5, 2024 · 20:30


This episode of Matryoshka of Lies exposes the brutal reality of Russian colonialism in Crimea (Qirim) and Georgia (Sakartvelo) and explores how Western academia, influenced by Russian narratives, has failed to recognize it. We delve into the reasons behind this blind spot, including the power imbalances in academia and the weaponization of disinformation.

Featuring:
- Dr. Rory Finnin, Professor of Ukrainian Studies at the University of Cambridge
- Dr. Vitaly Chernetsky, President of the Association for Slavic, East European, and Eurasian Studies
- Natalia Antelava, Georgian journalist and founder of Coda Story
- Romeo Kokriatski, Ukrainian journalist and Managing Editor at New Voice Ukraine

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika, produced by Alina Poliakova, mixed and sound design by Anastasiia Fedoskina, co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

Matryoshka of Lies
Matryoshka of Lies. Episode 2: Supremacy feat. Ewa Thompson and Oksana Zabuzhko

Jun 14, 2024 · 17:49


In this episode, we delve into how Russia weaponized its culture to justify its empire, rewriting history and erasing indigenous voices along the way.

Featuring insights from leading experts like Dr. Ewa Thompson, Ukrainian novelist Oksana Zabuzhko, and more, we challenge the conventional narrative surrounding Russian culture, revealing the colonial undercurrents that have fueled centuries of oppression and expansion.

Explore the cognitive dissonance surrounding "great Russian culture" and the role it plays in perpetuating colonialism. This episode is a must-listen for anyone seeking to understand the true nature of Russian imperialism and its impact on the world.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika, produced by Alina Poliakova, mixed and sound designed by Anastasiia Fedoskina, co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

Red Game Table
Matryoshka 4.4 - The Staircase

Jun 9, 2024 · 94:57


The team investigates a newly uncovered island in the Aral Sea which contains a stone staircase leading into the earth. [Note: Nikolai disappears in the middle of the session for a while but he does return by the end. We apologize for the inconvenience.] For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage and find Johnny's show Subversive History on his Linktree. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but can't pay anything, you are just as welcome to the book as anyone.

Accelerativ Thrust
130. Local Round Up 7! (Matryoshka Ghost, Grandfather Confusion, William Elliot Whitmore, Good Morning Midnight, Mad Dad Records)

Jun 4, 2024 · 69:41


Yee-haw! This week Cowboy Dan and Erik round up five local releases! We talk about music from Matryoshka Ghost, Grandfather Confusion, William Elliot Whitmore, Good Morning Midnight, and Mad Dad Records.

Matryoshka of Lies
Matryoshka of Lies. Episode 1: Confusion feat. Oleksandr Polianichev, Azamat Junisbai, and Botakoz Kassymbekova

May 31, 2024 · 18:30


This episode ventures into a chapter often missing from the mainstream narrative: Russia's influence in Africa and Qazaqstan. Joined by leading experts Dr. Botakoz Kassymbekova (University of Basel, Switzerland), Dr. Azamat Junisbai (Pitzer College, California), and Dr. Oleksandr Polianichev (Södertörns högskola, Sweden), host Maksym Eristavi dives deep to explore why understanding these untold stories is crucial to understanding Russia today.

Listen to personal stories and expert analyses that challenge the myths and reveal the true nature of Russian imperialism.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika, produced by Alina Poliakova, mixed and sound design by Dmytro Volkovinskyi and Anastasiia Fedoskina, co-produced and narrated by Maksym Eristavi.

Consider subscribing on a platform that is convenient for you: https://pod.link/1729375002

Support the journalism of Ukrainska Pravda. Learn how at https://www.pravda.com.ua/eng/

GPT Reviews
Google's Apology

May 31, 2024 · 14:50


Google's AI Overviews are improving to provide accurate and helpful information. Nvidia's new embedding model, NV-Embed-v1, ranks number one on the Massive Text Embedding Benchmark. Matryoshka Query Transformer (MQT) offers flexibility to Large Vision-Language Models (LVLMs) by encoding an image into a variable number of visual tokens during inference. Contextual Position Encoding (CoPE) improves the position encoding method in Large Language Models (LLMs) and solves tasks where popular position embeddings fail.

Contact: sergi@earkind.com

Timestamps:
00:34 Introduction
01:35 AI Overviews: About last week
03:58 Nvidia Releases Embedding Model NV-Embed-v1
04:53 Multi-camera YOLOv5 on Zynq UltraScale+ with Hailo-8 AI Acceleration
06:31 Fake sponsor
08:28 Matryoshka Query Transformer for Large Vision-Language Models
10:24 Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts
11:51 Contextual Position Encoding: Learning to Count What's Important
13:30 Outro
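For context on why these papers borrow the doll's name: "matryoshka" techniques train one representation so that any prefix of it is itself a usable, lower-capacity version - fewer embedding dimensions, or, in MQT's case, fewer visual tokens at inference time. A minimal sketch of the embedding flavor of the idea, with illustrative names only (this is not code from the MQT or NV-Embed papers):

```python
# Matryoshka-style nesting, sketched: one full-size embedding can be
# truncated to several smaller "dolls" that still work for retrieval.
import numpy as np

def nested_views(embedding: np.ndarray, dims=(64, 256, 768)) -> dict:
    """Return re-normalized prefixes of `embedding` at each requested size."""
    views = {}
    for d in dims:
        prefix = embedding[:d]
        views[d] = prefix / np.linalg.norm(prefix)  # unit norm for cosine search
    return views

# One 768-dim forward pass yields 256- and 64-dim views for free;
# the smaller views trade retrieval accuracy for speed and memory.
full = np.random.default_rng(0).normal(size=768)
for d, view in nested_views(full).items():
    print(d, view.shape)
```

MQT, as described in the episode, applies the same nesting trick to an LVLM's visual tokens, choosing at inference time how many to keep.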

Matryoshka of Lies
Matryoshka of Lies. Episode 0: Myth

May 17, 2024 · 10:50


In our season premiere, we challenge the propaganda narratives surrounding Russia. Ukrainian journalist and author Maksym Eristavi takes you on a journey to uncover the deeper story and expose serial imperial and colonial behavior. We'll meet Ukrainian human rights advocate Val Voshchevska and imperialism researcher Mariam Naiem, who share their personal experiences and the stories passed down through generations.

Dive into "Matryoshka of Lies" with Maksym Eristavi, author of the illustrated guidebook "Russian Colonialism 101," and Ukrainska Pravda. Unveil the hidden truths and discover the power of untold indigenous stories.

This show is written by Yev Kopiika, produced by Alina Poliakova, mixed and sound design by Dmytro Volkovinskyi and Anastasiia Fedoskina, co-produced and narrated by Maksym Eristavi.

Support the journalism of Ukrainska Pravda. Learn how at www.pravda.com.ua/eng

Red Game Table
Matryoshka 4.x (interlude) - The Spacewalk

Apr 13, 2024 · 75:25


The crew of Soviet space station Salyut 7 investigate an unexpected visitor. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage and find Johnny's show Subversive History on his Linktree. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone.  

Red Game Table
Matryoshka 4.2 - Cave Diver, pt. II

Mar 22, 2024 · 137:45


The team joins their KGB comrade to go into the caves underneath the cabin in the woods. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage, find Johnny's show Subversive History on his Linktree, and listen to Jeremy here. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Top 5 Research Trends + OpenAI Sora, Google Gemini, Groq Math (Jan-Feb 2024 Audio Recap) + Latent Space Anniversary with Lindy.ai, RWKV, Pixee, Julius.ai, Listener Q&A!

Mar 9, 2024 · 108:52


We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy - send any questions about Speaker CFPs and Sponsor Guides you have!

Alessio is now hiring engineers for a new startup he is incubating at Decibel: ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc). Reach out to him for more!

Thanks for all the love on the Four Wars episode! We're excited to develop this new “swyx & Alessio rapid-fire thru a bunch of things” format with you, and feedback is welcome.

Jan 2024 Recap

The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024.

Feb 2024 Recap

The second half catches you up on everything that was topical in Feb, including:

* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan
* Google Gemini Pro 1.5 - 1m Long Context, Video Understanding
* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math)
* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)
* Grimes' poetic take: Art for no one, by no one
* F*** you, show me the prompt

Latent Space Anniversary

Please also read Alessio's longform reflections on One Year of Latent Space!

We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information.

Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!) LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs. The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake!

We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:

* Balázs Némethi
* Sylvia Tong
* RJ Honicky
* Jan Zheng

Our birthday wishes for the super loyal fans reading this - tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend!

As always, feedback is welcome.

Timestamps

* [00:03:02] Top Five LLM Directions
* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)
* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)
* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)
* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)
* [00:23:33] Wildcards: Text Diffusion, RALM/Retro
* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)
* [00:28:26] Wildcard: Model Merging (mergekit)
* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)
* [00:33:18] OpenAI Sora and why everyone underestimated videogen
* [00:36:18] Does Sora have a World Model? Yann LeCun vs Jim Fan
* [00:42:33] Groq Math
* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars
* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take
* [00:58:39] F*** you, show me the prompt
* [01:02:43] Send us your suggestions pls
* [01:04:50] Latent Space Anniversary
* [01:04:50] Lindy.ai - Agent Platform
* [01:06:40] RWKV - Beyond Transformers
* [01:15:00] Pixee - Automated Security
* [01:19:30] Julius AI - Competing with Code Interpreter
* [01:25:03] Latent Space Listeners
* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club)
* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)
* [01:31:23] Listener 3 - RJ (Developers building Community & Content)
* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)

Transcript

[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.

[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.

[00:00:55] AI Charlie: Watch out and take care.

[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and we're back with a monthly recap with my co host

[00:01:06] swyx: Swyx. The reception was very positive for the first one. I think people have requested this, and no surprise that I think they want to hear us opining more on issues and maybe drop some alpha along the way. I'm not sure how much alpha we have to drop. This month in February was a very, very heavy month; we also did not do one specifically for January, so I think we're just going to do a two in one, because we're recording this on the first of March.

[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the four wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state of the art LLMs. Four, five,

[00:01:42] swyx: and now we have to do six, right? Yeah.

[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do

[00:01:52] swyx: one each.

[00:01:53] swyx: So the context to this stuff is: one, I noticed that just the test of time concept from NeurIPS, and just in general as a life philosophy, I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, like, everyone's excited about this thing yesterday, and then now nobody's talking about it.

[00:02:13] swyx: So, yeah. It's more important, or a better use of time, to spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars.
Like, what are the themes that keep coming back? Because they are limited resources that everybody's fighting over.

[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization telling you, like, this one's more important than the other one, apart from, you know, Hacker News votes and Twitter likes and whatever.

[00:02:51] swyx: And obviously you want to get in a little bit earlier than something where, you know, the test of time is counted by sort of reference citations.

[00:02:59] The Five Research Directions

[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.

[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.

[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list, in the sense that I am not the guy saying that Mamba is like the future and, and so maybe that's controversial.

[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)

[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter, in discussing the thesis that, you know, Code Interpreter is GPT 4.5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.

[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of instead of stuffing everything in a prompt, you do like sort of multi turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a LangChain is supposed to be.

[00:04:15] swyx: I do think that maybe SGLang from LMSYS is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one liner, it's very, very clean code. I highly recommend people look at that. I'm surprised it hasn't caught on more, but I think it will. It's weird that something like a DSPy is more hyped than an SGLang.

[00:04:36] swyx: Because it, you know, it maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain y and long inference type approaches. But basically, the basic fundamental insight is that there are only a few dimensions we can scale LLMs on. So, let's say in like 2020, no, let's say in like 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.

[00:05:03] swyx: And we scaled that up to 175 billion parameters for GPT 3. And we did some work on scaling laws, which we also talked about in our talk. So the Datasets 101 episode, where we're like, okay, like we, we think like the right number is 300 billion tokens to, to train 175 billion parameters, and then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, like, you know, we think the optimal

[00:05:28] swyx: compute optimal ratio is 20 tokens per parameter. And now, of course, with Llama and the sort of super Llama scaling laws, we have 200 times and often 2,000 times tokens to parameters. So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?

[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into, because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, like, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.

[00:06:20] swyx: Like, what else is left? Like, what's the low hanging fruit? And it, and it's, like, blindingly obvious that the remaining low hanging fruit is inference time. So, like, we have scaled training time. We can probably scale those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe, like, a good run of a large model is three months.

[00:06:40] swyx: We can scale that to three years. But like, can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling. It's just, we're just like running out there. But in terms of the amount of time that we spend inferencing, like everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on whether you're taking it token by token or, you know, an entire phrase.

[00:07:04] swyx: But we can scale that to hours, days, months of inference and see what we get. And I think that's really promising.

[00:07:11] Alessio: Yeah, we'll have Mike from BrightWave back on the podcast. But I tried their product and their reports take about 10 minutes to generate instead of like just in real time. I think to me the most interesting thing about long inference is like, you're shifting the cost to the customer depending on how much they care about the end result.

[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of like, yeah, training this for three years, I'll still train it for three months and then I'll tell you, you know, I'll teach you how to like make it run for 10 minutes to get a better result.

[00:07:52] Alessio: So you're kind of like parallelizing like the improvement of the LLM. Oh yeah, you can even

[00:07:57] swyx: parallelize that, yeah, too.

[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about, you know, state of the art and the absolute, you know, it's more about state of the art for my application, for my use case.

[00:08:09] Alessio: And I think we're getting to the point where like most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, like people are not really doing a lot of work in that space, so yeah, excited to see more.

[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as a paper. So all these directions are kind of guided by what happened in January. That was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it.
Which is something I came to regret come February 15th. But in January also, you know, there was also the AlphaGeometry paper, which I kind of put in this sort of long inference bucket, because it solves, like, you know, more than 100 step math olympiad geometry problems at a human gold medalist level, and that also involves planning, right?

[00:08:59] swyx: So like, if you want to scale inference, you can't scale it blindly, because just autoregressive token by token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing and what everyone is doing, including maybe what we think Q* might be, is some form of search and planning.

[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you

[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic. So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to like store these plans and then reuse them for most people.

[00:09:44] Alessio: You know, like, I'm curious if there's going to be some paper or like some work there on like making it better, because, yeah, we don't

[00:09:52] swyx: really have. This is your, your pet topic of NPM for

[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.

[00:10:02] Alessio: You know I think, I mean, obviously the Voyager paper is like the most basic example where like, now their artifact is like the best planning to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful

[00:10:18] swyx: tasks.

[00:10:19] swyx: For plans, I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations heavy business and I could definitely, I definitely propose some version of that. And it's just, you know, hard to execute or expensive to execute.

[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know, and I think that that will probably be the main hurdle for any, any sort of library or package manager for planning. But there should be a meta plan of how to plan.

[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people, when they have sort of these meta prompting strategies, are like, I'm not prescribing you the prompt. I'm just saying that here are the, like, fill in the lines or like the mad libs of how to prompt. First you have the roleplay, then you have the intention, then you have like do something, then you have the don't something, and then you have the my grandmother is dying, please do this.

[00:11:19] swyx: So the meta plan you could, you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the, the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf. I don't think they're very successful, because people like to write their own.

[00:11:36] swyx: Yeah,

[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)

[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic

[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, the way, you know, I think I, I feel like I should do one of these memes where it's like, oh, like I used to call it, you know, RLAIF, and now I call it synthetic data, and then people are interested.

[00:11:54] swyx: But there's gotta be older versions of what synthetic data really is, because I'm sure, you know, if you've been in this field long enough, there's just different buzzwords that the industry condenses on. Anyway, the insight that I think is relatively new, that why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.

[00:12:22] swyx: For all of 2023, when people say synthetic data, they really kind of mean generate a whole bunch of data from GPT 4 and then train an open source model on it. Hello to our friends at Nous Research. That's what Nous Hermes is. They're very, very open about that. I think they have said that they're trying to migrate away from that.

[00:12:40] swyx: But it is explicitly against OpenAI Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for, for doing exactly that. So, so, so synthetic data that is not a form of model distillation is the hot thing right now: that you can bootstrap better LLM performance from the same LLM, which is very interesting.

[00:13:03] swyx: A variant of this is RLAIF, where you have a, where you have a sort of a constitutional model, or, you know, some, some kind of judge model that is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.

[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model. Or a good sort of inductive bias or whatever that, you know, term of art is. And that is strongest in math and science, math and code, where you can verify what's right and what's wrong.

[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing, like, once you're in that domain of things where you can arbitrarily generate a whole bunch of stuff and verify if it's correct - and therefore it's correct synthetic data to train on - it works. Once you get into more sort of fuzzy topics, then it's, then it's a bit less clear. So I think that the, the papers that drove this understanding, there are two big ones and then one smaller one. One was WRAP, Rephrasing the Web, from Apple, where they basically rephrased all of the C4 data set with Mistral and trained on that instead of C4.

[00:14:23] swyx: And so new C4 trained much faster and cheaper than old C4, than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing data sets and just do that, because that seems like a pure win. Obviously we have to study, like, what the trade offs are.

[00:14:42] swyx: I, I imagine there are trade offs. So I was just thinking about this last night.
If you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore you'll be like, once the model that's trained on synthetic data encounters the first typo, they'll be like, what is this?

[00:15:01] swyx: I've never seen this before. So they have no association or correction as to like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think that the Apple people explored

[00:15:15] Alessio: that. Yeah, isn't that the whole, mode collapse thing, if we do more and more of this at the end of the day.

[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think there's the Meta paper on self rewarding language models that everyone is very interested in. Another paper was also SPIN. These are all things we covered in the Latent Space Paper Club.

[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's much else in terms, so and then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have: like, everyone is - OpenAI is paying Reddit 60 million dollars a year for their user generated data.

[00:15:56] swyx: Google, right?

[00:15:57] Alessio: Not OpenAI.

[00:15:59] swyx: Is it Google? I don't

[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's

[00:16:04] swyx: for sure. Yes, that is, yeah, yeah, and then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting. Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.

[00:16:21] Alessio: Not enough to get the data,

[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's not a doubt that it doesn't work. I think it's a doubt of what the ceiling is, which is the mode collapse thing.

[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by like, I don't know, 30-50 percent good, but not game

[00:16:51] Alessio: changing. And most of the synthetic data stuff, it's reinforcement learning on a pre trained model. People are not really doing pre training on fully synthetic data, like, large enough scale.

[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre trained synthetic data, pre trained scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard, so all of these, like smaller directions,

[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)

[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, like, let's say, you have pre, you have, you've scraped all the data on the internet that you think is useful.

[00:17:25] swyx: Seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral, Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data maybe, but maybe that only gets you to like 30, 40 trillion. Like where, where is the extra alpha?

[00:17:43] swyx: And maybe extra alpha is just train more on the same tokens. Which is exactly what OLMo did, like Nathan Lambert, AI2. Just after he did the interview with us, they released OLMo. So, it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on, on all data.

[00:18:00] swyx: And the data ablation paper that I covered at NeurIPS says that, you know, you don't really start to tap out of, like, the alpha or the sort of improved loss that you get from data all the way until four epochs. And so I'm just like, okay, like, why do we all agree that one epoch is all you need?

[00:18:17] swyx: It seems to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, you know, for improvement in results that we really like, we're fine on overtraining on things intentionally. So, I think that's an interesting direction that I don't see people exploring enough.

[00:18:36] swyx: And the more I see papers coming out stretching beyond the one epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute

[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?

[00:18:51] swyx: Like, that's not a valid reason, that's not science. I

[00:18:54] Alessio: wonder if, you know, Meta is going to do it.

[00:18:57] Alessio: I heard Llama 3, they want to do a 100 billion parameters model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.

[00:19:14] swyx: Yeah, and so the updates that we got on Llama 3 so far is apparently that because of the Gemini news that we'll talk about later, they're pushing back the release.

[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing. Politics testing.

[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.

[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)

[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.

[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixers. I feel like Together, it's like the strong Stanford Hazy Research partnership, because Chris Ré is one of the co founders. So they kind of have a, I feel like they're going to be the ones that have one of the state of the art models alongside maybe RWKV.

[00:20:08] Alessio: I haven't seen as many independent people working on this thing, like Monarch Mixer, yeah, Mamba, Hyena - all of these are Together related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there, like, working on all of this.

[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?

[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes like, yeah, we don't need it. Yeah.

[00:20:44] Alessio: No, that's the risk. So, yeah.
I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as, like, in the alt architectures, just because of Sora.

[00:20:55] swyx: One thing, yeah, so, so, you know, this came from the Jan recap, which, and diffusion transformers were not really a discussion, and then, obviously, they blow up in February. Yeah. I don't think they're, it's a mixed architecture in the same way that StripedHyena is mixed - there's just different layers taking different approaches.

[00:21:13] swyx: Also I think another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that, I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale. It seems like diffusion transformers is going to be good for anything generative, you know, multi modal.

[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wild card for this category. Yeah, I mean, I think I still hold out hope for, let's just call it, sub quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept that people always say, oh, like, transformers don't scale because attention is quadratic in the sequence length.

[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you multiply, when you, when you jump up in terms of the, the context size in GPT 4 from like, you know, 8k to like 32k, you don't also get like a 16 times increase in your, in your performance.

[00:22:23] swyx: And this is also why you don't get like a million times increase in your, in your latency when you throw a million tokens into Gemini. Like people have figured out tricks around it or it's just not that significant as a term, as a part of the overall compute. So there's a lot of challenges to this thing working.

[00:22:43] swyx: It's really interesting how, like, how hyped people are about this versus, I don't know if it works, you know, it's exactly gonna, gonna work. And then there's also this, this idea of retention over long context. Like, even though you have context utilization, like, the amount of, the amount you can remember is interesting.

[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of, like, RNN ish, in the sense that they have, like, a hidden memory and sort of limited hidden memory that they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting, because Gemini magically has fixed all these problems with perfect haystack recall and reasonable latency and cost.

[00:23:29] Wildcards: Text Diffusion, RALM/Retro

[00:23:29] swyx: So that's super interesting. So the wildcard I put in here, if you want to go to that. I put two actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for, for text generation, reasoning, plan generation if we can get diffusion to work.

[00:23:51] swyx: For text. And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval augmented language models, where it kind of puts RAG inside of the language model instead of outside.

[00:24:02] Alessio: Yeah, there's a paper called Retro that covers some of this. I think that's an interesting thing. I think the, the challenge, well not the challenge, what they need to figure out is like how do you keep the RAG piece always up to date constantly, you know. I feel like the models, you put all this work into pre training them, but then at least you have a fixed artifact.

[00:24:22] Alessio: These architectures are like, constant work needs to be done on them, and they can drift even just based on the RAG data instead of the model itself. Yeah,

[00:24:30] swyx: I was in a panel with one of the investors in Contextual, and the guy, the way that guy pitched it, I didn't agree with. He was like, this will solve hallucination.

[00:24:38] Alessio: That's what everybody says. We solve

[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,

[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures, then we got mixture of experts. I think we covered it a lot of, a lot of times.

[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)

[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?

[00:25:00] swyx: DeepSeekMoE, which was released in January. Everyone who is interested in MoEs should read that paper, because it's significant for two reasons. One - three reasons. One, it had, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT 4 and for Mixtral, you know, that seems to be the favorite architecture, but these guys pushed it to 64 experts, and each of them smaller than the other.

[00:25:26] swyx: But then they also had the second idea, which is that they had two, one to two always on experts for common knowledge, and that's like a very compelling concept - that you would not route to all the experts all the time and make them, you know, switch to everything. You would have some always on experts.

[00:25:41] swyx: I think that's interesting on both the inference side and the training side for, for memory retention. And yeah, they, they, they, the, the, the, the results that they published - which actually excluded Mixtral, which is interesting - the results that they published showed a significant performance jump versus all the other sort of open source models at the same parameter count.

[00:26:01] swyx: So like this may be a better way to do MoEs that is about to get picked up. And so that, that is interesting for the third reason, which is this is the first time a new idea from China has infiltrated the West. It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.

[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeekMoE, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund, is somehow, you know, doing groundbreaking research on MoEs. So, so, I classified this as a medium potential, because I think that it is a sort of like a one off benefit.

[00:26:37] swyx: You can add it to any, any base model to, like, make the MoE version of it, you get a bump and then that's it.
So, yeah.

[00:26:45] Alessio: I saw SambaNova, which is like another inference company. They released this MoE model called Samba-1, which is like 1 trillion parameters. But it's actually a MoE of open-source models. So it's like, they just clustered them all together. So I think people... sometimes I think MoE is like, you just train a bunch of smaller models and put them together. But there's also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder, and putting them all together.

[00:27:15] Alessio: And then you have a MoE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state-of-the-art, you know, CLIP, state-of-the-art text generation, and then you have a MoE architecture that brings them all together.

[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what...?

[00:27:35] Alessio: Yeah, that's what they said. Yeah, yeah. Okay. That's what they... I just saw it yesterday. I was also, like,

[00:27:40] swyx: scratching my head. And they did not use the word adapter. No. Because usually what people mean when they say, oh, I add CLIP to a language model, is an adapter.

[00:27:48] swyx: Let me look up the... Which is what LLaVA did.

[00:27:50] Alessio: The announcement again.

[00:27:51] swyx: Stable Diffusion. That's what they do. Yeah, it

[00:27:54] Alessio: says among the models that are part of Samba-1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MoE. Okay,

[00:28:05] swyx: so a routing layer, and then not jointly trained as much as a normal MoE would be.

[00:28:12] swyx: Which is okay.

[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article, but I'm interested to see how

[00:28:20] Wildcard: Model Merging (mergekit)

[00:28:20] swyx: it works. Yeah, so the wildcard for this section, the MoE section, is model merges, which has also come up as a very interesting phenomenon. The last time I talked to Jeremy Howard at the Ollama meetup, we called it model grafting or model stacking.

[00:28:35] swyx: But I think the term that people are liking these days is model merging. There's all different variations of merge types, and some of them are stacking, some of them are grafting. And so, like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which has their specific pluses and minuses, and we will merge them together in the hope that the sum of the parts will be better than others.

[00:28:58] swyx: And it seems like it's working. I don't really understand why it works, apart from, like, I think it's a form of regularization. That if you merge weights together in, like, a smart strategy, you get less overfitting and more generalization, which is good for benchmarks, if you're honest about your benchmarks.

[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of, like, the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the ChinaTalk podcast, like the guest podcast that we did with ChinaTalk. And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing, like, simple math, which is really interesting for the GPU poors.

[00:29:42] Alessio: There's a lot of them.
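Since the cheapness is the striking part, here is what the simplest merge method literally is: a minimal sketch of linear (weighted-average) merging. mergekit supports fancier variants like SLERP and TIES; the checkpoint names below are placeholders:

    import torch

    def linear_merge(state_dicts, weights=None):
        """Average same-architecture checkpoints parameter-by-parameter.
        CPU-only: nothing here but elementwise multiplies and adds."""
        n = len(state_dicts)
        weights = weights or [1.0 / n] * n
        return {key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
                for key in state_dicts[0]}

    # Hypothetical usage with two fine-tunes of the same base model:
    # merged = linear_merge([torch.load("ft_math.pt"), torch.load("ft_code.pt")],
    #                       weights=[0.6, 0.4])

Every parameter tensor must line up shape-for-shape, which is why this only works across fine-tunes of a common base architecture.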
[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)

[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,

[00:29:48] swyx: I think I had to feature this because one of the top news items of January was that Gemini Pro beat GPT-4 Turbo on LMSYS for the number two slot to GPT-4. And everyone was very surprised. Like, how does Gemini do that?

[00:30:06] swyx: Surprise, surprise, they added Google Search, mm-hmm, to the results. So it became an online, quote unquote, online LLM and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table-stakes features after you pre-train something.

[00:30:21] swyx: So after you pre-train something, you should have the chat-tuned version of it, or the instruct-tuned version of it, however you choose to call it. You should have the JSON and function-calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all, like, table-stakes variants that you should do when you offer a base LLM, or you train a base LLM.

[00:30:44] swyx: And I think online is just, like... it's important. I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to offer that search need. And it's kind of like, they're just necessary parts of a system. When you have RAG for internal knowledge, then you have, you know, online search for external knowledge, like things that you don't know yet.

[00:31:06] swyx: Mm-hmm. And it seems like it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I think it has some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has had online LLMs for three months now, and it doesn't perform great.

[00:31:25] swyx: Mm-hmm. On LMSYS, it's like number 30 or something. So it's like, okay, you know. It helps, but it doesn't give you a giant, giant boost.

[00:31:34] Alessio: I feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to, like, state of the art, right? It's like, state of the art for who, and for what?

[00:31:45] Alessio: It's really... I think online LLMs are going to be state of the art for, you know, news-related activity that you need to do. Like, you know, social media, right? It's like, you want to have all the latest stuff. But coding, science...

[00:32:01] swyx: Yeah, but I think sometimes you don't know what is news, what is news affecting.

[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making that might affect your results. Like, what if, like, just putting things on, being connected online means that you get to invalidate your knowledge? And when you're just using an offline LLM, like, it's never invalidated.

[00:32:28] Alessio: I agree, but I think going back to your point of, like, standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe AI research direction, you know, and it's like, all the recent news are about this thing.
So the LLM, like, focuses on answering, bringing up, you know, these things.

[00:32:50] swyx: Yeah, so yeah, I think it's interesting, but I don't know if I can bet heavily on this.

[00:32:56] Alessio: Cool. Was there one that you forgot to put, or, like, a new direction? Yeah,

[00:33:01] swyx: so this brings us into sort of February-ish.

[00:33:05] OpenAI Sora and why everyone underestimated videogen

[00:33:05] swyx: So, like, I published this, and then Feb 15 came with Sora. And so, like, the one thing I did not mention here was anything about multimodality.

[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle with it. And my cop-out is that I focused this piece, or this research directions piece, on LLMs, because LLMs are the source of, like, AGI, quote unquote AGI. Everything else is kind of, like, you know, related to that. Like, generative... just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.

[00:33:49] swyx: And so I was just kind of, like, trying to focus on what is going to get us, like, superhuman reasoning that we can rely on to build agents that automate our lives and blah, blah, blah, you know, give us this utopian future. But I do think that everybody underestimated the sheer importance and cultural human impact of Sora.

[00:34:10] swyx: And, you know, really actually good text-to-video. Yeah. Yeah.

[00:34:14] Alessio: And I saw Jim Fan had a very good tweet about why it's so impressive. And I think when you have somebody leading the embodied research at NVIDIA and he says that something is impressive, you should probably listen. So yeah, there's basically, like... I think you mentioned, like, impacting the world, you know, that we live in.

[00:34:33] Alessio: I think that's kind of, like, the key, right? It's like, the LLMs don't have a world model, and Yann LeCun, he can come on the podcast and talk all about what he thinks of that. But I think Sora was, like, the first time where people were like, oh, okay, you're not statically putting pixels of water on the screen, which you can kind of, like, you know, project without understanding the physics of it.

[00:34:57] Alessio: Now you're like, you have to understand how the water splashes when you have things. And even if you just learned it by watching video and not by actually studying the physics, you still know it, you know. So I think that's, like, a direction that, yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating... I think it always starts with generating, right?

[00:35:19] Alessio: But, like, the interesting part is understanding it. You know, it's like, if you gave it... you know, there's the video of, like, the ship in the water that they generated with Sora. Like, if you gave it the video back and now it could tell you why the ship is, like, too rocky, or, like, it could tell you why the ship is sinking, then that's, like, you know, AGI for, like, all your rig deployments and, like, all this stuff, you know. So, but there's none of that yet, so.

[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.

[00:35:49] swyx: Yeah, who knows, who knows. I'm talking with them about Dev Day as well.
So I would say, like, the phrasing that Jim used, which resonated with me: he kind of called it a data-driven world model. I somewhat agree with that.

[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan

[00:36:04] swyx: I am more on Yann LeCun's side than I am on Jim's side, in the sense that I think that is the vision, or the hope, that these things can build world models. But, you know, clearly, even at the current Sora size, they don't have the idea of, you know... they don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear, and chairs will appear and disappear.

[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models, in the sense of, you know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails, that, like, does not match reality, and therefore fails to generalize well.

[00:36:50] swyx: And, like, what scale of data do we need in order to learn world models from video? A lot. Yeah. So I am cautious about taking this interpretation too literally. Obviously, you know, like, I get what he's going for, and he's, like, obviously partially right. Obviously, like, transformers and, you know, these sort of neural networks are universal function approximators; theoretically they could figure out world models. It's just, like, how good are they, and how tolerant are we of hallucinations? We're not very tolerant. Like, yeah, so it's gonna bias us for creating, like, very convincing things, but then not create, like, the useful world models that we want.

[00:37:37] swyx: At the same time, what you just said, I think, made me reflect a little bit: we just got done saying how important synthetic data is for, mm-hmm, for training LLMs. And so, like, if this is a way of getting synthetic video data for improving our video understanding, then sure, by all means. Which we actually know, like, GPT-4 Vision and DALL-E were trained, kind of, co-trained together.

[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.

[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back, you have Sora, you go back in time, and Newton didn't figure out gravity yet. Would Sora help you figure it out?

[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can, like, pick up things. Like, humans have a lot of intuition, but if you asked the average person, like, the physics of a fluid in a boat, they couldn't tell you the physics, but they can, like, observe it. But humans can only observe this much, you know? Versus, like, now you have these models to observe everything, and then they generalize these things, and maybe we can learn new things through the generalization that they pick up.

[00:38:55] swyx: But again... And it might be more observant than us in some respects. In some ways we can scale it up a lot more than the number of physicists that we had available at Newton's time. So, like, yeah, absolutely possible that this can discover new science.
I think we have a lot of work to do to formalize the science.

[00:39:11] swyx: And then I think the last part is, you know, how much do we cheat by generating data from Unreal Engine 5? Mm hmm. Which is what a lot of people are speculating, with very, very limited evidence, that OpenAI did. The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults

[00:39:37] swyx: of, like, walking speed, and, like, character choice, like, character creation choice. And I was like, okay, like, that's actually pretty convincing that they actually used Unreal Engine to bootstrap some synthetic data for this training set. Yeah,

[00:39:52] Alessio: could very well be.

[00:39:54] swyx: Because then you get the labels and the training side by side.

[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of video generation and space-time transformer that also involves probably a lot of synthetic data as well. And so, like, this is of a kind, in the sense of, like, oh, you know, really good generative video is here, and it is not just, like, the one-, two-second clips that we saw from, like, other people, you know, Pika and Runway. Cristóbal Valenzuela from Runway was, like, "game on," which, like, okay, but, like, let's see your response, because we've heard a lot about Gen-1 and 2, but, like, it's nothing on this level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.

[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in, like, spelling out: here are the individual industries that this can impact.

[00:41:00] swyx: And anyone who's, like, interested in generative video should look at that. But also be mindful that probably when OpenAI releases a Sora API, right, the ways you can interact with it will be very limited. Just like the ways you can interact with DALL-E are very limited, and someone is gonna have to make an open Sora

[00:41:19] swyx: mm-hmm, for you to create ComfyUI pipelines.

[00:41:24] Alessio: The Stability folks said they wanna build an open Sora competitor. But yeah, Stability... their demo video was, like, so underwhelming. It was just, like, two people sitting on the beach,

[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.

[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I think what is confusing a lot of people about Stability is, like, they're pushing a lot of things in Stable Code, Stable LM, and Stable Video Diffusion. But, like, how much money do they have left? How many people do they have left?

[00:41:51] swyx: Yeah. I have had, like... Imad spent two hours with me, reassuring me things are great. And I'm like, I do, like, I do believe that they have really, really quality people. But it's just, like, I also have a lot of very smart people on the other side telling me, like, hey man, like, you know, don't put too much faith in this thing.

[00:42:11] swyx: So I don't know who to believe. Yeah.

[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can.
Yeah, Groq.

[00:42:19] Groq Math

[00:42:19] Alessio: We can

[00:42:19] swyx: do a bit of Groq prep. We're about to go talk to Dylan Patel. Maybe we'll use the audio in here. I don't know, it depends what we get up to later. What do you as an investor think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,

[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It actually was one of his, like, 20 percent projects. It's like, he was just on the side, dooby doo, created the TPU.

[00:42:46] Alessio: But yeah, basically, Groq, they had this demo that went viral, where they were running Mistral at, like, 500 tokens a second, which is, like, the fastest at anything that you have out there. The question, you know... the memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what Groq has built as far as the hardware goes.

[00:43:11] Alessio: We're gonna put some of the notes from Dylan in here, but basically the cost of the Groq system is, like, 30 times the cost of the H100 equivalent. So, so

[00:43:23] swyx: let me put some numbers on that, because me and Dylan are, like, I think, the two people who actually tried to do Groq math. Spreadsheet wars.

[00:43:30] swyx: Spreadsheet wars. So, okay, oh boy. So, the equivalent H100 system for Llama 2 is $300,000, for a system of 8 cards. And for Groq it's $2.3 million, because you have to buy 576 Groq cards. So yeah, that just gives people an idea. So, like, if you depreciate both over a five-year lifespan, per year you're depreciating $460K for Groq, and $60K a year for H100.

[00:43:59] swyx: So, like, Groqs are just way more expensive per model that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to

[00:44:08] Alessio: cover that. I think one of the promises of Groq is, like, super high parallel inference on the same thing. So you're basically saying, okay, I'm putting in this upfront investment on the hardware, but then I get much better scaling once I have it installed.

[00:44:24] Alessio: I think the big question is, how much can you sustain the parallelism? You know, like, if you're going to get 100 percent utilization rate at all times on Groq, like, it's just much better, you know, because, like, at the end of the day, the tokens-per-second cost that you're getting is better than with the H100s. But if you get to, like, a 50 percent utilization rate, you will be much better off running on NVIDIA.

[00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Qatar. He just gave a talk there yesterday that I haven't listened to yet.

[00:45:09] Alessio: I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but

[00:45:16] swyx: hopefully the Groq social media person is just very friendly. They, yeah. Hopefully

[00:45:20] Alessio: we can get them. Yeah, we're gonna get him.
[00:45:22] swyx: We just call him out. And so basically the key question is, like, how sustainable is this, and how much

[00:45:27] swyx: of this is a loss leader. The entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens, as far as Mixtral or Llama 2 goes. This matches DeepInfra, and, you know, I think that's about it in terms of being that low.

[00:45:47] swyx: And we think the break-even for H100s is 50 cents, at a normal utilization rate. To make this work, so in my spreadsheet I made this work, you have to have, like, a parallelism of 500 requests, all simultaneously, and you have model bandwidth utilization of 80%,

[00:46:06] swyx: which is way high. I just gave them high marks for everything. Groq has two fundamental tech innovations that they hang their hats on in terms of, like, why we are better than everyone. You know, even though, like, it remains to be independently replicated. But one, you know, they have this sort of entire-model-on-the-chip idea, which is, like, okay, get rid of HBM

[00:46:30] swyx: and, like, put everything in SRAM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, then you just save on that time, and that's why they're faster. So, a lot of people buy that as, like, that's the reason that you're faster.

[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers, that they also attribute towards their higher utilization. So I gave them 80 percent for that. And so that all works out to, like, okay, base costs, I think you can get down to, like, maybe, like, 20-something cents per million tokens.

[00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of fearful assumptions for this to work.

[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.

[00:47:16] swyx: So he was, like, completely opposite of me. He's like, they're just burning money. Which is great.
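The spreadsheet math in this segment can be written out directly. The numbers below are the ones quoted here (five-year depreciation, 500 tokens per second, 500 concurrent requests, 80% utilization); treat them as the hosts' generous assumptions, not vendor-confirmed figures, and note this counts hardware only, with no power, hosting, or margin:

    def amortized_cost_per_mtok(system_cost_usd, lifespan_years,
                                tokens_per_sec, concurrent_streams, utilization):
        """Hardware-only $/million tokens: amortize the system over its
        lifespan, divide by tokens actually served."""
        seconds = lifespan_years * 365 * 24 * 3600
        tokens = tokens_per_sec * concurrent_streams * utilization * seconds
        return system_cost_usd / (tokens / 1e6)

    groq = amortized_cost_per_mtok(2_300_000, 5, 500, 500, 0.8)  # 576-card system
    print(f"Groq, hardware only: ${groq:.2f}/M tokens")          # ~$0.07 here

A fuller model with power and overhead lands nearer the 20-something cents mentioned above, against the quoted ~$0.50/M break-even for H100s, and everything hinges on actually sustaining that parallelism.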
[00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars

[00:47:22] Alessio: Gemini, want to do a quick run-through? Since this touches on all the four wars.

[00:47:28] swyx: Yeah, and I think this is the mark of a useful framework, that when a new thing comes along, you can break it down in terms of the four wars and sort of slot it in, or analyze it in those four frameworks, and have nothing left.

[00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks. So, what is Gemini 1.5 Pro? It is the newest model that came out one week after Gemini 1.0, which is very interesting.

[00:48:01] swyx: They have not really commented on why. They released this... the headline feature is that it has a 1-million-token context window that is multimodal, which means that you can put all sorts of video and audio and PDFs natively in there, alongside text. And, you know, it's at least 10 times longer than anything that OpenAI offers, which is interesting.

[00:48:20] swyx: So it's great for prototyping, and it has interesting discussions on whether it kills RAG.

[00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context. And RAG is better economics. But I think it all comes down to how the price curves change, right?

[00:48:42] Alessio: I think, if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today, in two years, three years, it's just gonna cost a lot less, you know?

[00:49:02] Alessio: So now it's like, why would I use RAG and, like, go through all of that? It's interesting. I think RAG is better cutting-edge economics for LLMs. I think long context will be better long-tail economics when you factor in the build cost of, like, managing a RAG pipeline. But yeah, the recall was, like, the most interesting thing, because we've seen the, you know, needle-in-the-haystack things in the past, but apparently they have 100 percent recall on anything across the context window.

[00:49:28] Alessio: At least they say... nobody has used it. No, people

[00:49:30] swyx: have. Yeah, so... so this needle-in-a-haystack thing, for people who aren't following as closely as us, is that someone, I forget his name now, created this needle-in-a-haystack problem, where you feed in a whole bunch of generated junk... not junk, but just, like, generated data, and ask it to specifically retrieve something in that data, like one line in, like, a hundred thousand lines, where it, like, has a specific fact. And if it gets it, you're good.

[00:49:57] swyx: And then he moves the needle around. Like, you know, does your ability to retrieve that vary if I put it at the start versus put it in the middle, put it at the end? And then you generate this, like, really nice chart that kind of shows, like, the recallability of a model. And he did that for GPT and Anthropic, and showed that Anthropic did really, really poorly.

[00:50:15] swyx: And then Anthropic came back and said it was a skill issue: just add these, like, four magic words, and then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was: yeah, we reproduced their, you know, haystack test for Gemini, and it's good across all languages,

[00:50:30] swyx: across the whole one-million-token window. Which is very interesting, because usually, for typical context-extension methods like RoPE or YaRN or, you know, anything like that, or ALiBi, it's lossy. Like, by design it's lossy. Usually, for conversations, that's fine, because we are lossy when we talk to people. But for superhuman intelligence, perfect memory across very, very long context,

[00:50:51] swyx: it's very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this. So what you do is, you upload, let's say, all of Harry Potter, and you change one fact in one sentence somewhere in there, and you ask it to pick it up, and it does. So this is legit.

[00:51:08] swyx: We don't super know how. Because, yes, it's slow to inference, but it's not slow enough that it's, like, running five different systems in the background without telling you. Right. So it's something interesting that they haven't fully disclosed yet.
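The needle-in-a-haystack test is easy to reproduce; here is a minimal harness in the spirit of the original sweep (the needle, the filler, and the ask_model call are all placeholders for your own corpus and LLM client):

    NEEDLE = "The magic number Arthur mentioned is 42."    # made-up needle

    def build_haystack(filler_lines, needle, depth):
        """Insert the needle at a fractional depth: 0.0 = start, 1.0 = end."""
        i = int(depth * len(filler_lines))
        return "\n".join(filler_lines[:i] + [needle] + filler_lines[i:])

    def sweep(filler_lines, ask_model):                    # ask_model: your LLM call
        for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
            prompt = (build_haystack(filler_lines, NEEDLE, depth)
                      + "\n\nWhat magic number did Arthur mention?")
            answer = ask_model(prompt)
            print(f"depth={depth:.2f}  recalled={'42' in answer}")

Run the sweep at several total context lengths as well, and you get the now-familiar recall heatmap over depth and length.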
The open-source community has centered on this Ring Attention paper, which is created by your friend Matei Zaharia and a couple of other people.

[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand, like, why, you know, calculating the feedforward network and attention in blockwise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context.

[00:51:59] swyx: They said it was good for, like, 10 to 100 million tokens, which is just great. So yeah, using the Four Wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now, yes. Or, we still care as much about RAG, but, like, now it's not important in prototyping.

[00:52:21] swyx: And then, for the data war, I guess this is just part of the overall training dataset, but Google made a 60-million-dollar deal with Reddit, and presumably they have deals with other companies. For the multimodality war, we can talk about the image generation crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section.

[00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post. It came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf, and it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.

[00:53:04] swyx: It actually ties into the conversation that we had with David Luan from Adept, in a sense of, like, okay, what if video was the main modality instead of text as the input? What if everything was video in? Because that's how we work. We, our eyes, don't actually read, don't actually, like, get input... our brains don't get inputs as characters.

[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is doing, which is driving by vision model instead of driving by raw text understanding of the DOM. And in that episode, which we haven't released, I made the analogy to, like, self-driving by lidar versus self-driving by camera.

[00:53:52] swyx: Mm-hmm. Right? Like, it's like, I think what Gemini, and any other super-long-context model that is multimodal, unlocks is: what if you just drive everything by video?

[00:54:03] Alessio: Which is cool. Yeah, and that's Joseph from Roboflow. It's like, anything that can be seen can be programmable with these models.

[00:54:12] swyx: You mean the computer vision guy is bullish on computer vision?

[00:54:18] Alessio: It's like the RAG people. The RAG people are bullish on RAG and not long context. I'm very surprised. The fine-tuning people love fine-tuning instead of few-shot. Yeah. Yeah, that's that. And the Ring Attention thing, how they did it, we don't know.
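For intuition on the blockwise part: the trick is streaming softmax statistics so you never materialize the full attention matrix. A single-query NumPy sketch follows; Ring Attention additionally shards these blocks across devices and passes key/value blocks around a ring, which this toy version does not do:

    import numpy as np

    def blockwise_attention(q, K, V, block=1024):
        """softmax(K q / sqrt(d)) @ V computed over K/V blocks with a running
        log-sum-exp, so memory stays O(block) instead of O(n)."""
        m, denom = -np.inf, 0.0
        out = np.zeros(V.shape[1])
        for s in range(0, len(K), block):
            scores = K[s:s + block] @ q / np.sqrt(len(q))
            m_new = max(m, scores.max())
            scale = np.exp(m - m_new)           # rescale the old accumulators
            p = np.exp(scores - m_new)
            denom = denom * scale + p.sum()
            out = out * scale + p @ V[s:s + block]
            m = m_new
        return out / denom

The result matches ordinary softmax attention exactly; only the evaluation order changes, which is why it can be distributed without changing the model.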
And then they released the Gemma models, which are, like, 2-billion and 7-billion open

[00:54:41] Alessio: models, which people said are not good, based on my Twitter experience. Which are the GPU-poor crumbs. It's like, hey, we did all this work for you, because we're GPU-rich, and we're just going to run this whole thing. And

Multipolarity
Special Edition: The Burger Theory Of History (feat. Malcom Kyeyune)

Multipolarity

Play Episode Listen Later Feb 29, 2024 71:03


"Hamburgers will decide America's future". So says Malcolm Kyeyune in a recent essay ruminating on the American journalist Tucker Carlson's recent visit to Moscow where he famously - or, perhaps, infamously - purchased a burger at Russia's new McDonald's clone, 'Tasty, that's it'. Kyeyune sees Carlson's culinary adventure as reminiscent of Mikhail Gorbachev's decision to do an advert in which Russians would debate whether the fall of the Evil Empire was worth the introduction of fast food chains. Now the shoe is on the other foot, with Carlson highlighting how cheap food is in Russia in comparison to the United States, plagued, as is the rest of the West, with a cost of living crisis.But this latest fast food fight is really only the tip of the iceberg lettuce. Since the pandemic of 2020, a feeling of malaise has crept into the West. The feeling is palpable and encompasses everything from rising costs of basic necessities to a feeling that the culture is spiralling out of control to a questioning of our basic modus operandi - what happened, some ask, to our freedoms?In this week's episode we want to discuss whether the West is spiralling into chaos? It feels like a lot of narratives are breaking down right now, and one crisis seems to open onto another like a Matryoshka doll. With a highly controversial election on the horizon in November and the Biden Administration having failed to deliver on its promise of normality, is our ideological Berlin Wall starting to crumble?*** Please like, subscribe, and be excellent to each other.TwitterPatreon

Red Game Table
Matryoshka 4.2 - Cave Diver, pt. I

Red Game Table

Play Episode Listen Later Feb 26, 2024 119:10


Katya and Yahyo investigate a musical performance that caused its audience to go berserk. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage and find Johnny's show Subversive History on his Linktree. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone.

Matryoshka of Lies
Matryoshka of Lies: Podcast Teaser

Matryoshka of Lies

Play Episode Listen Later Feb 6, 2024 0:54


Ukraine's not the first one. Russia's colonial grip has choked nations for centuries. Gaslighting, invading, erasing. But this time, the world is watching. Dive into "Matryoshka of Lies" with Maksym Eristavi, a Ukrainian author, and Ukrainska Pravda. Unpack the myths, expose the truth. The empire will fall.

Red Game Table
Matryoshka 4.1 - The Town

Red Game Table

Play Episode Listen Later Feb 1, 2024 152:40


The entire population of a town near the Pacific coast of the RSFSR disappears overnight and the Vladivostok branch of Division 3 is brought in to find where they went and how to bring them back. For comments or questions, email utopologist@protonmail.com. Listen to Lina's labor news podcast Work Stoppage and find Johnny's content on his Linktree. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of version 1.5 of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone.

Papers Read on AI
Matryoshka Representation Learning

Papers Read on AI

Play Episode Listen Later Jan 30, 2024 40:07


Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context rigid, fixed capacity representations can be either over or under-accommodating to the task at hand. This leads us to ask: can we design a flexible representation that can adapt to multiple downstream tasks with varying computational resources? Our main contribution is Matryoshka Representation Learning (MRL) which encodes information at different granularities and allows a single embedding to adapt to the computational constraints of downstream tasks. MRL minimally modifies existing representation learning pipelines and imposes no additional cost during inference and deployment. MRL learns coarse-to-fine representations that are at least as accurate and rich as independently trained low-dimensional representations. The flexibility within the learned Matryoshka Representations offer: (a) up to 14x smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to 14x real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for long-tail few-shot classification, all while being as robust as the original representations. Finally, we show that MRL extends seamlessly to web-scale datasets (ImageNet, JFT) across various modalities -- vision (ViT, ResNet), vision + language (ALIGN) and language (BERT). MRL code and pretrained models are open-sourced at https://github.com/RAIVNLab/MRL. 2022: Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, V. Ramanujan, William Howard-Snyder, Kaifeng Chen, S. Kakade, Prateek Jain, Ali Farhadi https://arxiv.org/pdf/2205.13147v3.pdf
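The core mechanism is compact enough to sketch: apply the task loss at several nested prefixes of one embedding so that every prefix becomes usable on its own. A condensed PyTorch illustration follows (the dimensions and the classification setup mirror the paper's description; the exact heads and loss weighting are simplified):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MatryoshkaHead(nn.Module):
        """One classifier per nested prefix; training sums the losses so each
        prefix length in `dims` becomes a usable embedding by truncation."""
        def __init__(self, d_full=2048, n_classes=1000,
                     dims=(8, 16, 32, 64, 128, 256, 512, 1024, 2048)):
            super().__init__()
            self.dims = dims
            self.heads = nn.ModuleList(nn.Linear(d, n_classes) for d in dims)

        def loss(self, z, y):                     # z: [batch, d_full], y: labels
            return sum(F.cross_entropy(head(z[:, :d]), y)
                       for d, head in zip(self.dims, self.heads))

    # At inference, just truncate: z[:, :64] already behaves like a 64-d embedding.

This is why the paper reports no extra inference cost: deployment simply slices the embedding to whatever dimension the downstream task can afford.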

Multiverse 5D
Psychic spies and the Matryoshka technique

Multiverse 5D

Play Episode Listen Later Nov 19, 2023 8:36


Psychic spies and the Matryoshka technique YouTube Awakening Cosmic Reality Show Instagram @multiverse5d_podcast

Apple Coding Daily
Modelos de Difusión Matryoshka, el primer paso de Apple hacia la IA generativa

Apple Coding Daily

Play Episode Listen Later Nov 16, 2023 31:40


We take a technical look at Apple's Matryoshka Diffusion Models (MDM), an advance in AI-driven image generation and Apple's first step toward its AJAX project and its generative AI content engine for the next major version of all its systems: iOS, iPadOS, tvOS, macOS, watchOS and, of course, visionOS for Vision Pro. We explore MDM's distinctive architecture, emphasizing its innovative use of NestedUNet and its end-to-end approach to optimizing high-resolution image generation. We dig into the training techniques and diffusion processes, highlighting how MDM addresses and overcomes key challenges in the field of artificial intelligence. A detailed study for those interested in the intersection of advanced technology and digital creativity. Find all the details of the 4th edition of the Swift Full Stack Bootcamp at acoding.academy/bootcamp. You can read the full Matryoshka Diffusion Model paper on arXiv here: Apple's MDM paper. Learn Swift and SwiftUI with our latest training: Swift Developer Program 2023. Check out our Twitch channel at: twitch.tv/applecoding. Check out our listener offers: - Courses on Udemy (with discount code) - Apple Coding Academy - Subscribe to Apple Coding on our Patreon. - Swift Telegram channel. Join the channel. --------------- Get the official Apple Coding T-shirts with the Swift and Apple Coding logos, plus all kinds of merchandise like mugs and cases. - Apple Coding merchandise store. --------------- Theme music: "For the Win" by "Two Steps from Hell", composed by Thomas Bergersen. Used under fair use. Listen on Apple Music or Spotify.

Serious Trouble
The Lawsuit Matryoshka

Serious Trouble

Play Episode Listen Later Sep 21, 2023 18:49


This is a free preview of a paid episode. To hear more, visit www.serioustrouble.show. Rudy Giuliani's ex-lawyer is suing him for non-payment; Hunter Biden is suing the IRS for airing his dirty laundry; the FTX bankruptcy estate is suing SBF's parents for being morons; Ray Epps, still not a Fed, is pleading guilty; Jack Smith wants a gag order on Donald Trump.

Random Sage with Maryann from Revealing Light
Russia's Matryoshka dolls & new leaders

Random Sage with Maryann from Revealing Light

Play Episode Listen Later Jun 26, 2023 22:47


Is this the downfall of Vladimir Putin and why did it take a murdering warlord to weaken him? For all Putin's apologists now is the time of reckoning. Russian leadership is not stable nor to be admired for its strength. It reeks of weakness & dictatorship. And what of the Russian people, the mothers, the sons...what do they want? Do they hope for leadership change or do they still fear, or admire, Putin?

Red Game Table
Matryoshka 3.12 - Sailfish

Red Game Table

Play Episode Listen Later May 8, 2023 144:20


Matryoshka and their Yugoslav comrades make a plan with the Ct'kathka to drive the imperialists out of the sea. For comments or questions, email utopologist@protonmail.com. Listen to Lina and Dan's labor news podcast Work Stoppage and watch Johnny's twitch stream Subversive History. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here. Version 2 will be released after the conclusion of Season 3.

The Malting Hour
Malted Minis - Destihl Brewing "Dosvidanya BBA Stout"

The Malting Hour

Play Episode Listen Later May 1, 2023 26:24


This week we sample Destihl's "Dosvidanya Bourbon Barrel" stout. Beer description: Like a Matryoshka or 'nesting' doll, the secret of Dosvidanya® Imperial Stout lies locked deep within her mysterious & elaborate wooden layers. The hidden soul of this oak bourbon barrel-aged beer, which we said Dosvidanya ('farewell') to several months before revealing, is its rich flavors like dark chocolate, toffee, black cherries and coffee, along with robust & roasty maltiness that finishes dry. All music for this episode provided by @FluidMinds. Check out all our episodes at www.themaltinghour.com

Red Game Table
Matryoshka 3.11 - Lights From the Sea

Red Game Table

Play Episode Listen Later Apr 18, 2023 122:26


The squad goes to Yugoslavia to help investigate reports of a large and dangerous bird-/fish-like creature in a small Croatian town on the Adriatic. For comments or questions, email utopologist@protonmail.com. Listen to Lina and Dan's labor news podcast Work Stoppage and watch Johnny's twitch stream Subversive History. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here. Version 2 will be released after the conclusion of Season 3.

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for March 30th, 2023 - Episode 189

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Mar 30, 2023 39:47


2023-03-30 Weekly News - Episode 189Watch the video version on YouTube at https://youtube.com/live/TgmP20awQ1A?feature=share Hosts:  Eric Peterson - Senior Developer at Ortus Solutions Brad Wood - Senior Developer at Ortus Solutions Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways  to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube.  Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github  Subscribe to our Podcast on your Podcast Apps and leave us a review Sign up for a free or paid account on CFCasts, which is releasing new content every week BOXLife store: https://www.ortussolutions.com/about-us/shop Buy Ortus's Books 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes  Join us for the 10th Into the Box - In person ONLY!!!  Patreon Support ( amiable ) - UPDATED GOALSWe have 41 patreons: Goal 1 - 26% -  This goal would help us to fully fund the hosting of ForgeBox.io (www.forgebox.io), the ColdFusion software directory.Goal 2 - 13% - This goal would fund the development of CommandBox CLI, so it can remain FREE and Open Source forever.Goal 3 - 6% - This goal would help us to fully fund the Modernize or Die podcasts.https://www.patreon.com/ortussolutions. News and AnnouncementsICYMI: Critical Security Update for ColdFusion APSB23-25From Adobehttps://community.adobe.com/t5/coldfusion-discussions/released-coldfusion-2021-and-2018-march-2023-security-updates/td-p/13649873From FoundeoAdobe has just published a security bulletin APSB23-25, and has released security updates for ColdFusion 2018 and 2021.We recommend installing these update as soon as possible, because one of the vulnerabilities has been actively exploited by attackers already. https://helpx.adobe.com/security/products/coldfusion/apsb23-25.htmlhttps://helpx.adobe.com/coldfusion/kb/coldfusion-2018-update-16.htmlhttps://helpx.adobe.com/coldfusion/kb/coldfusion-2021-update-6.htmlHackMyCF has been updated to warn you if the hotfix is missing.It is important to note that if you are on ColdFusion 11, or 2016 that it is possible that your servers could be vulnerable to at least one of these issue as well. However, because these versions reached end of life they are no longer receiving security patches from Adobe.One thing you can do to mitigate one of these issues is to block requests containing a variable named _cfclient. Some of the filters in FuseGuard may help prevent some attack vectors when configured to. 
But the best solution is to upgrade to CF2018 or 2021 and apply the patch released today.--Foundeo Inc.ICYMI - State of the CF Union 2023 ReleasedHelp us find out the state of the CF Union – what versions of CFML Engine do people use, what frameworks, tools etc.https://teratech.com/state-of-the-cf-union-2023-survey New Releases and UpdatesICYMI - New CommandBox Goodies print.tree() - https://twitter.com/bdw429s/status/1639392842656235520 print.columns() and printColumns - https://twitter.com/bdw429s/status/1639395391148810242 clipboard - https://twitter.com/bdw429s/status/163946183001074483 OpenAI-powered ChatGPT has arrived for Ortus DocumentationWe are pleased to announce a fun little project that our Patreon supports have been testing in private for a week or so. Ortus has rolled out our own OpenAI-powered chat bot, which is fueled by all of the documentation in our GitBooks! This behaves similar to the ChatGPT you've likely played with, but is custom loaded with all of our most recent documentation.https://chatgpt.ortussolutions.com/https://community.ortussolutions.com/t/openai-powered-chatgpt-has-arrived-for-ortus-documentation/9582Adobe ColdFusion 2023 Beta now on ForgeBoxAdobe ColdFusion 2023's public beta is now on ForgeBox for you to test out in CommandBox servers or Docker containers. Use "cfengine=adobe@2023-beta" to start it up and ensure you're on the latest CFConfig.  Happy testing!https://twitter.com/bdw429s/status/1638987316445446144Webinar / Meetups and WorkshopsOrtus Event Calendar for Googlehttps://calendar.google.com/calendar/u/0?cid=Y181NjJhMWVmNjFjNGIxZTJlNmQ4OGVkNzg0NTcyOGQ1Njg5N2RkNGJiNjhjMTQwZjc3Mzc2ODk1MmIyOTQyMWVkQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20 CFSummit East 2023 Training Workshop - ColdFusion MVC for Dummies.Before the ColdFusion Summit East in Washington, D.C., on April 4th, 2023. 
Luis Majano, the creator of The ColdBox Platform, will be leading this workshop, bringing you a deep dive 1-day workshop: ColdFusion MVC for Dummies.The workshop will combine a variety of theories, hands-on coding, and best practices to give you all the tools needed to leave the workshop ready to build MVC-powered apps when you return to your office.https://www.ortussolutions.com/blog/coldfusion-summit-east-2023-mvc-training-workshopCFCasts Content Updateshttps://www.cfcasts.comRecent Releases Secure your ColdBox Apps with cbSecurity 3 - March 2023 Webinarhttps://cfcasts.com/series/ortus-webinars-2023/videos/secure-your-coldbox-apps-with-cbsecurity-3 Mastering CommandBox 5 - 5 new videos - https://cfcasts.com/series/mastering-commandbox-5 ModCFML IIS / Boncode CFConfig Improvements Custom tray icon actions Minibox Start Pure HTML server 2023 ForgeBox Module of the Week Series - 1 new Video https://cfcasts.com/series/2023-forgebox-modules-of-the-week  2023 VS Code Hint tip and Trick of the Week Series - 1 new Video https://cfcasts.com/series/2023-vs-code-hint-tip-and-trick-of-the-week  Coming Soon Brad with more CommandBox Videos More ForgeBox and VS Code Podcast snippet videos ColdBox Elixir from Eric Getting Started with Inertia.js from Eric CBWire Series from Grant - Fill out the Poll here https://community.ortussolutions.com/t/poll-cbwire-cfcasts-com-series/9513  Getting Started with ContentBox from Daniel Garcia Conferences and TrainingDev NexusApril 4-6th, 2023 in AtlantaGeorgia World Congress Center285 Andrew Young International Blvd NWAtlanta, GA 30313Kubernetes, Java, Software architecture, Kotlin, Performance Tuninghttps://devnexus.com/CFSummit East 2023 Training Workshop - ColdFusion MVC for Dummies.Before the ColdFusion Summit East in Washington, D.C., on April 4th, 2023. Luis Majano, the creator of The ColdBox Platform, will be leading this workshop, bringing you a deep dive 1-day workshop: ColdFusion MVC for Dummies.The workshop will combine a variety of theories, hands-on coding, and best practices to give you all the tools needed to leave the workshop ready to build MVC-powered apps when you return to your office.https://www.ortussolutions.com/blog/coldfusion-summit-east-2023-mvc-training-workshopCFSummit EastThursday, April 6, 20238:00am - 4:00pmWednesday 5th - CertificationMarriott Marquis Washington, DCComplimentary; breakfast and lunch will be providedhttps://carahevents.carahsoft.com/Event/Details/341389-adobe https://carahevents.carahsoft.com/Event/Details/344168-adobeJ on the BeachBringing DevOps, Devs and Data Scientists together around Big DataMay 10-12, 2023 Malaga, Spainhttps://www.jonthebeach.com/ Ortus Profile: https://www.jonthebeach.com/jobs/54/Ortus%20SolutionsVueJS Live MAY 12 & 15, 2023ONLINE + LONDON, UKCODE / CREATE / COMMUNICATE35 SPEAKERS, 10 WORKSHOPS10000+ JOINING ONLINE GLOBALLY300 LUCKIES MEETING IN LONDONhttps://vuejslive.com/ Into the Box 2023 - 10th EditionMay 17-19, 2023 The conference will be held in The Woodlands (Houston), Texas - This year we will continue the tradition of training and offering a pre-conference hands-on training day on May 17th and our live Mariachi Band Party! However, we are back to our Spring schedule and beautiful weather in The Woodlands! Also, this 2023 will mark our 10 year anniversary. So we might have two live bands and much more!!!IN PERSON ONLY Website launched: https://intothebox.orghttps://itb2023.eventbrite.com/ VueConf.usNEW ORLEANS, LA • MAY 24-26, 2023Jazz. Code. 
Vue.Workshop day: May 24Main Conference: May 25-26https://vueconf.us/ CFCampJune 22-23rd, 2023Marriott Hotel Munich Airport, FreisingCall for Speakers is closedhttps://www.cfcamp.org/More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/https://github.com/scraly/developers-conferences-agenda Blogs, Tweets, and Videos of the Week3/18/23 - Blog - Michael Horne - Chromebook CFML development environment tutorialThis is partly an aide-memoire for me on setting up an environment for CFML development on a Chromebook. The specific Chromebook is a Lenovo S330.My pre-requisite is that you've got a Lucee/ColdFusion application ready to go, although basically you could start from scratch with a simple index.cfm file wherever you eventually start CommandBox, but let's leave that for later.https://recantha.co.uk/chromebook-cfml-development-environment-tutorial/Good guide for any Linux machine.3/22/23 - Blog - James Moberg - Generate Sanitized Email Hash (as Integer)While reviewing the logs of failed contact form submissions, I identified a couple email address variations that were exploiting some Gmail features in an attempt to bypass our filters. (Gmail has a "plus" feature and ignores periods in addresses.) A SQL query using REPLACE to remove all periods revealed that this comment form spammer had performed 279 attempts using 162 variations of their 15 character gmail username in an effort to circumvent our filters. We log the full email address that was posted and, when matching via SQL solely using the email addresses, it appeared as each email address was only used 2-4 times... versus the 279 obfuscated attempts.To better identify & highlight abusers via SQL queries, an EmailHash (INT) column has been added to the database table. When searching or logging the email address, the value is sanitized (remove + string and . from the username) and then a java hashCode is generated. Using integers to join database records is much faster than using varchar and has lower storage requirements.https://dev.to/gamesover/generate-sanitized-email-hash-as-integer-4n3e3/22/23 - Blog - Ben Nadel - Russian Doll Content Wrapping With CFSaveContent In ColdFusionIn web development, the term "Russian Doll" is sometimes used to refer to content that is wrapped inside another piece of content of the same type. This is based on the Russian Doll toy (Matryoshka), which has a multitude of smaller toys contained within it. In the past, I've looked at using the Russian Doll pattern for error handling in Node.js as well as for error handling in ColdFusion. But, its value extends beyond just errors - I often use the CFSaveContent tag to build up a content payload from the outside in. And, I thought it would make for a nice example.https://www.bennadel.com/blog/4431-russian-doll-content-wrapping-with-cfsavecontent-in-coldfusion.htmColdBox Layouts and Views!3/23/23 - Discourse - Brad Wood - Is Using CommandBox to run Adobe ColdFusion sites safe in production? There were some excellent questions asked on CFML Slack today, and I wanted to get the answers to them out on our community forum where they could benefit the larger community (and Google). 
In a nutshell, these were the concerns:When I'm using CommandBox, am I really using “Adobe ColdFusion” or am I getting a “copy” of Adobe ColdFusion from the Ortus site?We have an Adobe Support Contract and will Adobe provide support for my CommandBox installation?CommandBox is not using Tomcat, but JBoss Undertow. Will it be capable of managing the load of a production site?These are great questions, and one any Enterprise would want answered before committing to CommandBox. Let's go through them categorically.https://community.ortussolutions.com/t/is-using-commandbox-to-run-adobe-coldfusion-sites-safe-in-production/9581/13/29/23 - Blog - Ben Nadel - Getting FusionReactor User Experience Monitoring (UEM) To Play Nicely With Content Security Policy (CSP) In ColdFusionFor the past few days, I've been digging into some network latency issues on my blog. And, in response to some of my public messaging on the topic, David Tattersall suggested that I look into FusionReactor's User Experience Monitoring (UEM). Whereas FusionReactor's Java agent provides server-side insights and confidence, the UEM module is designed to shed light on the end-user experience (UX). After all, the server-side leg is only part of the journey. Getting UEM up-and-running is easy; but, out of the box, it doesn't play very nicely with my Content Security Policy. As such, I wanted to share how I got it working on my ColdFusion blog.https://www.bennadel.com/blog/4436-getting-fusionreactor-user-experience-monitoring-uem-to-play-nicely-with-content-security-policy-csp-in-coldfusion.htmCFML JobsSeveral positions available on https://www.getcfmljobs.com/Listing over 55 ColdFusion positions from 35 companies across 28 locations in 5 Countries.2 new jobs listed this weekFull-Time - Senior Application Developer at Aurora, IL - United StatesPosted Mar 24https://www.getcfmljobs.com/jobs/index.cfm/united-states/SeniorAppDev-Aurora-IL/11559Contract - Coldfusion Developer at Jacksonville, FL - United StatesPosted Mar 24https://www.getcfmljobs.com/jobs/index.cfm/united-states/CFDeveloper-Jacksonville-FL/11558Other Job LinksThere is a jobs channel in the CFML slack team, and in the Box team slack now tooForgeBox Module of the WeekChatGPT APIBy Matt GiffordA ColdFusion CFC to interact with the chatgpt APIInstantiate the core component chatgpt.cfc and pass in the required properties like so:var chat = new chatgpt(    apiKey = 'xx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');ExampleUse chatgpt to create:var resp = chat.chatCompletion(model='gpt-3.5-turbo',messages=[{"role": "user", "content": "Write me a poem about a summer day with popcorn and unicorns"}]);https://forgebox.io/view/chatgptVS Code Hint Tips and Tricks of the WeekGrammarlyThis extension brings Grammarly to VS Code.Grammarly leads the industry in building AI-enabled services to help people communicate effectively every day. 
CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 55 ColdFusion positions from 35 companies across 28 locations in 5 countries.
2 new jobs listed this week:
Full-Time - Senior Application Developer at Aurora, IL - United States (posted Mar 24)
https://www.getcfmljobs.com/jobs/index.cfm/united-states/SeniorAppDev-Aurora-IL/11559
Contract - ColdFusion Developer at Jacksonville, FL - United States (posted Mar 24)
https://www.getcfmljobs.com/jobs/index.cfm/united-states/CFDeveloper-Jacksonville-FL/11558

Other Job Links
There is a jobs channel in the CFML Slack team, and now in the Box team Slack too.

ForgeBox Module of the Week
ChatGPT API
By Matt Gifford
A ColdFusion CFC to interact with the ChatGPT API.
Instantiate the core component chatgpt.cfc and pass in the required properties like so:

var chat = new chatgpt(
    apiKey = 'xx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
);

Example
Use chatgpt to create a chat completion:

var resp = chat.chatCompletion(
    model = 'gpt-3.5-turbo',
    messages = [
        { "role": "user", "content": "Write me a poem about a summer day with popcorn and unicorns" }
    ]
);

https://forgebox.io/view/chatgpt

VS Code Hint Tips and Tricks of the Week
Grammarly
This extension brings Grammarly to VS Code. Grammarly leads the industry in building AI-enabled services to help people communicate effectively every day. The words you choose can champion your voice, build connections, and spur your academic or professional growth. Communication assistance with Grammarly means a consistent experience of robust, real-time feedback on your writing.
https://www.grammarly.com/
https://marketplace.visualstudio.com/items?itemName=znck.grammarly

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and to fund the cloud infrastructure our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Don't forget, we have Annual Memberships: pay for the year and save 10% - great for businesses.
Bronze Packages and up now also get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription.
All Patreon supporters have a profile badge on the community website.
All Patreon supporters have their own private forum access on the community website.
All Patreon supporters have their own private channel access on BoxTeam Slack.
https://community.ortussolutions.com/

Top Patreons
John Wilson - Synaptrix
Tomorrows Guides
Jordan Clark
Gary Knight
Mario Rodrigues
Giancarlo Gomez
David Belanger
Dan Card
Jeffry McGee - Sunstar Media
Dean Maunder
Nolan Erck
Abdul Raheen
And many more Patreons. You can see an up-to-date list of all sponsors on the Ortus Solutions website:
https://ortussolutions.com/about-us/sponsors
Thanks everyone!!!

Homework
Watch Social Media
CFCamp Call for Speakers is closing
Into the Box - Early bird tickets ending soon
★ Support this podcast on Patreon ★

Go Float Yourself
Season 6 Episode 10: Matryoshka

Go Float Yourself

Play Episode Listen Later Mar 29, 2023 75:29


This week we bring you perhaps the laziest-written episode of the hundred yet! Enjoy! Patreon: https://www.patreon.com/Gofloatpod Insta: https://bit.ly/2IHgNIt Twitter: https://bit.ly/2H337qu Theme song: Severe Tire Damage by Kevin MacLeod: https://bit.ly/2ICU0h2 Logo Design by Tori Russell: https://torirussell.com/

Red Game Table
Matryoshka 3.10 - The Far-spider

Red Game Table

Play Episode Listen Later Mar 10, 2023 102:50


Pavel joins Mels and Emiliano as they return to the Blessed Theotokos Monastery to confront what lies beneath it. For comments or questions, email utopologist@protonmail.com. Listen to Lina and Dan's labor news podcast Work Stoppage and watch Johnny's twitch stream Subversive History. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka.  It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.9 - The Signal

Red Game Table

Play Episode Listen Later Feb 28, 2023 104:11


Mels and Emiliano go to the Tajik SSR to investigate a plane that went down near Communism Peak under very bizarre circumstances. For comments or questions, email utopologist@protonmail.com. Listen to Lina and Dan's labor news podcast Work Stoppage and watch Johnny's twitch stream Subversive History. Talk with other listeners in the Work Stoppage discord. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka.  It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.8 - The Singers of Songs

Red Game Table

Play Episode Listen Later Feb 2, 2023 119:52


Mels and Pavel head to the Kazakh Soviet Socialist Republic to find a pair of researchers who disappeared while camping on a deceptively plain-looking tract of land. For comments or questions, email utopologist@protonmail.com. Listen to Lina and Dan's labor news podcast Work Stoppage and watch Johnny's twitch stream Subversive History. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.x (interlude) - The Shadow Under Leningrad pt. II

Red Game Table

Play Episode Listen Later Jan 15, 2023 130:51


The special investigators, joined by a former agent who specialized in the paranormal, come up with a plan to protect the people of Leningrad from the horrors below. Featuring Jeremy from Invent The Future and A Terrible Influence. For comments or questions, email utopologist@protonmail.com, and listen to Lina and Dan's labor news podcast Work Stoppage. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Almostajad | المُستجَد
The Very Latest: Dior and Balenciaga

Almostajad | المُستجَد

Play Episode Listen Later Dec 7, 2022 6:10


The fashion world is living through action-packed days: Dior stages a show near the pyramids, while the house of Balenciaga faces a fierce backlash from customers over an ad campaign that depicted children in sexualized contexts! In "The Very Latest" we save you from digging through piles of comments and hashtags, delivering in one quick capsule a summary of one of the week's news stories and an explanation of its context... in a hurry! This episode was prepared and presented by Bassant Samhout and edited by Omar Fares. Sound design was by Yazan Qawas. Matryoshka podcast: https://listen.sowt.com/Matryoshka "The Very Latest" is a mini segment from the "Almostajad" podcast, produced by Sowt. Sowt on social media: Twitter: twitter.com/sowt - Instagram: instagram.com/sowtpodcasts - Facebook: facebook.com/SowtPodcasts To join the Sowt Plus membership: https://sow.tl/PlusApple

Red Game Table
Matryoshka 3.7 - The Mother of Hounds, pt. 2

Red Game Table

Play Episode Listen Later Nov 21, 2022 108:04


Underneath Vaduz, the team must navigate an unfamiliar and unfriendly labyrinth to accomplish their mission. For comments or questions, email utopologist@protonmail.com, and listen to Lina and Dan's labor news podcast Work Stoppage. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.x (interlude) - The Shadow Under Leningrad pt. I

Red Game Table

Play Episode Listen Later Nov 10, 2022 99:18


With the city of Leningrad under siege from the Nazis, a group investigates an interruption in bread delivery and discovers a much darker explanation than they could have dreamed of. Featuring Jeremy from Invent The Future and A Terrible Influence. For comments or questions, email utopologist@protonmail.com, and listen to Lina and Dan's labor news podcast Work Stoppage. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.6 - The Mother of Hounds, pt. I

Red Game Table

Play Episode Listen Later Oct 25, 2022 134:22


The trio must go undercover in Liechtenstein to get their hands on a dangerous artifact before any other more nefarious ones can. For comments or questions, email utopologist@protonmail.com, and listen to Lina and Dan's labor news podcast Work Stoppage. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Red Game Table
Matryoshka 3.5 - The Storm

Red Game Table

Play Episode Listen Later Oct 6, 2022 121:53


The team find themselves stranded in the Belarusian forest in the middle of a blizzard, and something is driving the animals out of their minds. For comments or questions, email utopologist@protonmail.com, and listen to Lina and Dan's labor news podcast Work Stoppage. Music used in this episode can be found at redgametable.com. The pdf of Matryoshka: Cosmic Horror Investigation in the Cold War USSR can be downloaded at utopologist.itch.io/matryoshka. It's pay-what-you-want, so if you want the book but have nothing, you are just as welcome to the book as anyone. Physical copies (which contain some additional art and game content) can be ordered here.

Mile High Game Guys: Boardgaming Podcast
Episode 278 - Hots Pot

Mile High Game Guys: Boardgaming Podcast

Play Episode Listen Later Sep 30, 2022 93:14


This week, Adrian gets back into Hanabi, Zach tries his hand at Ginkgopolis, and Jeff goes into a Theater of his Mind.
00:03:55 - What have we been playing?: Hanabi, Hallertau, No Thanks, Ginkgopolis, Matryoshka, Puzzle Calendar
00:33:30 - Mid-show banter! featuring: Video Games, Avatar 3D, Theater of the Mind, TV
01:00:23 - Crowdfunding: Project Ironwood Table
01:08:28 - Crowdfunding: Fox Experiment
01:13:11 - Crowdfunding: Pueblo
01:18:25 - Crowdfunding: Daybreak
01:30:42 - Listener Feedback
MHGG Twitch Slack Channel Patreon Guild

Good Morning Night Vale
Good Morning Matryoshka

Good Morning Night Vale

Play Episode Listen Later May 19, 2022 39:38


Meg, Symphony and Hal discuss episode 110 of Welcome to Night Vale: Matryoshka. They chat about the Steve/Cecil relationship, Hiram's feelings, and the tying up of Josie's loose ends. In the FanZone Calzone™ we hear from fans about soft meat crown mind cannons, a Night Vale nesting doll theory and “ham face.” Leave us a voicemail at 929-277-2050 or e-mail us at info@goodmorningnightvale.com. Find out more about satan on our Patreon. www.patreon.com/goodmorningnightvale Follow us on Twitter and Facebook. Good Morning Night Vale is a production of Night Vale Presents Hosted by Symphony Sanders, Hal Lublin, and Meg Bashwiner Produced by Meg Bashwiner Edited by Felicia Dominguez Mixed by Vincent Cacchione Theme Music by Disparition

Post Show Recaps: LIVE TV & Movie Podcasts with Rob Cesternino
Russian Doll | Season 2 Finale Recap, ‘Matryoshka'

Post Show Recaps: LIVE TV & Movie Podcasts with Rob Cesternino

Play Episode Listen Later Apr 26, 2022 91:51


In this episode, the hosts recap the season 2 finale of "Russian Doll," ‘Matryoshka.'