Podcasts about Mochi

A Japanese rice cake made from mochigome, a short-grain glutinous rice

  • 591 podcasts
  • 715 episodes
  • 48m avg. duration
  • 5 weekly new episodes
  • Apr 8, 2025 latest episode
Mochi popularity by year, 2017-2024 (chart)


Best podcasts about Mochi

Latest podcast episodes about Mochi

Elevate Your Brand
Crafting Craveable Product ft. Brandie Miller of Mochi Love | EYB

Elevate Your Brand

Play Episode Listen Later Apr 8, 2025 31:04


Brandie Miller brings a wealth of food industry experience, blending expertise in startups, retail buying, and e-commerce with a passion for food accessibility. She began her career launching a small ice cream brand, driving it from food service into retail through strategic sales and marketing. She then transitioned to the buying side as a category manager for Grocery Outlet, a 400-store chain, where she championed Natural & Organic (NOSH) products, growing it into the company's largest category. Her efforts contributed to Grocery Outlet's 2019 IPO.

Her dedication to making healthy, high-quality food more accessible deepened at Misfits Market, where she served as Senior Director of Grocery. There, she and her team expanded affordable grocery options nationwide, launched an upcycled food initiative, and introduced thousands of new SKUs in just three years—all while scaling operations across four warehouses.

Brandie's impact has been widely recognized, earning her industry accolades such as:
• Supermarket News' 2016 Disruptor
• Progressive Grocer's 2019 Top Women in Grocery – Rising Star
• Natural Foods Merchandiser's May/June 2019 Cover Feature
• NEXTY Judge at Expo East 2022

In 2024, Brandie and Will Miller combined their vastly different experiences to create something completely new—Mochi Love. Driven by a shared passion for innovation, delicious food, and bringing joy to everyday moments, they set out to expand mochi beyond the freezer aisle and introduce it to more categories across grocery stores. Mochi Love is upbeat, delicious, and simply irresistible—a modern take on a time-honored ingredient.

Elevate Your Brand is the #1 marketing podcast for entrepreneurs and “wantrepreneurs” looking for insider tips and secrets from the most exciting new and growing brands in Los Angeles and the US at large. Each week, entrepreneurial special guests join Laurel Mintz, founder and CEO of award-winning marketing agency Elevate My Brand, to discuss the marketing failures and successes that have brought their brands to the next level. Learn from real-life experiences and be inspired by leaders in your industry about how smart digital and experiential marketing can elevate your brand.

Contact us: https://www.elevatemybrand.com/contact
Stay connected & DM us feedback on the podcast:
Instagram: https://www.instagram.com/elevatemybrandla/
LinkedIn: https://www.linkedin.com/company/elevatemybrandla/
TikTok: https://www.tiktok.com/@elevatemybrand

Inside The Crazy Ant Farm
Ginnie House & Ally Musmeci | Filmmakers | Lottie, Tsundoku, and Mochi

Inside The Crazy Ant Farm

Play Episode Listen Later Apr 6, 2025 58:25


Our special guests on ep. 290 are the powerhouse filmmaking duo of Ally Musmeci & Ginnie House, who are on a mission to shake up the industry with fresh, dynamic storytelling that fills the creative void audiences have been craving. Ally and Ginnie dive into how their collaborative journey began, the moment they knew they wanted to tell stories together, and how their passion for bold narratives led to the creation of their short film trilogy — Lottie, Tsundoku, and Mochi.

They open up about the creative process behind the trilogy, the unique themes each film explores, and what drives their work as artists and storytellers. Plus, they share exciting insight into the future of their company, Be Fucking Nice Productions — including their ambition to expand beyond producing their own work and to become a launchpad for emerging artists who are redefining how stories are told.

If you're a fan of indie film, original voices, and the next wave of storytelling talent, you won't want to miss this inspiring conversation. Listen now on all major podcast platforms and join the movement to bring bold storytelling back to the screen!

Follow Ginnie and Ally here:
Ginnie (Instagram): https://www.instagram.com/ginonthehouse?igsh=MWNvYnV4MGlkNTRhNw==
Ally (Instagram): https://www.instagram.com/allysonsthebomb?igsh=ZnNhNWR6bmR4amRo
Be Fucking Nice Productions (Instagram): https://www.instagram.com/beeniceproductions?igsh=ajltOXZmOG5ranYz

Follow us here:
Website: https://crazyantmedia.com
Merchandise: https://crazyantmedia.com/crazy-ant-merchandise
Our first film, Deadlines: https://crazyantmedia.com/deadlines

Podcasts:
ITCAFpodcast:
Apple Podcasts: https://podcasts.apple.com/us/podcast/itcafpodcast/id1644145531
Spotify: https://open.spotify.com/show/1tf6L0e7vO9xnVtWaip67s?si=tYPrIVr_R36qpYns4qeZ8g
Everything's Okay Podcast:
Apple Podcasts: https://podcasts.apple.com/us/podcast/everythings-okay/id1664547993
Spotify: https://open.spotify.com/show/0uMm80MW4K50f8uURgVUYp?si=9mF7mwf_Qe-ZDqKBhEovMg

Social media:
ITCAFpodcast
Twitter: https://twitter.com/itcafpodcast?s=21&t=q0HdFq3CPkXBzVYHYdJW6w
Instagram: https://instagram.com/itcafpodcast?igshid=YmMyMTA2M2Y=
TikTok: https://www.tiktok.com/t/ZTRLQ7hHn/
Everything's Okay
Twitter: https://twitter.com/everythingsokp?s=21&t=ckQqBvyxz3lYqKHLrI6peA
Instagram: https://instagram.com/everythingsokp?igshid=YmMyMTA2M2Y=
Crazy Ant Media
Twitter: https://twitter.com/crazyantmedia?s=21&t=q0HdFq3CPkXBzVYHYdJW6w
Instagram: https://instagram.com/crazyantmedia?igshid=YmMyMTA2M2Y=
TikTok: https://www.tiktok.com/t/ZTRLQP1c1/
Logan (left)
Twitter: https://twitter.com/jloganaustin?s=21&t=ckQqBvyxz3lYqKHLrI6peA
Instagram: https://instagram.com/jloganaustin?igshid=YmMyMTA2M2Y=
TikTok: https://www.tiktok.com/@j.loganaustin?_t=8ZMB9Hp1yxf&_r=1
Dustin (right)
Twitter: https://twitter.com/crazyantceo?s=21&t=ckQqBvyxz3lYqKHLrI6peA
Instagram: https://instagram.com/crazyantceo?igshid=YmMyMTA2M2Y=
TikTok: https://www.tiktok.com/@crazyantceo?_t=8ZMB84k7BUM&_r=1

Helps Sleep
Mochi Ice Cream & Macarons ASMR Chat & Eating

Helps Sleep

Play Episode Listen Later Mar 27, 2025 35:05


Mochi Ice Cream & Macarons ASMR Chat & Eating
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

The Amazing Watch Podcast
I must Mawashi my Mochi S37E02

The Amazing Watch Podcast

Play Episode Listen Later Mar 16, 2025


Welcome to The Amazing Watch Podcast! 6% of people think they could take a bear in a fight. Those people watch along with Season 37 of The Amazing Race on Amazon Prime Video, CBS, fuboTV, Spectrum On Demand, Paramount Plus, DIRECTV, or buy it as a download on Google Play Movies, Vudu, Amazon Video, FandangoNOW, or Microsoft Store.

Follow us on social media!
Email: amazingwatchpod@gmail.com
Facebook: The Amazing Watch Podcast
Twitter: @amazingwatchpod
Instagram: @amazingwatchpod
Don't forget to tag #AmazingWatchPod

This podcast is hosted by ZenCast.fm

Esperienze Di Gioco
Ep 102 - Azul Duel, Happy Mochi, Arkade, Gears Of War

Esperienze Di Gioco

Play Episode Listen Later Mar 16, 2025 56:23


Episode number 102.
SUPPORT the PODCAST by buying us a coffee on KO-FI: https://ko-fi.com/boardgamesofferte
JOIN BOARDGAMES OFFERTE ON TELEGRAM: https://t.me/BgOfferte
Join the Esperienze di Gioco chat on Telegram: https://t.me/EdgPodcast

In this episode:
ARKADE: https://amzn.to/43JoS6w
HAPPY MOCHI: https://amzn.to/3DCWd8O
AZUL DUEL: https://amzn.to/423RPc8
GEARS OF WAR: https://bit.ly/41Qq3yq

Send us an audio comment: https://www.speakpipe.com/EsperienzeDiGiocoAudio
Subscribe to Valetutto's YouTube channel: https://bit.ly/3TGPFJH
Subscribe to LaGiocoFamiglia's YouTube channel: https://bit.ly/40wGr4V
Subscribe to Fabrizio's YouTube channel: https://www.youtube.com/@CappellaioMoltoMatto

THEME SONGS
INTRO: Otierre - La nuova realtà: https://youtu.be/7DYMnYpDdT4
SIGN UP FOR ZENCASTR: https://zencastr.com/?via=EdgPodcast

On the Side with Jackie London
Recapping Expo West 2025: Rising Wellness Trends, Front-of-Pack Nutrition Labels, & The Great MAHA Debate

On the Side with Jackie London

Play Episode Listen Later Mar 14, 2025 60:02 Transcription Available


In this episode of The Business of Wellness, Jaclyn London, RD breaks down everything from Expo West 2025—the biggest trade show in natural and functional foods. From the latest wellness trends (dragon fruit, GLP-1 marketing, and caffeine + protein combos) to behind-the-scenes industry buzz (the influence of MAHA, the show's winners and losers) and updates to the FDA's proposed Front-of-Pack Nutrition Label (FOPNL), I'm giving you behind-the-scenes intel on the nutrition news you need to know about, the brands to keep an eye on, and the influencers and policy changes that are shaping the food industry today.

What You'll Learn in This Episode:
• The top 13 trends shaping food and beverage in 2025
• The truth about front-of-pack nutrition labeling and why it's (likely to be) a waste of time & resources
• Regenerative Agriculture Certified and the status of sustainability claims
• How brands are marketing to GLP-1 users—and why some of it is just hype
• The rise of dates, pulses, electrolytes, tropical fruit, protein everything, and more prebiotic sparkling waters
• Why single-serve snacks and aluminum cans are taking over grocery aisles
• The wildest moments from the Make America Healthy Again (MAHA) panel and what it means for the future of food policy

Timestamps:
00:00 Introduction to the Business of Wellness
01:48 Expo West 2025 overview
03:00 Front-of-Pack Nutrition Labeling panel & proposed policy
12:51 Trends from Expo West 2025 - 13 trends to watch in wellness, CPG food & beverage, dietary supplements & personal care
21:52 Emerging ingredients and innovations
29:43 The future of food marketing
30:13 The nutritional value of dates
33:03 Mochi mania: The new snack trend
36:07 Mood and morality themes in food & beverage branding
39:40 The rise of pulses and legumes
42:46 Raw honey: A functional food rebrand
44:30 Plant-based vs. animal-based products
47:28 The Gell-Mann Amnesia Effect in nutrition media
49:00 MAHA updates: The MAHA panel, behind-the-scenes insights, updates in the RFK Jr. agenda & the food industry's response thus far
59:26 Expo West recap and closing thoughts

Connect with Jaclyn London, RD:
Subscribe to The Business of Wellness with Jaclyn London, RD on Apple Podcasts, Spotify, and YouTube
Follow @jaclynlondonrd on

Liberty Wingspan's Podcasts
What's In the Microwave: mochi

Liberty Wingspan's Podcasts

Play Episode Listen Later Mar 6, 2025 9:30


In this episode, sophomore Natalie Marshall and juniors Nidhi Thomas and Christina Huang go from sediment to cement...

DLWeekly Podcast - Disneyland News and Information
DLW 378: A Teenager's First Disneyland Trip

DLWeekly Podcast - Disneyland News and Information

Play Episode Listen Later Mar 5, 2025 112:09


This week, some great news about the Disneyland Railroad and a returning tour, Sip & Savor pass tips, perks, and more, Food and Wine merchandise, D23 Gold member monthly streams, we talk to Alex about his first experience at Disneyland, and more! Please support the show if you can by going to https://www.dlweekly.net/support/. Check out all of our current partners and exclusive discounts at https://www.dlweekly.net/promos.

News:

The Sip & Savor pass is back this year for the Food & Wine Festival, so how can you get the best value? This year, you can't lose! To get the most out of the pass, which is $63, or $58 for Key Holders, you need each item to be over $7.75 (or $7.25 for Key Holders). Most of the items are above this threshold. Some of the non-alcoholic drinks and desserts are closer to the break-even point, so look out for those. Another tip – order at one booth and take the receipt to the other booths for redemption. – https://www.micechat.com/381303-make-the-most-of-your-sip-savor-pass-at-california-adventure-food-wine-festival/

Sip & Savor pass purchasers should be sure to get the free picnic plate that comes with the pass. The plate is blue, and shaped like Mickey, with park icons printed on it. When we got our pass, this was not available, but it was the next day. – https://www.micechat.com/410123-disneyland-update-crowd-crush-festival-feasts-disneyland-delays/

This year there is a lot of good Food and Wine Fest merchandise! From a Mickey Ear headband, to kitchenware, to toys and bags, there is something for everyone. – https://wdwnt.com/2025/02/new-2025-disney-california-adventure-food-wine-festival-merchandise-includes-mickey-ear-headband-spirit-jersey-and-more/

Railroad fans should be very happy about this story – the Disneyland Railroad Tour is returning. Starting March 21st, guests will be able to take this tour again, which is 90 minutes and includes a ride inside the Lilly Belle. The Lilly Belle is the last train car from Disneyland's opening day and was decorated by Lillian Disney herself. The tour also includes a treat, a tour of the roundhouse backstage, a meet and greet with a train engineer, and a special keepsake. The tour is $145 per person, with reservations open now. – https://www.disneyfoodblog.com/2025/02/28/a-fan-favorite-tour-is-returning-soon-at-disneyland-resort/

The signaling building that caught fire back in 2022 at the New Orleans Square station has finally been removed. There are still walls up around the former location, but the building is gone. In other train news, the trains have been cycling around the replaced track and should be opening any time now. – https://www.micechat.com/410123-disneyland-update-crowd-crush-festival-feasts-disneyland-delays/

Indiana Jones just had its 30th anniversary, and Club 33 is celebrating! For Weeklyteers lucky enough to have access to Club 33, there is an Indiana Jones Ceramic Tiki Mug that features snakes, skulls, and temple emblems. It looks straight out of the temple itself! The mug is $85 and comes with a “Why did it have to be snakes?” cocktail. – https://wdwnt.com/2025/03/disneyland-club-33-celebrating-indiana-jones-adventure-anniversary-with-new-ceramic-mug/

Fans of the character headbands that have popped up around the resort in the last few months will be excited to hear about some additional characters! Baymax and Mochi (the cat from Big Hero 6) are now available to adorn your headband. – https://wdwnt.com/2025/02/baymax-and-mochi-plush-added-to-custom-character-headband-experience-at-disneyland-resort/

D23 Gold members have a cool, monthly opportunity to view an exclusive presentation. The D23 Gold Theater is accessible at the link in our show notes for D23 Gold members. Each month, a different behind-the-scenes presentation will be streamed. February's topic was 70 Years of Disneyland with Don Hahn and Christopher Merritt. Next month is The Walt Disney Archives Presents Weird Disney on March 27th. – https://d23.com/events/goldtheater/

SnackChat: Food and Wine Festival
Discussion Topic: Alex's first time at Disneyland

Anime Protagonist Podcast
169 - This Little [WEEB] Went To Market

Anime Protagonist Podcast

Play Episode Listen Later Mar 2, 2025 114:37


Almost nice. This week, the boys take a look at a KyoAni darling anime: Tamako Market. Does it hold up to the reputation the studio has with the anime community? Also, the boys rate snacks (including mochi), our listeners are once again presented with a fantastic choice, and more!

Support AniPro:
Patreon: https://www.patreon.com/AniProPod
Send us a Mailbag: https://anipropod.com/mailbag
Use code "ANIPRO" for $5 off your first #TokyoTreat box through our link: https://tokyotreat.com/?rfsn=7695251.3317f

Follow AniPro:
X: https://twitter.com/AniProPod
Instagram: https://www.instagram.com/anipropod
Discord: https://discord.gg/dV5tMCWvM7

Next Reviews:
Anime: Umamusume: Pretty Derby → Kids on the Slope
Manga: Love Bullet

Tracks:
Opening Theme: "Shibuya"
Bumper Track: "moonstruck girl", leon chang
Music licensed by slip.stream

Timestamps:
00:00:00 - AniProPod #169 Intro
00:07:38 - Little Caesars Pizza
00:12:03 - Canadian Hockey Win
00:18:45 - Mochi & Doritos
00:25:04 - Oreos & Pringles
00:33:28 - Reese's & Goldfish
00:39:23 - M&M's & Gummy Bears
00:44:33 - Listeners' Choice Nominations
00:54:39 - Mailbag: AniPro Band
00:57:05 - Tamako Market Review Intro
01:00:34 - Recommendations & Fave Characters
01:06:20 - Dere & The Community
01:11:58 - KyoAni Characters
01:17:12 - OP, ED, & Our Expectations
01:26:52 - To Binge or Not To Binge?
01:32:32 - Enough Plot & Comedy Thoughts
01:39:24 - Character Designs & Romance
01:44:26 - Final Thoughts & Ratings
01:51:56 - Wrap-Up

Breakfast With Tiffany Show
EP 239 "Love Beyond Borders: Navigating Identity, Culture, and Partnership" (PART 1)

Breakfast With Tiffany Show

Play Episode Listen Later Feb 18, 2025 34:09


Send us a text

Support the show

Breakfast With Tiffany Show Official Facebook Page ~ https://www.facebook.com/breakfastwithtiffanyshow
Tiffany's Instagram Account ~ https://www.instagram.com/tiffanyrossdaleofficial/
Breakfast With Tiffany Show Youtube Channel ~ https://bit.ly/3vIVzhE
Breakfast With Tiffany Show Official Page ~ https://www.tiffanyrossdale.com/podcast
For questions, requests, collaborations and comments, feel free to reach us via our e-mail ~ breakfastwithtiffanyshow@outlook.com
SUBSCRIBE and SUPPORT us here ~ https://www.buzzsprout.com/1187534/supporters/new

Anime Protagonist Podcast
167 - Checking In With the [WEEB]s

Anime Protagonist Podcast

Play Episode Listen Later Feb 16, 2025 119:40


Winter 2025 is well underway, and the boys have been watching (mostly) everything the current season of anime has to offer. What's good? What's bad? What's mid? Stay tuned and find out! Also, do titles ruin stories, Pokémon lives rent-free in the minds (and hearts) of many, the boys contemplate the inner, psychological complexities of 50% robot, 50% cop, and more!

Support AniPro:
Patreon: https://www.patreon.com/AniProPod
Send us a Mailbag: https://anipropod.com/mailbag
Use code "ANIPRO" for $5 off your first #TokyoTreat box through our link: https://tokyotreat.com/?rfsn=7695251.3317f

Follow AniPro:
Twitter: https://twitter.com/AniProPod
Instagram: https://www.instagram.com/anipropod
Discord: https://discord.gg/dV5tMCWvM7

Next Reviews:
Anime: Tamako Market → Umamusume: Pretty Derby → Kids on the Slope
Manga: 20th Century Boys → Love Bullet

Tracks:
Opening Theme: "Shibuya"
Bumper Track: "MOCHI", bear bear & friends
Music licensed by slip.stream

Timestamps:
00:00:00 - AniProPod 167 Intro
00:05:59 - Do Titles Affect Our Viewing?
00:13:15 - Of Prunes & Pokemon
00:20:52 - Cole's Robocop Game
00:24:42 - Solo Leveling Season 2
00:29:20 - Apothecary Diaries Season 2
00:31:53 - Dr. Stone: Science Future
00:36:07 - 100 GF's Season 2
00:38:46 - My Happy Marriage S2
00:41:16 - Bang Dream: Ave Mujica
00:44:24 - Re:Zero Season 3
00:46:14 - Link Click Bridon Arc
00:51:32 - Austin Powers
00:54:13 - Sakamoto Days
01:00:55 - Getting Married To A Girl I Hate In My Class
01:03:31 - Zenshu
01:08:45 - Guild Receptionist
01:13:18 - Honey Lemon Soda
01:16:05 - Medaka Kuroiwa Is Impervious..
01:20:16 - I Have A Crush At Work
01:23:06 - Ameku M.D.: Doctor Detective
01:25:59 - Welcome To Japan Ms. Elf
01:28:04 - Ubel Blatt
01:31:43 - Medalist
01:35:54 - Otaku Neet Kunoichi
01:39:01 - Okitsura: Fell In Love With An Okinawan Girl
01:41:33 - Flower & Asura
01:43:45 - Red Ranger Becomes an Adventurer
01:47:08 - Momentary Lily
01:51:26 - Sorairo Utility / Tasokare Hotel

Abroad in Japan
Japan's Deadliest Dish Explained

Abroad in Japan

Play Episode Listen Later Jan 29, 2025 29:10


Mochi wish ya girlfriend was HOT LIKE ME? After Chris' extensive travels, later than scheduled *plays train sound effect* the Abroad In Japan Podcast will return this Monday 3rd of Feb! Hosted on Acast. See acast.com/privacy for more information.

Seattle Kitchen
Hot Stove Society: Coconut Prawns + Mochi Donuts and Seasonal Pastries

Seattle Kitchen

Play Episode Listen Later Jan 17, 2025 89:00


We’re diving into Coconut Prawns! // Chef Edouardo Jordan, founder of Food with Roots, joins us to talk about the Soul of Seattle Fundraiser on February 8th // It’s National Soup Month, and we’re exploring the ultimate comfort food: Chicken Soup // Fred Ness from Dahlia Bakery stops by to chat about Mochi Donuts and Seasonal Pastries // Grab your skewers—it’s Fondue Party season! // Back by popular demand, Rachel Belle shares her culinary escapades for the New Year // And of course, we wrap up with Rub with Love Food for Thought Tasty Trivia!

The Morning Stream
TMS 2763: Mochi Holes

The Morning Stream

Play Episode Listen Later Jan 14, 2025 82:07


Your Vegas is Showing. Sausage Talk. A me AND a knee problem. I Don't Like Burpeeeeeees! Suitcase lady. I have the loneliness gene. Spared no boob expense. Popcorn Shrimp Without the Popcorn. Las Vegas, Those Aren't Real. Dominant in the word cloud. Very Distinguishable. Goonbots. Crotonana- De Vil. Baby shrimps doo doo doo doo do doo. In the fridge with Travis and more on this episode of The Morning Stream. Hosted on Acast. See acast.com/privacy for more information.

The FrogPants Studios Ultra Feed!
TMS 2763: Mochi Holes

The FrogPants Studios Ultra Feed!

Play Episode Listen Later Jan 14, 2025 82:07


Your Vegas is Showing. Sausage Talk. A me AND a knee problem. I Don't Like Burpeeeeeees! Suitcase lady. I have the loneliness gene. Spared no boob expense. Popcorn Shrimp Without the Popcorn. Las Vegas, Those Aren't Real. Dominant in the word cloud. Very Distinguishable. Goonbots. Crotonana- De Vil. Baby shrimps doo doo doo doo do doo. In the fridge with Travis and more on this episode of The Morning Stream. Hosted on Acast. See acast.com/privacy for more information.

Artifice
Ep. 197: Zack Davisson

Artifice

Play Episode Listen Later Jan 14, 2025 127:06


Zack Davisson is an award-winning translator, writer, and folklorist. He is the author of The Ultimate Guide to Japanese Yokai, Kaibyo: The Supernatural Cats of Japan, Yurei: The Japanese Ghost, The Art of Star Wars Visions, and Manga: A Visual Guide. He co-writes Ultimate X-Men with Peach Momoko for Marvel Comics. His works have been translated into multiple languages. Zack has translated globally renowned manga such as Go Nagai's Devilman and Cutie Honey, Leiji Matsumoto's Space Battleship Yamato and Captain Harlock, and Satoshi Kon's Opus. He translates Shigeru Mizuki's work, such as Kitaro and Showa: A History of Japan, and currently translates Gou Tanabe's Lovecraft adaptations and Nadatani Wataru's Cat + Gamer. Zack has lectured on manga, folklore, and translation at Duke University, the Annapolis Naval Academy, Università Ca' Foscari Venezia, UCLA, and the University of Washington, and has contributed to exhibitions at the Museum of International Folk Art, Wereldmuseum Rotterdam, Världskulturmuseerna Stockholm, and the Art Gallery of New South Wales. Zack lives in Seattle, WA with his wife Miyuki, dog Mochi, cat Shere Khan, and several ghosts.
Bluesky: https://bsky.app/profile/zackdavisson.com
Website: www.zackdavisson.com

The FrogPants Studios Ultra Feed!
The MONDAY Show: Mochi, NOchi

The FrogPants Studios Ultra Feed!

Play Episode Listen Later Jan 13, 2025 64:19


We found a box of stuff someone sent us! Carter made some bad gluten balls. The new microwave is way hotter than the last. The relics in the comedy club. Carter and Tay rapping in the car, and LOTS more! Including some really great texts and emails this week! Hosted on Acast. See acast.com/privacy for more information.

The MONDAY Show
The MONDAY Show: Mochi, NOchi

The MONDAY Show

Play Episode Listen Later Jan 13, 2025 64:19


We found a box of stuff someone sent us! Carter made some bad gluten balls. The new microwave is way hotter than the last. The relics in the comedy club. Carter and Tay rapping in the car, and LOTS more! Including some really great texts and emails this week! Hosted on Acast. See acast.com/privacy for more information.

Japanese with K
#162 Japanese Mochi, Korean Tteok / 餅とトッポギ

Japanese with K

Play Episode Listen Later Jan 6, 2025 14:21


※ Scripts will be posted to Patreon and Japanese with K within two days! Paid members will have access to English subtitles and Japanese scripts in two versions: one with hiragana and one without hiragana. In order to sustain this endeavor, K requires support from all of you. If you enjoy this podcast, please consider supporting K. High ratings are also greatly appreciated. You can provide support through Patreon (payment service: PayPal). You can provide support through Japanese with K (payment service: Stripe).

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open!

When we first started Latent Space, in the lightning round we'd always ask guests: “What's your favorite AI product?” The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation.

In open source land, Stable Diffusion was congregating around AUTOMATIC1111 as the de facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with a lot more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground.

One of the people involved with that community was comfyanonymous, who was also part of the Stability team in 2023. He decided to build an alternative called ComfyUI, now one of the fastest-growing open source projects in generative images and the preferred Day 1 partner for folks like Black Forest Labs's Flux Tools. The idea behind it was simple: “Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use.”

Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows, since you can break down every step of the process and then chain many of them together rather than manually switching between tools. You can also restart execution halfway instead of from the beginning, which can save a lot of time when using larger models.

To give you an idea of some of the new use cases that this type of UI enables:
* Sketch something → Generate an image with SD from the sketch → feed it into SD Video to animate
* Generate an image of an object → Turn it into a 3D asset → Feed it into interactive experiences
* Input audio → Generate audio-reactive videos

Their Examples page also includes some of the more common use cases like AnimateDiff, etc. They recently launched the Comfy Registry, an online library of different nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 GitHub stars, and as the community grows, some of the projects that people build have gotten quite complex.

The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node. We're seeing the rise of AI Engineering applied to art.

Major Tom's ComfyUI Resources from the Latent Space Discord

Major shoutouts to Major Tom on the LS Discord, an image generation expert, who offered these pointers:
* “best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance. It will be perfect tool when conflicting nodes will be resolved”
* AP Workflows from Alessandro Perili are a nice example of an all-in-one train-evaluate-generate system built atop Comfy
* ComfyUI YouTubers to learn from:
  * @sebastiankamph
  * @NerdyRodent
  * @OlivioSarikas
  * @sedetweiler
  * @pixaroma
* ComfyUI Nodes to check out:
  * https://github.com/kijai/ComfyUI-IC-Light
  * https://github.com/MrForExample/ComfyUI-3D-Pack
  * https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait
  * https://github.com/pydn/ComfyUI-to-Python-Extension
  * https://github.com/THtianhao/ComfyUI-Portrait-Maker
  * https://github.com/ssitu/ComfyUI_NestedNodeBuilder
  * https://github.com/longgui0318/comfyui-magic-clothing
  * https://github.com/atmaranto/ComfyUI-SaveAsScript
  * https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
  * https://github.com/AIFSH/ComfyUI-FishSpeech
  * https://github.com/coolzilj/ComfyUI-Photopea
  * https://github.com/lks-ai/anynode
* Sarav: https://www.youtube.com/@mickmumpitz/videos (applied stuff)
* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)
* Look for the ComfyUI node for https://github.com/magic-quill/MagicQuill
* “Comfy for Video” resources:
  * Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDiff, LivePortrait, etc.
  * ComfyUI node support like LTX https://github.com/Lightricks/ComfyUI-LTXVideo, and HunyuanVideo
  * FloraFauna AI
* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/

Full YouTube Episode

As usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)

Timestamps
* 00:00:04 Introduction of hosts and anonymous guest
* 00:00:35 Origins of Comfy UI and early Stable Diffusion landscape
* 00:02:58 Comfy's background and development of high-res fix
* 00:05:37 Area conditioning and compositing in image generation
* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)
* 00:11:10 Closed source model APIs and community discussions on SD versions
* 00:14:41 LoRAs and textual inversion in image generation
* 00:18:43 Evaluation methods in the Comfy community
* 00:20:05 CLIP models and text encoders in image generation
* 00:23:05 Prompt weighting and negative prompting
* 00:26:22 Comfy UI's unique features and design choices
* 00:31:00 Memory management in Comfy UI
* 00:33:50 GPU market share and compatibility issues
* 00:35:40 Node design and parameter settings in Comfy UI
* 00:38:44 Custom nodes and community contributions
* 00:41:40 Video generation models and capabilities
* 00:44:47 Comfy UI's development timeline and rise to popularity
* 00:48:13 Current state of Comfy UI team and future plans
* 00:50:11 Discussion on other Comfy startups and potential text generation support

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.

Comfy [00:00:19]: Hello.

swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?

Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.

Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.

swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this?
I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was Automatic 111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up and comer, and your main feature was the flowchart. Can you just kind of rewind to that moment, that year and like, you know, how you looked at the landscape there and decided to start Comfy?Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how Diffusion works. I didn't know how Diffusion models work, how any of this works, so.swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?Comfy [00:01:49]: No, I was doing basically nothing interesting. Crud, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the Node graph interface before I started Comfy UI. It was just, I just thought it was like, oh, like, what's the best way to represent the Diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the Node interface. So how I got started was, yeah, so basic October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.Alessio [00:02:58]: As we all did. Yeah.Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fixed in auto, which was for those that don't know, the high-res fix is just since the Diffusion models back then could only generate that low-resolution. So what you would do, you would generate low-resolution image, then upscale, then refine it again. And that was kind of the hack to generate high-resolution images. I really liked generating. Like higher resolution images. So I was experimenting with that. And so I modified the code a bit. Okay. What happens if I, if I use different samplers on the second pass, I was edited the code of auto. So what happens if I use a different sampler? What happens if I use a different, like a different settings, different number of steps? And because back then the. The high-res fix was very basic, just, so. Yeah.swyx [00:04:05]: Now there's a whole library of just, uh, the upsamplers.Comfy [00:04:08]: I think, I think they added a bunch of, uh, of options to the high-res fix since, uh, since, since then. But before that was just so basic. So I wanted to go further. I wanted to try it. What happens if I use a different model for the second, the second pass? And then, well, then the auto code base was, wasn't good enough for. Like, it would have been, uh, harder to implement that in the auto interface than to create my own interface. 
So that's when I decided to create my own. And you were doing that mostly on your own when you started, or did you already have kind of like a subgroup of people? No, I was, uh, on my own because, because it was just me experimenting with stuff. So yeah, that was it. Then, so I started writing the code January one. 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.Alessio [00:05:11]: And what's, what's the name? Comfy UI right away or? Yeah.Comfy [00:05:14]: Comfy UI. The reason the name, my name is Comfy is people thought my pictures were comfy, so I just, uh, just named it, uh, uh, it's my Comfy UI. So yeah, that's, uh,swyx [00:05:27]: Is there a particular segment of the community that you targeted as users? Like more intensive workflow artists, you know, compared to the automatic crowd or, you know,Comfy [00:05:37]: This was my way of like experimenting with, uh, with new things, like the high risk fixed thing I mentioned, which was like in Comfy, the first thing you could easily do was just chain different models together. And then one of the first things, I think the first times it got a bit of popularity was when I started experimenting with the different, like applying. Prompts to different areas of the image. Yeah. I called it area conditioning, posted it on Reddit and it got a bunch of upvotes. So I think that's when, like, when people first learned of Comfy UI.swyx [00:06:17]: Is that mostly like fixing hands?Comfy [00:06:19]: Uh, no, no, no. That was just, uh, like, let's say, well, it was very, well, it still is kind of difficult to like, let's say you want a mountain, you have an image and then, okay. I'm like, okay. I want the mountain here and I want the, like a, a Fox here.swyx [00:06:37]: Yeah. So compositing the image. Yeah.Comfy [00:06:40]: My way was very easy. It was just like, oh, when you run the diffusion process, you kind of generate, okay. You do pass one pass through the diffusion, every step you do one pass. Okay. This place of the image with this brand, this space, place of the image with the other prop. And then. The entire image with another prop and then just average everything together, every step, and that was, uh, area composition, which I call it. And then, then a month later, there was a paper that came out called multi diffusion, which was the same thing, but yeah, that's, uh,Alessio [00:07:20]: could you do area composition with different models or because you're averaging out, you kind of need the same model.Comfy [00:07:26]: Could do it with, but yeah, I hadn't implemented it. For different models, but, uh, you, you can do it with, uh, with different models if you want, as long as the models share the same latent space, like we, we're supposed to ring a bell every time someone says, yeah, like, for example, you couldn't use like Excel and SD 1.5, because those have a different latent space, but like, uh, yeah, like SD 1.5 models, different ones. You could, you could do that.swyx [00:07:59]: There's some models that try to work in pixel space, right?Comfy [00:08:03]: Yeah. They're very slow. Of course. That's the problem. That that's the, the reason why stable diffusion actually became like popular, like, cause was because of the latent space.swyx [00:08:14]: Small and yeah. Because it used to be latent diffusion models and then they trained it up.Comfy [00:08:19]: Yeah. Cause a pixel pixel diffusion models are just too slow. So. 
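
The area composition approach described above amounts to running the same denoising step once per regional prompt and then averaging the predictions where the regions overlap. A rough sketch of that idea, assuming a generic `model(latent, t, cond)` denoiser and simple 0/1 region masks (the helper names are hypothetical, not ComfyUI's actual internals):

```python
# Rough sketch of the "area composition" idea described above: run the same
# denoising step once per regional prompt, then blend the predictions by the
# region masks. Illustrative only; this is not ComfyUI's actual implementation.
import torch

def area_composition_step(model, latent, t, regions):
    """regions: list of (prompt_cond, mask) pairs, where each mask is a 0/1
    tensor shaped like the latent; masks are assumed to cover the whole image
    and may overlap."""
    blended = torch.zeros_like(latent)
    coverage = torch.zeros_like(latent)
    for cond, mask in regions:
        pred = model(latent, t, cond)   # one prediction per regional prompt
        blended += pred * mask          # keep it only where this region applies
        coverage += mask
    # Average wherever regions overlap.
    return blended / coverage.clamp(min=1.0)
```

MultiDiffusion, the paper mentioned just above, formalizes essentially this kind of per-step, per-region blending.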
Yeah.swyx [00:08:25]: Have you ever tried to talk to like, like stability, the latent diffusion guys, like, you know, Robin Rombach, that, that crew. Yeah.Comfy [00:08:32]: Well, I used to work at stability.swyx [00:08:34]: Oh, I actually didn't know. Yeah.Comfy [00:08:35]: I used to work at stability. I got, uh, I got hired, uh, in June, 2023.swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay. Yeah.Comfy [00:08:46]: So the, the reason I was hired is because they were doing, uh, SDXL at the time and they were basically SDXL. I don't know if you remember it was a base model and then a refiner model. Basically they wanted to experiment, like chaining them together. And then, uh, they saw, oh, right. Oh, this, we can use this to do that. Well, let's hire that guy.swyx [00:09:10]: But they didn't, they didn't pursue it for like SD3. What do you mean? Like the SDXL approach. Yeah.Comfy [00:09:16]: The reason for that approach was because basically they had two models and then they wanted to publish both of them. So they, they trained one on. Lower time steps, which was the refiner model. And then they, the first one was trained normally. And then they went during their test, they realized, oh, like if we string these models together are like quality increases. So let's publish that. It worked. Yeah. But like right now, I don't think many people actually use the refiner anymore, even though it is actually a full diffusion model. Like you can use it on its own. And it's going to generate images. I don't think anyone, people have mostly forgotten about it. But, uh.Alessio [00:10:05]: Can we talk about models a little bit? So stable diffusion, obviously is the most known. I know flux has gotten a lot of traction. Are there any underrated models that people should use more or what's the state of the union?Comfy [00:10:17]: Well, the, the latest, uh, state of the art, at least, yeah, for images there's, uh, yeah, there's flux. There's also SD3.5. SD3.5 is two models. There's a, there's a small one, 2.5B and there's the bigger one, 8B. So it's, it's smaller than flux. So, and it's more, uh, creative in a way, but flux, yeah, flux is the best. People should give SD3.5 a try cause it's, uh, it's different. I won't say it's better. Well, it's better for some like specific use cases. Right. If you want some to make something more like creative, maybe SD3.5. If you want to make something more consistent and flux is probably better.swyx [00:11:06]: Do you ever consider supporting the closed source model APIs?Comfy [00:11:10]: Uh, well, they, we do support them as custom nodes. We actually have some, uh, official custom nodes from, uh, different. Ideogram.swyx [00:11:20]: Yeah. I guess DALI would have one. Yeah.Comfy [00:11:23]: That's, uh, it's just not, I'm not the person that handles that. Sure.swyx [00:11:28]: Sure. Quick question on, on SD. There's a lot of community discussion about the transition from SD1.5 to SD2 and then SD2 to SD3. People still like, you know, very loyal to the previous generations of SDs?Comfy [00:11:41]: Uh, yeah. SD1.5 then still has a lot of, a lot of users.swyx [00:11:46]: The last based model.Comfy [00:11:49]: Yeah. Then SD2 was mostly ignored. It wasn't, uh, it wasn't a big enough improvement over the previous one. Okay.swyx [00:11:58]: So SD1.5, SD3, flux and whatever else. SDXL. SDXL.Comfy [00:12:03]: That's the main one. Stable cascade. Stable cascade. That was a good model. 
But, uh, that's, uh, the problem with that one is, uh, it got, uh, like SD3 was announced one week after. Yeah.swyx [00:12:16]: It was like a weird release. Uh, what was it like inside of stability actually? I mean, statute of limitations. Yeah. The statute of limitations expired. You know, management has moved. So it's easier to talk about now. Yeah.Comfy [00:12:27]: And inside stability, actually that model was ready, uh, like three months before, but it got, uh, stuck in, uh, red teaming. So basically the product, if that model had released or was supposed to be released by the authors, then it would probably have gotten very popular since it's a, it's a step up from SDXL. But it got all of its momentum stolen. It got stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it's, uh, yeah. It was a good model, at least, uh, completely mostly ignored for some reason. Likeswyx [00:13:07]: I think the naming as well matters. It seemed like a branch off of the main, main tree of development. Yeah.Comfy [00:13:15]: Well, it was different researchers that did it. Yeah. Yeah. Very like, uh, good model. Like it's the Worcestershire authors. I don't know if I'm pronouncing it correctly. Yeah. Yeah. Yeah.swyx [00:13:28]: I actually met them in Vienna. Yeah.Comfy [00:13:30]: They worked at stability for a bit and they left right after the Cascade release.swyx [00:13:35]: This is Dustin, right? No. Uh, Dustin's SD3. Yeah.Comfy [00:13:38]: Dustin is a SD3 SDXL. That's, uh, Pablo and Dome. I think I'm pronouncing his name correctly. Yeah. Yeah. Yeah. Yeah. That's very good.swyx [00:13:51]: It seems like the community is very, they move very quickly. Yeah. Like when there's a new model out, they just drop whatever the current one is. And they just all move wholesale over. Like they don't really stay to explore the full capabilities. Like if, if the stable cascade was that good, they would have AB tested a bit more. Instead they're like, okay, SD3 is out. Let's go. You know?Comfy [00:14:11]: Well, I find the opposite actually. The community doesn't like, they only jump on a new model when there's a significant improvement. Like if there's a, only like a incremental improvement, which is what, uh, most of these models are going to have, especially if you, cause, uh, stay the same parameter count. Yeah. Like you're not going to get a massive improvement, uh, into like, unless there's something big that, that changes. So, uh. Yeah.swyx [00:14:41]: And how are they evaluating these improvements? Like, um, because there's, it's a whole chain of, you know, comfy workflows. Yeah. How does, how does one part of the chain actually affect the whole process?Comfy [00:14:52]: Are you talking on the model side specific?swyx [00:14:54]: Model specific, right? But like once you have your whole workflow based on a model, it's very hard to move.Comfy [00:15:01]: Uh, not, well, not really. Well, it depends on your, uh, depends on their specific kind of the workflow. Yeah.swyx [00:15:09]: So I do a lot of like text and image. Yeah.Comfy [00:15:12]: When you do change, like most workflows are kind of going to be complete. Yeah. It's just like, you might have to completely change your prompt completely change. Okay.swyx [00:15:24]: Well, I mean, then maybe the question is really about evals. Like what does the comfy community do for evals? Just, you know,Comfy [00:15:31]: Well, that they don't really do that. It's more like, oh, I think this image is nice. 
So that's, uh,swyx [00:15:38]: They just subscribe to Fofr AI and just see like, you know, what Fofr is doing. Yeah.Comfy [00:15:43]: Well, they just, they just generate like it. Like, I don't see anyone really doing it. Like, uh, at least on the comfy side, comfy users, they, it's more like, oh, generate images and see, oh, this one's nice. It's like, yeah, it's not, uh, like the, the more, uh, like, uh, scientific, uh, like, uh, like checking that's more on specifically on like model side. If, uh, yeah, but there is a lot of, uh, vibes also, cause it is a like, uh, artistic, uh, you can create a very good model that doesn't generate nice images. Cause most images on the internet are ugly. So if you, if that's like, if you just, oh, I have the best model at 10th giant, it's super smart. I created on all the, like I've trained on just all the images on the internet. The images are not going to look good. So yeah.Alessio [00:16:42]: Yeah.Comfy [00:16:43]: They're going to be very consistent. But yeah. People like, it's not going to be like the, the look that people are going to be expecting from, uh, from a model. So. Yeah.swyx [00:16:54]: Can we talk about LoRa's? Cause we thought we talked about models then like the next step is probably LoRa's. Before, I actually, I'm kind of curious how LoRa's entered the tool set of the image community because the LoRa paper was 2021. And then like, there was like other methods like textual inversion that was popular at the early SD stage. Yeah.Comfy [00:17:13]: I can't even explain the difference between that. Yeah. Textual inversions. That's basically what you're doing is you're, you're training a, cause well, yeah. Stable diffusion. You have the diffusion model, you have text encoder. So basically what you're doing is training a vector that you're going to pass to the text encoder. It's basically you're training a new word. Yeah.swyx [00:17:37]: It's a little bit like representation engineering now. Yeah.Comfy [00:17:40]: Yeah. Basically. Yeah. You're just, so yeah, if you know how like the text encoder works, basically you have, you take your, your words of your product, you convert those into tokens with the tokenizer and those are converted into vectors. Basically. Yeah. Each token represents a different vector. So each word presents a vector. And those, depending on your words, that's the list of vectors that get passed to the text encoder, which is just. Yeah. Yeah. I'm just a stack of, of attention. Like basically it's a very close to LLM architecture. Yeah. Yeah. So basically what you're doing is just training a new vector. We're saying, well, I have all these images and I want to know which word does that represent? And it's going to get like, you train this vector and then, and then when you use this vector, it hopefully generates. Like something similar to your images. Yeah.swyx [00:18:43]: I would say it's like surprisingly sample efficient in picking up the concept that you're trying to train it on. Yeah.Comfy [00:18:48]: Well, people have kind of stopped doing that even though back as like when I was at Stability, we, we actually did train internally some like textual versions on like T5 XXL actually worked pretty well. But for some reason, yeah, people don't use them. 
And also they might also work like, like, yeah, this is something and probably have to test, but maybe if you train a textual version, like on T5 XXL, it might also work with all the other models that use T5 XXL because same thing with like, like the textual inversions that, that were trained for SD 1.5, they also kind of work on SDXL because SDXL has the, has two text encoders. And one of them is the same as the, as the SD 1.5 CLIP-L. So those, they actually would, they don't work as strongly because they're only applied to one of the text encoders. But, and the same thing for SD3. SD3 has three text encoders. So it works. It's still, you can still use your textual version SD 1.5 on SD3, but it's just a lot weaker because now there's three text encoders. So it gets even more diluted. Yeah.

swyx [00:20:05]: Do people experiment a lot on, just on the CLIP side, there's like Siglip, there's Blip, like do people experiment a lot on those?

Comfy [00:20:12]: You can't really replace. Yeah.

swyx [00:20:14]: Because they're trained together, right? Yeah.

Comfy [00:20:15]: They're trained together. So you can't like, well, what I've seen people experimenting with is a long CLIP. So basically someone fine tuned the CLIP model to accept longer prompts.

swyx [00:20:27]: Oh, it's kind of like long context fine tuning. Yeah.

Comfy [00:20:31]: So, so like it's, it's actually supported in Core Comfy.

swyx [00:20:35]: How long is long?

Comfy [00:20:36]: Regular CLIP is 77 tokens. Yeah. Long CLIP is 256. Okay. So, but the hack that like you've, if you use stable diffusion 1.5, you've probably noticed, oh, it still works if I, if I use long prompts, prompts longer than 77 words. Well, that's because the hack is to just, well, you split, you split it up in chunks of 77, your whole big prompt. Let's say you, you give it like the massive text, like the Bible or something, and it would split it up in chunks of 77 and then just pass each one through the CLIP and then just concatenate everything together at the end. It's not ideal, but it actually works.

swyx [00:21:26]: Like the positioning of the words really, really matters then, right? Like this is why order matters in prompts. Yeah.

Comfy [00:21:33]: Yeah. Like it, it works, but it's, it's not ideal, but it's what people expect. Like if, if someone gives a huge prompt, they expect at least some of the concepts at the end to be like present in the image. But usually when they give long prompts, they, they don't, they like, they don't expect like detail, I think. So that's why it works very well.
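
The chunking hack described here (split a long prompt into 77-token chunks, run each chunk through CLIP, and concatenate the results along the sequence dimension) can be sketched roughly as follows with the Hugging Face CLIP text encoder used by SD 1.5. This is only an illustration of the idea; A1111 and ComfyUI handle BOS/EOS tokens, padding, and per-chunk weighting more carefully than this:

```python
# Illustrative sketch of the 77-token chunking hack for long prompts.
# Assumes the Hugging Face CLIP text encoder used by SD 1.5; real UIs handle
# BOS/EOS tokens and per-chunk weighting more carefully than this.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str, chunk_size: int = 77) -> torch.Tensor:
    ids = tokenizer(prompt, truncation=False).input_ids
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]
    outputs = []
    with torch.no_grad():
        for chunk in chunks:
            # Pad the last chunk to the encoder's fixed 77-token window.
            pad = [tokenizer.eos_token_id] * (chunk_size - len(chunk))
            hidden = encoder(input_ids=torch.tensor([chunk + pad])).last_hidden_state
            outputs.append(hidden)                      # (1, 77, 768) each
    # Concatenate along the sequence dimension: (1, 77 * n_chunks, 768).
    return torch.cat(outputs, dim=1)
```

Because each chunk is encoded independently, a concept that straddles a chunk boundary can get split up, which is part of why word order and placement matter with this trick.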
So on T5X itself, it doesn't work at all. So. Wow.swyx [00:23:20]: Is that a problem for people? I mean, cause I'm used to just move, moving up numbers. Probably not. Yeah.Comfy [00:23:25]: Well.swyx [00:23:26]: So you just use words to describe, right? Cause it's a bigger language model. Yeah.Comfy [00:23:30]: Yeah. So. Yeah. So honestly it might be good, but I haven't seen many complaints on Flux that it's not working. So, cause I guess people can sort of get around it with, with language. So. Yeah.swyx [00:23:46]: Yeah. And then coming back to LoRa's, now the, the popular way to, to customize models is LoRa's. And I saw you also support Locon and LoHa, which I've never heard of before.Comfy [00:23:56]: There's a bunch of, cause what, what the LoRa is essentially is. Instead of like, okay, you have your, your model and then you want to fine tune it. So instead of like, what you could do is you could fine tune the entire thing, but that's a bit heavy. So to speed things up and make things less heavy, what you can do is just fine tune some smaller weights, like basically two, two matrices that when you multiply like two low rank matrices and when you multiply them together, gives a, represents a difference between trained weights and your base weights. So by training those two smaller matrices, that's a lot less heavy. Yeah.Alessio [00:24:45]: And they're portable. So you're going to share them. Yeah. It's like easier. And also smaller.Comfy [00:24:49]: Yeah. That's the, how LoRa's work. So basically, so when, when inferencing you, you get an inference with them pretty efficiently, like how ComputeWrite does it. It just, when you use a LoRa, it just applies it straight on the weights so that there's only a small delay at the base, like before the sampling to when it applies the weights and then it just same speed as, as before. So for, for inference, it's, it's not that bad, but, and then you have, so basically all the LoRa types like LoHa, LoCon, everything, that's just different ways of representing that like. Basically, you can call it kind of like compression, even though it's not really compression, it's just different ways of represented, like just, okay, I want to train a different on the difference on the weights. What's the best way to represent that difference? There's the basic LoRa, which is just, oh, let's multiply these two matrices together. And then there's all the other ones, which are all different algorithms. So. Yeah.Alessio [00:25:57]: So let's talk about LoRa. Let's talk about what comfy UI actually is. I think most people have heard of it. Some people might've seen screenshots. I think fewer people have built very complex workflows. So when you started, automatic was like the super simple way. What were some of the choices that you made? So the node workflow, is there anything else that stands out as like, this was like a unique take on how to do image generation workflows?Comfy [00:26:22]: Well, I feel like, yeah, back then everyone was trying to make like easy to use interface. Yeah. So I'm like, well, everyone's trying to make an easy to use interface.swyx [00:26:32]: Let's make a hard to use interface.Comfy [00:26:37]: Like, so like, I like, I don't need to do that, everyone else doing it. So let me try something like, let me try to make a powerful interface that's not easy to use. So.swyx [00:26:52]: So like, yeah, there's a sort of node execution engine. Yeah. Yeah. And it actually lists, it has this really good list of features of things you prioritize, right? 
Like let me see, like sort of re-executing from, from any parts of the workflow that was changed, asynchronous queue system, smart memory management, like all this seems like a lot of engineering that. Yeah.Comfy [00:27:12]: There's a lot of engineering in the back end to make things, cause I was always focused on making things work locally very well. Cause that's cause I was using it locally. So everything. So there's a lot of, a lot of thought and working by getting everything to run as well as possible. So yeah. ConfUI is actually more of a back end, at least, well, not all the front ends getting a lot more development, but, but before, before it was, I was pretty much only focused on the backend. Yeah.swyx [00:27:50]: So v0.1 was only August this year. Yeah.Comfy [00:27:54]: With the new front end. Before there was no versioning. So yeah. Yeah. Yeah.swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?Comfy [00:28:02]: Well, that's more on the front end side. That's cause before that it was just like the UI, what, cause when I first wrote it, I just, I said, okay, how can I make, like, I can do web development, but I don't like doing it. Like what's the easiest way I can slap a node interface on this. And then I found this library. Yeah. Like JavaScript library.swyx [00:28:26]: Live graph?Comfy [00:28:27]: Live graph.swyx [00:28:28]: Usually people will go for like react flow for like a flow builder. Yeah.Comfy [00:28:31]: But that seems like too complicated. So I didn't really want to spend time like developing the front end. So I'm like, well, oh, light graph. This has the whole node interface. So, okay. Let me just plug that into, to my backend.swyx [00:28:49]: I feel like if Streamlit or Gradio offered something that you would have used Streamlit or Gradio cause it's Python. Yeah.Comfy [00:28:54]: Yeah. Yeah. Yeah.Comfy [00:29:00]: Yeah.Comfy [00:29:14]: Yeah. logic and your backend logic and just sticks them together.swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main, you know, I'm a JS main, right? Okay. If you're a Python main, it's supposed to be easy.Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.swyx [00:29:30]: I see, I see. So you're mixing concerns instead of separating concerns?Comfy [00:29:34]: Well, it's because... Like frontend and backend. Frontend and backend should be well separated with a defined API. Like that's how you're supposed to do it. Smart people disagree. It just sticks everything together. It makes it easy to like a huge mess. And also it's, there's a lot of issues with Gradio. Like it's very good if all you want to do is just get like slap a quick interface on your, like to show off your ML project. Like that's what it's made for. Yeah. Like there's no problem using it. Like, oh, I have my, I have my code. I just wanted a quick interface on it. That's perfect. Like use Gradio. But if you want to make something that's like a real, like real software that will last a long time and will be easy to maintain, then I would avoid it. Yeah.swyx [00:30:32]: So your criticism is Streamlit and Gradio are the same. I mean, those are the same criticisms.Comfy [00:30:37]: Yeah, Streamlit I haven't used as much. Yeah, I just looked a bit.swyx [00:30:43]: Similar philosophy.Comfy [00:30:44]: Yeah, it's similar. It's just, it just seems to me like, okay, for quick, like AI demos, it's perfect.swyx [00:30:51]: Yeah. 
Going back to like the core tech, like asynchronous queues, slow re-execution, smart memory management, you know, anything that you were very proud of or was very hard to figure out?Comfy [00:31:00]: Yeah. The thing that's the biggest pain in the ass is probably the memory management. Yeah.swyx [00:31:05]: Were you just paging models in and out or? Yeah.Comfy [00:31:08]: Before it was just, okay, load the model, completely unload it. Then, okay, that, that works well when you, your model are small, but if your models are big and it takes sort of like, let's say someone has a, like a, a 4090, and the model size is 10 gigabytes, that can take a few seconds to like load and load, load and load, so you want to try to keep things like in memory, in the GPU memory as much as possible. What Comfy UI does right now is it. It tries to like estimate, okay, like, okay, you're going to sample this model, it's going to take probably this amount of memory, let's remove the models, like this amount of memory that's been loaded on the GPU and then just execute it. But so there's a fine line between just because try to remove the least amount of models that are already loaded. Because as fans, like Windows drivers, and one other problem is the NVIDIA driver on Windows by default, because there's a way to, there's an option to disable that feature, but by default it, like, if you start loading, you can overflow your GPU memory and then it's, the driver's going to automatically start paging to RAM. But the problem with that is it's, it makes everything extremely slow. So when you see people complaining, oh, this model, it works, but oh, s**t, it starts slowing down a lot, that's probably what's happening. So it's basically you have to just try to get, use as much memory as possible, but not too much, or else things start slowing down, or people get out of memory, and then just find, try to find that line where, oh, like the driver on Windows starts paging and stuff. Yeah. And the problem with PyTorch is it's, it's high levels, don't have that much fine-grained control over, like, specific memory stuff, so kind of have to leave, like, the memory freeing to, to Python and PyTorch, which is, can be annoying sometimes.swyx [00:33:32]: So, you know, I think one thing is, as a maintainer of this project, like, you're designing for a very wide surface area of compute, like, you even support CPUs.Comfy [00:33:42]: Yeah, well, that's... That's just, for PyTorch, PyTorch supports CPUs, so, yeah, it's just, that's not, that's not hard to support.swyx [00:33:50]: First of all, is there a market share estimate, like, is it, like, 70% NVIDIA, like, 30% AMD, and then, like, miscellaneous on Apple, Silicon, or whatever?Comfy [00:33:59]: For Comfy? Yeah. Yeah, and, yeah, I don't know the market share.swyx [00:34:03]: Can you guess?Comfy [00:34:04]: I think it's mostly NVIDIA. Right. Because, because AMD, the problem, like, AMD works horribly on Windows. Like, on Linux, it works fine. It's, it's lower than the price equivalent NVIDIA GPU, but it works, like, you can use it, you generate images, everything works. On Linux, on Windows, you might have a hard time, so, that's the problem, and most people, I think most people who bought AMD probably use Windows. They probably aren't going to switch to Linux, so... Yeah. 
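Returning for a moment to the memory-management strategy described above (estimate what sampling will need, evict only as many already-loaded models as necessary, and keep headroom so the Windows driver never silently starts paging to system RAM), a toy version might look like the following. The class and the estimation heuristic are illustrative, not ComfyUI's actual model-management code.

```python
import torch

class ModelCache:
    """Toy keep-models-resident strategy; names and heuristics are illustrative."""

    def __init__(self, headroom_bytes: int = 1 << 30):  # leave ~1 GB free to avoid driver paging
        self.loaded = []                 # models currently on the GPU, oldest first
        self.headroom = headroom_bytes

    def free_vram(self) -> int:
        free, _total = torch.cuda.mem_get_info()
        return free

    def ensure_room(self, needed_bytes: int):
        # Evict the least-recently-loaded models until the estimated sampling
        # memory fits, instead of unloading everything after every run.
        while self.loaded and self.free_vram() < needed_bytes + self.headroom:
            victim = self.loaded.pop(0)
            victim.to("cpu")
            torch.cuda.empty_cache()

    def load(self, model, estimated_sampling_bytes: int):
        self.ensure_room(estimated_sampling_bytes)
        model.to("cuda")
        self.loaded.append(model)
        return model
```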
So, until AMD actually, like, ports their, like, raw cam to, to Windows properly, and then there's actually PyTorch, I think they're, they're doing that, they're in the process of doing that, but, until they get it, they get a good, like, PyTorch raw cam build that works on Windows, it's, like, they're going to have a hard time. Yeah.Alessio [00:35:06]: We got to get George on it. Yeah. Well, he's trying to get Lisa Su to do it, but... Let's talk a bit about, like, the node design. So, unlike all the other text-to-image, you have a very, like, deep, so you have, like, a separate node for, like, clip and code, you have a separate node for, like, the case sampler, you have, like, all these nodes. Going back to, like, the making it easy versus making it hard, but, like, how much do people actually play with all the settings, you know? Kind of, like, how do you guide people to, like, hey, this is actually going to be very impactful versus this is maybe, like, less impactful, but we still want to expose it to you?Comfy [00:35:40]: Well, I try to... I try to expose, like, I try to expose everything or, but, yeah, at least for the, but for things, like, for example, for the samplers, like, there's, like, yeah, four different sampler nodes, which go in easiest to most advanced. So, yeah, if you go, like, the easy node, the regular sampler node, that's, you have just the basic settings. But if you use, like, the sampler advanced... If you use, like, the custom advanced node, that, that one you can actually, you'll see you have, like, different nodes.Alessio [00:36:19]: I'm looking it up now. Yeah. What are, like, the most impactful parameters that you use? So, it's, like, you know, you can have more, but, like, which ones, like, really make a difference?Comfy [00:36:30]: Yeah, they all do. They all have their own, like, they all, like, for example, yeah, steps. Usually you want steps, you want them to be as low as possible. But you want, if you're optimizing your workflow, you want to, you lower the steps until, like, the images start deteriorating too much. Because that, yeah, that's the number of steps you're running the diffusion process. So, if you want things to be faster, lower is better. But, yeah, CFG, that's more, you can kind of see that as the contrast of the image. Like, if your image looks too bursty. Then you can lower the CFG. So, yeah, CFG, that's how, yeah, that's how strongly the, like, the negative versus positive prompt. Because when you sample a diffusion model, it's basically a negative prompt. It's just, yeah, positive prediction minus negative prediction.swyx [00:37:32]: Contrastive loss. Yeah.Comfy [00:37:34]: It's positive minus negative, and the CFG does the multiplier. Yeah. Yeah. Yeah, so.Alessio [00:37:41]: What are, like, good resources to understand what the parameters do? I think most people start with automatic, and then they move over, and it's, like, snap, CFG, sampler, name, scheduler, denoise. Read it.Comfy [00:37:53]: But, honestly, well, it's more, it's something you should, like, try out yourself. I don't know, you don't necessarily need to know how it works to, like, what it does. Because even if you know, like, CFGO, it's, like, positive minus negative prompt. Yeah. So the only thing you know at CFG is if it's 1.0, then that means the negative prompt isn't applied. It also means sampling is two times faster. But, yeah. 
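The CFG arithmetic described above (the negative prediction pushed toward the positive prediction, with the CFG value as the multiplier, so a value of 1.0 leaves only the positive prediction) is the standard classifier-free guidance update. A minimal sketch:

```python
import torch

def cfg_combine(noise_pred_pos: torch.Tensor,
                noise_pred_neg: torch.Tensor,
                cfg_scale: float) -> torch.Tensor:
    """Classifier-free guidance as described in the conversation:
    start from the negative (unconditional) prediction and push it toward
    the positive prediction, scaled by cfg_scale.
    At cfg_scale == 1.0 the result equals the positive prediction alone,
    which is why the negative prompt has no effect there and only one
    prediction per step is needed (roughly 2x faster sampling)."""
    return noise_pred_neg + cfg_scale * (noise_pred_pos - noise_pred_neg)
```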
But other than that, it's more, like, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.Alessio [00:38:34]: Any other nodes or things you want to shout out? Like, I know the animate diff IP adapter. Those are, like, some of the most popular ones. Yeah. What else comes to mind?Comfy [00:38:44]: Not nodes, but there's, like, what I like is when some people, sometimes they make things that use ComfyUI as their backend. Like, there's a plugin for Krita that uses ComfyUI as its backend. So you can use, like, all the models that work in Comfy in Krita. And I think I've tried it once. But I know a lot of people use it, and it's probably really nice, so.Alessio [00:39:15]: What's the craziest node that people have built, like, the most complicated?Comfy [00:39:21]: Craziest node? Like, yeah. I know some people have made, like, video games in Comfy with, like, stuff like that. So, like, someone, like, I remember, like, yeah, last, I think it was last year, someone made, like, a, like, Wolfenstein 3D in Comfy. Of course. And then one of the inputs was, oh, you can generate a texture, and then it changes the texture in the game. So you can plug it to, like, the workflow. And there's a lot of, if you look there, there's a lot of crazy things people do, so. Yeah.Alessio [00:39:59]: And now there's, like, a node register that people can use to, like, download nodes. Yeah.Comfy [00:40:04]: Like, well, there's always been the, like, the ComfyUI manager. Yeah. But we're trying to make this more, like, I don't know, official, like, with, yeah, with the node registry. Because before the node registry, the, like, okay, how did your custom node get into ComfyUI manager? That's the guy running it who, like, every day he searched GitHub for new custom nodes and added dev annually to his custom node manager. So we're trying to make it less effortless. So we're trying to make it less effortless for him, basically. Yeah.Alessio [00:40:40]: Yeah. But I was looking, I mean, there's, like, a YouTube download node. There's, like, this is almost like, you know, a data pipeline more than, like, an image generation thing at this point. It's, like, you can get data in, you can, like, apply filters to it, you can generate data out.Comfy [00:40:54]: Yeah. You can do a lot of different things. Yeah. So I'm thinking, I think what I did is I made it easy to make custom nodes. So I think that helped a lot. I think that helped a lot for, like, the ecosystem because it is very easy to just make a node. So, yeah, a bit too easy sometimes. Then we have the issue where there's a lot of custom node packs which share similar nodes. But, well, that's, yeah, something we're trying to solve by maybe bringing some of the functionality into the core. Yeah. Yeah. Yeah.Alessio [00:41:36]: And then there's, like, video. People can do video generation. Yeah.Comfy [00:41:40]: Video, that's, well, the first video model was, like, stable video diffusion, which was last, yeah, exactly last year, I think. Like, one year ago. But that wasn't a true video model. So it was...swyx [00:41:55]: It was, like, moving images? Yeah.Comfy [00:41:57]: I generated video. What I mean by that is it's, like, it's still 2D Latents. It's basically what I'm trying to do. So what they did is they took SD2, and then they added some temporal attention to it, and then trained it on videos and all. So it's kind of, like, animated, like, same idea, basically. 
Why I say it's not a true video model is that you still have, like, the 2D Latents. Like, a true video model, like Mochi, for example, would have 3D Latents. Mm-hmm.Alessio [00:42:32]: Which means you can, like, move through the space, basically. It's the difference. You're not just kind of, like, reorienting. Yeah.Comfy [00:42:39]: And it's also, well, it's also because you have a temporal VAE. Mm-hmm. Also, like, Mochi has a temporal VAE that compresses on, like, the temporal direction, also. So that's something you don't have with, like, yeah, animated diff and stable video diffusion. They only, like, compress spatially, not temporally. Mm-hmm. Right. So, yeah. That's why I call that, like, true video models. There's, yeah, there's actually a few of them, but the one I've implemented in comfy is Mochi, because that seems to be the best one so far. Yeah.swyx [00:43:15]: We had AJ come and speak at the stable diffusion meetup. The other open one I think I've seen is COG video. Yeah.Comfy [00:43:21]: COG video. Yeah. That one's, yeah, it also seems decent, but, yeah. Chinese, so we don't use it. No, it's fine. It's just, yeah, I could. Yeah. It's just that there's a, it's not the only one. There's also a few others, which I.swyx [00:43:36]: The rest are, like, closed source, right? Like, Cling. Yeah.Comfy [00:43:39]: Closed source, there's a bunch of them. But I mean, open. I've seen a few of them. Like, I can't remember their names, but there's COG videos, the big, the big one. Then there's also a few of them that released at the same time. There's one that released at the same time as SSD 3.5, same day, which is why I don't remember the name.swyx [00:44:02]: We should have a release schedule so we don't conflict on each of these things. Yeah.Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day. So everything else was kind of drowned, completely drowned out. So for some reason, lots of people picked that day to release their stuff.Comfy [00:44:21]: Yeah. Which is, well, shame for those. And I think Omnijet also released the same day, which also seems interesting. Yeah. Yeah.Alessio [00:44:30]: What's Comfy? So you are Comfy. And then there's like, comfy.org. I know we do a lot of things for, like, news research and those guys also have kind of like a more open source thing going on. How do you work? Like you mentioned, you mostly work on like, the core piece of it. And then what...Comfy [00:44:47]: Maybe I should fade it in because I, yeah, I feel like maybe, yeah, I only explain part of the story. Right. Yeah. Maybe I should explain the rest. So yeah. So yeah. Basically, January, that's when the first January 2023, January 16, 2023, that's when Amphi was first released to the public. Then, yeah, did a Reddit post about the area composition thing somewhere in, I don't remember exactly, maybe end of January, beginning of February. And then someone, a YouTuber, made a video about it, like Olivio, he made a video about Amphi in March 2023. I think that's when it was a real burst of attention. And by that time, I was continuing to develop it and it was getting, people were starting to use it more, which unfortunately meant that I had first written it to do like experiments, but then my time to do experiments went down. It started going down, because people were actually starting to use it then. Like, I had to, and I said, well, yeah, time to add all these features and stuff. Yeah, and then I got hired by Stability June, 2023. 
Then I made, basically, yeah, they hired me because they wanted the SD-XL. So I got the SD-XL working very well with the UI, because they were experimenting with it. Actually, how the SDXL release worked is they released, for some reason, like they released the code first, but they didn't release the model checkpoint. So they released the code. And then, well, since the code was out, I added support for it in ComfyUI too. And then the checkpoints were basically early access. People had to sign up and they only allowed, like, people with edu emails. Like if you had an edu email, like they gave you access basically to the SDXL 0.9. And, well, that leaked. Right. Of course, because of course it's going to leak if you do that. Well, the only way people could easily use it was with Comfy. So, yeah, people started using. And then I fixed a few of the issues people had. So then the big 1.0 release happened. And, well, ComfyUI was the only way a lot of people could actually run it on their computers. Because automatic was so inefficient and bad that most people couldn't actually, like it just wouldn't work. Like because he did a quick implementation. So people were forced to use ComfyUI, and that's how it became popular because people had no choice.swyx [00:47:55]: The growth hack.Comfy [00:47:56]: Yeah.swyx [00:47:56]: Yeah.Comfy [00:47:57]: Like everywhere, like people who didn't have the 4090, they had like, who had just regular GPUs, they didn't have a choice.Alessio [00:48:05]: So yeah, I got a 4070. So think of me. And so today, what's, is there like a core Comfy team or?Comfy [00:48:13]: Uh, yeah, well, right now, um, yeah, we are hiring. Okay. Actually, so right now the core itself, it's, it's me. Uh, but the reason is all the focus has been mostly on the front end right now, because that's the thing that's been neglected for a long time. So, uh, so most of the focus right now is, uh, all on the front end, but we are, uh, yeah, we will soon get, uh, more people to like help me with the actual backend stuff. Yeah. So, no, I'm not going to say a hundred percent because that's why once the, once we have our V1 release, which will be the packaged ComfyUI with the nice interface and easy to install on Windows and hopefully Mac. Uh, yeah. Yeah. Once we have that, uh, we're going to have lots of stuff to do on the backend side and also the front end side, but, uh.Alessio [00:49:14]: What's the release that I'm on the wait list for? What's the timing?Comfy [00:49:18]: Uh, soon. Uh, soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. Yeah, and we're still going to continue doing the open source, making ComfyUI the best way to run Stable Diffusion models. At least on the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises.swyx [00:50:08]: I mean, a few questions on that. How do you feel about the other comfy startups?Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use comfy than they use something else. Yeah, that's true. It's fine. We're going to try not to... We don't want to... We want people to use comfy. Like I said, it's better that people use comfy than something else.
So as long as they use comfy, I think it helps the ecosystem. Because more people, even if they don't contribute directly, the fact that they are using comfy means that people are more likely to join the ecosystem. So, yeah.swyx [00:50:57]: And then would you ever do text?Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. So, yeah, it's something I've wanted to eventually add to core, but it's not a very high priority. But a lot of people use text for prompt enhancement and other things like that. So, yeah, it's just that my focus has always been on diffusion models. Yeah, unless some text diffusion model comes out.swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion.Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it since it fits with the whole...swyx [00:51:39]: Yeah, I mean, I imagine it's going to be closed source at Midjourney. Yeah.Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it.Alessio [00:51:54]: Cool, comfy. Thanks so much for coming on. This was fun. Bye. Get full access to Latent Space at www.latent.space/subscribe

Small Talk Kagoshima
Mochi-Caused Deaths Rise Over Japanese Holidays? | STJ 269

Small Talk Kagoshima

Play Episode Listen Later Jan 4, 2025 33:54


Support us on patreon: https://www.patreon.com/smalltalkjapan

So Japanese
Kagami Mochi: More Than Just Fancy Rice Cakes

So Japanese

Play Episode Listen Later Jan 2, 2025 31:38


Ever wondered why those two-tiered rice cakes with a tiny orange on top are everywhere in Japan during the New Year? Meet Kagami Mochi! In this episode, we unwrap the symbolism, history, and spiritual significance of this unique tradition—from Shinto rituals to hopes for family prosperity. Plus, we'll tell you why you shouldn't point a knife at it on January 11th! 日本のお正月に、なぜあの小さな橙が乗った二段重ねの鏡餅がどこにでもあるのか、不思議に思ったことはありませんか?今回は「鏡餅」について徹底解説します!このユニークな伝統の象徴や歴史、そして神道の儀式や家族繁栄への願いなど、そのスピリチュアルな意味に迫ります。そして、1月11日に鏡餅を包丁で切ってはいけない理由(硬くて切れない説‼笑)もお話しします! Support the showhttps://linktr.ee/Sojapanese

SBS Japanese - SBSの日本語放送
Sydney Fukuoka prefecture association's annual mochi event - シドニー福岡県人会、毎年恒例の餅つき大会

SBS Japanese - SBSの日本語放送

Play Episode Listen Later Dec 31, 2024 8:30


SBS Japanese visited Sydney's Fukuoka prefecture association's annual mochi event. Freshly pounded mochi was served with sweet red bean paste, coated in kinako or prepared as Fukuoka-style ozoni soup. - シドニー福岡県人会の毎年恒例、餅つき大会にお邪魔しました。つきたてのお餅はあんこを入れたり、きな粉をかけたり、福岡のお雑煮として振る舞われました。

Comfort Creatures
119: Corvid Compendium: Nutcrackers

Comfort Creatures

Play Episode Listen Later Dec 5, 2024 31:07


'Tis the season for Sugar Plum Fairies, dancing swans and, of course, Nutcrackers. But did you know the Nutcracker isn't just a decorative wooden man, but actually a kind of CORVID? Well, it is! And this week we learn all about the Nutcracker bird as we continue our Corvid Compendium. Plus, we have a very fun Ready Pet Go from Altaay and Mochi!

2 Knit Lit Chicks
Episode 295: I'm In a Bubble of Yarn

2 Knit Lit Chicks

Play Episode Listen Later Dec 4, 2024 62:44


Recorded on November 28, 2024. Book talk starts at 29:40.
Our 2024 Fall Sweater KAL is continuing. You have until January 15, 2025 to complete an adult sweater and post a photo in our FOs thread. It must have some type of sleeves - short sleeves are fine! Check out our bundles of patterns for inspiration and join the discussion on our Sweater KAL Chatter thread!
Our Zoom group is continuing. Please join us on Saturdays, 12 noon Pacific time. All the info you need is in our Ravelry group!
EVENTS
We will be at the New Years Fiber Retreat at the St Francis Retreat Center in San Juan Bautista, CA in January. Dates for NoCKRs will be April 10-13, 2025, and registration info letters have been sent out. If you haven't received one and are interested in going to the retreat, please contact Tracie.
KNITTING
Barb has finished: Yume by Isabell Kraemer, using Indigodragonfly R.O.U. Sport in the Is She All Green and Fuzzy and Mossy colorway
Tracie has finished: Scraps Chaps #6 - "Chihuahua" by Barbara Prime in Encore Worsted; ...a hint of summer by Isabell Kraemer in Fyberspates Scrumptious Lace in Jen S. Green and JuniperMoon Fibers Findley in Curacao; Knit Hat with added earflaps and pompom by Kathy Green in Berroco Vintage in the Mochi colorway; Baby/Kids Earflap Hat by Julie Hentz in Berroco Vintage in the Mochi colorway
Barb is still working on: Granito by Joji Locatelli, using Serendipidye Dyeworks 24 Carrot Fingering MCN in the Peppermint Julip colorway; Two by Two beanie by Anne Gagnon using a mystery worsted yarn in a heather gray colorway; The Market Bag by Davina Choy, using a DK white cotton pima and a DK blue cotton pima
Tracie has cast on: Koko Bean hat by Judithmarieknits in Lisa Souza Blue Face Leicester Worsted in Styx; Gno Fun Like Gnome Fun - 6 Gnaomis in Plymouth Select Worsted for class at Rumpelstiltskin LYS; Alignment sweater by Katrine Birkenwasser in Seattle Sky Dyeworks Mismated Rhododendron
Tracie is still working on: Davis #5 by Pam Allen in Western Sky Knits Merino 17 Worsted in the Nightfall colorway; Socks for Ryan in Marinated Yarns Practicality 75/25 in the Melted Box of Crayons colorway
BOOKS
Barb has read: Everyone Is Watching by Heather Gudenkauf - 2.5 stars; Here One Moment by Liane Moriarty - 5 stars
Barb did not finish: The Boys from Biloxi by John Grisham
Tracie read: Murder Your Employer by Rupert Holmes - 4 stars; Reread - Razor Girl by Carl Hiaasen - 4 stars; The Hive by Gregg Olsen - 3 stars; None of This Is True by Lisa Jewell - 5 stars; Saint X by Alexis Schaitkin - 4 stars
This episode ends with our usual Thanksgiving Thumbs Up / Thumbs Down from our whole family!

True Story: The Public Relations Podcast
Pros & Cons: Big Agencies vs Small Agencies with Amy Dang-Stojanovic - Managing Partner, Mochee

True Story: The Public Relations Podcast

Play Episode Listen Later Dec 2, 2024 43:32


Is social media still the Wild West of marketing, or has it grown into a strategic powerhouse? In this episode, Whitney Lee sits down with Amy Dang-Stojanovic, managing partner of Mochee, to explore the ever-changing world of social media and PR. They pull back the curtain on the rise of authentic content and the shift away from polished perfection, why smaller, nimble teams can outshine big-name agencies, the critical balance between engagement and calls to action in successful campaigns, and how content creators and influencers play different but vital roles in modern marketing. Amy also shares her thoughts on navigating client expectations, staying profitable as a social media agency, and crafting innovative content strategies that keep brands ahead of the curve. Whether you're in the trenches of PR or just curious about the inner workings of this fast-paced industry, this conversation is packed with real talk and actionable takeaways.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We have a full slate of upcoming events: AI Engineer London, AWS Re:Invent in Las Vegas, and now Latent Space LIVE! at NeurIPS in Vancouver and online. Sign up to join and speak!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!We try to stay close to the inference providers as part of our coverage, as our podcasts with Together AI and Replicate will attest: However one of the most notable pull quotes from our very well received Braintrust episode was his opinion that open source model adoption has NOT gone very well and is actually declining in relative market share terms (it is of course increasing in absolute terms):Today's guest, Lin Qiao, would wholly disagree. Her team of Pytorch/GPU experts are wholly dedicated toward helping you serve and finetune the full stack of open source models from Meta and others, across all modalities (Text, Audio, Image, Embedding, Vision-understanding), helping customers like Cursor and Hubspot scale up open source model inference both rapidly and affordably.Fireworks has emerged after its successive funding rounds with top tier VCs as one of the leaders of the Compound AI movement, a term first coined by the Databricks/Mosaic gang at Berkeley AI and adapted as “Composite AI” by Gartner:Replicating o1We are the first podcast to discuss Fireworks' f1, their proprietary replication of OpenAI's o1. This has become a surprisingly hot area of competition in the past week as both Nous Forge and Deepseek r1 have launched competitive models.Full Video PodcastLike and subscribe!Timestamps* 00:00:00 Introductions* 00:02:08 Pre-history of Fireworks and PyTorch at Meta* 00:09:49 Product Strategy: From Framework to Model Library* 00:13:01 Compound AI Concept and Industry Dynamics* 00:20:07 Fireworks' Distributed Inference Engine* 00:22:58 OSS Model Support and Competitive Strategy* 00:29:46 Declarative System Approach in AI* 00:31:00 Can OSS replicate o1?* 00:36:51 Fireworks f1* 00:41:03 Collaboration with Cursor and Speculative Decoding* 00:46:44 Fireworks quantization (and drama around it)* 00:49:38 Pricing Strategy* 00:51:51 Underrated Features of Fireworks Platform* 00:55:17 HiringTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner at CTO at Danceable Partners, and I'm joined by my co-host, Swyx founder, Osmalayar.Swyx [00:00:11]: Hey, and today we're in a very special studio inside the Fireworks office with Lin Qiang, CEO of Fireworks. Welcome. Yeah.Lin [00:00:20]: Oh, you should welcome us.Swyx [00:00:21]: Yeah, welcome. Yeah, thanks for having us. It's unusual to be in the home of a startup, but it's also, I think our relationship is a bit unusual compared to all our normal guests. Definitely.Lin [00:00:34]: Yeah. I'm super excited to talk about very interesting topics in that space with both of you.Swyx [00:00:41]: You just celebrated your two-year anniversary yesterday.Lin [00:00:43]: Yeah, it's quite a crazy journey. We circle around and share all the crazy stories across these two years, and it has been super fun. All the way from we experienced Silicon Valley bank run to we delete some data that shouldn't be deleted operationally. We went through a massive scale where we actually are busy getting capacity to, yeah, we learned to kind of work with it as a team with a lot of brilliant people across different places to join a company. 
It has really been a fun journey.Alessio [00:01:24]: When you started, did you think the technical stuff will be harder or the bank run and then the people side? I think there's a lot of amazing researchers that want to do companies and it's like the hardest thing is going to be building the product and then you have all these different other things. So, were you surprised by what has been your experience the most?Lin [00:01:42]: Yeah, to be honest with you, my focus has always been on the product side and then after the product goes to market. And I didn't realize the rest has been so complicated, operating a company and so on. But because I don't think about it, I just kind of manage it. So it's done. I think I just somehow don't think about it too much and solve whatever problem coming our way and it worked.Swyx [00:02:08]: So let's, I guess, let's start at the pre-history, the initial history of Fireworks. You ran the PyTorch team at Meta for a number of years and we previously had Sumit Chintal on and I think we were just all very interested in the history of GenEI. Maybe not that many people know how deeply involved Faire and Meta were prior to the current GenEI revolution.Lin [00:02:35]: My background is deep in distributed system, database management system. And I joined Meta from the data side and I saw this tremendous amount of data growth, which cost a lot of money and we're analyzing what's going on. And it's clear that AI is driving all this data generation. So it's a very interesting time because when I joined Meta, Meta is going through ramping down mobile-first, finishing the mobile-first transition and then starting AI-first. And there's a fundamental reason about that sequence because mobile-first gave a full range of user engagement that has never existed before. And all this user engagement generated a lot of data and this data power AI. So then the whole entire industry is also going through, falling through this same transition. When I see, oh, okay, this AI is powering all this data generation and look at where's our AI stack. There's no software, there's no hardware, there's no people, there's no team. I want to dive up there and help this movement. So when I started, it's very interesting industry landscape. There are a lot of AI frameworks. It's a kind of proliferation of AI frameworks happening in the industry. But all the AI frameworks focus on production and they use a very certain way of defining the graph of neural network and then use that to drive the model iteration and productionization. And PyTorch is completely different. So they could also assume that he was the user of his product. And he basically says, researchers face so much pain using existing AI frameworks, this is really hard to use and I'm going to do something different for myself. And that's the origin story of PyTorch. PyTorch actually started as the framework for researchers. They don't care about production at all. And as they grow in terms of adoption, so the interesting part of AI is research is the top of our normal production. There are so many researchers across academic, across industry, they innovate and they put their results out there in open source and that power the downstream productionization. So it's brilliant for MATA to establish PyTorch as a strategy to drive massive adoption in open source because MATA internally is a PyTorch shop. So it creates a flying wheel effect. So that's kind of a strategy behind PyTorch. 
But when I took on PyTorch, it's kind of at Caspo, MATA established PyTorch as the framework for both research and production. So no one has done that before. And we have to kind of rethink how to architect PyTorch so we can really sustain production workload, the stability, reliability, low latency, all this production concern was never a concern before. Now it's a concern. And we actually have to adjust its design and make it work for both sides. And that took us five years because MATA has so many AI use cases, all the way from ranking recommendation as powering the business top line or as ranking newsfeed, video ranking to site integrity detect bad content automatically using AI to all kinds of effects, translation, image classification, object detection, all this. And also across AI running on the server side, on mobile phones, on AI VR devices, the wide spectrum. So by the time we actually basically managed to support AI across ubiquitous everywhere across MATA. But interestingly, through open source engagement, we work with a lot of companies. It is clear to us like this industry is starting to take on AI first transition. And of course, MATA's hyperscale always go ahead of industry. And it feels like when we start this AI journey at MATA, there's no software, no hardware, no team. For many companies we engage with through PyTorch, we feel the pain. That's the genesis why we feel like, hey, if we create fireworks and support industry going through this transition, it will be a huge amount of impact. Of course, the problem that the industry is facing will not be the same as MATA. MATA is so big, right? So it's kind of skewed towards extreme scale and extreme optimization in the industry will be different. But we feel like we have the technical chop and we've seen a lot. We'll look to kind of drive that. So yeah, so that's how we started.Swyx [00:06:58]: When you and I chatted about the origins of fireworks, it was originally envisioned more as a PyTorch platform, and then later became much more focused on generative AI. Is that fair to say? What was the customer discovery here?Lin [00:07:13]: Right. So I would say our initial blueprint is we should build a PyTorch cloud because a PyTorch library and there's no SaaS platform to enable AI workloads.Swyx [00:07:26]: Even in 2022, it's interesting.Lin [00:07:28]: I would not say absolutely no, but cloud providers have some of those, but it's not first class citizen, right? At 2022, there's still like TensorFlow is massively in production. And this is all pre-gen AI, and PyTorch is kind of getting more and more adoption. But there's no PyTorch-first SaaS platform existing. At the same time, we are also a very pragmatic set of people. We really want to make sure from the get-go, we get really, really close to customers. We understand their use case, we understand their pain points, we understand the value we deliver to them. So we want to take a different approach instead of building a horizontal PyTorch cloud. We want to build a verticalized platform first. And then we talk with many customers. And interestingly, we started the company in September 2022, and in October, November, the OpenAI announced ChatGPT. And then boom, when we talked with many customers, they were like, can you help us work on the JNS aspect? So of course, there are some open source models. It's not as good at that time, but people are already putting a lot of attention there. Then we decided that if we're going to pick a vertical, we're going to pick JNI. 
The other reason is all JNI models are PyTorch models. So that's another reason. We believe that because of the nature of JNI, it's going to generate a lot of human consumable content. It will drive a lot of consumer, customer-developer-facing application and product innovation. Guaranteed. We're just at the beginning of this. Our prediction is for those kind of applications, the inference is much more important than training because inference scale is proportional to the up-limit award population. And training scale is proportional to the number of researchers. Of course, each training round could be very expensive. Although PyTorch supports both inference and training, we decided to laser focus on inference. So yeah, so that's how we got started. And we launched our public platform August last year. When we launched, it was a single product. It's a distributed inference engine with a simple API, open AI compatible API with many models. We started with LM and then we added a lot of models. Fast forward to now, we are a full platform with multiple product lines. So we love to kind of dive deep into what we offer. But that's a very fun journey in the past two years.Alessio [00:09:49]: What was the transition from you start to focus on PyTorch and people want to understand the framework, get it live. And now say maybe most people that use you don't even really know much about PyTorch at all. You know, they're just trying to consume a model. From a product perspective, like what were some of the decisions early on? Like right in October, November, you were just like, hey, most people just care about the model, not about the framework. We're going to make it super easy or was it more a gradual transition to the model librarySwyx [00:10:16]: you have today?Lin [00:10:17]: Yeah. So our product decision is all based on who is our ICP. And one thing I want to acknowledge here is the generic technology is disruptive. It's very different from AI before GNI. So it's a clear leap forward. Because before GNI, the companies that want to invest in AI, they have to train from scratch. There's no other way. There's no foundation model. It doesn't exist. So that means then to start a team, first hire a team who is capable of crunch data. There's a lot of data to crunch, right? Because training from scratch, you have to prepare a lot of data. And then they need to have GPUs to train, and then you start to manage GPUs. So then it becomes a very complex project. It takes a long time and not many companies can afford it, actually. And the GNI is a very different game right now, because it is a foundation model. So you don't have to train anymore. That makes AI much more accessible as a technology. As an app developer or product manager, even, not a developer, they can interact with GNI models directly. So our goal is to make AI accessible to all app developers and product engineers. That's our goal. So then getting them into the building model doesn't make any sense anymore with this new technology. And then building easy, accessible APIs is the most important. Early on, when we got started, we decided we're going to be open AI compatible. It's just kind of very easy for developers to adopt this new technology, and we will manage the underlying complexity of serving all these models.Swyx [00:11:56]: Yeah, open AI has become the standard. Even as we're recording today, Gemini announced that they have open AI compatible APIs. Interesting. 
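In practice, "OpenAI compatible" means the stock client works unchanged once it is pointed at a different base URL. A minimal sketch, with the endpoint and model id written as placeholders to verify against the provider's docs:

```python
from openai import OpenAI

# The base_url and model id below are placeholders for whatever the provider
# documents; the point is only that a compatible endpoint lets you reuse the
# standard OpenAI client without code changes.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # example id, may differ
    messages=[{"role": "user", "content": "Summarize what a compound AI system is in one sentence."}],
)
print(resp.choices[0].message.content)
```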
So we just need to drop it all in line, and then we have everyone popping in line.Lin [00:12:09]: That's interesting, because we are working very closely with Meta as one of the partners. Meta, of course, is kind of very generous to donate many very, very strong open source models, expecting more to come. But also they have announced LamaStack, which is basically standardized, the upper level stack built on top of Lama models. So they don't just want to give out models and you figure out what the upper stack is. They instead want to build a community around the stack and build a new standard. I think there's an interesting dynamics in play in the industry right now, when it's more standardized across open AI, because they are kind of creating the top of the funnel, or standardized across Lama, because this is the most used open source model. So I think it's a lot of fun working at this time.Swyx [00:13:01]: I've been a little bit more doubtful on LamaStack, I think you've been more positive. Basically it's just like the meta version of whatever Hugging Face offers, you know, or TensorRT, or BLM, or whatever the open source opportunity is. But to me, it's not clear that just because Meta open sources Lama, that the rest of LamaStack will be adopted. And it's not clear why I should adopt it. So I don't know if you agree.Lin [00:13:27]: It's very early right now. That's why I kind of work very closely with them and give them feedback. The feedback to the meta team is very important. So then they can use that to continue to improve the model and also improve the higher level I think the success of LamaStack heavily depends on the community adoption. And there's no way around it. And I know the meta team would like to kind of work with a broader set of community. But it's very early.Swyx [00:13:52]: One thing that after your Series B, so you raced for Benchmark, and then Sequoia. I remember being close to you for at least your Series B announcements, you started betting heavily on this term of Compound AI. It's not a term that we've covered very much in the podcast, but I think it's definitely getting a lot of adoption from Databricks and Berkeley people and all that. What's your take on Compound AI? Why is it resonating with people?Lin [00:14:16]: Right. So let me give a little bit of context why we even consider that space.Swyx [00:14:22]: Because like pre-Series B, there was no message, and now it's like on your landing page.Lin [00:14:27]: So it's kind of very organic evolution from when we first launched our public platform, we are a single product. We are a distributed inference engine, where we do a lot of innovation, customized KUDA kernels, raw kernel kernels, running on different kinds of hardware, and build distributed disaggregated execution, inference execution, build all kinds of caching. So that is one. So that's kind of one product line, is the fast, most cost-efficient inference platform. Because we wrote PyTorch code, we know we basically have a special PyTorch build for that, together with a custom kernel we wrote. And then we worked with many more customers, we realized, oh, the distributed inference engine, our design is one size fits all. We want to have this inference endpoint, then everyone come in, and no matter what kind of form and shape or workload they have, it will just work for them. So that's great. But the reality is, we realized all customers have different kinds of use cases. The use cases come in all different forms and shapes. 
And the end result is the data distribution in their inference workload doesn't align with the data distribution in the training data for the model. It's a given, actually. If you think about it, because researchers have to guesstimate what is important, what's not important in preparing data for training. So because of that misalignment, then we leave a lot of quality, latency, cost improvement on the table. So then we're saying, OK, we want to heavily invest in a customization engine. And we actually announced it called FHIR Optimizer. So FHIR Optimizer basically helps users navigate a three-dimensional optimization space across quality, latency, and cost. So it's a three-dimensional curve. And even for one company, for different use cases, they want to land in different spots. So we automate that process for our customers. It's very simple. You have your inference workload. You inject into the optimizer along with the objective function. And then we spit out inference deployment config and the model setup. So it's your customized setup. So that is a completely different product. So that product thinking is one size fits all. And now on top of that, we provide a huge variety of state-of-the-art models, hundreds of them, varying from text to large state-of-the-art English models. That's where we started. And as we talk with many customers, we realize, oh, audio and text are very, very close. Many of our customers start to build assistants, all kinds of assistants using text. And they immediately want to add audio, audio in, audio out. So we support transcription, translation, speech synthesis, text, audio alignment, all different kinds of audio features. It's a big announcement. You should have heard by the time this is out. And the other areas of vision and text are very close with each other. Because a lot of information doesn't live in plain text. A lot of information lives in multimedia format, images, PDFs, screenshots, and many other different formats. So oftentimes to solve a problem, we need to put the vision model first to extract information and then use language model to process and then send out results. So vision is important. We also support vision model, various different kinds of vision models specialized in processing different kinds of source and extraction. And we're also going to have another announcement of a new API endpoint we'll support for people to upload various different kinds of multimedia content and then get the extract very accurate information out and feed that into LM. And of course, we support embedding because embedding is very important for semantic search, for RAG, and all this. And in addition to that, we also support text-to-image, image generation models, text-to-image, image-to-image, and we're adding text-to-video as well in our portfolio. So it's a very comprehensive set of model catalog that built on top of File Optimizer and Distributed Inference Engine. But then we talk with more customers, they solve business use case, and then we realize one model is not sufficient to solve their problem. And it's very clear because one is the model hallucinates. Many customers, when they onboard this JNI journey, they thought this is magical. JNI is going to solve all my problems magically. But then they realize, oh, this model hallucinates. It hallucinates because it's not deterministic, it's probabilistic. So it's designed to always give you an answer, but based on probabilities, so it hallucinates. 
And that's actually sometimes a feature for creative writing, for example. Sometimes it's a bug because, hey, you don't want to give misinformation. And different models also have different specialties. To solve a problem, you want to ask different special models to kind of decompose your task into multiple small tasks, narrow tasks, and then have an expert model solve that task really well. And of course, the model doesn't have all the information. It has limited knowledge because the training data is finite, not infinite. So the model oftentimes doesn't have real-time information. It doesn't know any proprietary information within the enterprise. It's clear that in order to really build a compiling application on top of JNI, we need a compound AI system. Compound AI system basically is going to have multiple models across modalities, along with APIs, whether it's public APIs, internal proprietary APIs, storage systems, database systems, knowledge to work together to deliver the best answer.Swyx [00:20:07]: Are you going to offer a vector database?Lin [00:20:09]: We actually heavily partner with several big vector database providers. Which is your favorite? They are all great in different ways. But it's public information, like MongoDB is our investor. And we have been working closely with them for a while.Alessio [00:20:26]: When you say distributed inference engine, what do you mean exactly? Because when I hear your explanation, it's almost like you're centralizing a lot of the decisions through the Fireworks platform on the quality and whatnot. What do you mean distributed? It's like you have GPUs in a lot of different clusters, so you're sharding the inference across the same model.Lin [00:20:45]: So first of all, we run across multiple GPUs. But the way we distribute across multiple GPUs is unique. We don't distribute the whole model monolithically across multiple GPUs. We chop them into pieces and scale them completely differently based on what's the bottleneck. We also are distributed across regions. We have been running in North America, EMEA, and Asia. We have regional affinity to applications because latency is extremely important. We are also doing global load balancing because a lot of applications there, they quickly scale to global population. And then at that scale, different content wakes up at a different time. And you want to kind of load balancing across. So all the way, and we also have, we manage various different kinds of hardware skew from different hardware vendors. And different hardware design is best for different types of workload, whether it's long context, short context, long generation. So all these different types of workload is best fitted for different kinds of hardware skew. And then we can even distribute across different hardware for a workload. So the distribution actually is all around in the full stack.Swyx [00:22:02]: At some point, we'll show on the YouTube, the image that Ray, I think, has been working on with all the different modalities that you offer. To me, it's basically you offer the open source version of everything that OpenAI typically offers. I don't think there is. Actually, if you do text to video, you will be a superset of what OpenAI offers because they don't have Sora. Is that Mochi, by the way? Mochi. Mochi, right?Lin [00:22:27]: Mochi. And there are a few others. I will say, the interesting thing is, I think we're betting on the open source community is going to proliferate. This is literally what we're seeing. 
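As a rough illustration of the routing ideas just described (regional affinity for latency, plus matching the workload shape to the hardware pool that suits it), a toy scheduler might look like this. The pool names, thresholds, and policy are invented for the example and are not Fireworks' actual system:

```python
from dataclasses import dataclass

@dataclass
class Pool:
    region: str
    hardware: str        # e.g. memory-heavy vs compute-heavy SKUs
    free_slots: int

POOLS = [
    Pool("us-east", "hbm-heavy", 3),      # better suited to long-context / long-generation work
    Pool("us-east", "compute-heavy", 5),  # better suited to short prompts at high throughput
    Pool("eu-west", "hbm-heavy", 2),
]

def route(request_region: str, prompt_tokens: int, max_new_tokens: int) -> Pool:
    # Pick the hardware class by workload shape, then prefer the caller's
    # region and fall back globally if local capacity is exhausted.
    want = "hbm-heavy" if (prompt_tokens + max_new_tokens) > 8192 else "compute-heavy"
    candidates = sorted(
        (p for p in POOLS if p.free_slots > 0),
        key=lambda p: (p.region != request_region, p.hardware != want, -p.free_slots),
    )
    if not candidates:
        raise RuntimeError("no capacity anywhere")
    return candidates[0]

print(route("us-east", prompt_tokens=12000, max_new_tokens=2000))
```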
And there's amazing video generation companies. There is amazing audio companies. Like cross-border, the innovation is off the chart, and we are building on top of that. I think that's the advantage we have compared with a closed source company.Swyx [00:22:58]: I think I want to restate the value proposition of Fireworks for people who are comparing you versus a raw GPU provider like a RunPod or Lambda or anything like those, which is like you create the developer experience layer and you also make it easily scalable or serverless or as an endpoint. And then, I think for some models, you have custom kernels, but not all models.Lin [00:23:25]: Almost for all models. For all large language models, all your models, and the VRMs. Almost for all models we serve.Swyx [00:23:35]: And so that is called Fire Attention. I don't remember the speed numbers, but apparently much better than VLM, especially on a concurrency basis.Lin [00:23:44]: So Fire Attention is specific mostly for language models, but for other modalities, we'll also have a customized kernel.Swyx [00:23:51]: And I think the typical challenge for people is understanding that has value, and then there are other people who are also offering open-source models. Your mode is your ability to offer a good experience for all these customers. But if your existence is entirely reliant on people releasing nice open-source models, other people can also do the same thing.Lin [00:24:14]: So I would say we build on top of open-source model foundation. So that's the kind of foundation we build on top of. But we look at the value prop from the lens of application developers and product engineers. So they want to create new UX. So what's happening in the industry right now is people are thinking about a completely new way of designing products. And I'm talking to so many founders, it's just mind-blowing. They help me understand existing way of doing PowerPoint, existing way of coding, existing way of managing customer service. It's actually putting a box in our head. For example, PowerPoint. So PowerPoint generation is we always need to think about how to fit into my storytelling into this format of slide one after another. And I'm going to juggle through design together with what story to tell. But the most important thing is what's our storytelling lines, right? And why don't we create a space that is not limited to any format? And those kind of new product UX design combined with automated content generation through Gen AI is the new thing that many founders are doing. What are the challenges they're facing? Let's go from there. One is, again, because a lot of products built on top of Gen AI, they are consumer-personal developer facing, and they require interactive experience. It's just a kind of product experience we all get used to. And our desire is to actually get faster and faster interaction. Otherwise, nobody wants to spend time, right? And then that requires low latency. And the other thing is the nature of consumer-personal developer facing is your audience is very big. You want to scale up to product market fit quickly. But if you lose money at a small scale, you're going to bankrupt quickly. So it's actually a big contrast. I actually have product market fit, but when I scale, I scale out of my business. So that's kind of a very funny way to think about it. So then having low latency and low cost is essential for those new applications and products to survive and really become a generation company. 
So that's the design point for our distributed inference engine and the file optimizer. File optimizer, you can think about that as a feedback loop. The more you feed your inference workload to our inference engine, the more we help you improve quality, lower latency further, lower your cost. It basically becomes better. And we automate that because we don't want you as an app developer or product engineer to think about how to figure out all these low-level details. It's impossible because you're not trained to do that at all. You should kind of keep your focus on the product innovation. And then the compound AI, we actually feel a lot of pain as the app developers, engineers, there are so many models. Every week, there's at least a new model coming out.Swyx [00:27:09]: Tencent had a giant model this week. Yeah, yeah.Lin [00:27:13]: I saw that. I saw that.Swyx [00:27:15]: It's like $500 billion.Lin [00:27:18]: So they're like, should I keep chasing this or should I forget about it? And which model should I pick to solve what kind of sub-problem? How do I even decompose my problem into those smaller problems and fit the model into it? I have no idea. And then there are two ways to think about this design. I think I talked about that in the past. One is imperative, as in you figure out how to do it. You give developer tools to dictate how to do it. Or you build a declarative system where a developer tells what they want to do, not how. So these are completely two different designs. So the analogy I want to draw is, in the data world, the database management system is a declarative system because people use database, use SQL. SQL is a way you say, what do you want to extract out of a database? What kind of result do you want? But you don't figure out which node is going to, how many nodes you're going to run on top of, how you redefine your disk, which index you use, which project. You don't need to worry about any of those. And database management system will figure out, generate a new best plan, and execute on that. So database is declarative. And it makes it super easy. You just learn SQL, which is learn a semantic meaning of SQL, and you can use it. Imperative side is there are a lot of ETL pipelines. And people design this DAG system with triggers, with actions, and you dictate exactly what to do. And if it fails, then how to recover. So that's an imperative system. We have seen a range of systems in the ecosystem go different ways. I think there's value of both. There's value of both. I don't think one is going to subsume the other. But we are leaning more into the philosophy of the declarative system. Because from the lens of app developer and product engineer, that would be easiest for them to integrate.Swyx [00:29:07]: I understand that's also why PyTorch won as well, right? This is one of the reasons. Ease of use.Lin [00:29:14]: Focus on ease of use, and then let the system take on the hard challenges and complexities. So we follow, we extend that thinking into current system design. So another announcement is we will also announce our next declarative system is going to appear as a model that has extremely high quality. And this model is inspired by Owen's announcement for OpenAI. You should see that by the time we announce this or soon.Alessio [00:29:46]: Trained by you.Lin [00:29:47]: Yes.Alessio [00:29:48]: Is this the first model that you trained? It's not the first.Lin [00:29:52]: We actually have trained a model called FireFunction. It's a function calling model. 
It's our first step into compound AI system. Because function calling model can dispatch a request into multiple APIs. We have pre-baked set of APIs the model learned. You can also add additional APIs through the configuration to let model dispatch accordingly. So we have a very high quality function calling model that's already released. We have actually three versions. The latest version is very high quality. But now we take a further step that you don't even need to use function calling model. You use our new model we're going to release. It will solve a lot of problems approaching very high OpenAI quality. So I'm very excited about that.Swyx [00:30:41]: Do you have any benchmarks yet?Lin [00:30:43]: We have a benchmark. We're going to release it hopefully next week. We just put our model to LMSYS and people are guessing. Is this the next Gemini model or a MADIS model? People are guessing. That's very interesting. We're watching the Reddit discussion right now.Swyx [00:31:00]: I have to ask more questions about this. When OpenAI released o1, a lot of people asked about whether or not it's a single model or whether it's a chain of models. Noam and basically everyone on the Strawberry team was very insistent that what they did for reinforcement learning, chain of thought, cannot be replicated by a whole bunch of open source model calls. Do you think that that is wrong? Have you done the same amount of work on RL as they have or was it a different direction?Lin [00:31:29]: I think they take a very specific approach where the caliber of team is very high. So I do think they are the domain expert in doing the things they are doing. I don't think there's only one way to achieve the same goal. We're on the same direction in the sense that the quality scaling law is shifting from training to inference. For that, I fully agree with them. But we're taking a completely different approach to the problem. All of that is because, of course, we didn't train the model from scratch. All of that is because we built on the show of giants. The current model available we have access to is getting better and better. The future trend is the gap between the open source model and the co-source model. It's just going to shrink to the point there's not much difference. And then we're on the same level field. That's why I think our early investment in inference and all the work we do around balancing across quality, latency, and cost pay off because we have accumulated a lot of experience and that empowers us to release this new model that is approaching open-ended quality.Alessio [00:32:39]: I guess the question is, what do you think the gap to catch up will be? Because I think everybody agrees with open source models eventually will catch up. And I think with 4, then with Lama 3.2, 3.1, 4.5b, we close the gap. And then 0.1 just reopened the gap so much and it's unclear. Obviously, you're saying your model will have...Swyx [00:32:57]: We're closing that gap.Alessio [00:32:58]: But you think in the future, it's going to be months?Lin [00:33:02]: So here's the thing that's happened. There's public benchmark. It is what it is. But in reality, open source models in certain dimensions are already on par or beat closed source models. So for example, in the coding space, open source models are really, really good. And in function calling, file function is also really, really good. 
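The dispatch pattern described for FireFunction, where the model returns a tool name and arguments rather than a direct answer, follows the general function-calling shape of OpenAI-compatible APIs. A hedged sketch, with the tool schema, endpoint, and model id as placeholders rather than FireFunction's exact interface:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.fireworks.ai/inference/v1", api_key="YOUR_API_KEY")

# A single illustrative tool the model can choose to dispatch to.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # example id, may differ
    messages=[{"role": "user", "content": "Do I need an umbrella in Tokyo today?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model may also just answer directly
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))  # e.g. get_weather {'city': 'Tokyo'}
else:
    print(msg.content)
```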
So it's all a matter of whether you build one model to solve all the problems and want to be the best at solving all the problems, or, as in the open source domain, things are going to specialize. All these different model builders specialize in certain narrow areas, and it's logical that they can be really, really good in those very narrow areas. And our prediction is that with specialization, there will be a lot of expert models that are really, really good, and even better than one-size-fits-all closed source models. Swyx [00:33:55]: I think this is the core debate that I am still not 100% either way on, in terms of compound AI versus normal AI, because you're basically fighting the bitter lesson. Lin [00:34:09]: Look at human society, right? We specialize, and you feel really good about someone specializing in doing something really well, right? And that's how we evolved from ancient times. We were all generalists; we did everything. Now we heavily specialize in different domains. So my prediction is that in the AI model space, it will happen also. Except for the bitter lesson. Swyx [00:34:30]: You get short-term gains by having specialists, domain specialists, and then someone just needs to train a 10x bigger model on 10x more inference, 10x more data, 10x more parameters perhaps, whatever the current scaling law is, and then it supersedes all the individual models because of some generalized intelligence slash world knowledge. I think that is the core insight of the GPTs, the GPT-1, 2, 3 networks. Right. Lin [00:34:56]: But the training scaling law works because you have an increasing amount of data to train on, and you can do a lot of compute. I think on the data side, we're approaching the limit, and the only way to increase it is synthetically generated data. And then there's the question of what the secret sauce is there, right? Because if you have a very good large model, you can generate very good synthetic data and then continue to improve quality. So that's why I think OpenAI is shifting from the training scaling law into the Swyx [00:35:25]: inference scaling law. Lin [00:35:25]: And it's test time and all this. So I definitely believe that's the future direction, and that's what we are really good at: doing inference. Swyx [00:35:34]: A couple of questions on that. Are you planning to share your reasoning traces? Lin [00:35:39]: That's a very good question. We are still debating. Swyx [00:35:43]: Yeah. Lin [00:35:45]: We're still debating. Swyx [00:35:46]: I would say, for example, it's interesting that with SWE-Bench, if you want to be considered for ranking, you have to submit your reasoning traces. And that has actually disqualified some of our past guests. Cosine was doing well on SWE-Bench, but they didn't want to leak those results. That's also why you don't see o1-preview on SWE-Bench: they don't submit their reasoning traces. And obviously, it's IP. But also, if you're going to be more open, then that's one way to be more open. So your model is not going to be open source, right? It's going to be an endpoint that you provide. Okay, cool. And then pricing, also the same as OpenAI, just kind of based on... Lin [00:36:25]: Yeah, this is... I don't actually have information. Everything is going so fast, we haven't even thought about that yet. Yeah, I should be more prepared. Swyx [00:36:33]: I mean, this is live. You know, it's nice to just talk about it as it goes live. Any other things that you want feedback on or you're thinking through?
It's kind of nice to just talk about something when it's not decided yet. About this new model: it's going to be exciting, it's going to generate a lot of buzz. Right. Lin [00:36:51]: I'm very excited to see how people are going to use this model. There's already a Reddit discussion about it, and people are asking very deep mathematical questions. And the model got them right, surprisingly. Internally, we're also asking the model to explain what AGI is, and it generates a very complicated DAG of its thinking process. So we're having a lot of fun testing this internally. But I'm more curious: how will people use it? What kind of applications are they going to try and test it on? And that's where we'd really like to hear feedback from the community. And also feedback to us: what works out well, what doesn't work out well, what works out well but surprises them, and what kind of things they think we should improve on. That kind of feedback will be tremendously helpful. Swyx [00:37:44]: Yeah. So I've been a production user of o1-preview and o1-mini since launch. I would say there are very, very obvious jumps in quality, so much so that they made the previous state-of-the-art look bad. It's really that stark, that difference. The number one thing, just as feedback or a feature request, is that people want control over the budget. Because right now, in o1, it kind of decides its own thinking budget. But sometimes you know how hard the problem is, and you want to actually tell the model, spend two minutes on this, or spend some dollar amount. Maybe it's time, maybe it's dollars. I don't know what the budget unit is. That makes a lot of sense. Lin [00:38:27]: So we actually thought about that requirement, and at some point we will need to support it. Not initially. But that makes a lot of sense. Swyx [00:38:38]: Okay. So that was a fascinating overview of just the things that you're working on. First of all, I realized that... I don't know if I've ever given you this feedback, but I think you guys are one of the reasons I agreed to advise you. Because when you first met me, I was kind of dubious. I was like, who are you? There's Replicate, there's Together, there's Lepton, there's a whole bunch of other players. You're in a very, very competitive field. Like, why will you win? And the reason I actually changed my mind was that I saw you guys shipping. I think your surface area is very big, but the team is not that big. No. We're only 40 people. Yeah. And now here you are trying to compete with OpenAI and everyone else. What is the secret? Lin [00:39:21]: I think the team. The team is the secret. Swyx [00:39:23]: Oh boy. So there's nothing I can just copy. You just... No. Lin [00:39:30]: I think we all come from a very aligned culture, because most of our team came from Meta. Swyx [00:39:38]: Yeah. Lin [00:39:38]: And many startups. So we really believe in results. One is results, and second is the customer. We're very customer obsessed, and we don't want to drive adoption for the sake of adoption. We really want to make sure we understand that we are delivering a lot of business value to the customer, and we really value their feedback. So we would wake up at midnight and deploy some model for them, shuffle some capacity for them. And yeah, over the weekend, no brainer. Swyx [00:40:15]: So yeah. Lin [00:40:15]: So that's just how we work as a team. And the caliber of the team is really, really high as well. So, as a plug, we're hiring. We're expanding very, very fast.
So if you are passionate about working on the most cutting-edge technology in the gen AI space, come talk with us. Yeah. Swyx [00:40:38]: Let's talk a little bit about that customer journey. I think one of your more famous customers is Cursor. We were the first podcast to have Cursor on, and then obviously since then they have blown up. Cause and effect are not related. But you guys especially worked on their fast apply model, where you were one of the first people to work on speculative decoding in a production setting. Maybe just talk about the behind-the-scenes of working with Cursor? Lin [00:41:03]: I will say Cursor is a very, very unique team. I think the unique part is that the team has very high technical caliber. There's no question about it. But while many companies building coding copilots will say, I'm going to build the whole entire stack because I can, they are unique in the sense that they seek partnership. Not because they cannot. They're fully capable, but they know where to focus. That, to me, is amazing. And of course, they want to find the right partner. So we spent some time working together. They push us very aggressively, because for them to deliver a high caliber product experience, they need the latency, they need the interactivity, but also high quality at the same time. So actually, we expanded our product features quite a lot as we supported Cursor. And they are growing so fast, and we massively scaled quickly across multiple regions, and we developed a pretty high-intensity inference stack, almost similar to what we do for Meta. I think that's a very, very interesting engagement, and through that, there's a lot of trust being built. They realized, hey, this is a team they can really partner with and go big with. That comes back to, hey, we're really customer obsessed. With all the engineers working with them, there's just an enormous amount of time spent syncing with them and discussing. And we're not big on meetings, but we have a Slack channel that's always on. Yeah, so you almost feel like you're working as one team. So I think that's a real highlight. Swyx [00:42:38]: Yeah. For those who don't know, basically Cursor is a VS Code fork, but most of the time people will be using closed models. Like, I actually use a lot of Sonnet. So you're not involved there, right? It's not like you host Sonnet or have any partnership with them. You're involved where cursor-small, or their house brand models, are concerned, right? Lin [00:42:58]: I don't know what I can say about the things they haven't said. Swyx [00:43:04]: Very obviously, the dropdown has 4o in Cursor, right? So I assume that the Cursor model side is the Fireworks side, and then on the other side, they're calling out the others. Just kind of curious. And then, do you see any more opportunity on the... You know, I think you made a big splash with 1,000 tokens per second. That was because of speculative decoding. Is there more to push there? Lin [00:43:25]: We push a lot. Actually, remember when I mentioned Fire Optimizer? We have a unique automation stack that is one-size-fits-one. We actually deployed it to Cursor early on, basically optimizing for their specific workload, and there's a lot of juice to extract out of there. And we saw success in that product. It can actually be widely adopted, so that's why we started a separate product line called Fire Optimizer. So speculative decoding is just one approach, and speculative decoding here is not static.
We actually wrote a blog post about it. There are so many different ways to do speculative decoding. You can pair a small model with a large model in the same model family, or you can have Eagle heads, and so on. There are different trade-offs in which approach you take, and it really depends on your workload. And then, given your workload, we can align the Eagle heads, or Medusa heads, or a small-big model pair much better to extract the best latency reduction. So all of that is part of the Fire Optimizer offering. Alessio [00:44:23]: I know you mentioned some of the other inference providers. I think the other question that people always have is around benchmarks. You get different performance on different platforms. How should people think about this? People are like, hey, Llama 3.2 is X on MMLU. But maybe using speculative decoding you go down a different path, or maybe some providers run a quantized model. How should people think about how much they should care about how you're actually running the model? What's the delta between all the magic that you do and what a raw model... Lin [00:44:57]: Okay, so there are two big development cycles. One is experimentation, where they need fast iteration. They don't want to think about quality; they just want to experiment with the product experience and so on. So that's one. And then, when it looks good, they move to scaling after product-market fit, and quality becomes really important, and latency and all the other things become important. During the experimentation phase, just pick a good model. Don't worry about anything else. Make sure you can even generate the right solution for your product, and that's the focus. Then, post product-market fit, that's when the three-dimensional optimization curve starts to kick in across quality, latency, and cost: where should you land? And to me, that's purely a product decision. For many products, if you choose a lower quality but better speed and lower cost, and it doesn't make a difference to the product experience, then you should do it. So that's why I think inference is part of the validation. Validation doesn't stop at offline evals. Validation goes through A/B testing, through inference, and that's where we offer various different configurations for you to test which is the best setting. This is traditional product evaluation, so product evaluation should also take your new model versions and different model setups into consideration. Swyx [00:46:22]: I want to specifically talk about what happened a few months ago with some of your major competitors. I mean, all of this is public. What is your take on what happened? And maybe you want to set the record straight on how Fireworks does quantization, because I think a lot of people may have outdated perceptions, or they didn't read the clarification post on your approach to quantization. Lin [00:46:44]: First of all, it was always a surprise to us that, without any notice, we got called out. Swyx [00:46:51]: Specifically by name, which is normally not what... Lin [00:46:54]: Yeah, in a public post, with a certain interpretation of our quality. So I was really surprised. And it's not a good way to compete, right? We want to compete fairly. And oftentimes, when one vendor gives out results, their interpretation of another vendor is always extremely biased. So we actually refrain from doing any of that, and we happily partner with third parties to do the most fair evaluation. So we were very surprised.
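For readers who want to see the draft-and-verify idea behind speculative decoding, here is a toy sketch of its simplest greedy form. The two lambda "models" stand in for a real draft and target pair; production systems verify the whole draft in one batched forward pass and use rejection sampling for sampled decoding, and nothing here reflects Fireworks' actual implementation or the Eagle and Medusa variants mentioned above:

```python
from typing import Callable, List

def speculative_decode(target: Callable[[List[str]], str],
                       draft: Callable[[List[str]], str],
                       prompt: List[str], k: int = 4, max_new: int = 12) -> List[str]:
    # Greedy speculative decoding: a cheap draft model proposes k tokens ahead,
    # the expensive target model checks them and keeps the agreeing prefix.
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        proposal, ctx = [], list(tokens)
        for _ in range(k):                      # 1) draft k tokens cheaply
            nxt = draft(ctx)
            proposal.append(nxt)
            ctx.append(nxt)
        all_accepted = True
        for tok in proposal:                    # 2) verify with the target model
            expected = target(tokens)
            if expected == tok:
                tokens.append(tok)              # draft token accepted "for free"
            else:
                tokens.append(expected)         # mismatch: take the target's token
                all_accepted = False
                break
        if all_accepted:
            tokens.append(target(tokens))       # bonus token when all k are accepted
    return tokens

# Toy stand-ins for real models: the draft agrees with the target most of the time,
# which is exactly when speculative decoding pays off.
target_model = lambda ctx: "la" if len(ctx) % 5 else "pa"
draft_model = lambda ctx: "la"
print(speculative_decode(target_model, draft_model, ["<s>"]))
```

The more often the draft agrees with the target, the more tokens are accepted per expensive verification step, which is why aligning the draft (or the extra heads) to the customer's actual workload matters so much for latency.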
And we don't think that's a good way to figure out the competitive landscape. So then we reacted. When it comes to quantization and how to interpret it, we actually wrote a very thorough blog post, because, again, no one scheme fits all. We have various different quantization schemes. We can quantize very different parts of the model, from weights to activations to cross-GPU communication, and they can use different quantization schemes or be consistent across the board. And again, it's a trade-off. It's a trade-off across the three dimensions of quality, latency, and cost. For our customers, we actually let them find the best optimized point, and we have a very thorough evaluation process to pick that point. But for self-serve, there's only one point to pick and no customization available. So of course, based on what we hear talking with many customers, we have to pick one point. And I think the end result speaks for itself: later on, Artificial Analysis published a quality measure, and we actually looked really good. So what I mean is, I will leave the evaluation of quality or performance to third parties and work with them to find the most fair benchmark. I think that's a good approach, a good methodology. But I'm not a fan of an approach of calling out specific names Swyx [00:48:55]: and critiquing other competitors in a very biased way. This happens with databases as well. I think you're the more politically correct one, and then Dima is the more... Something like this. It's you on Twitter. Lin [00:49:11]: It's like the Russian... We partner. We play different roles. Swyx [00:49:20]: Another one that I wanted to... this is just the last one on the competition side. There's a perception of price wars in hosting open source models, and we talked about the competitiveness in the market. Do you aim to make margin on open source models? Oh, absolutely, yes. Lin [00:49:38]: But I think it really... when we think about pricing, it really needs to correspond to the value we're delivering. If the value is limited, or there are a lot of people delivering the same value, there's no differentiation and there's only one way to go: down, through competition. If I take a big step back, we're more comparable with the closed model providers' APIs, right? And the cost structure is even more interesting, because we don't bear any training costs. We focus on inference optimization, and that's where we continue to add a lot of product value. So that's how we think about the product. But the closed source API providers, the model providers, bear a lot of training costs, and they need to amortize those training costs into the inference. So that creates very interesting dynamics: if we match pricing there, then how they are going to make money is very, very interesting. Swyx [00:50:37]: So for listeners, OpenAI's 2024 numbers: $4 billion in revenue, $3 billion in training compute, $2 billion in inference compute, $1 billion in research compute amortization, and $700 million in salaries. So that is like... Swyx [00:50:59]: I mean, a lot of R&D. Lin [00:51:01]: Yeah, and I think Meta is basically like, make it zero. So those are very, very interesting dynamics we're operating within. But coming back to inference: again, as I mentioned, our product is a platform. We're not just a single-model-as-a-service provider, like many other inference providers who are providing a single model.
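As a hedged illustration of what quantizing weights involves and the reconstruction error it trades for size and speed, here is the textbook absmax int8 scheme in NumPy. This is a generic example, not Fireworks' actual quantization recipe:

```python
import numpy as np

# Absmax int8 quantization of a weight matrix: store 1 byte per weight plus a
# per-row scale, then dequantize on the fly. Smaller and cheaper to move around,
# at the price of a small reconstruction error -- the trade-off discussed above.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)

scale = np.abs(w).max(axis=1, keepdims=True) / 127.0   # one scale per output row
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print("max abs error:", np.abs(w - w_dequant).max())
print("bytes fp32:", w.nbytes, "bytes int8:", w_int8.nbytes + scale.nbytes)
```

The same idea can be applied separately to weights, activations, or the tensors exchanged between GPUs, which is why "quantized" on its own says little until you know which parts of the model were quantized and how.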
We have our optimizer to highly customize for your inference workload. We have a compound AI system that significantly simplifies your path to high quality, low latency, and low cost. So those are all very different from other providers. Alessio [00:51:38]: What do people not know about the work that you do? I guess people are like, okay, Fireworks, you run models very quickly, you have the function calling model. Is there any kind of underrated part of Fireworks that more people should try? Lin [00:51:51]: Yeah, actually, one user posted on x.com that, oh, actually, Fireworks lets me upload a LoRA adapter to the served model and use it at the same cost. Nobody else has provided that. That's because we have something very special: we rolled out multi-LoRA last year, actually. We have had this capability for a long time, and many people have been using it, but it's not well known that, oh, if you fine-tune your model, you don't need to use an on-demand deployment. If your fine-tune is a LoRA, you can upload your LoRA adapter and we deploy it as if it's a new model. Then you get your endpoint and you can use that directly, but at the same cost as the base model. So I'm happy that user is marketing it for us. He discovered that feature, but we have had it since last year. So I think the feedback to me is that we have a lot of very, very good features, as Sean just mentioned. Swyx [00:52:57]: I'm the advisor to the company, and I didn't know that you had speculative decoding released. Lin [00:53:02]: We have had prompt caching since way back last year also. We have many, yeah. So I think that is one of the underrated features, and if you're a developer using our self-serve platform, please try it out. Swyx [00:53:16]: The LoRA thing is interesting, because the reason people add additional costs to it is not that they feel like charging people. Normally, in LoRA serving setups, there is a cost to loading those weights and dedicating a machine to that inference. How come you can avoid it? Lin [00:53:36]: Yeah, so this is our technique called multi-LoRA. We basically have many LoRA adapters share the same base model, and we significantly reduce the memory footprint of serving. One base model can sustain a hundred to a thousand LoRA adapters, and all these different LoRA adapters direct their traffic to the same base model, where the base model dominates the cost. That's how we can keep the tokens-per-dollar, per-million-token pricing the same as the base model. Swyx [00:54:13]: Awesome. Is there anything that you want to request from the community, or that you're looking for model-wise or tooling-wise, that you think someone should be working on? Lin [00:54:23]: Yeah, we really want to get a lot of feedback from application developers who are starting to build on gen AI, or who have already adopted it, or who are starting to think about new use cases and so on, to try out Fireworks first. And let us know what works out really well for you, what is on your wishlist, and what sucks, right? What is not working out for you, so we can continue to improve. And for our new product launches, typically we want to launch to a small group of people first. Usually we launch on our Discord first, to have a set of people use it first. So please join our Discord channel. We have a lot of communication going on there.
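The multi-LoRA idea, many adapters sharing one copy of the base weights, can be sketched in a few lines of NumPy. The tenant names and dimensions are invented, and this is only a toy illustration of the memory argument, not Fireworks' serving stack:

```python
import numpy as np

# One base weight matrix is shared; each tenant only adds a tiny low-rank
# (A, B) pair. Per request we compute x @ W plus the adapter's low-rank
# correction, so a single copy of W serves every adapter.
rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 8, 2
W = rng.standard_normal((d_in, d_out)).astype(np.float32)      # shared base weights

adapters = {  # hypothetical tenants; in practice these are uploaded fine-tunes
    "tenant-a": (rng.standard_normal((d_in, rank)).astype(np.float32),
                 rng.standard_normal((rank, d_out)).astype(np.float32)),
    "tenant-b": (rng.standard_normal((d_in, rank)).astype(np.float32),
                 rng.standard_normal((rank, d_out)).astype(np.float32)),
}

def forward(x: np.ndarray, adapter_id: str) -> np.ndarray:
    A, B = adapters[adapter_id]
    return x @ W + (x @ A) @ B   # base output + low-rank LoRA correction

x = rng.standard_normal(d_in).astype(np.float32)
print(forward(x, "tenant-a")[:3])
print(forward(x, "tenant-b")[:3])
```

Each adapter stores only d_in * rank + rank * d_out numbers, while the base matrix stores d_in * d_out, which is why hundreds of adapters can ride on a single base model whose weights dominate memory and cost.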
Again, you can also give us feedback there. We're also starting office hours for you to directly talk with our DevRel and engineers and exchange notes. Alessio [00:55:17]: And you're hiring across the board? Lin [00:55:18]: We're hiring across the board. We're hiring front-end engineers, cloud infrastructure engineers, back-end system optimization engineers, and applied researchers, like researchers who have done post-training, who have done a lot of fine-tuning, and so on. Swyx [00:55:34]: That's it. Thank you. Thanks for having us. Get full access to Latent Space at www.latent.space/subscribe

Thicc Radio
Grecian Lightning (w/_the_mochi_)

Thicc Radio

Play Episode Listen Later Nov 19, 2024 47:59


This week, we discuss the moves of fat liberation in Greece! How is this European society dealing with beautiful, fat bodies? And what kind of fantastic work is our guest getting up to? Who are we? We're James and Tim; two gainers who want to explore everything about gaining and feedism. New episodes will come out every Tuesday, so please subscribe! Rate us five stars, leave us a review, donate to support us and share this episode with your friends. You can find us on our socials below if you want to contact us, but until next time, bye fats! James Instagram: s.t.a.n.n.u.m BlueSky: stannnum.bsky.social Tim Instagram: thickey_mouse Grommr: orpheus Twitter: thickey_mouse YouTube: thickey_mouse TikTok: thickey_mouse Special Guest | Mochi Instagram: _the_mochi_ Twitter: _the_mochi_ Facebook: Mochi Georgiou Thicc Radio Instagram: thiccradio TikTok: thiccradio YouTube: thiccradio Website: ⁠⁠podpage.com/thiccradio/⁠⁠ Email: ⁠⁠thethiccradio@gmail.com --- Support this podcast: https://podcasters.spotify.com/pod/show/thiccradio/support

Saku's Radio from Chicago
#191 America Turns Its Attention to Mike Tyson's Comeback Fight, and More

Saku's Radio from Chicago

Play Episode Listen Later Nov 19, 2024 49:02


1. Opening talk: "I want to move" 2. "What's Happening America" (What's going on, America?)   (1) Mike Tyson fights a YouTuber! Netflix streams it live! ・The legend of Mike Tyson ・Who is Jake Paul? ・Sports betting today (2) Trump's cabinet nominations stir controversy! ・Matt Gaetz and the allegations ・Kristi Noem's autobiography ・Robert F. Kennedy Jr.'s claims (3) What is the Mochi ice cream boom sweeping America in recent years? ・Was Japan first? ・Mikawaya ・Frances Hashimoto 3. Saku's Weekly Update: "Capitalism" 4. Saku's Weekly English - this week's word: Delulu 5. Ask Saku - letters from our listeners. Support and tips welcome here - PayPal : saku39yanagawa@gmail.com For English conversation lesson inquiries: sakusradio@gmail.com Written by: Saku Yanagawa Cast: Saku Yanagawa, Saeko --- Support this podcast: https://podcasters.spotify.com/pod/show/saku-yanagawa/support

Fluent Fiction - Japanese
Crow Commotion: A Mochi Mishap and Unexpected Friendship

Fluent Fiction - Japanese

Play Episode Listen Later Nov 14, 2024 12:08


Fluent Fiction - Japanese: Crow Commotion: A Mochi Mishap and Unexpected Friendship Find the full episode transcript, vocabulary words, and more:fluentfiction.com/ja/episode/2024-11-14-23-34-02-ja Story Transcript:Ja: 紅葉が舞う秋の日、広々とした桜公園は静かな安らぎを提供していました。En: On an autumn day with falling maple leaves, the expansive Sakura Park offered a quiet solace.Ja: 広志は、忙しい日常の中でそのひとときを大切にしていました。En: Hiroshi cherished that moment amidst his busy daily life.Ja: 公園のベンチに腰掛け、彼はお気に入りのもちを取り出しました。En: Sitting on a park bench, he took out his favorite mochi.Ja: しかし、そのもちを狙うものがいました。En: However, there was someone eyeing that mochi.Ja: 桜の木の上、黒いカラスが彼をじっと見ていました。En: Up in the sakura tree, a black crow was watching him intently.Ja: "カア、カア"と啼きながら、カラスは飛び降り、広志のもちをぱっと掴みました。En: With a "Caw, caw," the crow swooped down and snatched Hiroshi's mochi in an instant.Ja: 「まて!」広志は立ち上がり、小さな声で叫びました。En: "Wait!" Hiroshi stood up and called out softly.Ja: 周囲の人々はその様子に興味を持ちます。En: People around took interest in the scene.Ja: 散歩をしていた幸子と重則は、事情を聞いて笑顔で言いました。「私たちも手伝うよ!」En: Sachiko and Shigenori, who were out for a walk, heard the story and said with a smile, "We'll help too!"Ja: 三人は公園の中を駆け巡ります。En: The three of them dashed around the park.Ja: カラスは素早く飛び回り、もちをしっかりくわえています。En: The crow flew agilely, holding the mochi firmly in its beak.Ja: 広志たちは少しずつカラスを追い込み、ついに角に追い詰めます。En: Little by little, Hiroshi and the others cornered the crow.Ja: 「そこの木の後ろ!」と重則が指さしました。En: "That tree behind there!" Shigenori pointed.Ja: 驚いたカラスは、思わずもちを落としました。En: Startled, the crow inadvertently dropped the mochi.Ja: 地面に転がったもちを拾い上げながら、三人は笑い合いました。En: Picking up the mochi that had rolled to the ground, the three laughed together.Ja: 「これ、みんなで食べようよ。」と広志が言うと、幸子と重則も嬉しそうに頷きました。En: "Let's all eat this together," suggested Hiroshi, and Sachiko and Shigenori nodded happily in agreement.Ja: もちをみんなで分け合って食べると、広志は小さなことで苛立つのをやめることが大切だと学びました。En: As they shared and ate the mochi together, Hiroshi learned the importance of not getting upset over small things.Ja: 新しい友達を得ることができて、驚きと喜びが溢れました。En: He was filled with surprise and joy at gaining new friends.Ja: 紅葉が風に揺れる桜公園で、広志とその仲間たちの笑い声が響き渡り、平穏なひとときと新しい友情の始まりを告げているようでした。En: In Sakura Park, where the maple leaves swayed in the wind, the laughter of Hiroshi and his companions echoed, signaling a peaceful moment and the beginning of a new friendship. Vocabulary Words:autumn: 秋expansive: 広々としたsolace: 安らぎcherished: 大切にしていましたfavorite: お気に入りeyeing: 狙うswooped: 飛び降りsnatched: 掴みましたintently: じっとsoftly: 小さな声でinterest: 興味dashed: 駆け巡りますagilely: 素早くcornered: 追い詰めますstartled: 驚いたinadvertently: 思わずrolled: 転がったlaughter: 笑い声swayed: 揺れるechoed: 響き渡りpeaceful: 平穏なimportance: 大切だupset: 苛立つgaining: 得ることcompanions: 仲間たちsignaling: 告げているmoment: ひとときoffer: 提供していましたrolled: 転がったsurprise: 驚き

TeknoSafari's Podcast
If the Crafty Intern Plays AI Agent - This Week in AI S2 E4

TeknoSafari's Podcast

Play Episode Listen Later Nov 14, 2024 19:57


1. Claude can see your computer! 2. Perplexity is trying to rival NotebookLM. It's also ambitious with Reasoning. Meanwhile, #WallStreetJournal and #NewYorkPost are suing #PerplexityAI, accusing it of copyright infringement and brand damage. 3. Ideogram hit the top with CANVAS. 4. A state-of-the-art video model that is very good and open source: Mochi 1 is definitely worth trying. 5. The debate didn't end when Flux arrived; Stable Diffusion 3.5 is here. 6. RunwayML is on the attack with Act-One. 7. OpenAI has opened advanced voice to Europe as well. 8. A ByteDance intern was fired for planting malicious code in AI models. 9. Elon released the xAI API; Grok can be added to your applications. If Grok 3 arrives, things will get messy. 10. The GPT app for Windows has arrived. #yapayzeka #teknolojihaberleri #bilim

Hustleshare
Guaya Melgar and Adrian Co - The Hustle Behind Mochi

Hustleshare

Play Episode Listen Later Nov 10, 2024 84:50


Guaya Melgar, CEO/Co-Founder, and Adrian Co, CTO, share their journeys leading up to co-founding Mochi, a receivables management platform designed to improve billing and collections. Pete discusses his early self-taught coding experiences and forming a dev group, which led him to Proudcloud and later to Logitech. Meanwhile, Guaya reflects on her first startup, Vesl, which addressed credit access for small businesses, and how these experiences prepared her for Mochi's creation. They both recount their initial meeting and partnership, explaining the intensive early product testing and the decision to build Mochi using TypeScript and Next.js for scalability, and Foxmont's early investment that helped them launch. They share candid insights on navigating funding challenges, product bugs, and finding validation in their first paying customer, all while building a strong co-founder relationship based on transparency and resilience.This episode is brought to you by OneCFO and LarkFor show notes, go to Hustleshare.comHustleshare is powered by Podmachine Test https://plus.acast.com/s/hustleshare. Hosted on Acast. See acast.com/privacy for more information.

Ask Noah Show
Episode 413: Ask Noah Show 413 | Contributing to Ubuntu

Ask Noah Show

Play Episode Listen Later Oct 30, 2024 55:14


This week Robie Basak joins Noah from the Ubuntu Summit and gives an introduction on how to get started contributing to Ubuntu. -- During The Show -- 01:26 HexOS - Craig Start with the Command Line When is a GUI appropriate Start with make a ZFS pool make a samba share Help us understand your goal What is HexOS Ubuntu and ZFS DKMS kABI Advantages of TrueNAS Snapshots Send/Receive 1 Click Re-silvering 17:08 Questions about HDMI switch - Andy Theater Receiver Decimator (https://www.amazon.com/Decimator-DMON-QUAD-SD-SDI-Multi-Viewer-Outputs/dp/B072NGFDMR) 21:14 News Wire SQLite 3.47.0 - sqlite.org (https://sqlite.org/releaselog/3_47_0.html) Peazip 10 - github.io (https://peazip.github.io) Jellyfin 10.10.0 - jellyfin.org (https://jellyfin.org/posts/jellyfin-release-10.10.0/) EasyOS 6.4 - puppylinux.com (https://forum.puppylinux.com/viewtopic.php?t=12973) Gnome 47.1 - gnome.org (https://discourse.gnome.org/t/gnome-47-1-released/24670) Tor Browser 14.0 - torproject.org (https://blog.torproject.org/new-release-tor-browser-140/) AlmaLinux Kitten 10 - almalinux.org (https://almalinux.org/blog/2024-10-22-introducing-almalinux-os-kitten/) Gentoo & DTrace 2.0 - gentoo.org (https://www.gentoo.org/news/2024/10/23/DTrace-for-Gentoo.html) NASA $15.6M Grant for Open Source Tools - spaceanddefense.io (https://spaceanddefense.io/nasa-awards-15-6-million-in-open-source-software-funding/) Open Source Printable Lathe - hackaday.com (https://hackaday.com/2024/10/23/a-3d-printed-open-source-lathe/) Thelio Astra - system76.com (https://system76.com/desktops/thelio-astra) Eight Nvidia High Severity Vulnerabilities - forbes.com (https://www.forbes.com/sites/daveywinder/2024/10/25/urgent-new-nvidia-security-warning-for-200-million-linux-and-windows-gamers/) OpenSSL 3.4 - github.com (https://github.com/openssl/openssl/releases/tag/openssl-3.4.0) IPS Snort v3.5 - github.com (https://github.com/snort3/snort3/releases) Parrot OS 6.2 - parrotsec.org (https://parrotsec.org/blog/2024-10-23-parrot-6.2-release-notes/) New Granite 3.0 - zdnet.com (https://www.zdnet.com/article/ibm-doubles-down-on-open-source-ai-with-new-granite-3-0-models/) HUGS - reuters.com (https://www.reuters.com/technology/startup-hugging-face-aims-cut-ai-costs-with-open-source-offering-2024-10-23/) SynthID Now Open Source - theverge.com (https://www.theverge.com/2024/10/23/24277873/google-artificial-intelligence-synthid-watermarking-open-source) Mochi 1 - venturebeat.com (https://venturebeat.com/ai/video-ai-startup-genmo-launches-mochi-1-an-open-source-model-to-rival-runway-kling-and-others/) Ubuntu Turns 20 - ubuntu.com (https://ubuntu.com/20years) 23:23 Robie Basak - Ubuntu Technical Council What drew you to Linux? Why did you decide to work for Canonical? What is the Ubuntu Technical Board? Difference between Ubuntu and Canonical The process of granting commit rights Conflict resolution Cloud init Unique ID Ubuntu Summit Range of interaction Membership Board Meeting Full Hour Long Meeting Recording YouTube (https://www.youtube.com/live/pyRcIZskKNE?si=frx3zrPhUoeLrHi8) 43:40 Fedora 41 Fedora 41 available early! New DNF bootc Plasma Mobile Spin (https://fedoramagazine.org/announcing-fedora-linux-41/) Fedora Magazine (https://fedoramagazine.org/announcing-fedora-linux-41/) Minisforum v3 (https://store.minisforum.com/products/minisforum-v3?) 
Steve, Fedora, hardware 50:30 Russian Kernel Maintainers Removed Greg Kroah-Hartman removed them due to "various compliance requirements" Removed developers Russian and not minor contributors We live in a world where decisions are made for political reasons zdnet.com (https://www.zdnet.com/article/why-remove-russian-maintainers-of-linux-kernel-heres-what-torvalds-says/) therecord.media (https://therecord.media/russia-separate-linux-community-kernel-maintainers-delisted) -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/413) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

Let's Talk AI
#187 - Anthropic Agents, Mochi1, 3.4B data center, OpenAI's FAST image gen

Let's Talk AI

Play Episode Listen Later Oct 28, 2024 129:38


Our 187th episode with a summary and discussion of last week's big AI news, now with Jeremie co-hosting once again! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris) Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Timestamps + Links: (00:00:00) Intro / Banter (00:03:07) Response to listener comments / corrections (00:05:13) Sponsor Read) Tools & Apps(00:06:22) Anthropic's latest AI update can use a computer on its own (00:18:09) AI video startup Genmo launches Mochi 1, an open source rival to Runway, Kling, and others (00:20:37) Canva has a shiny new text-to-image generator (00:23:35) Canvas Beta brings Remix, Extend, and Magic Fill to Ideogram users (00:26:16) StabilityAI releases Stable Diffusion 3.5  (00:28:27) Bringing Agentic Workflows into Inflection for Enterprise Applications & Business(00:32:35) Crusoe's $3.4B joint venture to build AI data center campus with up to 100,000 GPUs (00:39:08) Anthropic reportedly in early talks to raise new funding on up to $40B valuation (00:45:47) Longtime policy researcher Miles Brundage leaves OpenAI (00:49:53) NVIDIA's Blackwell GB200 AI Servers Ready For Mass Deployment In December (00:52:41) Foxconn building Nvidia superchip facility in Mexico, executives say (00:55:27) xAI, Elon Musk's AI startup, launches an API Projects & Open Source(00:58:32) INTELLECT-1: The First Decentralized 10-Billion-Parameter AI Model Training (01:06:34) Meta FAIR Releases Eight New AI Research Artifacts—Models, Datasets, and Tools to Inspire the AI Community (01:10:02) Google DeepMind is making its AI text watermark open source Research & Advancements(01:13:21) OpenAI researchers develop new model that speeds up media generation by 50X (01:17:54) How much AI compute is out there, and who owns it? (01:25:28) Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning (01:33:30) Inference Scaling for Long-Context Retrieval Augmented Generation Policy & Safety(01:41:50) Announcing our updated Responsible Scaling Policy (01:48:52) Anthropic is testing AI's capacity for sabotage (01:56:30) OpenAI asked US to approve energy-guzzling 5GW data centers, report says (02:00:05) US Probes TSMC's Dealings with Huawei (02:03:03) TikTok owner ByteDance taps TSMC to make its own AI GPUs to stop relying on Nvidia — the company has reportedly spent over $2 billion on Nvidia AI GPUs (02:06:37) Outro

Mother Knows Death
Woman Overboard on Taylor Swift Cruise, Death By Mochi, Woman Licks Eyeballs, Al Pacino Childhood Injury, and More!

Mother Knows Death

Play Episode Listen Later Oct 25, 2024 57:37


The LA Food Podcast
The influencer debate heats up. Plus, political burritos, immortal mochi, and a cheesy conversation with The Cheese Store of Beverly Hills' Dominick DiBartolomeo.

The LA Food Podcast

Play Episode Listen Later Oct 25, 2024 102:54


Today on The LA Food Podcast presented by Rusty's Chips… how are influencers transforming LA's restaurant scene? What does a legendary San Bernardino restaurant have to do with political polarization? And why did one Little Tokyo mochi manufacturer inspire a long-form feature in the New York Times? Father Sal is with us to discuss all of the above, beginning with a deep dive into Eater LA's four-story package on the dreaded “i” word - influencers. Are they adding something valuable to the overall conversation? Or are they scammers that everybody from restaurants to consumers should be wary of? As always, we have the definitive, unquestionable, and unchallengeable answers, for you dear listener, so get excited, cuz this conversation is Lox Level 9.9. In Part 2, I caught up with the iconic Dominick DiBartolomeo at The Cheese Store of Beverly Hills. While The Cheese Store has been in business since the 60s, the latest iteration on Santa Monica Blvd is making waves that have the LA's culinary nerds abuzz with excitement. Dom and I talk about all things cheese and sandwiches, we learn what his favorite cheeses are right now, and which cheese he would eat if he could only pick one to consume for the rest of his life. He tells us the crazy lengths he goes to to source his incredible product, and how he goes about forging relationships with LA's best chefs. Talking to Dominick felt like talking to a long-lost relative, and that's not just cuz we're both Italian. He's a true gem of a human being, and I can't wait for you to brie this conversation. I mean, hear this conversation.  As always please consider leaving us a rating or a review wherever you listen to podcasts. I'm your host Luca Servodio and without further ado, let's go Dodgers and let's chow down.  Helpful links: Our free newsletter LA FOODSTACK, where you'll find most of the articles we referenced today https://thelacountdown.substack.com/ The Cheese Store of Beverly Hills https://www.cheesestorebh.com/ The LA Food Podcast is produced with the help of: Adam Skaggs Tiffany Perez Tim Bertolini Abdo Hajj – Get 10% off at Rusty's Chips using code “LACOUNTDOWN” ⁠https://rustyschips.com/discount/LACOUNTDOWN⁠ -- Get 10% off at House of Macadamias using code "LAFOOD" https://www.houseofmacadamias.com/pages/la-foods cc: Gustavo Arellano, Bill Esparza, Meghan McCarron, Pete Wells, Steve Martin, Mona Holmes, Cathy Chaplin, Rebecca Roland, Matthew Kang, Gab Chabran --- Support this podcast: https://podcasters.spotify.com/pod/show/thelafoodpodcast/support

AI For Humans
Anthropic's New AI Agent, OpenAI Plays Catch-up, Runway's Act-One & More AI News

AI For Humans

Play Episode Listen Later Oct 24, 2024 50:12


AI NEWS: Agents are here from Anthropic with Computer Use in Claude Sonnet 3.5 (new) and likely coming from OpenAI, O1 keeps getting better and might get upgraded soon, Runway's New Act One let's you puppet AI video, Ideogram's new Canvas upgrades AI imaging, Unitree's Robots are getting WAY better and we show you how to make Google's NotebookLM uncensored. AND OH SO MUCH MORE.   It's a big, massive week of AI news. And we are here, for you.   Join our Patreon: https://www.patreon.com/AIForHumansShow Jump in our Discord: https://discord.gg/muD2TYgC8f Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow And to contact or book us for speaking/consultation, please visit our website: https://www.aiforhumans.show/   // Show Links //   Anthropic Drops “Computer Use” In Sonnet 3.5 aka AI Agents https://www.anthropic.com/news/3-5-models-and-computer-use   Claude Coding 90s Website: https://youtu.be/vH2f7cjXjKI?si=XqTRKVxHZx1bK36b   Picks the first link on Google: https://x.com/AnthropicAI/status/1848742757151498717   What Computer Use Can't Do https://x.com/forgebitz/status/1848764235729244254   OpenAI's Noam Brown on O1 https://v.redd.it/7dic62adm3wd1   OpenAI Feels The Pressure, Close To Releasing Coding Bot https://www.theinformation.com/articles/openai-in-duel-with-anthropic-doubles-down-on-ai-that-writes-software   OpenAI Agentic Rumors Involving Microsoft https://x.com/flowersslop/status/1848506100435304852   Sam Altman Teases ChatGPT Update For Second Birthday https://x.com/sama/status/1848487309211275398   Satya Nadella Says We're “Using AI Tools to Build Better AI” https://x.com/tsarnick/status/1848472478257189374   Runway Act-One https://runwayml.com/research/introducing-act-one   Teaser Video https://x.com/runwayml/status/1848785907723473001   Two actors in a scene https://x.com/runwayml/status/1848785913918218517   Mochi 1 -- New OpenSource AI Video From Genmo https://x.com/genmoai/status/1848762405779574990   Ideogram Canvas Feature https://x.com/ideogram_ai/status/1848757699606983143   Stable Diffusion 3.5 https://x.com/StabilityAI/status/1848729212250951911   Unitree Robot Exercise Videos https://youtu.be/G6JE7mNYz2A?si=KLiXYznOUy7Qz4Rh   TANGO https://x.com/dreamingtulpa/status/1847310594434584922   Trump at a McDonald's https://x.com/aliensupershow/status/1848438728148111822   NotebookLM Uncensored https://www.reddit.com/r/notebooklm/comments/1g64iyi/holy_shit_listeners_notebooklm_can_generate_18/

Sengoku Daimyo's Chronicles of Japan

So the year 649 was so bad that they went and changed the whole calendar to forget about it!  In 650 a white pheasant is brought to the court, and they sieze on that as a chance to rename the era from Taika to Hakuchi.  That should make things better, right? This episode we talk about this event--their reasoning, as well as what is recorded as having happened.  We also take a look at the completion of the Ajifu no Miya and how it was renamed to the Naniwa no Toyosaki no Nagara no Miya, or the Toyosaki Nagara Palace of Naniwa.  This is thought to be what we know today as the Early Naniwa Palace, and it was a real change, and, in many ways, the physical manifestation of the Taika era reforms. For photos and more, check out https://sengokudaimyo.com/podcast/episode-113 Rough Transcript: Welcome to Sengoku Daimyo's Chronicles of Japan.  My name is Joshua, and this is Episode 113: The White Pheasant.   The officials of the court stood sentinel at the palace gates, a formidable line of authority draped in flowing, vibrant robes that signified their rank. Each step down the line revealed a cascade of colors, a living tapestry of power and prestige. Only the envoys from distant shores stood apart, their unique uniforms adding an exotic flair to the proceedings, as well as a certain legitimacy as outside witnesses.   The air crackled with anticipation as the crowd waited, their breath held, until four figures emerged, bearing aloft a magnificent litter adorned with intricate decorations that shimmered as they caught the sun's rays.   Upon that litter rested a cage, and within it,a dazzling white pheasant, plucked from the untamed wilds of Anato. Whispers rippled through the throng; some questioned the significance of this fragile creature, while others dared to see it as a divine omen. Was this bird as pure as the tales had promised? The capital had buzzed with rumors ever since its unexpected arrival, and those in the back stretched their necks, desperate for a glimpse of this rare marvel.   The past year had cast a shadow over the Yamato court, leaving the air thick with uncertainty. Yet, this ethereal bird, shimmering with the promise of renewal, seemed to herald a shift—an opportunity for rebirth that everyone craved.  At the very least it was a much needed distraction from everything that had previously occurred.   As the litter glided past, the courtiers bowed deeply in reverence, forming two disciplined lines that followed through the grand gates. Together, they marched into the palace, hearts pounding with hope. They were not just entering a building; they were stepping into a new era, one that, with a whisper of fate, could rise above the struggles of the past.     This episode we kick off the start of a new era—the Hakuchi era, or the era of the White Pheasant.  It followed the Taika era, and it does have a different feel.  It is less about new edicts and more about how things were shaking out and coming together.  And one of the things that was coming together was the Nagara no Toyosaki palace, which is believed to be the same one known to archaeologists as the “Early Naniwa Palace” unearthed in Ohosaka and dated to the mid-7th century.  We'll actually start with a look at this palace, continuing our discussion from last episode, as our sovereign, Karu, aka Koutoku Tennou, seems to have been a bit crazy about all of his palaces, and figuring out just which is which can be an issue in and of itself. 
We'll also touch on the start of this new era, and look at why and what it meant to come up with a new era name—a new “nengou”—in the middle of a reign like this.  And so we catch ourselves at the start of the year 650, still, technically, in the Taika era.  The year started well enough, with the sovereign celebrating the new year at the Ajifu palace and then coming straight back—the Ajifu palace was apparently yet another new palace and it seems construction had only recently begun.  Now, There is some confusion between the Ajifu palace and the Toyosaki palace.  The Ajifu palace is traditionally thought to have been located on the opposite side o f the Yodo river, in the area of modern Settsu city, on the site of what became the Ajifu Shrine.  Others have suggested that it was actually on the Kanimachi plateau, which is where the Toyosaki palace was.  Notably the “Toyosaki” palace is not located anywhere near the modern area of “Toyosaki” with which it seems to share a name.  From what little information we have, it seems to have been quite the complex.  As to why he would need yet another palace, I could not say.  And yet, later we see that the Ajifu Palace is eventually named the Nagara Toyosaki Palace.  So are they one and the same?  Did they move the Toyosaki Palace?  Or did they build the Toyosaki Palace and then *rebuild* it as the Ajifu Palace—aka the Nagara Toyosaki Palace? At this point the way that the Chronicles talk about it, the Ajifu palace site seems to have been almost purely conceptual, while previous accounts seem to indicate that the Toyosaki Palace was already in use.  That would have made for an interesting New Year's celebration, probably in temporary buildings erected quickly amongst the grass and fields, with some nearby tomb mounds that would need to be leveled or moved to make room, we are later told.  It seems they were still surveying the site, but I guess Karu really was looking for a change.  And so he celebrated the new year at the Ajifu palace, but quickly returned back to wherever the work of the government was actually occurring. As to where that was, well, we talked last episode about all of Karu's meanderings from one palace to the other.  The Nihon Shoki text itself is not exactly clear, as I read it.  It doesn't help that the term for palace, or “miya”, appears to refer to both a complex and a single residence, without a clear distinction given between the two.  And so, though I mentioned it last episode, let's recap what we know about the palaces this reign. So in 645, we are told that Karu decided upon Naniwa and we are told that this is the “Toyosaki” palace.  Then in 646, Karu took up residence in the “detached” palace of Koshiro in Sayabe, Naniwa.  This was likely him repurposing the Miyake, the government offices with the royal granaries.  He was only there for about two months, though, before he returned.  Then, in the third month of 646, he issues an amnesty claiming to have taken up residence in the new palace—but we aren't told which one. In 647, two years into the reign, the government offices at Wogohori are torn down and a palace was built there.  Now this is somewhat confusing because there appear to be two government districts:  Wogohori and Ohogohori.  You'll probably notice how similar these two sound, though it may have been more like “wogopori” and “opogopori”. Back in the day.  Wo-gohori, or the “Small District”, is mentioned once, but mainly just as a place name.  
Ohogohori, or the “Big District” has previously shown up as the place with government offices for the envoys from overseas.   Confusing matters, in a later entry, Karu eventually moves out of the palace at Oho-gohori and into the palace that would be known as the Nagara Toyosaki palace.  So was he at Wogohori and then later at Ohogohori?  Or was there some scribal error such that the two got confused? And then in 648 we are told that Karu moved into the Toyosaki palace in Naniwa.  Two years later, in 650, and he is now celebrating New Year's at the Ajifu palace, which may refer to a location on the other side of the Yodo river, but is likely in the spot we now think of as the Nagara Toyosaki Palace.  We then know that in 651 they were still building a palace.  And it isn't until the last day of 651 that Karu would formally move from Ohogori into the Ajifu palace, which we are told was then renamed the Nagara no Toyosaki no Miya---the Nagara Toyosaki Palace. I have several thoughts on all of this.  One, is that there may have been two “Toyosaki” palaces—there was the Toyosaki palace that he first moved into, and then there is the Nagara Toyosaki Palace.  “Nagara” appears to mean something like “Long Handle”, but other than that, I don't know that there is a good translation.  It may refer to the fact that it was meant to last longer, or that it was even larger than the previous palace.  It may even be that the original Toyosaki Palace was just a few of the buildings, and that eventually it grew into the larger Nagara Toyosaki Palace, but if that is the case, what is up with term “Ajifu”?  Was that just one building in the larger palace?  Or are earlier mentions of “Toyosaki” anachronistic, and perhaps it wasn't until the entire thing was complete that they gave it that name?  Many modern accounts appear to conflate the Toyosaki palace with the Nagara no Toyosaki Palace, saying it just took that long to build.  That would imply that the Ajifu palace really was there on the Kamimachi plateau, at the known Naniwa palace site.  Alternatively, “Nagara” could possibly have been a reference to the fact that the Ajifu palace was an extension of the larger Toyosaki complex, possibly built out of the government offices of either Wogohori or Ohogohori. For all that we don't know exactly what was happening here, we have a pretty good idea in the archaeological record about at least one of the palace sites on the Kamimachi plateau.  This site has been identified as the Toyosaki palace of Karu, aka Koutoku Tennou, and it would actually be reused at a later date.  Sure enough, there are remains of at least two palace complexes on the site, with the one from our period known as the “Early Naniwa Palace” site. Based on its size and layout, this Early Naniwa palace was the first of its kind.  Previous palaces in Asuka had not dissimilar designs in terms of the general arrangement, but this clearly made use of the structure of continental style palace complexes, and was likely intended to be a new, permanent capital. The north of the palace complex consisted of a rectangular, walled section 185 meters east to west and 200 meters north to south, making up the “dairi”.  That's almost 10 acres of enclosed space, set aside as the sovereign's personal living quarters. South of that was a smaller area with the front hall, one of the largest for its time.  It was 36 meters east to west and 19 meters north to south.  
This would have been the hall called the “Daigokuden” in later palaces, where official rituals would take place.  There was a gate between it and the Dairi, to the north, as well as a gate to the south, flanked by two octagonal buildings, which led to the Chodoin, the main working area of the court complex. This is part of what sets this palace apart from others, and why it likely took a while to build.  It may also explain all the different palace names as there was probably a lot of construction for a long time.  In previous instances, as far as we can tell, the sovereign's palace was both their home and the building where state business was conducted.  Think, perhaps, of the White House, in the US, and then imagine that the White House, the Capitol Building, and the Supreme Court were all part of the same compound, with only the barest of concessions to privacy between them.  In this new layout, the dairi was reserved to the sovereign, there was a small area for the official throne room, and then south of that was the Chodoin, the court hall complex. This was a huge change to how things had operated in the past.  While the main audience hall was still nominally part of the dairi, so the “private” areas of the palace weren't entirely “private”, it was still leaps and bounds more separated than in the previous palaces we've uncovered.  Sure, the idea of lining up buildings from the front gate to the larger buildings towards the back, making people approach successively larger and more impressive buildings, generally seems to have been a thing as far back as the Makimuku Palace near Mt. Miwa, back in the third century, but even then, there is no clearly defined separation between the public and private spaces of the sovereign.  There does seem to have been restrictions on who could enter what parts of the compound, with the sovereign's personal quarters being the most restricted, but now there were walls and gates and guards separating one area from another. The Chodoin itself, the main “business” or “public” area of the court, appears to have been about 262.8 meters north to south and 233.6 meters east to west—a little over 15 acres.  Most of that was open space between the 14 “choudou” halls lined up symmetrically, 7 on either side.  These were the individual buildings where the various government officials were to meet and conduct business, as well as conduct rituals, feasts, etc.  There was a southern gate that provided the entrance to the Chodoin and led to another large area with the Choshuden, the buildings where officials could change into and out of their formal court uniforms, and otherwise prepare for or close out the day.  South of that was the main gate for the entire compound, the Suzaku gate, named for Suzaku, the red bird of the south, one of the four directional guardian spirits. We know the buildings largely from their post holes.  They were made of wood, and it is likely that most of them were thatched.  They may have been painted white, vermillion, and green—classic paints that were based on continental styles and which were said to help prevent the wooden pillars from rotting too quickly.  It is unsurprising that this would have taken years—but it is also possible that they built some quarters for the sovereign and then built out from there.  This also would have been key to a lot of the governmental reforms, providing an actual location for the work that the reforms were directing. 
Of course, there was a lot of work to be done, and the halls in the palace were limited, so two areas to the east and west of the complex were set aside and appear to have been built up with other government offices, suitable for carrying out the day to day minutiae that was required. There is still a question of whether or not they also instituted the larger grid system city layout around the palace complex.  Currently we have no evidence for that, though perhaps they were considering it, eventually.  Unfortunately, with all of the construction in Osaka over time, I don't know if we could be able to find or discern such a layout if we did find it.  For now, we will stick with what we know:  an absolute unit of a court complex that took them several years to build. Getting back to the Chronicles: Our next entry in the Nihon Shoki, after the New Years celebration, tells us that in the second month, Kusakabe no Muraji no Shikofu, the governor of Anato Province, brought a white pheasant to the court.  The report claimed that it had been caught by Nihe, a relative of Obito, the Kuni no Miyatsuko of Anato, on the 9th day of the first month, on Mt. Wonoyama. For reference, the land of Anato was at the far western end of Honshu, part of the San'yodo, itself a designation for the lands along the Seto Inland Sea coast from Harima, modern Hyogo prefecture, out to Anato, modern Yamaguchi prefecture.  It was on the Honshu side of the Shimonoseki strait, which was the main entrance from the Korean Strait and the Japan Sea to the Seto Inland Sea.  The area would later be known as Nagato, which would eventually be called Choshu, an area which any students of the fall of the Tokugawa shogunate are sure to recognize. We discussed back in Episode 94 how white or albino animals—assuming they weren't normally white—were considered particularly auspicious.  So in 598, the land of Koshi sent a white deer they had found to the court of Kashikiya Hime, which is to say Suiko Tenno.  And so the white pheasant from Anato was clearly seen as an omen—but was it truly auspicious.  Here we see the court investigating this, and how exactly they go about that is somewhat enlightening as to how the court thought in general. First, they made inquiry of the lords of Baekje—I would suspect this referred to those recognized as Baekje nobility residing in the archipelago, rather than sending a correspondence to the peninsula and back.  That they went to someone from Baekje would seem to indicate the importance they placed on Baekje as a conduit for continental learning.  Indeed, the answer they got back—whether from a single, unnamed individual or a group of Baekje nobility—was that White Pheasants were recorded in the 11th year of Yongping, which would be 68 CE to us, during the reign of Ming of the later Han dynasty.  Han Mingdi, aka Emperor Ming of Han was born Liu Yang and also known as Liu Zhang, reigned from 57 to 75 CE.  Ming and his son, Emperor Zhang oversaw a period of particular prosperity for the Eastern Han dynasty.  On the other hand, there was an attempt to curse Emperor Ming in 67 CE, which ended with the death of the ambitious Prince Jing of Guanglin.  Then, in 70, Prince Ying of Chu was also convicted of using magic to try and secure blessings while he fomented revolution against the emperor, and he was exiled, where he committed suicide.  So I don't know if this marks the pheasant as particularly auspicious or not. 
When the court asked the Buddhist priests, who frequently studied not just the Buddhist canon but other continental texts as well, they mostly drew a blank—at least on the specifics of a white pheasant.  They did recommend that a general amnesty would not be amiss, as it would bring joy to the people.  I guess if you aren't sure about the nature of an omen you can certainly do something to help it out.
And while they weren't specifically sure about a white pheasant in Buddhist scripture, a couple of priests did have suggestions.
The Priest Doutou recounted a story from Goguryeo, when the court there wished to build a new Buddhist temple, but could not divine a suitable and auspicious site.  When someone witnessed a white deer, they chose that spot for the temple, which was then called the Temple of the Park of the White Deer.  According to Doutou, this temple established Buddhism in Goguryeo.  Furthermore, he recounted, when a white sparrow was seen on the farmstead of another temple, or when a dead crow with three legs had been brought back from the Tang dynasty, the people had proclaimed both of these to be good omens.  So given all of that, Priest Doutou concluded, a white pheasant must be especially auspicious.
The Priest Bin agreed.  Bin, you may recall, had been heavily relied upon for his knowledge in setting up the new governmental structure, which would seem to indicate that he was quite well-versed in continental ideas, and he had even traveled there himself.  He provided the court several different reasons that a white pheasant might appear.  First, it might appear when a ruler extended his influence to all four quarters.  Second, it might appear when the sovereign's sacrifices were appropriate, and when his banquets and clothing were in due measure.  Third, it might appear when the sovereign cultivated frugality.  Finally, it might appear when the sovereign was humane.  He didn't provide any specific examples of how he arrived at his conclusions—at least nothing was recorded—and so he may have been relying on his own expertise.  However, he did recount one tale in particular.  It was a story from the time of Emperor Cheng Wang of the Zhou dynasty.  Cheng Wang is said to have reigned in the 11th century BCE, from 1042 to 1021, and so take that how you will.  Important to us is not what happened so much as what the Yamato court believed had happened—what was the historical truth that they were working with at the time?
According to Bin, during Cheng Wang's reign, the Yuehshang family brought a white pheasant to the court.  Apparently it had been three years without any exceptional storms or rains, and neither the rivers nor seas had flooded.  The old men found this an extremely long time to go without some kind of disaster, indicating that the pheasant was clearly an auspicious omen indeed.  Priest Bin also mentioned other accounts, but the Chroniclers omitted them from the record.
Whatever they were, the court had heard enough.  The White Pheasant was declared auspicious, and a new era was declared:  the Hakuchi, or White Pheasant, era.  They let the white pheasant loose in the royal garden, presumably with clipped wings or otherwise kept from flying off, and then preparations were made immediately to officially inaugurate the new era 6 days later, on the 15th day of the 2nd month of 650.
Before we get into that, though, I want to pause and take a look at something here:  The authority of precedent.  Time, as conceived of in the continental model, was cyclical.  There was the cycle of day and night.
The cycle of the year and the repeating seasons.  Likewise the planets and heavens all had their own cyclical periods.  In addition, there was the idea that the Yin and Yang forces in the universe likewise cycled through predictable patterns—the sexagenary cycle, or cycle of 60 years, being an example of a longer-term cycle.  And then there was the Buddhist cycle of death and rebirth, at least as long as one remained tied to this mortal plane of existence.
If time is cyclical, then one can look to the past to predict the present.  Stories of the past were seen as holding authority over similar events in the present.  Understanding these historical stories and being able to pull from them provided its own kind of power and authority.  Rather than attempting to reason from first principles, one could often make a more convincing argument by citing precedent.  Being able to read and write and recall all of these stories gave scholars the ability to influence events.  Of course, who had time to do all that other than people like Buddhist priests or the doctors of the court?
This is also one of the reasons that people would have had to write down histories and, eventually, to keep diaries and accounts of what happened.  Those accounts would, over time, become essential records to invoke for moments like this—and even a record like the Nihon Shoki or the Kojiki would have similar significance.  In many ways, it is propaganda, but not just in how it describes the past as the Chroniclers wished it to be—it also set the precedent for succeeding eras to look back on.  While we may challenge that view today, for many from the 8th century onward the events described in the Nihon Shoki were considered the gospel truth in more ways than one.
Of course, all that aside, we've had plenty of auspicious events before, but why, now, would they be enough to trigger a new era?  Why not just note them and move on?
Well, to start with, let's face it, nobody is likely to name 649 as the greatest year ever, any time soon, and certainly not the Yamato court.  The Crown Prince, Naka no Oe, had been tricked into thinking that his co-conspirator, Soga no Kurayamada no Ishikawa no Maro, was a traitor.  To be fair, Maro had been more than complicit in the murderous takedown of his own relatives to set up the current government, and history has time and again suggested that those who put someone on the throne can just as easily take them off it.  That's why they are often either brought deeper into the inner circle, or removed—either physically or more euphemistically.  In this case, though, it seems that the fears of Naka no Oe and others were unjustified, and they sent the royal troops after an innocent man; or at least a man as innocent as any of the other elites at that time.  After all, the wealth of the elites came from the rice fields that they owned—or that were at least designated for their stipends—and they certainly weren't working those fields themselves, so make of that what you will.
All of that had led to the death of Maro, his family, and the rest of his household.  That, in turn, led to the death of his daughter, Miyatsuko Hime, who was married to Naka no Oe himself.  When they finally did realize what had happened, the best justice they could figure out was to send the scandal-mongering Soga no Musa out to Tsukushi in a form of luxurious banishment.  Demotion by promotion, as he was made the Viceroy of Tsukushi, the top man of the court at the edge of the archipelago.
To say that the year 649 had been a bust is an understatement.  Don't get me wrong, it was a far cry from the worst year that the archipelago had ever experienced—or would in the future, for that matter.  But that was scant comfort to the folks living in it. And so it was with some relief, I suspect, that the court welcomed news from the far-flung land of Anato, because they really needed a distraction.
With that in mind, let us move on to the events of the 15th day of the 2nd month of the year 650, describing how they inaugurated the new era.  Now, if the Chronicles are to be believed, this is not the first time they inaugurated a new era—we are told that year 645 was considered the first year of Taika, or Great Change.  But, assuming that did happen, and that it wasn't just named after the fact, the era would have started at the same time as a new reign.  Previously, from everything we can tell, dates were based on regnal years.  Things are recorded as happening in the X year of Y sovereign.  Some of the oldest accounts seem to even note it more as the X year of the sovereign who reigned from the Y palace, as the palace was likely more distinct a feature than the names and titles that they used, and the posthumous names, like “Koutoku Tennou”, were not actually used until the end of the 7th or early 8th century.
It is possible that Hakuchi is actually the first true nengo—or era name—and the first one that appears in the middle of a reign—though even here some say that the instantiation of “Hakuchi” is anachronistic. Personally, I see no harm in taking it at face value, at least for now, while acknowledging that everything in the Nihon Shoki is suspect.  Still, we are approaching a time when the events being written down may have still been in the living memory of people alive at that time.  720 is only 70 years away, and the project started even before then, so unless there are obvious discrepancies or supernatural events, we can probably assume that the Chronicles at this point are largely truthful, if possibly embellished.
And so it is we are told of what happened.  To begin with, the court lined up the ministers of the left and right and all of the functionaries in four lines outside the “purple” gate, as they would during a New Year's reception, like the one they had just had at the Ajifu palace.  The “Purple” gate was probably a reference to the southern gate.  The fact that the courtiers lined up at the south gate in the same way that they would have during a New Year's reception would seem to indicate that this was seen as the start of a new year.  It was no longer a Taika year—starting on that day it was now the first year of Hakuchi.  The month and day would not change, however, so it was still the 15th day of the 2nd month.  That means that technically the first year of Hakuchi would only have ten and a half months in the year—maybe eleven and a half, if there was an extranumerary month.  Likewise, the last year of Taika would only have one and a half months.  And if you are thinking that must make Japanese dates really tricky around the start or end of a year, you don't know the half of it.  Sometimes events will get placed in the wrong “era” because they happened a few months before or after the change, and people forget that when they are translating to and from western dates.  It also means era names can't just give you the years of the era, but really need to give you the month and date it starts and ends.  Fortunately, most people are quite understanding about the occasional mistake.
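To make that bookkeeping problem concrete, here is a minimal sketch—my own illustration, not anything from the Chronicles or the podcast—of why an era lookup has to key off a full start date rather than just a year. The Gregorian dates below are rough stand-ins chosen purely for the example, since the actual reckoning used the lunisolar calendar.

    from bisect import bisect_right
    from datetime import date

    # Illustrative stand-in start dates only; the historical dates were kept
    # in the lunisolar calendar, so these Gregorian values are placeholders.
    ERA_STARTS = [
        (date(645, 7, 17), "Taika"),    # assumed stand-in for Taika 1
        (date(650, 3, 22), "Hakuchi"),  # assumed stand-in for Hakuchi 1 (15th day, 2nd month, 650)
    ]

    def era_for(d: date) -> tuple[str, int]:
        """Return (era name, year within the era) for a given date.

        Year 1 runs from the era's start date to the end of that civil year,
        which is why an era can begin partway through a year.
        """
        starts = [start for start, _ in ERA_STARTS]
        i = bisect_right(starts, d) - 1
        if i < 0:
            raise ValueError("date precedes the first recorded era")
        start, name = ERA_STARTS[i]
        return name, d.year - start.year + 1

    # A date late in 650 falls in Hakuchi 1, while the start of 650 is still Taika 6.
    print(era_for(date(650, 12, 1)))   # ('Hakuchi', 1)
    print(era_for(date(650, 2, 1)))    # ('Taika', 6)

Notice that the same civil year, 650, spans two eras, which is exactly why a year number alone is ambiguous.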
But anyway, I digress. The courtiers were lined up as though for New Year's, and then they watched as Ahata no Omi no Ihimushi and three others bore a litter with the pheasant on it and went ahead through the gates.  The others followed in rank order—with the Ministers of the Left and Right leading the various functionaries.  The Baekje prince Pungjang and his uncle, Sesyeong Chyungseung, whom we mentioned back in Episodes 105 and 107, as well as Mochi, the physician to the King of Goguryeo, a scholar attached to the court of Silla, and other important persons, all advanced as well into the central court of the palace.
The pheasant's litter was taken up by Mikuni no Kimi no Maro, Wina no Kimi no Takami, Miwa no Kimi no Mikaho, and Ki no Omi no Maro, who brought it to the front of the hall.  There, the ministers of the left and right then took the front of the litter, while the Prince of Ise, Mikuni no Kimi no Maro, and Kura no Omi no Woguso took hold of the rear.  Together, they placed it in front of the throne.  The sovereign, Karu, and the Crown Prince, Naka no Oe, examined the pheasant together.  The Crown Prince then backed away, and the new Minister of the Left, Kose no Omi, presented a congratulatory address.  He gave thanks to the sovereign and claimed that the pheasant was a sign that the sovereign would rule for one thousand autumns and ten thousand years across the Great Eight Islands—the Ohoyashima—of the archipelago and the four quarters of the earth.  Effectively, this is a long-winded version of “Banzai”, the congratulatory wish of ten thousand years of life for an emperor.
Karu responded to this address by citing auspicious times when white animals had been omens of good rule.  He then gave credit to the ministers and functionaries, and urged them to continue to provide good service.  Then he declared a general amnesty, forgiving various offenses, and noted that the era name would change to “Hakuchi”.  Karu then directed presents to be handed out to the Ministers, the Daibu, and the officials of lower rank, all the way down to the clerks.  Each received gifts commensurate with their rank.  Finally, Kusakabe no Muraji no Shikofu, the governor of Anato, was commended, and granted the rank of Daisen along with what we are told were a goodly number of presents.  In addition, the commuted taxes and corvees of Anato were remitted for three years, meaning that Anato would be allowed to keep all of the rice and product for themselves—something that was likely quite significant, though it is unclear whether this means that it was felt down at the level of basic workers or it just meant that the governor was able to keep what he taxed from the people for himself.
And with that, we enter a new era.  Forget the unfortunate bloodshed and regrettable decisions of the previous year: this was a new start.  And that is often how these eras were seen.  Whether it was a new reign or things were just going so poorly that the court felt there needed to be a new start, future nengo would often follow a similar pattern.  And there was no set time for how long an era would last.  In fact, here's a little trivia for you:  The shortest nengo in Japanese history was “Ryakunin”, and it lasted just under two and a half months from late 1238 to the start of 1239.  It really shows how important it was to come up with a good name for these eras, as “ryakunin”, which seems to mean something like “humane period”, could also be written with characters meaning “abbreviated person”.
So they decided to abbreviate the era, instead, changing the era name again.
This first year of the new era of Hakuchi continued relatively normally.  In the fourth month there were envoys from Silla—another source, according to the Nihon Shoki, claimed that Goguryeo, Baekje, and Silla sent envoys every year from this reign onward.  Then, in the tenth month, we see more work being done on the palace—presumably the Ajifu palace.  We are told that presents were given out in respect to tombs that had been demolished to make room for the new construction, as well as for the people who had been moved off their land.  Then Aratawi no Atahe no Hirafu was sent to place the boundary posts, no doubt marking out the outer extremities of the new palace precincts.
In addition, that month work began—no doubt at the court's direction—on a giant tapestry, or mandala, with a sixteen-foot-tall Buddha image, attendant Bodhisattvas, and figures of all eight classes of beings according to the Buddhist cosmology.  That includes Heavenly beings, such as Devas; dragons; demonic Yaksha, Gandharva, and Asura; the bird-like Garuda and Kimnara; and the snake-like Mahoraga.  All told, there were some 46 figures.  It doesn't seem to say where it was to be installed, though it may have been made for the new palace complex.  Also in that year we are told that the court ordered Aya no Yamaguchi no Atahe no Ohoguchi to carve one thousand images of Buddha—but once again, we aren't told where they resided.  We do know that the 16-foot-tall embroidered Buddha was completed in the 3rd month of 651: it had taken them approximately five months.  The day after they were completed, the Dowager Queen, Takara no Himemiko, aka the former sovereign, Kougyoku Tennou, who had stepped down in 645, invited ten Buddhist teachers and prepared a feast and entertainment, likely to bless and show off the completed images.
At the end of 651, the palace itself was finally complete.  We are told that over 2100 priests were invited to the Ajifu palace to read the Issaikyo on the last day of the year.  The Issaikyo is the entirety of the Buddhist canon, and so this was probably done in the abbreviated tendoku style, with priests just reading the chapter headings and flipping through the sutras, though with 2100 of them it is possible they each just read a different portion, all at the same time.  As it grew dark, the palace courtyard was kept bright with 2700 lights while we are told that the Antaku and Dosoku sutras were read.  Aston notes that these “sutras” of Antaku and Dosoku don't appear to reference any actual sutras that we know of, and posits that they may simply be rituals for home safety and the like.  Given what we know about the fate of so many of these old wooden palaces, it makes sense.
After the sutras were read, the sovereign, Karu, formally moved from his residence in Ohogohori into the new palace, which was called Naniwa no Nagara no Toyosaki no Miya.  As I noted at the beginning, it is unclear if this was the Ohogohori or Wogohori, and it is even somewhat murky as to whether or not it was considered a palace.  Not to mention that after the New Year's ceremonies were completed, the royal chariot—which would have been carrying the sovereign—went back to Ohogohori.  I guess things weren't quite ready yet.  He would return on the 9th day of the third month, and even then we don't see a note that the palace was completed until the 9th month of 652.
There is a lot here where we see things that appear to be scheduled so that they can occur on auspicious days, even if everything else isn't quite ready.  So, for example, reading the sutras and formally “moving” into the palace on the last day of the year so that one could host the New Year's celebration there the next day.  That seems like something that was done purely for ceremonial purposes.  You may recall that in 650 they did the same thing.
There are a few more references to the palace.  On the 15th of the 4th month of 652, the Buddhist ascetic E'on was invited into the Dairi to explain the Muryouju Sutra, also known as the Sukhavati Vyuha sutra.  E'on was made a lecturer, and there were said to be 1,000 ascetics in the audience, listening to his teachings.  That apparently went on for five days, being discontinued on the 20th day.  And the power of the sutras, and of E'on's teachings, is shown in the weather, because the Chronicles claim that large rains began to fall in a monsoon that lasted for nine days.  This wasn't a gentle “water your crops” kind of rain.  This was more like a “demolish your buildings and destroy your fields” kind of rain.  There must have been massive flooding as men, horses, and cattle were caught up in the water and drowned.
Given the way this is written, I'm not entirely certain of the takeaway.  Were the sutras so powerful that they brought rain, and E'on didn't understand his own strength?  Or was it a punishment for stopping E'on from continuing his lecture?  Or was it the rains that caused the lectures to stop, perhaps making it untenable for people to sit out in the courtyard and listen as the rains came down?  My rational brain suspects the latter, but I'm not sure how it was read by the people of the 8th century.
On the last day of 652, priests and nuns from around the country were invited to the dairi, to the interior of the palace, and entertained and given a feast.  Alms were given and lights kindled to celebrate the new year.
But that's the last entry I really see for the palace, as such.  There was plenty more happening through the era, and we'll touch on that.  We start to see Silla and the Tang dynasty getting chummy, and we also see some of the reforms still working their way across the land.  We also have Yamato's own expeditions out to the Great Tang dynasty.  But we'll save that for the next episode, as we continue to dive into the Hakuchi era.
And so, until next time, thank you for listening and for all of your support. If you like what we are doing, please tell your friends and feel free to rate us wherever you listen to podcasts.  If you feel the need to do more, and want to help us keep this going, we have information about how you can donate on Patreon or through our KoFi site, ko-fi.com/sengokudaimyo, or find the links over at our main website, SengokuDaimyo.com/Podcast, where we will have some more discussion on topics from this episode. Also, feel free to reach out to our Sengoku Daimyo Facebook page.  You can also email us at the.sengoku.daimyo@gmail.com.  Thank you, also, to Ellen for their work editing the podcast. And that's all for now.  Thank you again, and I'll see you next episode on Sengoku Daimyo's Chronicles of Japan.

ASIAN AMERICA: THE KEN FONG PODCAST
EP 501: Mika Shino On Launching, Mass-producing & Marketing Issei Mochi Gummies

ASIAN AMERICA: THE KEN FONG PODCAST

Play Episode Listen Later Oct 6, 2024 47:03


Because Mika Shino was born in Japan, she possessed an innate connection to Japan's traditions, culture, aesthetics, and cuisines. But having grown up in other countries, especially America, she also was imbued with a creative curiosity that was free to explore beyond the boundaries of her native roots. When she became a mom, she soon learned that most American snacks originated in Europe, and they weren't healthy. So she began to experiment in her kitchen, eventually concocting a healthy snack that her boys and their friends loved, based on the traditional Japanese mochi cake. But she took a huge leap of faith when she decided to mass-produce Issei Mochi Gummies. Her unique Japanese American healthy snack is now found in most grocery stores, on Amazon, and can also be bought directly from www.mochigummies.com. She is adamant about sticking with Issei's goal to create beautiful, healthy, and delicious foods that bring happiness, honor Asian heritage, and build community. They aim to enhance diversity and inclusion in the food sector, building bridges across cultures through food.

PODUCER
DJ Mochi: Amapiano, Everyday People, Global Currency, DJ Etiquette

PODUCER

Play Episode Listen Later Oct 2, 2024 126:32


DJ Mochi is a Chicago-based DJ, producer, and event curator. He shares insights into his career, musical influences, and the cultural significance of the genres he plays. Contrary to popular belief, his name is not derived from the Japanese rice cake dessert. The nickname "Mochi" originated in Argentina, where friends called him "Mochi," short for "mochila," meaning "backpack" in Spanish, because he was always following his friend Lucas around. While journalism was his initial career path, he found his true passion in DJing and event curation. He is known for playing a wide range of genres and for his dynamic sets that blend different musical styles. Website: djmochi.com Social Media: Instagram (@djmochi) Global Currency: Follow on social media (@globalcurrencychi) Podcast Chapters: 00:00 - Introduction 00:38 - Guest Introduction: Who is DJ Mochi? 01:16 - Origin of the Name "DJ Mochi" 03:23 - Experiences in Argentina 05:17 - Moving from Portland to Chicago 08:18 - First Concert Experience 10:59 - Early Musical Influences and Radio Background 12:06 - Discovering and Explaining Amapiano 19:27 - Music Discovery and Curation Methods 24:45 - Performing at the Everyday People Event 31:31 - Favorite Chicago Clubs and Venues 40:00 - Cultural Sensitivity and Appropriation in Music 47:17 - Navigating Cultural Appropriation as a DJ 56:15 - DJ Etiquette 58:52 - Global Currency 01:09:00 - Touring and Performing in Other Cities 01:17:24 - Future Aspirations for Global Currency 01:28:15 - Technical Aspects of DJing and Equipment 01:35:31 - Dealing with Performance Anxiety and Imposter Syndrome 01:42:17 - More on DJ Etiquette and Repeating Songs 01:53:23 - Most Impactful Concert Experience 01:58:17 - Final Thoughts and Shout-outs

POPlitics
Mochi & The Senate | Lobotomy Hour Appt 1

POPlitics

Play Episode Listen Later Oct 1, 2024 39:42


On this first ever episode of Lobotomy Hour Alex gives a summary of how the rebrand has gone, getting her first ever dog Mochi, doing a podcast swap with Skinny Confidential, and testifying at the FREAKING SENATE! Thank you to our sponsors! Wise Traditions | Use code “ALEX” for $25 OFF at wisetraditions.org MASA | Use code "REALALEXCLARK" for 20% OFF at ⁠⁠masachips.com⁠ Garnuu | Use code “ALEX” for 10% OFF garnuu.com Good Ranchers | Use code “CLARK” for $25 OFF at goodranchers.com YRefy | Call (888) 502-2612 or visit ⁠⁠yrefy.com⁠ Alex Clark Instagram | ⁠⁠⁠@realalexclark⁠⁠⁠ Instagram | ⁠⁠⁠@cultureapothecary⁠⁠⁠ Facebook | ⁠⁠⁠⁠@realalexclark⁠⁠⁠⁠ X | ⁠⁠⁠⁠@yoalexrapz⁠⁠⁠⁠ YouTube | ⁠⁠⁠⁠@RealAlexClark⁠⁠⁠⁠ Spotify | ⁠⁠⁠⁠Culture Apothecary with Alex Clark ⁠⁠⁠⁠ Apple Podcast | ⁠⁠⁠⁠Culture Apothecary with Alex Clark⁠⁠ New 'Culture Apothecary' Merch OUT NOW! Glass tumblers, weekly wellness planners, hats, crewnecks and more. Use code "Alex Clark" for 10% OFF at ⁠⁠⁠⁠tpusamerch.com⁠⁠ Join the Cuteservatives Facebook group to connect with likeminded friends who love America and all things health and wellness! ⁠⁠⁠⁠⁠⁠Join the CUTEservative Facebook Group!⁠⁠⁠⁠⁠ Subscribe to ‘Culture Apothecary' on ⁠⁠Apple Podcasts⁠⁠ and ⁠⁠Spotify⁠⁠. New episodes drop 6pm PST/ 9pm EST every Monday and Thursday. This show is made possible with generous donations from listeners who believe in our mission to heal a sick culture. You can support our show by leaving a tax deductible donation, or by subscribing to ⁠⁠⁠@RealAlexClark⁠⁠⁠ YouTube for FREE! ⁠⁠⁠donate.tpusa.com⁠⁠ #cultureapothecary #alexclark #podcast #health #women

Unreached of the Day
Pray for the Mochi (Muslim traditions) in Pakistan

Unreached of the Day

Play Episode Listen Later Sep 27, 2024 1:01


Episode Description: Sign up to receive this Unreached of the Day podcast sent to you: https://unreachedoftheday.org/resources/podcast/ People Group Summary: https://joshuaproject.net/people_groups/17625 #PrayforZERO is a podcast sponsor: https://prayforzero.com/ Take your place in history! We could be the generation to translate God's Word into every language. YOUR prayers can make this happen. Take your first step and sign the Prayer Wall to receive the weekly Pray For Zero Journal: https://prayforzero.com/prayer-wall/#join Pray for the largest Frontier People Groups (FPG): JoshuaProject.net/frontier#podcast provides links to podcast recordings of the prayer guide for the 31 largest FPGs. Go31.org/FREE provides the printed prayer guide for the largest 31 FPGs along with resources to support those wanting to enlist others in prayer for FPGs.

Binge-Watchers Podcast
Frozen Lakes, Killer Cars, and Vampires

Binge-Watchers Podcast

Play Episode Listen Later Sep 18, 2024 24:30


Starting with a cheeky debate on whether a fleshlight could break from overstimulation ("asking for a friend"), the conversation quickly moves to exciting horror rumors. Johnny speculates on the possibility of a new Friday the 13th movie set on a frozen lake, envisioning a holiday-themed Jason Voorhees wreaking havoc during family gatherings. He also talks about Nocturne's return for a second season and his love for Alucard from Castlevania, weighing him against Vampire Hunter D. The episode's main feature is a deep dive into the 1977 cult horror film The Car, where a possessed black Lincoln Continental terrorizes a desert town. Johnny provides fascinating trivia about the film's production, including its eerie connection to James Brolin's personal life and the car's creepy horn blasts that heighten the tension. The show covers everything from the film's side characters, including hot teachers and a cowboy sheriff, to Johnny's surprise at how much he actually enjoyed the movie. Ending on a high note, Johnny rates The Car a "Binge Now," encouraging listeners to experience the surprisingly sinister ride for themselves. Don't forget, tonight's episode is brought to you by Mochi—joinmochi.com for $40 off with the code BINGEWATCHERS.

Let's Know Things
Compounded Semaglutide

Let's Know Things

Play Episode Listen Later Sep 10, 2024 19:24


This week we talk about Wegovy, Eli Lilly, and HIMS. We also discuss pig pancreases, beneficial side-effects, and shortages. Recommended Book: The Death Café Movement by Jack Fong. Transcript: In the 1970s, a pair of researchers looking into possible ways to address duodenal ulcer disease were studying the way we secrete different hormones while eating, and that led to an experiment in which they pumped a hormone called glucagon-like peptide 1, or GLP-1, extracted from pigs, into pig pancreases to see what effect that would have. As it turned out, this hormone stimulated the secretion of insulin while inhibiting the secretion of glucagon, and that was notable to these researchers because folks with diabetes have too much glucagon in their bodies, which is what causes high blood sugar. The idea, then, was that by stoking the production of more insulin and limiting the amount of glucagon being produced, you might be able to help folks with type 2 diabetes control their symptoms. These researchers shopped around the idea of building a treatment based on this hormone a little bit in subsequent years, but didn't get much interest from the major drug companies. In 1993, though, they were able to do a study that showed that by infusing folks who have type 2 diabetes with GLP-1, they could reset their blood glucose levels back to normal within just four hours, which was a pretty big deal—a lot better than most other options at the time. A drug based on this hormone was approved by the FDA for medical use in the US in 2017 under the name Semaglutide, and by 2021 it had become one of the top 100 most-prescribed drugs in the country—which is saying something, as the US is awash in pharmaceutical options, these days. Even before that approval, though, there were signs that GLP-1 receptor agonists, which is what Semaglutide and other drugs based on this concept are called, might have also had some other uses. In some of the clinical trials in which they were trying to gauge how well folks with type 2 diabetes fared while using the drug, for instance, they found that many of their subjects had trouble finishing the meals they were supposed to eat, which was a problem, as having that meal was part of the process, and after they ate it, ideally the whole thing, researchers would measure their blood insulin—so keeping that controlled was kind of important for their results, but the subjects consistently just weren't as hungry as they typically would have been. Interestingly, this realization led to a proposal by one of those original researchers to the drug company Novo Nordisk, the company that brought Semaglutide to market, for another drug that would help people control their appetite and consequently limit food intake, perhaps serving as a means of remediating obesity, which at the time, in 1998, was already becoming a big health issue of significant global concern and widespread impact. The company didn't end up doing anything with the patent they went in on with that researcher, but they did pursue something along those lines a little bit later, which approached the issue with a similar underlying substance, but via a different route. And in March of 2021, the company started clinical trials for that drug, which eventually became Wegovy, using basically the same substance as Semaglutide, but in a different volume, and the adult subjects in that trial lost a significant amount of weight. A few months later, in June of 2021, Wegovy was approved for use in the US to treat adults with obesity, and then in December the
following year it was approved for use by obese teens, as well. Now, Wegovy and its effects were in some ways forecasted in those trials for Semaglutide when test subjects were eating less than usual while on the drug, and something similar happened here, as subjects who were being given Wegovy for weight loss purposes were showing other, unanticipated positive effects, as well. Among those effects were positive cardiovascular outcomes, which Novo Nordisk then tested for specifically, noting that the drug reduces the risk of major adverse cardiovascular events like heart attacks and stroke by about 20% in obese adults. The FDA approved the drug for this purpose in March of 2024, and another study that looked into Semaglutide's effect on folks with liver disease resulting from HIV found that it meaningfully reduces the severity of that disease—another unexpected win. Several earlier studies that showed positive results, and which are now being looked into on larger scales and with human subjects, include those looking into its impact on depression and suicidal ideation, its potential to reduce alcohol consumption, and the possibility that it might also help with gambling addiction and other non-substance-related addictions, alongside substance-based ones like nicotine. Semaglutide seems to help with eating disorders and may help with infertility issues. It may also help with persistent inflammation, enhance autophagic activity, meaning it could help the body break down the cells that don't work anymore so new ones can grow, and it might help prevent the buildup of what's called alpha-synuclein in our brains, which is thought to maybe be a cause of or contributor to Alzheimer's and Parkinson's. There's even early evidence that GLP-1-based drugs might reduce our risk of developing some types of cancer, and maybe the worst, long-term sorts of COVID outcomes, as well. It's a very interesting time in this space, in other words, as the more we test these things, and the more people who take them, the more we learn about their effects and potential other use-cases. And a lot of people are using this class of drug right now: up to 12% of the US adult population has used a GLP-1 drug at some point, as of early 2024, according to research from KFF, and Novo Nordisk has been struggling to make enough of the stuff in its different manifestations, branded for different purposes, as have its competitors who have launched their own copy-cat products, and in some cases products that up the ante with even more impressive clinical results than what the first wave of GLP-1 drugs can boast. Novo Nordisk has become Europe's most valuable company on the strength of this drug class, growing by about 230% since 2021 when it first launched Wegovy; it's now hovering at something like $500 billion in market cap. But the company has suffered a few recent stock value hits due to the one-two punch of patients not being able to afford the drug, which can cost more than $1000 per month, and a dearth of production capacity, which means they've been unable to meet this drug class's perhaps understandably significant demand. What I'd like to talk about today is an aspect of the pharmaceutical industry in the US that has generally operated under-the-radar, but which has recently stepped into the limelight because of this rush to get GLP-1 drugs to market and in the hands of those who want them.—In the world of pharmaceuticals, especially in the US, but also in a few other countries, “compounding” refers to the practice of creating a
drug on-demand for a patient, usually because they need a dosage or specific composition that isn't manufactured in bulk, or which isn't readily available in its mass-manufactured form. So while the majority of drugs in the US and similar wealthy countries are produced on scale, these days, and in a variety of common portions or doses, in some cases you might need an exact dosage that's somewhere between two doses that are manufactured on scale by the company that makes the drug, and a pharmacist will make that specific you-sized dose for you, maybe by measuring out the right amount of drug powder into a gel-cap pill, maybe by blending two substances into a single liquid that you can take all at once. These days, the most common compounding tasks revolve around removing non-active ingredients from a drug—something in the gel-capsule, for instance, or a binding agent that allows a drug to be delivered in liquid form—for folks who need that drug, but who are allergic or otherwise sensitive to something in the final, mass-produced form; a color additive, a suspension, a flavoring, something like that. This is often referred to as "traditional compounding," and it can only be done by a licensed pharmacist; and while all licensed pharmacists will have at least a rudimentary understanding of how to compound custom medications, much of this kind of work is done in facilities that have compounding-specific equipment on hand; some that can do sterile work, and some that can only be used for non-sterile final products. Many pharmacies have some basic tools that allow them to do things like mix flavorings into a gross substance to make it more palatable to kids or pets, or to weigh and mix and divvy-up medicinal powders into properly sized capsules, but some pharmacies are a lot more specialized and have far fancier tools that allow them to output more elaborate concoctions for their customers. Another role these compounding pharmacies can play, though—and in this case I'm referring to that latter type, the ones with specialized tools and machines that allow them to compound on a larger and more specialized scale, if they need to do so—is that the FDA, the Food and Drug Administration which regulates the US drug market, can allow them to make drugs that are experiencing a shortage on the market; when those who have the patent for a drug are unable to scale-up fast enough and meet market demand, in other words, these compounding pharmacies can be given the legal go-ahead by the FDA to make and sell that drug. To be clear, these pharmacies aren't allowed to make the exact drug: they can make a drug with the same active ingredients, and sometimes they'll be quite similar and sometimes they'll be in a different form (an injectable rather than a powder, a capsule rather than a tablet, etc).
These things are also not FDA approved, so while the FDA says it's okay for them to make and sell them in those limited circumstances, it's not meant to be equivalent to the real-deal, market-approved product; it's a temporary, emergency measure meant to help people who would be in a lot of pain or discomfort or even danger if they don't get a drug they need on a regular basis because of a shortage. And that brings us to what's happening now: Novo Nordisk is experiencing a shortage of its GLP-1-based drugs, and the FDA gave these compounding pharmacies legal permission to make GLP-1-based drugs, with the same active ingredients, usually in the same dosage, while this shortage persists. Consequently, there are a bunch of drugs made by compounding pharmacies being marketed all over the place, produced by existing companies like HIMS and 23andMe, alongside brands like Mochi and Eden and HenryMeds—most of them selling doses equivalent to those that are sold by Novo Nordisk for something like $1,000 to $1,300 a month, but those sold by the compounding pharmacies are usually going for closer to $250-300 per month. It's been estimated, by the way, that it probably costs only about $5 to produce each of those doses—so even the compounding pharmacies selling at that dramatic cut to the sticker price are likely making money hand over fist on each of these doses, which is probably why ads for these alternative branded versions of the drug are plastered all over the internet, TV, billboards, and magazines, at the moment. The FDA does keep tabs on these compounding pharmacies, and they can shut them down if they sell unsafe products, and they can threaten to do so if they don't toe various lines—which is something the FDA has already done, as a version of the drug that was being delivered attached to salt, which would be dissolved in water before injecting, wasn't considered to be as safe as the free base version of the drug, so the FDA put out a warning and all the folks who were making the salt version converted over to the free base version, lest they lose their legal ability to sell this product type. Even with that regulatory pseudo-oversight, there have been reports of people ordering these cheaper versions and getting shoddy products. One study found that those reports are probably of a kind with reports about side effects experienced by people who take the Novo Nordisk version, as folks taking any version of this drug can experience some pretty uncomfortable side effects, but it's hard to say for sure right now, as the drug is still relatively new and this aspect of the pharmaceutical industry is, again, approved but not as well-regulated. So it's a buy-with-caution, at-your-own-risk sort of situation, though the cost savings very well might be worth it for many people, regardless of the potential risks. All of which is interesting, in part because this category of drug-maker is becoming more brazen with its flogging of products, probably at least in part because this particular drug is such a cash-cow and very popular right now, and in part because it will be a little while before the patent-holding drug-makers like Novo Nordisk and Eli Lilly can scale-up their manufacturing capacity appropriately.
So investments they make in marketing will pay off longer than they might have, had this shortage been a brief one. But it's also interesting because of what this implies about the market, as, conceivably at least, a lot of potential customers for this drug will become accustomed to paying just a few hundred dollars per month for it, rather than more than $1,000, and while that lower price is doable for the compound pharmacies, there's a chance the Novo Nordisks of the world won't consider that reduced profit margin to be worth their time and up-front investment in developing this drug, which could lead to some weird market effects and a potential whipcrack in the other direction, especially if national insurance plans don't get on board with adding this type of drug to their acceptable list; a higher sticker price paired with a lack of support from insurance companies would mean this drug remains out of reach for the majority of people who might otherwise benefit from it, and that, in turn, could mean a rough couple of years for Novo, until they can recalibrate their expectations and/or their product catalog, accordingly. That said, Novo Nordisk competitor Eli Lilly recently announced that they will be selling a version of their Wegovy competitor, Zepbound, which will be sold in vials instead of in auto-injector pens, reducing their packaging costs and requiring that customers load the syringes themselves, that will have a shelf price as low as $399 per month. That's a staggering undercut of Novo's offerings. And while this is partially an attempt to address the shortage of this drug, as this lower priced version will also be available in smaller doses, it will almost certainly also help them compete with Novo and the many compounded pharmacy offerings that are still cheaper, but not as dramatically cheaper as this name-brand offering, as before. There's a good chance this move by Eli Lilly is just the first of many reworks to a drug type that will permanently shift the average price, allowing the fully FDA-backed versions to compete with the compound versions, remaining a little pricier, but not much, which should help them maintain market share until they can get their new manufacturing capacity online, knocking those compounding competitors out of the game entirely. Of course, there's a chance that within months or just a few years, this whole industry could shift once more, as what's generally considered to be the “holy grail” in this space—a pill-delivered drug that accomplishes the same or better outcomes as the injectables—is in development by pretty much everyone, and some of them already have pills in phase 2 trials. For the moment, though, the name of the game seems to be discovering new benefits of this drug type, opening it up for more use-cases and, thus, customers, and repackaging it in different ways so that the price can go lower without fully depleting the massive profits those who are producing it—big pharma and compounding pharmacies, alike—are enjoying.
Show Notes:
https://qz.com/ozempic-shortage-ema-novo-nordisk-1851638383
https://en.wiktionary.org/wiki/compounding_pharmacy
https://www.pharmacist.com/Practice/Patient-Care-Services/Compounding/Compounding-FAQs
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2816824
https://en.wikipedia.org/wiki/Compounding
https://www.goodrx.com/classes/glp-1-agonists/compounded-semaglutide
https://www.fda.gov/drugs/human-drug-compounding/drug-compounding-and-drug-shortages
https://archive.ph/Czn0t
https://qz.com/viking-therapeutics-weight-loss-drugs-amazing-1851631337
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11227080/
https://en.wikipedia.org/wiki/Semaglutide
https://www.wired.com/story/obesity-drugs-researcher-interview-ozempic-wegovy/
https://www.drugs.com/history/wegovy.html
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11011817/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9417299/
https://www.cnn.com/2024/07/30/health/liraglutide-alzheimers-trial/index.html
https://sci-hub.st/
https://pubmed.ncbi.nlm.nih.gov/16529340/
https://www.jci.org/articles/view/72434
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7606641/
https://www.biorxiv.org/content/10.1101/2023.10.31.564990v1.full.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711387/
https://www.mdpi.com/2076-3425/14/6/617
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3700649/
https://pubmed.ncbi.nlm.nih.gov/24133407/
https://www.science.org/doi/10.1126/science.adn4128
https://www.reuters.com/business/healthcare-pharmaceuticals/most-patients-stop-using-wegovy-ozempic-weight-loss-within-two-years-analysis-2024-07-10/
https://jamanetwork.com/journals/jama/article-abstract/2819949
https://www.reuters.com/business/healthcare-pharmaceuticals/obesity-drugmaker-novo-nordisk-misses-q2-profit-forecast-2024-08-07/
https://www.nytimes.com/2024/08/30/health/wegovy-covid-deaths.html
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

The Ajumma Show
Ep. 82 - Fumi Abe and The Sound of Mochi

The Ajumma Show

Play Episode Listen Later Sep 2, 2024 56:08


We chat up one of our favorite comics, Fumi Abe!

Lessons from the Playroom
172. International Crisis Play Therapy: Insights from Isabella Cassina & Claudio Mochi

Lessons from the Playroom

Play Episode Listen Later Jul 9, 2024 49:45


In this special episode, Lisa is joined by two extraordinary guests, Isabella Cassina and Claudio Mochi, pioneers using play therapy in international crisis intervention. As co-founders of the INA International Academy for Play Therapy Studies and Psychosocial Projects in Switzerland, Isabella and Claudio have dedicated their careers to supporting children and families in some of the most challenging environments around the world. Isabella, Director of Project Management, and Claudio, Director of the Academy, bring decades of experience and a wealth of knowledge to the table. They have worked in over 30 cities across 15 countries, addressing the needs of those affected by natural disasters, conflicts, and high-risk situations. Together, they developed the "Coping with the Present while Building for the Future" (CPBF) model, which they have shared globally. In this episode, you'll discover: How Isabella and Claudio began their journey in crisis and humanitarian work. The unique challenges and rewards of using play therapy to support children and their families affected by international crises. The philosophy behind "Empowering Playtime"—a program designed to restore play opportunities for children during migration and other crises. Insights into their model, CPBF, and its impact on mental health professionals worldwide. How to create play sanctuaries and foster a culture of play in international crisis contexts. The MAP (My Awareness Process) as a guiding reference point, emphasizing a comprehensive awareness process rather than just therapeutic techniques. The importance of co-constructing with populations in crisis to ensure program sustainability after providers leave, highlighting the need for community collaboration and cultural sensitivity regarding language, customs, and traditions. Claudio and Isabella also share personal stories from the field, highlighting the emotional impact of their work and the resilience of the children and communities they serve. Learn about their strategies for self-care and the significance of creating safe play environments amidst chaos. This is an inspiring conversation filled with practical insights and heartfelt stories, showcasing the transformative power of play therapy in crisis situations. Don't miss this opportunity to learn from two of the leading experts in the field. For more information about their work and the International Academy of Play Therapy, visit Crisis Play Therapy. Tune in to explore how play therapy can be a beacon of hope in the darkest times and how you can be a part of this vital work. Podcast Resources:  Synergetic Play Therapy Institute Synergetic Play Therapy Learning Website FREE Resources to support you on your play therapy journey  Aggression in Play Therapy: A Neurobiological Approach to Integrating Intensity * If you enjoy this podcast, please give us a five-star rating and review on Apple Podcast, subscribe wherever you listen to podcasts, and invite your friends/fellow colleagues to join us.

Cooking Issues with Dave Arnold
No Tangent Tuesday: Mochi & More!

Cooking Issues with Dave Arnold

Play Episode Listen Later Jul 5, 2024 60:04


No Tangent Tuesday: Mochi & More! Hosted on Acast. See acast.com/privacy for more information.

Get Your Guy Coaching Podcast
Quit Like A Winner With Mandy Tang

Get Your Guy Coaching Podcast

Play Episode Listen Later Jun 19, 2024 35:14


Send us a Text Message. Hey Girl, in this episode, we'll explore the empowering decision to walk away from situations or relationships that no longer serve you. Learn how quitting can be a strategic step towards a more fulfilling and balanced life. Quit smart, quit strong, and quit like a winner. Mandy Tang is a holistic career coach who helps leaders reconnect with their purpose and power. After starting her career in editorial at Condé Nast, Mandy went on to manage digital marketing at Ogilvy and American Express, and co-founded a fashion tech startup along the way. Mandy can be found on TikTok, where she talks about how to heal your career wounds and helps people figure out what to do next. Mandy has a BA from Brown University, an MBA from Columbia Business School, and is a graduate of a 3-year shamanic practitioner program. She lives with her family and two cats, Mochi and Chillbear, in the Pacific Northwest. Resources Mentioned in This Episode: Website: www.rosegoldcareers.com Social: https://www.tiktok.com/@careercoachmandy Follow Us: Book A Call With Me - I've been getting A LOT of DM and email requests to chat with me and answer specific questions about love, dating, relationships, and men, so I'm opening back up my limited calendar for a few calls. So book a time with me here! Join the Get Your Guy Club - Wanna have dating support for a year to help you get your guy, but at your own pace? You can get access to my weekly group calls, my private Facebook group, and my online course with 25+ hours of content for just monthly payments of $250... Check Out the Get Your Guy Coaching Podcast - With more than 100 episodes, you can binge and learn so much with my podcast. The latest episode is all about the Q1 Viral TikTok review, check it out here. Book a Consult to Work with Me. Join my Get Your Guy Club. Buy My Dating Strategy Course. Check out My Latest Podcast Episode. Thank You: A big thank you to our listeners for tuning in!

Intermittent Fasting Stories
Episode 406: Mochi Kaimu

Intermittent Fasting Stories

Play Episode Listen Later Apr 23, 2024 51:37


In this episode of Intermittent Fasting Stories, Gin talks to Mochi Kaimu from Worcester, VT. Are you ready to take your intermittent fasting lifestyle to the next level? There's nothing better than community to help with that. In the Delay, Don't Deny community we all embrace the clean fast, and there's just the right support for you as you live your intermittent fasting lifestyle. You can connect directly with Gin in the Ask Gin group, and she will answer all of your questions personally. If you're new to intermittent fasting or recommitting to the IF lifestyle, join the 28-Day FAST Start group. After your fast start, join us for support in The 1st Year group. Need tips for long term maintenance? We have a place for that! There are many more useful spaces beyond these, and you can interact in as many as you like. Visit ginstephens.com/community to join us. An annual membership costs just over a dollar a week when you do the math. If you aren't ready to fully commit for a year, join for a month and you can cancel at any time. If you know you'll want to stay forever, we also have a lifetime membership option available. IF is free. You don't need to join our community to fast. But if you're looking for support from a community of like-minded IFers, we are here for you at ginstephens.com/community. Mochi works for a company called Farmers to You (https://farmerstoyou.com/) that is building a sustainable food network, connecting consumers to farmers. He is also the author of a book called Plans for a Simple Sauna. Mochi shares his experiences with intermittent fasting and holistic living. His turning point came in 2019, after a beach vacation with family. Mochi shares how intermittent fasting led to weight loss while also helping him manage panic attacks and high blood pressure. His story is a testament to the positive impact of dietary mindfulness and a prioritization of natural health practices over conventional pharmaceutical interventions. He shares his experiences with a variety of natural practices such as yogic breathing techniques, regular sauna use, and cold plunging. Mochi emphasizes the delicate balancing act that is required when discussing dietary habits with your children, advocating for promoting healthy relationships with food and self-image above all else. Mochi concludes with a final piece of advice for our listeners: Just give intermittent fasting a try! Place your wellbeing at the forefront, let the positive changes extend beyond the scales, and remember to discuss and embody a healthy relationship to food and lifestyle with health at the forefront. Get Gin's books at: http://www.ginstephens.com/get-the-books.html, including her latest bestseller 28-Day Fast Start Day-By-Day, the Ultimate Guide to Starting (or Restarting) Your Intermittent Fasting Lifestyle so it Sticks, New York Times Bestseller, Fast. Feast. Repeat., and Cleanish, available wherever you buy books! Delay, Don't Deny is available on Amazon. Join Gin's community! Go to: ginstephens.com/community Do you enjoy Intermittent Fasting Stories? You'll probably also like Gin's other podcast with cohost Sheri Bullock: Fast. Feast. Repeat. Intermittent Fasting for Life. Find it wherever you listen to podcasts.
Share your intermittent fasting stories with Gin: gin@intermittentfastingstories.com Visit Gin's website at: ginstephens.com Check out Gin's Favorite Things at http://www.ginstephens.com/gins-favorite-things.html See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.