Side Quests is back, and this episode's host is singer, actor, and Cog-fighting Toon, Cassidy Sledge! The game she is talking about today is Toontown Rewritten, a fan revival of Toontown Online, which was originally made by Disney's Virtual Reality Studio and Schell Games. You can also find her work here! We have a Patreon! Gain access to episode shout-outs, bonus podcasts, reviews, early downloads of regular episodes, an exclusive RSS feed and more! Click here! You can find the show on Bluesky, Instagram and YouTube! Please rate and review us on Apple Podcasts! Rate us on Spotify! Wanna join the Certain POV Discord? Click here!
This week Cog and Hoeg join Brad to discuss Marathon, The Hundred Line: Last Defense Academy, Clair Obscur: Expedition 33, and more! Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. 0:00:00 - Intro 0:25:05 - Marathon 0:53:21 - The Hundred Line: Last Defense Academy 1:11:00 - Keep It Up/Sort It Out 1:37:12 - Clair Obscur: Expedition 33 2:19:28 - Old Skies 2:26:46 - Closing Questions To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
I have a beautiful wife and two awesome kids who I love very much. That being said, they have shown absolutely no interest in listening to this podcast, week in and week out, and that makes me mad. Obviously, there's only one way to handle this. I will now proceed to write really mean things about all three of them right here in this episode description! . . . I mean, I would, but they would never see it anyway, so what's the point? My only real hope for revenge is for them to start their own podcast someday, so I can completely ignore it. Until then, Gene, Cog and the Dagster will continue to go unappreciated in the Moriarty household. Hey, their loss. 0:00:00 - Intro 0:05:56 - Horror Games and Movies 0:51:29 - Cartoons 1:47:15 - Favorite Rivalries Learn more about your ad choices. Visit podcastchoices.com/adchoices
ILP# 398 4/27/2025
https://lordsofgaming.net/
1) ILP ROAD TO TOKYO Merch: https://ironlordspodcast.creator-spring.com
2) NZXT & IRON LORDS PC Use Affiliate LINK: https://nzxt.co/Lords
3) HAWORTH Chairs & ILP Use Affiliate LINK: https://haworth.pxf.io/4PKj7M
4) MANSCAPED & ILP Use Code: "IRONLORDS" at https://www.manscaped.com/
5) VALARI GAMING Pillow Use Code: "LORD15" for 15% Off https://thevalari.com/products/ironlords
6) ADVANCED GG & ILP Use Code "IRONLORD" to save 10% off https://advanced.gg/?ref=LORDCOGNITO
7) ILP FACTOR MEALS GET 50% Off Go to https://factormeals.com/ironlords50
ILP ROYAL SWAG: https://teespring.com/stores/ironlordspodcast
ILP PATREON: https://www.patreon.com/IronLordsPodcast
Check out the "Lords of Gaming Network" Discord server!: https://discord.gg/Z7FZqzg
The Lords are on Spotify, Google Play, iTunes & SoundCloud! Check out the links below! Reach out to: https://lordsofgaming.net/contact-us/ if you are interested in writing with us!
*********************************************************
00:00 - ILP #398 Pre-Show
11:23 - Show Start
14:12 - Road To Tokyo Special Intros
42:34 - Addict's Road To Tokyo Challenge ft @Teslatidbits
1:09:58 - Marathon Closed Alpha Impressions
1:45:59 - King's Road To Tokyo Challenge
1:51:25 - Cog's Road To Tokyo Challenge
2:02:10 - Expedition 33 GOTY Success?
3:26:46 - Xbox Game Pass 50 Million?
3:47:08 - Ghost Of Yotei Release Date
4:11:05 - ILP Outro
*********************************************************
Welcome to The Iron Lords Podcast! Be sure to visit www.LordsOfGaming.net for all your gaming news!
ILP Spotify: https://open.spotify.com/show/6XRMnu8Tf1fgIdGlTIpzsK
ILP Google Play: https://play.google.com/music/m/Iz2esvyqeaixk6dorkmur2nm7xa?t=Iron_Lords_Podcast
ILP SoundCloud: https://soundcloud.com/user-780168349
ILP iTunes: https://itunes.apple.com/us/podcast/iron-lords-podcast-ili-1/id1179199929?fbclid=IwAR1p_5D8Z-nKUpbwJeiK7zQoYdQhpb1VhxBDZxoMul-uiR-IgF6cE9EQicI
ILP on Twitter: twitter.com/IronLordPodcast
ILP on Instagram: www.instagram.com/ironlordspodcast/
ILP DESTINY CLAN: www.bungie.net/en/Clan/Detail/178626
The Iron Lords and the Lords of Gaming have an official group on Facebook! Join the Lords at: www.facebook.com/groups/194793427842267 and www.facebook.com/groups/lordsofgamingnetwork/
Lord COGNITO --- twitter.com/LordCognito
Lord KING --- twitter.com/kingdavidotw
Lord ADDICT --- twitter.com/LordAddictILP
Lord SOVEREIGN --- twitter.com/LordSovILP
Lord GAMING FORTE --- twitter.com/Gaming_Forte
ILP YouTube Channel for ILP, Addict Show & all ILP related content: www.youtube.com/channel/UCYiUhEbYWiuwRuWXzKZMBxQ
Xbox Frontline with King David: https://www.youtube.com/@xboxfrontline
Follow us on Twitter @IronLordPodcast to get plugged in so you don't miss any of our content.
What if God was your dead wife? What if your boat learned how to be a man on the internet? What if your Dead God Wife forgave you for being a huge asshole after obliterating an evil future version of you who was an even bigger asshole? What if Cog was the most important guy in the world? What if you could make a big hole? What if you were smooth and purple? What if all of this is just a bad dream? That would be nice, wouldn't it? Noise Space | Discord
In this episode Devo Spice discusses recording new episodes of Escape from Vault Disney and Random Acts of Horror, his trip to California for Dr. Demento's birthday party, a severe allergic reaction, FuMPFest developments, and his updated Patreon. Music in this episode: "I Shit More in the Summertime" by Anonymous, "Loser, Baby" by Holy Bongwater, "Bow to the COG" by Consortium of Genius, and "Spam (Let the Pork Be Pork)" by the great Luke Ski.
Our adventurers talk their way out of trouble, then find themselves right back in it again when they travel back to the rat-laden basement of the Cog 'n Grog.
https://slasrpodcast.com/ SLASRPodcast@gmail.com Welcome to episode 192 of the Sounds Like A Search And Rescue podcast; this week I'm joined by co-host Andy Vilaine, who, when he is not co-hosting here or out hiking, is the TrainMaster at the Cog Railway. Andy will be giving us a detailed trip report of his recent solo winter hike to Mount Katahdin - we will break down everything you need to know about hiking in Baxter State Park. Plus an update on Stomp, when Ice Out on Lake Winnipesaukee will be declared, a hiking-based film coming to Lincoln this summer with a call out for actors, a recent rescue on one of Stomp's favorite mountains, Mount Cardigan, news about weather apps, a serial killer in New England, notable hikes and some national search and rescue stories. This week's Higher Summit Forecast SLASR 48 Peaks Alzheimer's team - Join here! About The Cog Cog Railway Website Topics Welcome Andy Some Cog updates, spring schedule, getting ready for summer Cog helping out on rescues A longer update on Stomp and the future direction of SLASR (all good news) - Starts around the 22:30 mark Cannibalism on Boon Island, Maine White Mountains Visitors Center in N. Woodstock Appalachian Trail reroute - Great Gulf Ice Out On Winnipesaukee Hiking Movie is being filmed in Lincoln, NH this summer Rescue on Mount Cardigan UK SAR Data - Young People needing rescues are exploding Mountain Forecast is moving to a pay model, plus some reminders on better weather resources for the WMNF Drinks, SLASR 48 Peaks Team, Dad Jokes, Coffee, Recent Hikes, Notable Hikes Andy's Solo Winter Hike on Mount Katahdin - info on hiking Baxter State Park Show Notes Apple Podcast link for 5 star reviews SLASR Merchandise SLASR LinkTree SLASR's BUYMEACOFFEE Cannibalism on Boon Island, Maine White Mountains Visitors Center, N Woodstock. Exit 32 on Rt. 93 AT Reroute Announcement Ice Out is predicted to be April 19th for the lakes region. Movie filming in Lincoln, NH Indiegogo Page - synopsis Rescue on Mount Cardigan Social media and map apps blamed for record rise in mountain rescue callouts https://archive.is/WdqtX Mountain Forecast - moving to a pay model - Section Hiker weather guide TrailsNH Forecast - Kimball Rexford (Ep. 154) A New England Serial Killer? Baxter State Park - Rules and Permits Sponsors, Friends and Partners Wild Raven Endurance Coaching CS Instant Coffee 2024 Longest Day - 48 Peaks Mount Washington Higher Summits Forecast Hiking Buddies Vaucluse - Sweat less. Explore more. – Vaucluse Gear Fieldstone Kombucha
Does anybody even read these things? I feel like I could fill this space with complete gibberish and nobody would even notice. Rumpelstiltskin. See? You don't even know it says that! Anyway, Col, Cog, Lock and Dag convene this week to talk about stuff. Also, flibbertygibbet, because you'll never see that either. Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. 0:00:00 - Intro 0:34:00 - Top LSM moments 1:10:37 - Being Remembered 1:51:09 - What We'd Like to See Return Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transformers Earthrise stumbles and falls face first into an utterly deranged conclusion. Here is a list of actual things that happen in the final two episodes of Earthrise: Sky Lynx becomes Optimus Prime's spiritual guide, a JPEG of Unicron slurps Galvatron through reality, Cog turns into Solid Snake, the Decepticons invade the Ark and get beaten in less than a minute, Optimus asks Megatron to forgive him, Elita One (probably) dies along with every other Autobot on Cybertron, the Quintessons do absolutely nothing, Earth shows up very briefly, and Dinobot is in the stinger of the episode. Or, for a briefer synopsis: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAARRRGH!!!! Noise Space | Discord
Send us a text. Ever wonder why you react a certain way in relationships? Your subconscious core beliefs might be running the show without you even realizing it! We sat down with Debbie Caudle, a licensed therapist in the COG, to talk about how these hidden beliefs shape our interactions - and how to rewrite the ones that aren't serving us.
[4+ HOUR LONG SHOW! JOIN THE PIZZA FUND! $12 level. https://podawful.com/posts/2545] ETHAN RALPH CALLS IN LIVE! After all I did to try to help Ethan, he had the nerve to get mad at me. And Mad at the Internet. Speaking of which, many people are talking about my call with Ralph, including Josh Moon from Kiwifarms. Turns out he's a fan. Some sort of witness protection burn victim named Persnickety chimed in. Even British Mersh himself, Cog, played my entire show while he got up and went for a bite of Tandoori Chicken. But now Ethan is trying to DMCA and flag down my episode about him! This ape is pulling every dirty trick in the book. And while the whole sektur turns their back on King Kong Killstream, his only friend steps in to help him in his moment of need... MERSH. Unfortunately, Ralph was so ZOO-TED on Nightwave that even Mersh had to give him the boot early. Ethan Ralph has officially entered his LUNATIC FRINGE mental-bangs cycle. SAD! PATHETIC! VIDEO: https://www.youtube.com/live/lTCvRfmrCds Buy A Shirt: http://podawful.shop PODAWFUL is an anti-podcast hosted by Jesse P-S
During FuMPFest 2024 members of The FuMP were invited to Escape from the Secret Lab by Dr. Milo T. Pinkerton of the Consortium of Genius. This was a command performance that Devo Spice, Luke Ski, Insane Ian, and Chris Mezzolesta took part in. If they did well they'd be spared. If not, they would be crushed under a 16 ton weight. One of the challenges was an Instant Song, where the performers were required to come up with a song "on the spot." So Chris provided a funky riff on the guitar while Devo, Ian, and Luke each did short verses about their experiences being kidnapped by the COG. This is a recording of that performance. We are proud to report that The FuMP was spared. This recording is just one of many you can hear on the new FuMPFest 2024 - Live! 2-CD compilation! The album features two hours of music recorded at FuMPFest 2024, including performances by Pastor Hastur, Holy Bongwater, 2d6, Ross Childs, Steve Goodie, Carla Ulbrich, and our guest of honor The Consortium of Genius! Plus bits from the Live Podcasts panel, the Dumb Parody Ideas contest, Movie Time, the Q&A, and more! Pick up your copy now in the store.
Modern knowledge work jobs should be cushy gigs. Fixed hours, air conditioning, no hard manual labor, flexibility. So why are we so often burnt out, and what can we do about this reality? In this episode, Cal draws a lesson from an unexpected corner of computer science, computer security research, about how we can fix some of the big problems of work in a digital age. He then answers listener questions and returns once again to the topic of AI in a final tech corner segment. Find out more about Done Daily at DoneDaily.com! Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo Video from today's episode: youtube.com/calnewportmedia
Deep Dive: You Are Not a Cog [11:00]
- Should I break my large tasks into many small ones? [38:18]
- How will AI affect living the deep life? [42:47]
- How can I say “no” to more incoming requests? [48:14]
- Should an architect take on broader roles that don't necessarily add to career capital? [52:03]
- Can a nurse implement time blocking? [54:33]
- Can a Kanban system work across all departments without being overly complex? [58:11]
- CALL: Organizing the details of a Trello board [1:03:19]
CASE STUDY: Lifestyle centric value based planning for a young family [1:07:58]
TECH CORNER: AGI is not Super-intelligence [1:15:35]
Links:
Buy Cal's latest book, “Slow Productivity” at calnewport.com/slow
Get a signed copy of Cal's “Slow Productivity” at peoplesbooktakoma.com/event/cal-newport/
Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba?
medium.com/@tony.infisical/password-requirements-are-still-confusing-in-2023-also-heres-our-flavor-44ce03a3255c
youtube.com/watch?v=ZzJMxh68RGI
Thanks to our Sponsors: This episode is brought to you by BetterHelp. Give online therapy a try at betterhelp.com/deepquestions
mybodytutor.com
upliftdesk.com/deep
shopify.com/deep
Thanks to Jesse Miller for production, Jay Kerstens for the intro music, Kieron Rees for the slow productivity music, and Mark Miles for mastering.
This week Hoeg and Cog join Brad to discuss a variety of games including Monster Hunter Wilds, Eternal Strands, Avowed, and more! Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. 0:00:00 - Intro 0:15:30 - Monster Hunter Wilds 0:44:55 - Eternal Strands 0:57:32 - Avowed 1:18:55 - Sort It Out/Keep It Up 2:03:29 - Civ VII 2:21:58 - Revenge of Shinobi/Shinobi 3 2:36:30 - Like a Dragon: Pirate Yakuza in Hawaii 2:48:48 - Closing Questions To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
Elected leaders across the region are grappling with the economic fallout of the White House's continued push to shrink the federal government. Some economists fear it could send the D.C. area into a recession. Clark Mercer, who leads the Metropolitan Washington Council of Governments, got behind the mic to break down the economic impact a shrinking federal workforce could have on the region. 687,000 people living in D.C., Maryland, and Virginia work for the federal government, according to COG. Mercer compared the D.C. area to other company towns. "We are more reliant in this region on the federal government than Detroit is on the auto industry, or Houston is on oil and gas... by a long shot," he said. He says the D.C. region's reputation as recession-proof is now being tested, and we could see unemployment rates higher than at the height of the pandemic. Plus, he explained what a shrinking federal workforce means for local government budgets, housing, and public transportation. Virginia Democratic Party Chair Susan Swecker announced recently she was stepping down. Her decade-long tenure heading up the Virginia Democrats has seen plenty of ups and downs, including flipping a Northern Virginia Congressional seat, Republican Glenn Youngkin's gubernatorial win in 2021, and a blue wave in 2023. She joined Kojo and Tom to share what she's learned over her ten years at the helm, and what advice she has for Democrats nationwide. They also discussed the upcoming governor and House of Delegates races. Swecker predicted that Virginia Republicans will suffer this year at the ballot box due to their support of President Donald Trump's federal job cuts. "I don't want anybody to be hurt. This is gut-wrenching. But do I think they're gonna pay a price, Republicans, at the polls for what they're doing and standing lock, stock and barrel behind Donald Trump? Yes, I do," she said. Swecker also talked about the future of federal workers and what's next for the Virginia Democratic party. Become a member of WAMU: wamu.org/donate Send us questions and comments for guests: kojo@wamu.org Follow us on Instagram: instagram.com/wamu885
It has been a whirlwind of a week for the games industry. Massive studio closures continue at a rapid pace, and now we have a Fable delay as the cherry on top! Where to begin? Well, for the Xbox faithful, let's sink into Fable first. It was given a 2025 release window at Xbox's 2024 Summer Showcase, but Playground Games has pulled the plug early on that happening. The delay was announced by new Xbox Game Studios head Craig Duncan, and we also got a glimpse at some pre-alpha gameplay for Fable. The game looks great, at least visually, but it has left many wondering what's taking so long. Rumors of this game started kicking around in 2018, only for the game to be fully announced in 2020. By summer 2026, we're looking at six full years of building this game up. So, what gives? Is it a PS5 port, the looming release of GTA VI, or something much deeper? The Dukes discuss it all in Cog's return! Naturally, we dive deep into the mismanagement of WB Games as we have for weeks. To put an exclamation point on things, they have shut down Monolith Productions, Player First Games, and WB San Diego. As their games division continues its tailspin, the Dukes sift through the wreckage to see if there's any hope in saving what remains of WB Games. We also tackle the conversation around MUSE once more following new comments by Satya Nadella, how NetEase is hitting a speed bump despite the rocketing success of Marvel Rivals, and much more! Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. 0:00:00 - Intro 0:33:55 - Tony Hawk's Pro Skater 3+4 has been rated 0:39:50 - 11 games announced at ID@Xbox showcase 0:43:33 - The truth behind Avowed's development revealed 0:53:23 - Satya Nadella speaks on MUSE 1:08:09 - Bloober Team and Konami announce a deal together 1:09:52 - Some updates on the January hardware performance 1:12:45 - What's going on with NetEase? 1:22:18 - Tekken 8 has reached 3 million copies sold 1:27:16 - Exodus is targeting a 2026 release window 1:29:45 - Persona 4 Remake on the way? 1:32:11 - God Of War remasters soon? 1:33:43 - Dragon Age The Veilguard is already on PS Plus 1:38:27 - What We're Playing 2:14:16 - Xbox delays Fable to 2026 2:44:16 - WB Games shuts down three studios 3:15:06 - Game Pass Pick Of The Week Learn more about your ad choices. Visit podcastchoices.com/adchoices
Attitude is everything. We need to approach life with an optimistic spirit. We can choose to look at the negative side of any situation, or we can choose to be hopeful in the promises of God. No matter how bad our circumstances become, we still have Jesus Christ and our hope in the Lord.
Send us a text. This week, we had
The fellas are back for the COG, formerly the Honda Classic, to kick off the Florida swing. We touch on the meltdown in Mexico as well.
The cycling world wondered what would happen to Caleb Ewan this season. Seemingly on his way out of Jayco Alula, where was he going to end up? INEOS swooped in, midway through January, and now have themselves a top level sprinter. Exciting, right? G certainly thinks so, and is delighted the team have signed a close friend. He called him up and, following the move, we thought it was the right time to get Caleb back to talk about it. Why did he leave Jayco? Did he ever consider stopping? What can we expect in 2025? Will he be at the Tour? He answers all of those - and plenty more besides. Enjoy. In the land of ROUVY, a new update went live last week which allows full compatibility with Zwift hardware on the platform. Until now, the Ride plus Click and Cog were only compatible with Zwift software. However, with the introduction of ROUVY's virtual shifting compatibility, riders using Zwift hardware can now access ROUVY software and ride any of the 1,500+ immersive routes on the app. Riders will have full use of all 24 virtual gears, which are displayed on-screen to indicate their current gear status, alongside the already existing on-screen data including power, heart rate and cadence. If you've been waiting for this tweak to give ROUVY a try then use our code GTCC1M for a free month on us when you sign up. See you next week. Music courtesy of BMG Music Production Learn more about your ad choices. Visit podcastchoices.com/adchoices
It's been far too long. GTCC members the world over have been writing in to ask: when will he be back? Well, he's back. Yes, Ambassador De Pluski is back on the pod - and he's back in action with G. The Volta ao Algarve kicks off this week and the duo will be tackling the climbs of southern Portugal against the likes of Roglic, Vingegaard and Wout van Aert. It should be a cracker. With De Pluski back on the pod, we thought we'd take the opportunity to pick his brain and hold our first GTCC preview of the season. Can Pog win San Remo? Will he ride Roubaix? Can Vingegaard reclaim his Tour throne? Why is Roglic riding the Giro? There's lots to discuss and boy oh boy are we excited for the 2025 season. In the land of ROUVY, a new update went live this week which allows full compatibility with Zwift hardware on the platform. Until now, the Ride plus Click and Cog were only compatible with Zwift software. However, with the introduction of ROUVY's virtual shifting compatibility, riders using Zwift hardware can now access ROUVY software and ride any of the 1,500+ immersive routes on the app. Riders will have full use of all 24 virtual gears, which are displayed on-screen to indicate their current gear status, alongside the already existing on-screen data including power, heart rate and cadence. If you've been waiting for this tweak to give ROUVY a try then use our code GTCC1M for a free month on us when you sign up. See you next week. Music courtesy of BMG Music Production Learn more about your ad choices. Visit podcastchoices.com/adchoices
This sermon is a followup message to the previous sermon, "Soul-winners." Todd will continue to look at the priority of being a witness and sharing our faith in Christ.
African American Folklorist of the Month - Larry Handy Larry Handy discusses with me the concept of Ethnopoetic theory (a method for analyzing and recording oral poetry and performances to capture the poetic elements of the original performance) and his love for archiving and being a librarian. Handy is a “Folklife Poet” and shares with us the meaning; he also dives deep into activism and protests. Larry lives in California, and we recorded this interview at the height of the California wildfires when he was a few miles away. In sharing his Journey to Folklore, he discusses Folk Consciousness and his "Tour of Duty," an LA Protest Memoir. BIO: Larry Handy is a folklife poet who leads the award-winning poetry band Totem Maples. His fiction, nonfiction, and poetry appear in such journals as The Coachella Review, Cog, Mosaic: Art and Literary Journal, Proximity, Quiddity, Rivet, Roi Fainéant, Storylandia, Straight Forward Poetry, and elsewhere. He holds an MFA in creative writing and writing for the performing arts from the University of California, Riverside, and a master's in library science from Emporia State University. TWB Press published his horror novelette Paper Cuts: 1000 Paper Cranes. His essay “What to Do When Grandma Has Dementia” was nominated for a Pushcart Prize and was listed in The Best American series under Notable Essays and Literary Nonfiction of 2016. He is either practicing Chinese martial arts or running 26.2-mile marathons when not writing. Southern California is his home.
Am I a soul winner? This is a question we should ask ourselves from time to time. Our primary responsibility as Christians is to make disciples. Someone shared the gospel with us and led us to Jesus. We have the same compelling call to share Christ with others. This week's sermon considers our Christian witness and the call to make disciples.
This week Brad is joined by Matty and Cog, and they share their thoughts on KCD2, Donkey Kong Country Returns HD, and Marvel Rivals, and discuss the Switch 2. Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. 0:00:00 - Intro 0:15:50 - Switch 2 Impressions 0:31:31 - Kingdom Come: Deliverance II 1:08:59 - Donkey Kong Country Returns HD 1:27:23 - Sort it Out/Keep it Up 2:06:33 - Marvel Rivals 2:35:35 - Closing Questions To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
Games are releasing at a ridiculous, potentially unsustainable pace. It's no longer a question of money, but rather time. Game Pass is always putting out good stuff, but there's so much to sift through that it makes the service a bit overwhelming. Well, the Dukes have arrived, and they have lists that they've checked twice. It's time to go over the naughty and nice for Xbox's prestige service in 2024. What games hit for Matty and Cog and which deserve a lump of coal in their stocking? Let's find out. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open!
When we first started Latent Space, in the lightning round we'd always ask guests: “What's your favorite AI product?”. The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation.
In open source land, StableDiffusion was congregating around AUTOMATIC1111 as the de-facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with a lot more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground.
One of the people involved with that community was comfyanonymous, who was also part of the Stability team in 2023 and decided to build an alternative called ComfyUI, now one of the fastest-growing open source projects in generative images and the preferred partner for folks like Black Forest Labs's Flux Tools on Day 1. The idea behind it was simple: “Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use.”
Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows since you can break down every step of the process, and then chain many of them together rather than manually switching between tools. You can also re-start execution halfway instead of from the beginning, which can save a lot of time when using larger models.
To give you an idea of some of the new use cases that this type of UI enables:
* Sketch something → Generate an image with SD from sketch → feed it into SD Video to animate
* Generate an image of an object → Turn into a 3D asset → Feed into interactive experiences
* Input audio → Generate audio-reactive videos
Their Examples page also includes some of the more common use cases like AnimateDiff, etc. They recently launched the Comfy Registry, an online library of different nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 GitHub stars, and as the community grows, some of the projects that people build have gotten quite complex.
The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node. We're seeing the rise of AI Engineering applied to art.
Major Tom's ComfyUI Resources from the Latent Space Discord
Major shoutouts to Major Tom on the LS Discord, who is an image generation expert and offered these pointers:
* “best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance. 
It will be perfect tool when conflicting nodes will be resolved”* AP Workflows from Alessandro Perili are a nice example of an all-in-one train-evaluate-generate system built atop Comfy* ComfyUI YouTubers to learn from:* @sebastiankamph* @NerdyRodent* @OlivioSarikas* @sedetweiler* @pixaroma* ComfyUI Nodes to check out:* https://github.com/kijai/ComfyUI-IC-Light* https://github.com/MrForExample/ComfyUI-3D-Pack* https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait* https://github.com/pydn/ComfyUI-to-Python-Extension* https://github.com/THtianhao/ComfyUI-Portrait-Maker* https://github.com/ssitu/ComfyUI_NestedNodeBuilder* https://github.com/longgui0318/comfyui-magic-clothing* https://github.com/atmaranto/ComfyUI-SaveAsScript* https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID* https://github.com/AIFSH/ComfyUI-FishSpeech* https://github.com/coolzilj/ComfyUI-Photopea* https://github.com/lks-ai/anynode* Sarav: https://www.youtube.com/@mickmumpitz/videos ( applied stuff )* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)* look for comfyui node for https://github.com/magic-quill/MagicQuill* “Comfy for Video” resources* Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDif, LivePortrait etc* Comfyui node support like LTX https://github.com/Lightricks/ComfyUI-LTXVideo , and HunyuanVideo* FloraFauna AI* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/Full YouTube EpisodeAs usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)Timestamps* 00:00:04 Introduction of hosts and anonymous guest* 00:00:35 Origins of Comfy UI and early Stable Diffusion landscape* 00:02:58 Comfy's background and development of high-res fix* 00:05:37 Area conditioning and compositing in image generation* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)* 00:11:10 Closed source model APIs and community discussions on SD versions* 00:14:41 LoRAs and textual inversion in image generation* 00:18:43 Evaluation methods in the Comfy community* 00:20:05 CLIP models and text encoders in image generation* 00:23:05 Prompt weighting and negative prompting* 00:26:22 Comfy UI's unique features and design choices* 00:31:00 Memory management in Comfy UI* 00:33:50 GPU market share and compatibility issues* 00:35:40 Node design and parameter settings in Comfy UI* 00:38:44 Custom nodes and community contributions* 00:41:40 Video generation models and capabilities* 00:44:47 Comfy UI's development timeline and rise to popularity* 00:48:13 Current state of Comfy UI team and future plans* 00:50:11 Discussion on other Comfy startups and potential text generation supportTranscriptAlessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.Comfy [00:00:19]: Hello.swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this? 
I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was Automatic 111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up and comer, and your main feature was the flowchart. Can you just kind of rewind to that moment, that year and like, you know, how you looked at the landscape there and decided to start Comfy?Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how Diffusion works. I didn't know how Diffusion models work, how any of this works, so.swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?Comfy [00:01:49]: No, I was doing basically nothing interesting. Crud, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the Node graph interface before I started Comfy UI. It was just, I just thought it was like, oh, like, what's the best way to represent the Diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the Node interface. So how I got started was, yeah, so basic October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.Alessio [00:02:58]: As we all did. Yeah.Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fixed in auto, which was for those that don't know, the high-res fix is just since the Diffusion models back then could only generate that low-resolution. So what you would do, you would generate low-resolution image, then upscale, then refine it again. And that was kind of the hack to generate high-resolution images. I really liked generating. Like higher resolution images. So I was experimenting with that. And so I modified the code a bit. Okay. What happens if I, if I use different samplers on the second pass, I was edited the code of auto. So what happens if I use a different sampler? What happens if I use a different, like a different settings, different number of steps? And because back then the. The high-res fix was very basic, just, so. Yeah.swyx [00:04:05]: Now there's a whole library of just, uh, the upsamplers.Comfy [00:04:08]: I think, I think they added a bunch of, uh, of options to the high-res fix since, uh, since, since then. But before that was just so basic. So I wanted to go further. I wanted to try it. What happens if I use a different model for the second, the second pass? And then, well, then the auto code base was, wasn't good enough for. Like, it would have been, uh, harder to implement that in the auto interface than to create my own interface. 
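To make the high-res fix Comfy just described concrete: the usual recipe is a low-resolution text-to-image pass, an upscale, then a second image-to-image pass where you are free to swap the sampler or even the checkpoint. Below is a minimal sketch using the Hugging Face diffusers API; the model ID, scheduler, step counts, and the 0.5 denoising strength are illustrative assumptions, not what Comfy actually ran.

```python
# Sketch of the "high-res fix": a low-resolution txt2img pass, an upscale,
# then a second img2img pass where the sampler (or even the checkpoint) can
# be swapped. Model ID, step counts, and strength are illustrative.
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    EulerAncestralDiscreteScheduler,
)

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.5-class checkpoint works
prompt = "a mountain landscape at sunset, highly detailed"

# Pass 1: base image at the model's native resolution.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
base = txt2img(prompt, width=512, height=512, num_inference_steps=25).images[0]

# Upscale (a plain resize here; dedicated upscaler models are common in practice).
upscaled = base.resize((1024, 1024))

# Pass 2: refine with img2img, reusing the same weights but a different sampler.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
img2img.scheduler = EulerAncestralDiscreteScheduler.from_config(img2img.scheduler.config)
refined = img2img(prompt, image=upscaled, strength=0.5, num_inference_steps=25).images[0]
refined.save("highres_fix.png")
```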
So that's when I decided to create my own. And you were doing that mostly on your own when you started, or did you already have kind of like a subgroup of people? No, I was, uh, on my own because, because it was just me experimenting with stuff. So yeah, that was it. Then, so I started writing the code January one. 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.Alessio [00:05:11]: And what's, what's the name? Comfy UI right away or? Yeah.Comfy [00:05:14]: Comfy UI. The reason the name, my name is Comfy is people thought my pictures were comfy, so I just, uh, just named it, uh, uh, it's my Comfy UI. So yeah, that's, uh,swyx [00:05:27]: Is there a particular segment of the community that you targeted as users? Like more intensive workflow artists, you know, compared to the automatic crowd or, you know,Comfy [00:05:37]: This was my way of like experimenting with, uh, with new things, like the high risk fixed thing I mentioned, which was like in Comfy, the first thing you could easily do was just chain different models together. And then one of the first things, I think the first times it got a bit of popularity was when I started experimenting with the different, like applying. Prompts to different areas of the image. Yeah. I called it area conditioning, posted it on Reddit and it got a bunch of upvotes. So I think that's when, like, when people first learned of Comfy UI.swyx [00:06:17]: Is that mostly like fixing hands?Comfy [00:06:19]: Uh, no, no, no. That was just, uh, like, let's say, well, it was very, well, it still is kind of difficult to like, let's say you want a mountain, you have an image and then, okay. I'm like, okay. I want the mountain here and I want the, like a, a Fox here.swyx [00:06:37]: Yeah. So compositing the image. Yeah.Comfy [00:06:40]: My way was very easy. It was just like, oh, when you run the diffusion process, you kind of generate, okay. You do pass one pass through the diffusion, every step you do one pass. Okay. This place of the image with this brand, this space, place of the image with the other prop. And then. The entire image with another prop and then just average everything together, every step, and that was, uh, area composition, which I call it. And then, then a month later, there was a paper that came out called multi diffusion, which was the same thing, but yeah, that's, uh,Alessio [00:07:20]: could you do area composition with different models or because you're averaging out, you kind of need the same model.Comfy [00:07:26]: Could do it with, but yeah, I hadn't implemented it. For different models, but, uh, you, you can do it with, uh, with different models if you want, as long as the models share the same latent space, like we, we're supposed to ring a bell every time someone says, yeah, like, for example, you couldn't use like Excel and SD 1.5, because those have a different latent space, but like, uh, yeah, like SD 1.5 models, different ones. You could, you could do that.swyx [00:07:59]: There's some models that try to work in pixel space, right?Comfy [00:08:03]: Yeah. They're very slow. Of course. That's the problem. That that's the, the reason why stable diffusion actually became like popular, like, cause was because of the latent space.swyx [00:08:14]: Small and yeah. Because it used to be latent diffusion models and then they trained it up.Comfy [00:08:19]: Yeah. Cause a pixel pixel diffusion models are just too slow. So. 
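The area-conditioning trick described above boils down to running the denoiser once per regional prompt at every step and averaging the noise predictions under each region's mask. Here is a rough, model-agnostic sketch; the `unet(latent, t, encoder_hidden_states=...)` call signature follows the diffusers UNet2DConditionModel convention and is an assumption, and the masks are expected at latent resolution.

```python
# Minimal sketch of "area conditioning / area composition": at every sampling
# step, get one noise prediction per regional prompt and blend them with masks
# before the sampler update. `unet` and the prompt embeddings are stand-ins
# for whatever diffusion backbone is in use.
import torch

def blended_noise_prediction(unet, latent, t, regions, background_cond):
    """regions: list of (mask, cond) pairs; mask is 1 inside the region,
    shaped to broadcast against the latent (e.g. (1, 1, H//8, W//8))."""
    # Start from the prediction conditioned on the whole-image prompt.
    blended = unet(latent, t, encoder_hidden_states=background_cond).sample
    weight = torch.ones_like(latent)
    for mask, cond in regions:
        pred = unet(latent, t, encoder_hidden_states=cond).sample
        blended = blended + mask * pred   # accumulate the regional prediction
        weight = weight + mask            # count how many prompts cover each pixel
    return blended / weight               # average everything together, every step
```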
Yeah.swyx [00:08:25]: Have you ever tried to talk to like, like stability, the latent diffusion guys, like, you know, Robin Rombach, that, that crew. Yeah.Comfy [00:08:32]: Well, I used to work at stability.swyx [00:08:34]: Oh, I actually didn't know. Yeah.Comfy [00:08:35]: I used to work at stability. I got, uh, I got hired, uh, in June, 2023.swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay. Yeah.Comfy [00:08:46]: So the, the reason I was hired is because they were doing, uh, SDXL at the time and they were basically SDXL. I don't know if you remember it was a base model and then a refiner model. Basically they wanted to experiment, like chaining them together. And then, uh, they saw, oh, right. Oh, this, we can use this to do that. Well, let's hire that guy.swyx [00:09:10]: But they didn't, they didn't pursue it for like SD3. What do you mean? Like the SDXL approach. Yeah.Comfy [00:09:16]: The reason for that approach was because basically they had two models and then they wanted to publish both of them. So they, they trained one on. Lower time steps, which was the refiner model. And then they, the first one was trained normally. And then they went during their test, they realized, oh, like if we string these models together are like quality increases. So let's publish that. It worked. Yeah. But like right now, I don't think many people actually use the refiner anymore, even though it is actually a full diffusion model. Like you can use it on its own. And it's going to generate images. I don't think anyone, people have mostly forgotten about it. But, uh.Alessio [00:10:05]: Can we talk about models a little bit? So stable diffusion, obviously is the most known. I know flux has gotten a lot of traction. Are there any underrated models that people should use more or what's the state of the union?Comfy [00:10:17]: Well, the, the latest, uh, state of the art, at least, yeah, for images there's, uh, yeah, there's flux. There's also SD3.5. SD3.5 is two models. There's a, there's a small one, 2.5B and there's the bigger one, 8B. So it's, it's smaller than flux. So, and it's more, uh, creative in a way, but flux, yeah, flux is the best. People should give SD3.5 a try cause it's, uh, it's different. I won't say it's better. Well, it's better for some like specific use cases. Right. If you want some to make something more like creative, maybe SD3.5. If you want to make something more consistent and flux is probably better.swyx [00:11:06]: Do you ever consider supporting the closed source model APIs?Comfy [00:11:10]: Uh, well, they, we do support them as custom nodes. We actually have some, uh, official custom nodes from, uh, different. Ideogram.swyx [00:11:20]: Yeah. I guess DALI would have one. Yeah.Comfy [00:11:23]: That's, uh, it's just not, I'm not the person that handles that. Sure.swyx [00:11:28]: Sure. Quick question on, on SD. There's a lot of community discussion about the transition from SD1.5 to SD2 and then SD2 to SD3. People still like, you know, very loyal to the previous generations of SDs?Comfy [00:11:41]: Uh, yeah. SD1.5 then still has a lot of, a lot of users.swyx [00:11:46]: The last based model.Comfy [00:11:49]: Yeah. Then SD2 was mostly ignored. It wasn't, uh, it wasn't a big enough improvement over the previous one. Okay.swyx [00:11:58]: So SD1.5, SD3, flux and whatever else. SDXL. SDXL.Comfy [00:12:03]: That's the main one. Stable cascade. Stable cascade. That was a good model. 
But, uh, that's, uh, the problem with that one is, uh, it got, uh, like SD3 was announced one week after. Yeah.swyx [00:12:16]: It was like a weird release. Uh, what was it like inside of stability actually? I mean, statute of limitations. Yeah. The statute of limitations expired. You know, management has moved. So it's easier to talk about now. Yeah.Comfy [00:12:27]: And inside stability, actually that model was ready, uh, like three months before, but it got, uh, stuck in, uh, red teaming. So basically the product, if that model had released or was supposed to be released by the authors, then it would probably have gotten very popular since it's a, it's a step up from SDXL. But it got all of its momentum stolen. It got stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it's, uh, yeah. It was a good model, at least, uh, completely mostly ignored for some reason. Likeswyx [00:13:07]: I think the naming as well matters. It seemed like a branch off of the main, main tree of development. Yeah.Comfy [00:13:15]: Well, it was different researchers that did it. Yeah. Yeah. Very like, uh, good model. Like it's the Worcestershire authors. I don't know if I'm pronouncing it correctly. Yeah. Yeah. Yeah.swyx [00:13:28]: I actually met them in Vienna. Yeah.Comfy [00:13:30]: They worked at stability for a bit and they left right after the Cascade release.swyx [00:13:35]: This is Dustin, right? No. Uh, Dustin's SD3. Yeah.Comfy [00:13:38]: Dustin is a SD3 SDXL. That's, uh, Pablo and Dome. I think I'm pronouncing his name correctly. Yeah. Yeah. Yeah. Yeah. That's very good.swyx [00:13:51]: It seems like the community is very, they move very quickly. Yeah. Like when there's a new model out, they just drop whatever the current one is. And they just all move wholesale over. Like they don't really stay to explore the full capabilities. Like if, if the stable cascade was that good, they would have AB tested a bit more. Instead they're like, okay, SD3 is out. Let's go. You know?Comfy [00:14:11]: Well, I find the opposite actually. The community doesn't like, they only jump on a new model when there's a significant improvement. Like if there's a, only like a incremental improvement, which is what, uh, most of these models are going to have, especially if you, cause, uh, stay the same parameter count. Yeah. Like you're not going to get a massive improvement, uh, into like, unless there's something big that, that changes. So, uh. Yeah.swyx [00:14:41]: And how are they evaluating these improvements? Like, um, because there's, it's a whole chain of, you know, comfy workflows. Yeah. How does, how does one part of the chain actually affect the whole process?Comfy [00:14:52]: Are you talking on the model side specific?swyx [00:14:54]: Model specific, right? But like once you have your whole workflow based on a model, it's very hard to move.Comfy [00:15:01]: Uh, not, well, not really. Well, it depends on your, uh, depends on their specific kind of the workflow. Yeah.swyx [00:15:09]: So I do a lot of like text and image. Yeah.Comfy [00:15:12]: When you do change, like most workflows are kind of going to be complete. Yeah. It's just like, you might have to completely change your prompt completely change. Okay.swyx [00:15:24]: Well, I mean, then maybe the question is really about evals. Like what does the comfy community do for evals? Just, you know,Comfy [00:15:31]: Well, that they don't really do that. It's more like, oh, I think this image is nice. 
So that's, uh,swyx [00:15:38]: They just subscribe to Fofr AI and just see like, you know, what Fofr is doing. Yeah.Comfy [00:15:43]: Well, they just, they just generate like it. Like, I don't see anyone really doing it. Like, uh, at least on the comfy side, comfy users, they, it's more like, oh, generate images and see, oh, this one's nice. It's like, yeah, it's not, uh, like the, the more, uh, like, uh, scientific, uh, like, uh, like checking that's more on specifically on like model side. If, uh, yeah, but there is a lot of, uh, vibes also, cause it is a like, uh, artistic, uh, you can create a very good model that doesn't generate nice images. Cause most images on the internet are ugly. So if you, if that's like, if you just, oh, I have the best model at 10th giant, it's super smart. I created on all the, like I've trained on just all the images on the internet. The images are not going to look good. So yeah.Alessio [00:16:42]: Yeah.Comfy [00:16:43]: They're going to be very consistent. But yeah. People like, it's not going to be like the, the look that people are going to be expecting from, uh, from a model. So. Yeah.swyx [00:16:54]: Can we talk about LoRa's? Cause we thought we talked about models then like the next step is probably LoRa's. Before, I actually, I'm kind of curious how LoRa's entered the tool set of the image community because the LoRa paper was 2021. And then like, there was like other methods like textual inversion that was popular at the early SD stage. Yeah.Comfy [00:17:13]: I can't even explain the difference between that. Yeah. Textual inversions. That's basically what you're doing is you're, you're training a, cause well, yeah. Stable diffusion. You have the diffusion model, you have text encoder. So basically what you're doing is training a vector that you're going to pass to the text encoder. It's basically you're training a new word. Yeah.swyx [00:17:37]: It's a little bit like representation engineering now. Yeah.Comfy [00:17:40]: Yeah. Basically. Yeah. You're just, so yeah, if you know how like the text encoder works, basically you have, you take your, your words of your product, you convert those into tokens with the tokenizer and those are converted into vectors. Basically. Yeah. Each token represents a different vector. So each word presents a vector. And those, depending on your words, that's the list of vectors that get passed to the text encoder, which is just. Yeah. Yeah. I'm just a stack of, of attention. Like basically it's a very close to LLM architecture. Yeah. Yeah. So basically what you're doing is just training a new vector. We're saying, well, I have all these images and I want to know which word does that represent? And it's going to get like, you train this vector and then, and then when you use this vector, it hopefully generates. Like something similar to your images. Yeah.swyx [00:18:43]: I would say it's like surprisingly sample efficient in picking up the concept that you're trying to train it on. Yeah.Comfy [00:18:48]: Well, people have kind of stopped doing that even though back as like when I was at Stability, we, we actually did train internally some like textual versions on like T5 XXL actually worked pretty well. But for some reason, yeah, people don't use them. 
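For reference, the textual-inversion setup described above ("training a new word") amounts to adding one placeholder token and optimizing only its embedding row against a frozen diffusion model. A minimal sketch with the transformers CLIP classes follows; the token name, init word, and learning rate are placeholders, and the actual denoising-loss training loop is omitted.

```python
# Sketch of textual inversion: "train a new word" by optimizing a single
# embedding vector for a placeholder token while everything else stays frozen.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# 1. Register a new placeholder token, e.g. "<my-style>".
tokenizer.add_tokens(["<my-style>"])
text_encoder.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<my-style>")

# 2. Initialize it from an existing word so training starts somewhere sensible.
emb = text_encoder.get_input_embeddings().weight
init_id = tokenizer("painting", add_special_tokens=False).input_ids[0]
with torch.no_grad():
    emb[new_id] = emb[init_id].clone()

# 3. Freeze everything except the embedding table; during training you would
#    keep gradients only for the new row (e.g. zero the others each step) and
#    optimize it against the frozen diffusion model's denoising loss.
for p in text_encoder.parameters():
    p.requires_grad_(False)
emb.requires_grad_(True)
optimizer = torch.optim.AdamW([emb], lr=5e-4)
```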
And also they might also work like, like, yeah, this is something and probably have to test, but maybe if you train a textual version, like on T5 XXL, it might also work with all the other models that use T5 XXL because same thing with like, like the textual inversions that, that were trained for SD 1.5, they also kind of work on SDXL because SDXL has the, has two text encoders. And one of them is the same as the, as the SD 1.5 CLIP-L. So those, they actually would, they don't work as strongly because they're only applied to one of the text encoders. But, and the same thing for SD3. SD3 has three text encoders. So it works. It's still, you can still use your textual version SD 1.5 on SD3, but it's just a lot weaker because now there's three text encoders. So it gets even more diluted. Yeah.swyx [00:20:05]: Do people experiment a lot on, just on the CLIP side, there's like Siglip, there's Blip, like do people experiment a lot on those?Comfy [00:20:12]: You can't really replace. Yeah.swyx [00:20:14]: Because they're trained together, right? Yeah.Comfy [00:20:15]: They're trained together. So you can't like, well, what I've seen people experimenting with is a long CLIP. So basically someone fine tuned the CLIP model to accept longer prompts.swyx [00:20:27]: Oh, it's kind of like long context fine tuning. Yeah.Comfy [00:20:31]: So, so like it's, it's actually supported in Core Comfy.swyx [00:20:35]: How long is long?Comfy [00:20:36]: Regular CLIP is 77 tokens. Yeah. Long CLIP is 256. Okay. So, but the hack that like you've, if you use stable diffusion 1.5, you've probably noticed, oh, it still works if I, if I use long prompts, prompts longer than 77 words. Well, that's because the hack is to just, well, you split, you split it up in chugs of 77, your whole big prompt. Let's say you, you give it like the massive text, like the Bible or something, and it would split it up in chugs of 77 and then just pass each one through the CLIP and then just cut anything together at the end. It's not ideal, but it actually works.swyx [00:21:26]: Like the positioning of the words really, really matters then, right? Like this is why order matters in prompts. Yeah.Comfy [00:21:33]: Yeah. Like it, it works, but it's, it's not ideal, but it's what people expect. Like if, if someone gives a huge prompt, they expect at least some of the concepts at the end to be like present in the image. But usually when they give long prompts, they, they don't, they like, they don't expect like detail, I think. So that's why it works very well.swyx [00:21:58]: And while we're on this topic, prompts waiting, negative comments. Negative prompting all, all sort of similar part of this layer of the stack. Yeah.Comfy [00:22:05]: The, the hack for that, which works on CLIP, like it, basically it's just for SD 1.5, well, for SD 1.5, the prompt waiting works well because CLIP L is a, is not a very deep model. So you have a very high correlation between, you have the input token, the index of the input token vector. And the output token, they're very, the concepts are very close, closely linked. So that means if you interpolate the vector from what, well, the, the way Comfy UI does it is it has, okay, you have the vector, you have an empty prompt. So you have a, a chunk, like a CLIP output for the empty prompt, and then you have the one for your prompt. And then it interpolates from that, depending on your prompt. Yeah.Comfy [00:23:07]: So that's how it, how it does prompt waiting. But this stops working the deeper your text encoder is. 
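Both hacks just described, chunking prompts past CLIP's 77-token window and weighting a prompt by interpolating away from the empty-prompt encoding, can be sketched in a few lines. This is a simplified illustration (real implementations also reserve slots for BOS/EOS tokens in every chunk), not the exact A1111 or ComfyUI code.

```python
# Rough sketch of two CLIP-level hacks: long-prompt chunking and prompt
# weighting by interpolation. Model ID and details are simplified assumptions.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
pad_id = tokenizer.pad_token_id or tokenizer.eos_token_id

def encode_long_prompt(prompt: str, chunk_size: int = 77) -> torch.Tensor:
    """Split the token sequence into 77-token chunks, encode each chunk
    separately, and concatenate the hidden states along the sequence dim."""
    ids = tokenizer(prompt, truncation=False, add_special_tokens=False).input_ids
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)] or [[]]
    outs = []
    with torch.no_grad():
        for chunk in chunks:
            batch = torch.tensor([chunk + [pad_id] * (chunk_size - len(chunk))])
            outs.append(text_encoder(batch).last_hidden_state)
    return torch.cat(outs, dim=1)  # shape: (1, 77 * n_chunks, hidden)

def weight_prompt(cond: torch.Tensor, empty: torch.Tensor, w: float) -> torch.Tensor:
    """Prompt weighting as interpolation away from the empty-prompt encoding:
    w=1.0 is the plain prompt, w>1.0 exaggerates it, w<1.0 tones it down."""
    return empty + w * (cond - empty)
```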
So on T5X itself, it doesn't work at all. So. Wow.swyx [00:23:20]: Is that a problem for people? I mean, cause I'm used to just move, moving up numbers. Probably not. Yeah.Comfy [00:23:25]: Well.swyx [00:23:26]: So you just use words to describe, right? Cause it's a bigger language model. Yeah.Comfy [00:23:30]: Yeah. So. Yeah. So honestly it might be good, but I haven't seen many complaints on Flux that it's not working. So, cause I guess people can sort of get around it with, with language. So. Yeah.swyx [00:23:46]: Yeah. And then coming back to LoRa's, now the, the popular way to, to customize models is LoRa's. And I saw you also support Locon and LoHa, which I've never heard of before.Comfy [00:23:56]: There's a bunch of, cause what, what the LoRa is essentially is. Instead of like, okay, you have your, your model and then you want to fine tune it. So instead of like, what you could do is you could fine tune the entire thing, but that's a bit heavy. So to speed things up and make things less heavy, what you can do is just fine tune some smaller weights, like basically two, two matrices that when you multiply like two low rank matrices and when you multiply them together, gives a, represents a difference between trained weights and your base weights. So by training those two smaller matrices, that's a lot less heavy. Yeah.Alessio [00:24:45]: And they're portable. So you're going to share them. Yeah. It's like easier. And also smaller.Comfy [00:24:49]: Yeah. That's the, how LoRa's work. So basically, so when, when inferencing you, you get an inference with them pretty efficiently, like how ComputeWrite does it. It just, when you use a LoRa, it just applies it straight on the weights so that there's only a small delay at the base, like before the sampling to when it applies the weights and then it just same speed as, as before. So for, for inference, it's, it's not that bad, but, and then you have, so basically all the LoRa types like LoHa, LoCon, everything, that's just different ways of representing that like. Basically, you can call it kind of like compression, even though it's not really compression, it's just different ways of represented, like just, okay, I want to train a different on the difference on the weights. What's the best way to represent that difference? There's the basic LoRa, which is just, oh, let's multiply these two matrices together. And then there's all the other ones, which are all different algorithms. So. Yeah.Alessio [00:25:57]: So let's talk about LoRa. Let's talk about what comfy UI actually is. I think most people have heard of it. Some people might've seen screenshots. I think fewer people have built very complex workflows. So when you started, automatic was like the super simple way. What were some of the choices that you made? So the node workflow, is there anything else that stands out as like, this was like a unique take on how to do image generation workflows?Comfy [00:26:22]: Well, I feel like, yeah, back then everyone was trying to make like easy to use interface. Yeah. So I'm like, well, everyone's trying to make an easy to use interface.swyx [00:26:32]: Let's make a hard to use interface.Comfy [00:26:37]: Like, so like, I like, I don't need to do that, everyone else doing it. So let me try something like, let me try to make a powerful interface that's not easy to use. So.swyx [00:26:52]: So like, yeah, there's a sort of node execution engine. Yeah. Yeah. And it actually lists, it has this really good list of features of things you prioritize, right? 
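Circling back to the LoRA mechanics from a moment ago: the fine-tune is stored as two low-rank matrices whose product is a delta on the base weights, and at load time that delta can be merged straight into the layer so sampling speed is unchanged. A minimal PyTorch sketch; the shapes and the alpha/rank scaling follow the common convention and are illustrative assumptions.

```python
# Sketch of the LoRA idea: a low-rank weight delta merged straight into the
# base layer's weights at load time, so inference runs at normal speed.
import torch

def merge_lora_into_linear(layer: torch.nn.Linear,
                           lora_down: torch.Tensor,   # (rank, in_features)
                           lora_up: torch.Tensor,     # (out_features, rank)
                           alpha: float = 1.0) -> None:
    rank = lora_down.shape[0]
    delta = (alpha / rank) * (lora_up @ lora_down)    # low-rank weight difference
    with torch.no_grad():
        layer.weight += delta                         # apply it straight onto the weights

# Toy usage: a 320x320 projection with a rank-8 LoRA.
layer = torch.nn.Linear(320, 320)
down = torch.randn(8, 320) * 0.01
up = torch.randn(320, 8) * 0.01
merge_lora_into_linear(layer, down, up, alpha=8.0)
```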
Like let me see, like sort of re-executing from any parts of the workflow that were changed, asynchronous queue system, smart memory management, like all this seems like a lot of engineering. Yeah.

Comfy [00:27:12]: There's a lot of engineering in the back end to make things work, cause I was always focused on making things work locally very well. Cause I was using it locally. So there's a lot of thought and work behind getting everything to run as well as possible. So yeah. ComfyUI is actually more of a back end, at least, well, now the front end is getting a lot more development, but before, I was pretty much only focused on the backend. Yeah.

swyx [00:27:50]: So v0.1 was only August this year. Yeah.

Comfy [00:27:54]: With the new front end. Before there was no versioning. So yeah. Yeah.

swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?

Comfy [00:28:02]: Well, that's more on the front end side. Cause when I first wrote it, I said, okay, I can do web development, but I don't like doing it. What's the easiest way I can slap a node interface on this? And then I found this library. Yeah. Like a JavaScript library.

swyx [00:28:26]: Litegraph?

Comfy [00:28:27]: Litegraph.

swyx [00:28:28]: Usually people will go for like React Flow for like a flow builder. Yeah.

Comfy [00:28:31]: But that seemed too complicated. I didn't really want to spend time developing the front end. So I'm like, well, oh, litegraph. This has the whole node interface. So, okay, let me just plug that into my backend.

swyx [00:28:49]: I feel like if Streamlit or Gradio offered something like that, you would have used Streamlit or Gradio, cause it's Python. Yeah.

Comfy [00:28:54]: Yeah. Yeah. Yeah.

Comfy [00:29:00]: Yeah.

Comfy [00:29:14]: Yeah. [It takes your frontend] logic and your backend logic and just sticks them together.

swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main, you know, I'm a JS main, right? Okay. If you're a Python main, it's supposed to be easy.

Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.

swyx [00:29:30]: I see, I see. So you're mixing concerns instead of separating concerns?

Comfy [00:29:34]: Well, it's because... Like frontend and backend. Frontend and backend should be well separated with a defined API. Like that's how you're supposed to do it. Smart people disagree. It just sticks everything together, which makes it easy to end up with a huge mess. And also, there's a lot of issues with Gradio. It's very good if all you want to do is slap a quick interface on your ML project to show it off. Like that's what it's made for. Yeah. Like there's no problem using it. Like, oh, I have my code, I just want a quick interface on it. That's perfect. Like, use Gradio. But if you want to make something that's like real software that will last a long time and will be easy to maintain, then I would avoid it. Yeah.

swyx [00:30:32]: So your criticism is Streamlit and Gradio are the same. I mean, those are the same criticisms.

Comfy [00:30:37]: Yeah, Streamlit I haven't used as much. Yeah, I just looked a bit.

swyx [00:30:43]: Similar philosophy.

Comfy [00:30:44]: Yeah, it's similar. It just seems to me like, okay, for quick AI demos, it's perfect.

swyx [00:30:51]: Yeah.
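To make the "only re-execute the parts of the workflow that changed" feature from the list above concrete, here is a toy sketch of a node executor that caches each node's output keyed by its resolved inputs, so editing one node only re-runs the nodes downstream of the change. The graph format and function names are invented for illustration; ComfyUI's real engine is considerably more involved.

```python
from typing import Any, Callable

# each node: (function, list of input node ids or literal values)
Node = tuple[Callable[..., Any], list[Any]]

def execute(graph: dict[str, Node], target: str,
            cache: dict[str, tuple[Any, Any]]) -> Any:
    """Evaluate `target`, reusing cached results for any node whose
    resolved inputs are unchanged since the last run."""
    fn, inputs = graph[target]
    resolved = [execute(graph, i, cache) if isinstance(i, str) and i in graph else i
                for i in inputs]
    key = (fn, tuple(map(repr, resolved)))
    if target in cache and cache[target][0] == key:
        return cache[target][1]          # inputs unchanged -> skip re-execution
    result = fn(*resolved)
    cache[target] = (key, result)
    return result

# toy graph: load -> encode -> sample
graph: dict[str, Node] = {
    "load":   (lambda: "model-weights", []),
    "encode": (lambda m, prompt: f"cond({prompt})", ["load", "a cat"]),
    "sample": (lambda m, c: f"image from {c}", ["load", "encode"]),
}
cache: dict[str, tuple[Any, Any]] = {}
print(execute(graph, "sample", cache))                     # runs everything
graph["encode"] = (graph["encode"][0], ["load", "a dog"])  # change one input
print(execute(graph, "sample", cache))                     # re-runs encode and sample, reuses load
```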
Going back to like the core tech, like asynchronous queues, partial re-execution, smart memory management, you know, anything that you were very proud of or was very hard to figure out?

Comfy [00:31:00]: Yeah. The thing that's the biggest pain in the ass is probably the memory management. Yeah.

swyx [00:31:05]: Were you just paging models in and out or? Yeah.

Comfy [00:31:08]: Before it was just, okay, load the model, completely unload it. That works well when your models are small, but if your models are big, let's say someone has a 4090 and the model size is 10 gigabytes, that can take a few seconds to load and unload, load and unload, so you want to try to keep things in GPU memory as much as possible. What ComfyUI does right now is it tries to estimate, okay, you're going to sample this model, it's going to take probably this amount of memory, let's unload models already on the GPU to free that amount of memory, and then just execute it. But there's a fine line, because you try to remove the least amount of models that are already loaded. And one other problem is the NVIDIA driver on Windows. There's an option to disable that feature, but by default, if you overflow your GPU memory, the driver is going to automatically start paging to RAM. The problem with that is it makes everything extremely slow. So when you see people complaining, oh, this model works, but oh, s**t, it starts slowing down a lot, that's probably what's happening. So basically you have to try to use as much memory as possible, but not too much, or else things start slowing down or people get out of memory, and try to find that line where the driver on Windows starts paging and stuff. Yeah. And the problem with PyTorch is it's high level, you don't have that much fine-grained control over specific memory stuff, so you kind of have to leave the memory freeing to Python and PyTorch, which can be annoying sometimes.

swyx [00:33:32]: So, you know, I think one thing is, as a maintainer of this project, like, you're designing for a very wide surface area of compute, like, you even support CPUs.

Comfy [00:33:42]: Yeah, well, that's just PyTorch. PyTorch supports CPUs, so that's not hard to support.

swyx [00:33:50]: First of all, is there a market share estimate, like, is it, like, 70% NVIDIA, like, 30% AMD, and then, like, miscellaneous on Apple Silicon, or whatever?

Comfy [00:33:59]: For Comfy? Yeah. Yeah, and, yeah, I don't know the market share.

swyx [00:34:03]: Can you guess?

Comfy [00:34:04]: I think it's mostly NVIDIA. Right. Because AMD, the problem is, AMD works horribly on Windows. Like, on Linux, it works fine. It's slower than the price-equivalent NVIDIA GPU, but it works, like, you can use it, you generate images, everything works. On Windows, you might have a hard time, so that's the problem, and I think most people who bought AMD probably use Windows. They probably aren't going to switch to Linux, so... Yeah.
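A simplified sketch of the bookkeeping described above: estimate what the next job needs, evict the least recently used models until it fits, and keep some headroom so the Windows driver never starts paging to system RAM. The class and method names here are invented, and real ComfyUI also budgets for inference activations and per-backend quirks; this only shows the shape of the idea.

```python
import torch
from collections import OrderedDict

class ModelCache:
    """Keep models resident in VRAM, evicting the least recently used
    ones only when the next job would not fit (sketch, CUDA-only)."""

    def __init__(self, headroom_bytes: int = 1 << 30):
        self.loaded: "OrderedDict[str, torch.nn.Module]" = OrderedDict()
        self.headroom = headroom_bytes   # keep some VRAM free for activations

    def _free_vram(self) -> int:
        free, _total = torch.cuda.mem_get_info()
        return free

    def ensure_room(self, needed_bytes: int) -> None:
        # Evict as few models as possible: oldest (least recently used) first.
        while self.loaded and self._free_vram() < needed_bytes + self.headroom:
            _name, model = self.loaded.popitem(last=False)
            model.to("cpu")              # unload to system RAM
            torch.cuda.empty_cache()     # ask the allocator to release VRAM

    def load(self, name: str, model: torch.nn.Module) -> torch.nn.Module:
        if name in self.loaded:
            self.loaded.move_to_end(name)    # mark as recently used
            return self.loaded[name]
        needed = sum(p.numel() * p.element_size() for p in model.parameters())
        self.ensure_room(needed)
        self.loaded[name] = model.to("cuda")
        return self.loaded[name]
```

The headroom constant is exactly the "fine line" mentioned above: too small and the driver's sysmem fallback kicks in and everything crawls, too large and models get evicted and reloaded needlessly.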
So, until AMD actually ports ROCm to Windows properly, and then there's actually PyTorch, I think they're in the process of doing that, but until they get a good PyTorch ROCm build that works on Windows, they're going to have a hard time. Yeah.

Alessio [00:35:06]: We got to get George on it. Yeah. Well, he's trying to get Lisa Su to do it, but... Let's talk a bit about, like, the node design. So, unlike all the other text-to-image tools, you have a very, like, deep graph, so you have, like, a separate node for, like, CLIP encode, you have a separate node for, like, the KSampler, you have, like, all these nodes. Going back to, like, the making it easy versus making it hard, like, how much do people actually play with all the settings, you know? Kind of, like, how do you guide people to, like, hey, this is actually going to be very impactful versus this is maybe, like, less impactful, but we still want to expose it to you?

Comfy [00:35:40]: Well, I try to expose everything, but, for example, for the samplers, there's, like, yeah, four different sampler nodes, which go from easiest to most advanced. So, yeah, if you use the easy one, the regular sampler node, you have just the basic settings. But if you use, like, the sampler advanced... if you use, like, the custom advanced node, that one, you'll see you have, like, different nodes.

Alessio [00:36:19]: I'm looking it up now. Yeah. What are, like, the most impactful parameters that you use? So, it's, like, you know, you can have more, but, like, which ones, like, really make a difference?

Comfy [00:36:30]: Yeah, they all do. They all have their own effect. Like, for example, yeah, steps. Usually you want steps to be as low as possible. If you're optimizing your workflow, you lower the steps until, like, the images start deteriorating too much. Because that's the number of steps you're running the diffusion process. So, if you want things to be faster, lower is better. CFG, that's more, you can kind of see that as the contrast of the image. Like, if your image looks too burnt, then you can lower the CFG. So, yeah, CFG, that's how strongly the negative versus positive prompt is applied. Because when you sample a diffusion model, it's basically, yeah, the positive prediction minus the negative prediction.

swyx [00:37:32]: Contrastive loss. Yeah.

Comfy [00:37:34]: It's positive minus negative, and the CFG is the multiplier. Yeah. Yeah. Yeah, so.

Alessio [00:37:41]: What are, like, good resources to understand what the parameters do? I think most people start with Automatic, and then they move over, and it's, like, steps, CFG, sampler name, scheduler, denoise. Reddit?

Comfy [00:37:53]: But, honestly, well, it's more, it's something you should, like, try out yourself. You don't necessarily need to know how it works to know what it does. Because even if you know, like, CFG, it's, like, positive minus negative prompt. Yeah. So the only thing you know about CFG is if it's 1.0, then that means the negative prompt isn't applied. It also means sampling is two times faster. But, yeah.
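What Comfy is describing here is standard classifier-free guidance. A small sketch of the per-step combination (generic code, not ComfyUI's internals):

```python
import torch

def cfg_combine(cond_pred: torch.Tensor, uncond_pred: torch.Tensor,
                cfg: float) -> torch.Tensor:
    """Classifier-free guidance: take the positive prediction minus the
    negative one, scale that difference by the CFG value, and add it back
    to the negative prediction. At cfg == 1.0 this reduces to cond_pred
    alone, so the negative prompt has no effect and the unconditional
    pass can be skipped entirely -- which is why sampling is ~2x faster."""
    return uncond_pred + cfg * (cond_pred - uncond_pred)

# toy tensors standing in for the model's noise predictions at one step
cond = torch.randn(1, 4, 64, 64)
uncond = torch.randn(1, 4, 64, 64)
guided = cfg_combine(cond, uncond, cfg=7.0)
```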
But other than that, it's more, like, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.

Alessio [00:38:34]: Any other nodes or things you want to shout out? Like, I know AnimateDiff, the IP-Adapter. Those are, like, some of the most popular ones. Yeah. What else comes to mind?

Comfy [00:38:44]: Not nodes, but what I like is when people make things that use ComfyUI as their backend. Like, there's a plugin for Krita that uses ComfyUI as its backend. So you can use, like, all the models that work in Comfy in Krita. I think I've only tried it once, but I know a lot of people use it, and it's probably really nice, so.

Alessio [00:39:15]: What's the craziest node that people have built, like, the most complicated?

Comfy [00:39:21]: Craziest node? Like, yeah. I know some people have made, like, video games in Comfy, stuff like that. Like, I remember, yeah, I think it was last year, someone made, like, a Wolfenstein 3D in Comfy. Of course. And one of the inputs was, oh, you can generate a texture, and then it changes the texture in the game. So you can plug it into, like, the workflow. And if you look, there's a lot of crazy things people do, so. Yeah.

Alessio [00:39:59]: And now there's, like, a node registry that people can use to, like, download nodes. Yeah.

Comfy [00:40:04]: Like, well, there's always been the, like, the ComfyUI Manager. Yeah. But we're trying to make this more, like, I don't know, official, like, with, yeah, with the node registry. Because before the node registry, like, okay, how did your custom node get into ComfyUI Manager? That's the guy running it who, like, every day he searched GitHub for new custom nodes and added them manually to his custom node manager. So we're trying to make it less effort for him, basically. Yeah.

Alessio [00:40:40]: Yeah. But I was looking, I mean, there's, like, a YouTube download node. This is almost like, you know, a data pipeline more than, like, an image generation thing at this point. It's, like, you can get data in, you can, like, apply filters to it, you can generate data out.

Comfy [00:40:54]: Yeah. You can do a lot of different things. Yeah. So I think what I did is I made it easy to make custom nodes. I think that helped a lot for, like, the ecosystem, because it is very easy to just make a node. A bit too easy sometimes. Then we have the issue where there's a lot of custom node packs which share similar nodes. But, well, that's, yeah, something we're trying to solve by maybe bringing some of the functionality into the core. Yeah. Yeah. Yeah.

Alessio [00:41:36]: And then there's, like, video. People can do video generation. Yeah.

Comfy [00:41:40]: Video, that's, well, the first video model was, like, Stable Video Diffusion, which was, yeah, exactly last year, I think. Like, one year ago. But that wasn't a true video model. So it was...

swyx [00:41:55]: It was, like, moving images? Yeah.

Comfy [00:41:57]: It generated video. What I mean by that is it's, like, it's still 2D latents. That's basically what I'm trying to say. So what they did is they took SD2, and then they added some temporal attention to it, and then trained it on videos. So it's kind of, like, AnimateDiff, same idea, basically.
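Circling back to the point above about custom nodes being easy to write: roughly, a custom node is a Python class that declares its inputs and outputs and gets registered in a module-level mapping. Below is a minimal sketch of that pattern; the exact type strings and option keys may differ from the current ComfyUI API, so treat it as illustrative rather than authoritative.

```python
# custom_nodes/example_node.py -- a minimal custom node sketch.
# The class attributes below follow the pattern ComfyUI's loader expects:
# an INPUT_TYPES classmethod, RETURN_TYPES, FUNCTION and CATEGORY, plus
# the NODE_CLASS_MAPPINGS dict at module level.

class MultiplyLatent:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "samples": ("LATENT",),
                "factor": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "scale"
    CATEGORY = "latent/custom"

    def scale(self, samples, factor):
        # LATENT values are dicts holding a "samples" tensor; scale it and
        # return a tuple matching RETURN_TYPES.
        out = samples.copy()
        out["samples"] = samples["samples"] * factor
        return (out,)

NODE_CLASS_MAPPINGS = {"MultiplyLatent": MultiplyLatent}
NODE_DISPLAY_NAME_MAPPINGS = {"MultiplyLatent": "Multiply Latent (example)"}
```

Dropping a file shaped like this into the custom nodes folder is, roughly, all it takes for a new node to show up in the graph, which is a big part of why the ecosystem grew so quickly.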
Why I say it's not a true video model is that you still have, like, the 2D latents. Like, a true video model, like Mochi, for example, would have 3D latents. Mm-hmm.

Alessio [00:42:32]: Which means you can, like, move through the space, basically. That's the difference. You're not just kind of, like, reorienting. Yeah.

Comfy [00:42:39]: And it's also, well, it's also because you have a temporal VAE. Mm-hmm. Like, Mochi has a temporal VAE that compresses along the temporal direction, also. So that's something you don't have with, like, yeah, AnimateDiff and Stable Video Diffusion. They only, like, compress spatially, not temporally. Mm-hmm. Right. So, yeah. That's why I call those, like, true video models. There's, yeah, there's actually a few of them, but the one I've implemented in Comfy is Mochi, because that seems to be the best one so far. Yeah.

swyx [00:43:15]: We had AJ come and speak at the Stable Diffusion meetup. The other open one I think I've seen is CogVideo. Yeah.

Comfy [00:43:21]: CogVideo. Yeah. That one's, yeah, it also seems decent, but, yeah. Chinese, so we don't use it. No, it's fine. It's just, yeah. It's just that it's not the only one. There's also a few others, which I...

swyx [00:43:36]: The rest are, like, closed source, right? Like, Kling. Yeah.

Comfy [00:43:39]: Closed source, there's a bunch of them. But I mean, open ones, I've seen a few of them. Like, I can't remember their names, but there's CogVideo, the big one. Then there's also a few of them that released at the same time. There's one that released at the same time as SD 3.5, same day, which is why I don't remember the name.

swyx [00:44:02]: We should have a release schedule so we don't conflict on each of these things. Yeah.

Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day. So everything else was kind of drowned, completely drowned out. For some reason, lots of people picked that day to release their stuff.

Comfy [00:44:21]: Yeah. Which is, well, a shame for them. And I think Omnijet also released the same day, which also seems interesting. Yeah. Yeah.

Alessio [00:44:30]: What's Comfy? So you are Comfy. And then there's, like, comfy.org. I know we do a lot of things with, like, Nous Research, and those guys also have kind of a more open source thing going on. How do you work? Like you mentioned, you mostly work on, like, the core piece of it. And then what...

Comfy [00:44:47]: Maybe I should fill it in, because, yeah, I feel like I only explained part of the story. Right. Yeah. Maybe I should explain the rest. So yeah. Basically, January 2023, January 16, 2023, that's when ComfyUI was first released to the public. Then, yeah, I did a Reddit post about the area composition thing somewhere in, I don't remember exactly, maybe end of January, beginning of February. And then a YouTuber made a video about it, like Olivio, he made a video about ComfyUI in March 2023. I think that's when there was a real burst of attention. And by that time, I was continuing to develop it and people were starting to use it more. Which, unfortunately... I had first written it to do, like, experiments, but then my time to do experiments started going down, because people were actually starting to use it. And I said, well, yeah, time to add all these features and stuff. Yeah, and then I got hired by Stability in June 2023.
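To pin down the 2D-versus-3D-latents distinction from a few exchanges above in terms of tensor shapes, here is a tiny illustration with made-up sizes; nothing here reflects Mochi's or SVD's real dimensions.

```python
import torch

frames, channels, height, width = 49, 4, 60, 104  # illustrative sizes only

# SVD / AnimateDiff style: one 2D latent per frame, no temporal compression.
# Temporal attention layers mix information across the frame axis.
latents_2d = torch.randn(frames, channels, height, width)

# "True" video model style: a 3D latent, also compressed in time by a
# temporal VAE (roughly 6x here), so 49 frames become 9 latent time steps.
temporal_compression = 6
latents_3d = torch.randn(channels, frames // temporal_compression + 1, height, width)

print(latents_2d.shape)  # torch.Size([49, 4, 60, 104])
print(latents_3d.shape)  # torch.Size([4, 9, 60, 104])
```

The temporal VAE is what turns the frame axis into a genuinely compressed latent dimension, which is the property being used above to separate "true" video models like Mochi from the SVD-style ones.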
Then I made, basically, yeah, they hired me because they wanted SDXL. So I got SDXL working very well with the UI, because they were experimenting with ComfyUI in-house. Actually, how the SDXL release worked is they released, for some reason, the code first, but they didn't release the model checkpoint. So they released the code. And then, well, since the code was out, I put support for it in ComfyUI. And then the checkpoints were basically early access. People had to sign up, and they mostly allowed people from edu emails. Like if you had an edu email, they gave you access, basically, to SDXL 0.9. And, well, that leaked. Right. Of course, because of course it's going to leak if you do that. Well, the only way people could easily use it was with Comfy. So, yeah, people started using it. And then I fixed a few of the issues people had. So then the big 1.0 release happened. And, well, ComfyUI was the only way a lot of people could actually run it on their computers. Because Automatic's implementation was so inefficient and bad that for most people it just wouldn't work, because he did a quick implementation. So people were forced to use ComfyUI, and that's how it became popular, because people had no choice.

swyx [00:47:55]: The growth hack.

Comfy [00:47:56]: Yeah.

swyx [00:47:56]: Yeah.

Comfy [00:47:57]: Like, everywhere, people who didn't have a 4090, who had just regular GPUs, they didn't have a choice.

Alessio [00:48:05]: So yeah, I got a 4070. So think of me. And so today, what's, is there like a core Comfy team or?

Comfy [00:48:13]: Uh, yeah, well, right now, yeah, we are hiring. Okay. Actually, right now the core itself, it's me. But the reason is all the focus has been mostly on the front end right now, because that's the thing that's been neglected for a long time. So most of the focus right now is all on the front end, but we will soon get more people to help me with the actual backend stuff. Yeah. So, no, I'm not going to say a hundred percent, because once we have our V1 release, which will be the packaged ComfyUI with the nice interface, easy to install on Windows and hopefully Mac... Yeah. Once we have that, we're going to have lots of stuff to do on the backend side and also the front end side.

Alessio [00:49:14]: What's the release? I'm on the wait list. What's the timing?

Comfy [00:49:18]: Uh, soon. Uh, soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. Yeah, and we're still going to continue doing the open source, making ComfyUI the best way to run Stable Diffusion models. At least on the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises.

swyx [00:50:08]: I mean, a few questions on that. How do you feel about the other comfy startups?

Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use comfy than they use something else. Yeah, that's true. It's fine. We're going to try not to... We don't want to... We want people to use comfy. Like I said, it's better that people use comfy than something else.
So as long as they use comfy, I think it helps the ecosystem. Because more people, even if they don't contribute directly, the fact that they are using comfy means that people are more likely to join the ecosystem. So, yeah.

swyx [00:50:57]: And then would you ever do text?

Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. So, yeah, it's something I've wanted to eventually add to core, but it's not a very high priority. But a lot of people use text for prompt enhancement and other things like that. So, yeah, it's just that my focus has always been on diffusion models. Yeah, unless some text diffusion model comes out.

swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion.

Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it, since it fits with the whole...

swyx [00:51:39]: Yeah, I mean, I imagine it's going to be closed source at Midjourney. Yeah.

Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it.

Alessio [00:51:54]: Cool, comfy. Thanks so much for coming on. This was fun. Bye. Get full access to Latent Space at www.latent.space/subscribe
The holidays are a time of reflection and relaxation. It's a time to kick your feet up and chill with friends and family while enjoying a nice, toasty fire as you sip on a mug of warm cocoa and cherish some much needed peace and quiet. Unless you're us. If you're us, you want to make sure to squeeze in just one more episode of Constellation before the new year. Because, hey, we love you. This week, Lord Cognito starts things off by reminding us that we should be comfortable with our sensitive sides. In fact, Cog encourages us to discuss the things that make us cry. From movies and music to memories and missing our loved ones, there isn't a dry eye in the house as the gang talks about all the things that trigger those tears. Next, Dagster discusses his competitiveness, and he challenges his podcast pals to do the same. What's the one thing that brings out that cutthroat quality inside each of us, driving us to be the best and possessing us to claim victory at any cost? Furthermore, who are the rivals that keep us on our toes? Finally, Ben shares his feelings about Daylight Savings Time. We won't give too much away, but let's put it this way: Daylight Savings Time is lucky it isn't a living, breathing thing because Ben would have murdered it in cold blood years ago. Cog and Dagan also briefly share their feelings while mainly letting Ben vent angrily. Hey Daylight Savings Time, I would stay away from the Pittsburgh area until this whole thing blows over. Just sayin'. Timestamps: 0:00:00 - Intro 0:19:55 - Tearjerkers 0:54:18 - Competitiveness 1:37:44 - Daylight Savings Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode of Devolution Power Hour, hosts Jon Herold and Burning Bright are joined by Chris Paul for the first 30 minutes as they break down key political developments. They discuss Trump's ongoing influence, the implications of continuity of government (COG), and the potential for executive orders. The conversation also covers the growing impact of Bitcoin and cryptocurrency on the future of financial systems, including the idea of a Bitcoin-backed U.S. dollar. Tune in for a detailed look at the current political landscape and the shifting power dynamics shaping the future of the U.S.
- SUBSCRIBE TO OUR PODCAST: http://cornerofthegalaxy.com/subscribe/ - COG LA GALAXY DISCORD: https://discord.gg/drr9HFZY2P - COG ANTHEM MUSIC BY RAY PLAZA: https://linktr.ee/munditoplaza - COG ANTHEM MUSIC DOWNLOAD: https://open.spotify.com/artist/3asiasldwKyoCRm1Vzx2h7?si=_LmXI9otT9y9j0ChMGMt2w COG STUDIOS, Calif. -- It doesn't get any more special than previewing an MLS Cup. Corner of the Galaxy is lucky to be previewing its fifth MLS Cup, and this Saturday's game will be big. Hosts Josh Guesman and Sophie Nicolaou are joined by Apple TV Analyst Taylor Twellman, former CoG Co-Host Cory Ritzau, and CoG co-host Eric Vieira as they look at what Saturday's matchup with Red Bull New York has in store. Can the Galaxy find a way to overcome the Riqui Puig injury? Will the Galaxy's set-piece defense be their biggest undoing? And was that a 10-minute intro to a podcast? No, that was an 11-minute intro to a podcast. New. World. Record. Let's talk! -- Corner of the Galaxy is kicking off Season 16, just a few shows past show no. 1,100! And we can't wait to show you everything we've got in store for 2024! Exciting changes are in store for Corner of the Galaxy! We're unveiling a brand new anthem, composed by Ray Plaza, and a refreshed look and feel for the podcast. Get ready for a fresh and engaging experience! This is just a reminder that we go live twice a week—Monday and Thursday at 8 PM on YouTube—and that you can find us conveniently anywhere you get your podcasts (Apple, Spotify, Soundcloud, YouTube, GooglePlay, etc.). We're making it easy for you to stay connected! So tell a friend that you've been listening to the longest-running team-specific podcast in Major League Soccer and that 2024 is a great time to start listening!
This week Lock and Cog join Brad and discuss S.T.A.L.K.E.R. 2: Heart of Chernobyl, Avowed, Pokémon Emerald, and more! Please make note of the Metaphor: ReFantazio timestamp as it contains some spoilers. Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement. Timestamps: 0:00:00 - Intro 0:11:51 - STALKER 2 0:43:59 - Avowed 01:07:44 - Sort It Out/Keep It Up 01:30:29 - Pokemon Emerald 01:44:36 - Dragon Age: The Veilguard 02:15:45 - Metaphor: ReFantazio (CONTAINS SPOILERS) 02:33:26 - Closing Questions To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
Several years ago when Michael Snyder was a Plain Truth senior writer, he interviewed several nuclear war experts and physicists during a Washington DC science conference. That produced the popular Plain Truth cover article “Humanity won't end this way,” referencing concerns about nuclear war which Mr. Herbert Armstrong later made into a World Tomorrow telecast. His work and article were later turned into a booklet on nuclear war where he shared a byline with Mr. Armstrong. It's online at various COG historical sites. Today we talk about the state of the world that could possibly be on the brink of World War III. The war between Russia and Ukraine has escalated. Where is the end? Listen to Mike Snyder's insight about current dangerous conditions. The podcast was made after church services and you may hear the voices of children playing. ____________________________________ Michael A. Snyder (not to be confused with Michael T. Snyder, a non-related prolific writer on speculative prophecy) has written for Beyond Today (both the magazine and the telecast) and previously served as a senior writer for The Plain Truth magazine. He and his wife Jamie live in Indianapolis, Indiana and attend the Indianapolis United Church of God congregation. They previously lived in Pasadena, California.
We live in a society that has rejected the Word of God. In fact, most people do not know the truths of God's Word. Even professing Christians lack the knowledge of the truth. In this present falling away, more than ever, God's people need to know and follow the words of Christ.
This is The Fortieth Episode of the Visual Scholar Podcast - With Tim Mcburnie. Let's talk about the dangers of being one dimensional in your character design! Below is an automagically generated summary so you can check out the content and also to help with search functionality!
---
In this video, I delve into the complexities of character design, exploring the contrast between visual representation and character development through actions. As artists, we're often taught to make characters look like who they are, emphasizing clear, exaggerated visual cues. However, in writing, it's the actions a character takes that truly define them, sometimes conflicting with their outward appearance. I discuss how these differing approaches can clash and how understanding both the external and internal aspects of character can lead to more nuanced and compelling designs. By examining examples like Tyrion Lannister from "Game of Thrones," Darth Vader from "Star Wars," and Long John Silver from "Treasure Island," I highlight how characters with contrasting appearances and motivations captivate audiences. Whether you're designing your own characters or aiming to deepen your storytelling, this video offers insights into balancing clarity with complexity. Join me as I unpack these concepts to help you create characters that resonate on a deeper level with your audience.
---
00:00 Intro
01:39 Welcome
02:04 Are There Problems With General Character Design Advice?
06:04 Complexity and Duality of Great Characters
14:50 The Benefit of Being Basic...
20:23 The Dangers of Being a Cog in The Machine...
24:00 The Analytical Takeaway
27:05 The Simple Takeaway
27:19 The Practical Takeaway
31:20 The Philosophical Takeaway
35:11 Out... Plus The Mona Lisa
This Podcast is designed to help you demystify the world of Art, Productivity, and Creativity. So you can get better faster, and enjoy your Art Journey. We discuss Drawing, Painting, Illustration and Entertainment Design. Along with Productivity and Career Advice.
Learn more about Tim Mcburnie:
Learn Drawing and Illustration at The Drawing Codex: www.thedrawingcodex.com
Check out this podcast with video on Youtube: https://www.youtube.com/@Tim-Mcburnie
Spotify Link: open.spotify.com/show/6VbdwbiZqy8nqb90ruKQCX?si=21b9c1d02c85419f
Apple Podcasts: podcasts.apple.com/us/podcast/the-visual-scholar/id1678505411
https://podcast.thedrawingcodex.com/
Check out The Drawing Codex Youtube Channel: https://www.youtube.com/@TheDrawingCodex
Portfolio: www.timmcburnie.com
www.artstation.com/tim-mcburnie
www.instagram.com/timmcburnie
twitter.com/timmcburnie
As every October 23rd comes around, Fallout fans are reminded of the missed opportunity for Bethesda to have their own "N7 Day" with the franchise. Well, 2024 is the year they finally took action and put fan demands into motion! What took place is, quite frankly, a whole lot of nothing. The Dukes sift through the fallout (!) of this showcase and what Bethesda can do to make 2025's showing a much better, more worthwhile one. While it's Fallout Day, it also appears to be Digital Day for Xbox! 91% of US players are buying games digitally and half of Xbox's customers have bought into their discless console. Has Xbox's push into cloud gaming and Game Pass been vindicated in light of these statistics? Plus, we have info on 2025's Call Of Duty the very week that Black Ops 6 drops! What does Activision have in store for us? More changes, it seems! We also get into the Banjo Kazooie kerfuffle, Helldivers 2 on Xbox, Cog's love of Metaphor, the surprise that is TMNT: Mutants Unleashed, and plenty more! Learn more about your ad choices. Visit podcastchoices.com/adchoices
Caesar ties up all loose ends in his Gallic Campaign, in part 8, the final episode of our series on Caesar's masterwork of psychology, strategy, and propaganda: On the Gallic War (De Bello Gallico). This is a world-history making story (the conquest of what's now modern France), told by a world-history making storyteller. Caesar entered Gaul as a mere politician. He returned 9 years later as a conqueror - and an enemy of the state. He tells how it all happened with his own pen.
In this episode:
Final skirmishes with the Bituriges and the Carnutes
Siege of a hill-fort, and Caesar takes arms (literally)
Conclusion of a bloody chapter, and some takeaways
Cost of Glory is an Infinite Media production — and big thanks to Dr. Richard Johnson of the great city of New York for sponsoring this episode and many others in this series! Coffee + Great Heroes of History = Imperium Coffee. They make great roasts and are big fans of CoG - use code COSTOFGLORY when you check out for a discount. See my favorite, Sertorius Roast - co-designed by yours truly. https://imperium.coffee/roasts/sertorius/
It's another episode of Constellation, Last Stand Media's conversational podcast! Chris Ray Gun (tiny big-headed blind freak) is at the helm yet again, joined by MrMattyPlays (todd howard simp), Lord Cognito (xbox shill), and none other than Gene Park (beauty pageant judge)! Matty selfishly regales us with tales of his wedding and true love in front of three painfully single people. Gene swerves the conversation to our history with Ubisoft (stewards of the acclaimed Skull & Bones franchise) and how Assassin's Creed Shadows is shaping up post-delay. Cog takes us on a winding path about our favorite drinks, poisons, and elixirs. Chris ends with what is essentially a show and tell about video game special editions. It's literally just “look at the cool things I have” the entire time. This fellow is a genuine jerk. Someone should knock his block off. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Starfield faced a mighty amount of criticism throughout its 2023 launch. In today's gaming climate, this set the stage for what could be a big comeback for Bethesda Game Studios with the Shattered Space expansion. So, did it deliver? Matty has played 16 hours and finished it with Cog just getting his adventure started, so they both have much to say! Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week Cog and Ben join Brad and talk Warhammer: Space Marine 2, Astro Bot, Towerborne and more! To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
Caesar fights through enemy tribes and orders a Purge in his manhunt for Ambiorix, in part 6 of our series on Caesar's masterwork of psychology, strategy, and propaganda: On the Gallic War (De Bello Gallico). This is a world-history making story (the conquest of what's now modern France), told by a world-history making storyteller. Caesar entered Gaul as a mere politician. He returned 9 years later as a conqueror - and an enemy of the state. He tells how it all happened with his own pen.
In this episode:
The hunt for the crafty Ambiorix and his renegade tribe
The exploits of Titus Labienus and of Publius Sextius Baculus
Hints of Civil War… (and Druids and ancient cows)
Thanks to Ancient Language Institute and long time CoG fan Dr. Richard Johnson for sponsoring this episode!
P.S.—If you're interested in sponsoring an episode, feel free to get in touch, any support is highly appreciated as it helps me create more high quality work.
The Dukes are off to San Diego and that means we had to get a little crafty compared to our usual episodes! Since we're recording in advance, we forged an episode in the spirit of Defining Duke Ultimate and leaned heavily into one topic: the Series X/S generation thus far. Easily our most asked question is whether or not to dive into an Xbox at this point, so we dedicated an entire episode to the idea! Matty and Cog sift through every first party release and discuss the highs and lows they've brought. Furthermore, we analyze every Day 1 Game Pass drop since the service began introducing third party with Outriders. We also happily indulge in another conversation about where Xbox could stand to improve and even what could be green lit to shift some of the optics around the brand. Indeed, this is the all-encompassing conversation on Xbox's star in 2024 and we hope you do enjoy! Cancel your unwanted subscriptions at https://www.rocketmoney.com/duke Learn more about your ad choices. Visit podcastchoices.com/adchoices
It's time once more for another episode of Constellation, Last Stand Media's conversational podcast. This week, I (Colin) am joined by Punching Up co-host Micah Moriarty, Defining Duke co-host Lord Cognito, and Sacred Symbols co-host Chris Ray Gun. As usual, topics are varied. Chris kicks things off with an inquiry into exotic animals. No, not cats, dogs, and the like, but far more unordinary fare. Do we want a weird creature companion? Next, Micah explores the wide world of salads. Are we excited or repulsed by leafy greens and their endless accoutrements? From there, Cog takes us down an interesting path, exploring our respective sleep schedules, and the cadence by which we go through our daily lives. Are we early birds or night owls? Finally, I wrap things up -- here, on the precipice of football season! -- by checking everyone's temperature on the upcoming NFL campaign. Are we optimistic, pessimistic, or wildly disinterested? Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week, Brad is joined by the Dukes' Matty and Cog to talk about Teenage Mutant Ninja Turtles: Splintered Fate and Death Stranding: Director's Cut, and to do a deep dive into the state of the Xbox brand in 2024. To watch the podcast on YouTube: https://bit.ly/LastStandMediaYouTube Don't forget to subscribe to the podcast for free wherever you're listening or by using this link: https://bit.ly/SummonSign If you like the show, telling a friend about it would be amazing! You can text, email, Tweet, or send this link to a friend: https://bit.ly/SummonSign Learn more about your ad choices. Visit podcastchoices.com/adchoices
Welcome back to another Constellation conversation! Dagan finally returns to the fold to host the latest exciting discussion with a cast of some of your LSM favorites. First, Dustin wonders about becoming a dad. As he and his wife Holly prepare for the imminent birth of their first child, the soon to be father asks for the crew's thoughts and advice on parenting. Next up, Ben wants to pick our brains about flying. What do we think about air travel, and what were some of our best and worst experiences in planes and airports? Once the gang lands safely back on the ground, Cog inspires us to appreciate our parents. As they get older, how have our relationships developed with the people that brought us into this world? Closing out this week's conversation, Dagan reflects on his intermittent hypochondria and inquires about his cohost's experience with this specific flavor of anxiety. Do we know people who tend to worry about their health to an unhealthy degree? Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this edition of the Process This: Artificial Intelligence series we are joined by AI pioneer, computer scientist, and theologian Dr. Anne Foerst. Anne offers a deep dive into the fascinating intersection of AI and theology, urging us to consider the profound ethical, social, and theological questions as we march into an increasingly AI-driven future. As a member of MIT's earliest explorations of embodied AI, she brings a unique perspective to a conversation increasingly shaped by the emergence of large language models of AI. Dr. Anne Foerst is a Professor and chair of Computer Science at St. Bonaventure University. She teaches Computers and Society, Cybersecurity Ethics, and other interdisciplinary courses. She is also the director of the individualized major program at SBU. Her research focuses on the theological implications of Artificial Intelligence and the personhood of robots. Previously, she has worked as a research scientist at the Artificial Intelligence Laboratory, Massachusetts Institute of Technology, and was also affiliated with the Center for the Studies of Values in Public Life of Harvard Divinity School. While in the artificial intelligence lab at MIT, she served as theological advisor for the Cog and Kismet projects, two attempts to develop embodied, autonomous social robots that resemble human infants in their ability to learn and develop more mature intelligence levels. Check out her book God in the Machine: What Robots teach us about God and Humanity. Watch the conversation here on YouTube
In the conversation, we discuss...
Defining Artificial Intelligence
Anthropomorphism and AI
Personhood and Ethical Challenges
AI in Religious Contexts
Social Robots and Human Development
Ethical Implications of AI
AI and Human Evolution
The Role of Community and Ethics
Join my Substack - Process This! Join our upcoming class - THE GOD OF THE BIBLE: An Absolutely Clear and Final Guide to Ultimate Mystery ;) Come to THEOLOGY BEER CAMP. Follow the podcast, drop a review, send feedback/questions or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices
The entire green new scam is going down the tubes; no matter how many subsidies and mandates, the people are still rejecting it all and companies are folding. Stores are closing, the economy is in a downward spiral, people are catching on. The WaPo has begun the change-of-batter narrative. They are pushing the idea that [HRC] should replace [KH]. The problem is, how are you going to replace a black woman with a white woman? They can't; this is to wake the people up to the change. War is building and now Russia and NK have made a defense agreement. There is a new book out that talks about how Trump believes he still has foreign policy powers. Since the US was overthrown by domestic and foreign entities and they staged an insurrection against the President, the military has been activated and Trump is the Commander-in-Chief. Welcome to COG.
Economy
Solar Co. Collapses, Customers Furious as Dems' Favorite Power Source Leaves Homeowners High and Dry
Titan Solar told employees on June 13 that it was “closing its doors” because no one would buy the company, according to Time. Titan said that it put solar panels on more than 100,000 homes. The fate of consumers who have panels on their homes and no one to maintain them is uncertain. Over 2023 and 2024 to date, 16 major solar companies have filed for bankruptcy, Solar Insure reported.
https://twitter.com/magszee39790/status/1803190222571618691
https://twitter.com/WEschenbach/status/1800919912715841750
Source: thegatewaypundit.com
Bidenomics: More companies announce bankruptcies, shutter operations, citing inflation
Retailers are closing nearly 3,200 stores this year, according to a recent analysis from CoreSight Research. The closures are a 24% increase from 2023. U.S. drug stores and pharmacy closures led to 8 million square feet of shuttered retail space this year, the research company said. It also notes that retailers are losing inventory and customers due to retail theft. “Retail shrink” is closely connected to “organized retail crime,” it notes. Out of the 3,200 being closed, the majority are being closed by roughly 30 retailers, with Family Dollar closing the most at over 600, according to the data, CBS News reported. Tupperware is the latest to announce it's permanently closing its last operating production plant in the U.S. in Hemingway, South Carolina. All of its 148 workers will be laid off, the first in September, followed by others in waves through next January. Tupperware announced its plans last week, stating it would continue to produce its products in a plant in Lerma, Mexico.
Others closing stores this year include CVS Health, 7-Eleven, Rite Aid, Express, Walgreens Boots Alliance, Macy's, The Body Shop, Soft Surroundings, Burlington stores, Foot Locker, Carter's, Dollar General, Abercrombie & Fitch Co., Big Lots, Best Buy and others, according to the CoreSight analysis. Source: justthenews.com
The [WEF] is bragging about the 100 billion they have taken in from carbon taxes. Has this helped? No, this is a money-grab hoax. Hochul reverses course; it's an election year, can't tax people until after the election. Jobs are imploding, wages are decreasing and jobs are going to illegals. First it was Canada, now the ECB cut rates, next is the Fed in Sept. The [DS] plan has fallen apart; they thought the people would jump ship if they convicted Trump, but it has backfired. Now the other cases are in jeopardy. The DOD has now changed their COG document; they changed the name from directive to instruction. Trump has begun the process of vetting the next VP, and it might not be the person on the list. The [DS] is now trapped, they are in a downward spiral, which means they will try to cover up all of their crimes with war.
Economy
WEF brags that over $100 billion was taken via carbon taxes last year
The WEF has announced that $104 billion was syphoned out of the pockets of taxpayers globally via carbon taxes last year to pay for environmental schemes. In an article, the WEF brags that this is a new record and that carbon taxes and emissions trading systems now account for 24% of the world's emissions, up from 7% in 2013. However, according to the report, it's not enough. Less than 1% of global emissions are subject to a direct carbon price at the target rates, meaning there's still plenty of room for more taxes.
No evidence that carbon taxes work
Of course, there's no evidence carbon taxes change the weather or affect emissions. If there was, Canada, a world-renowned champion of it, would be able to provide evidence of this. In fact, there's probably evidence to the contrary, i.e., that not only has the carbon tax not changed the weather, it's made life significantly worse for Canadians for no good reason—as is likely the case everywhere else there's a carbon tax.
Source: thecountersignal.com
Another Failed Dem Policy: NY Gov. Hochul Forced to Place ‘Indefinite Pause' on New York City ‘Congestion Pricing'
“The public transit authority hasn't determined how much to charge drivers, but has considered fares ranging from $9 to $23 for passenger vehicles and between $12 and $82 for trucks. The scenarios contemplate different combinations of potential discounts, credits, and exemptions,” Politico reported. New York's Democrat Governor Kathy Hochul's spox John Lindsay cheered the approval, saying at the time, “Governor Hochul is committed to implementing congestion pricing to reduce traffic, improve air quality, and support our public transit system. We've worked closely with partners across government and with community members over the last four years to develop a plan that will achieve these goals.” Politico reports: “Circumstances have changed and we must respond to the facts on the ground and not the rhetoric of five years ago,” she said. The decision by Hochul leaves a host of unanswered questions, including how the MTA will make up the expected annual $1 billion in revenue. The money would have gone to a larger $15 billion capital plan for public transportation improvements.
Watch: https://twitter.com/LeeMZeldin/status/1798341157296226597?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1798341157296226597%7Ctwgr%5E85a2cbd5eb8a3cf70464c9c566d3a95bfd437ab3%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.thegatewaypundit.com%2F2024%2F06%2Fanother-failed-dem-policy-ny-g...
Derek Johnson is a United States Army Veteran, Retired, DV, of 1776 Nation, which is thedocuments.info, where he breaks down the covert operation that is going on right now. Derek is also a 2-time Billboard Hit Artist and Songwriter, best known for his two Billboard hits "Real Cool Kinda Hot" and "Right Beer Right." Derek begins the conversation explaining how we are still on track with the plan and the operation to clean out the bad guys is ongoing. The [DS] is not in control of the country; Trump is the commander in chief and we are now running on COG. The resident's term is part of the plan; this is Trump's 2nd term, to wake the people up. If the military had been used to remove the [DS] players, it would have been seen as a coup. The people must take back this country; the plan is working on schedule. The [DS] will attempt to fight back, but it will fail.