Podcasts about Cerebus

Comic book

  • 91 PODCASTS
  • 158 EPISODES
  • 1h 28m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 20, 2025 LATEST
cerebus

POPULARITY

2017–2024


Best podcasts about Cerebus

Latest podcast episodes about Cerebus

The Flopcast
Flopcast 676: Breakdancing Fish Police - The Comics of 1985

Apr 20, 2025 · 58:50


We're looking back at the comic book scene of forty years ago, and we've brought in a special guest to help: ESO Network Director (and longtime comic book fan) Mike Faber. The big story of 1985 was Crisis on Infinite Earths, but we cover much more, including: The New Teen Titans, Justice League Detroit, the trial of the Flash, Elvira's House of Mystery, Secret Wars II, Heroes for Hope, the silliness of Ambush Bug, the genius of Alan Moore (Swamp Thing, Miracleman), and the independent stuff too (Love and Rockets, Cerebus, DNAgents, Jon Sable Freelance, and those crazy Ninja Turtles). This podcast should be bagged, boarded, graded, slabbed, and thrown on eBay with a Buy It Now price of zero dollars. The Flopcast website! The ESO Network! The Flopcast on Facebook! The Flopcast on Instagram! The Flopcast on Bluesky! The Flopcast on Mastodon! Please rate and review The Flopcast on Apple Podcasts! Email: info@flopcast.net Our music is by The Sponge Awareness Foundation! This week's promo: Monkeeing Around!

THE AWESOME COMICS PODCAST
Episode 511 - These Indie Comics Should be BIGGER!

Apr 14, 2025 · 133:07


There are some indie comics and characters that have become huge global successes, moved into legend and that, let's face it, everyone knows about. What about those books and characters that have gone under the radar and should have been as big? Well, Alan Henderson (The Penned Guins) joins the ACP gang to discuss what comics you should rediscover, or discover for the first time. We won't let them fade into time and neither should you! Also there is chat about creativity, conventions, great events to check out going forward and of course... lots of COMICS!! Great stuff to check out: The Best of Indie Comics: Words Only, Pie Press, Cerebus, Concrete, Martha Washington, Frank Miller, Dave Gibbons, Mars, First Comics, Groo, Sergio Aragones, Skeleton Key, Andy Watson, Amaze Inc, Liberty Meadows, Frank Cho, Mike Mauser, Miss Tree, Mike R. Cane, Starblazer, Boneyard, Richard Moore, NBM, The Press Guardian, Green Archer Comics, Allan Liska, The Dry Cleaner, TBH Comics, Tyrant Fall, Tribute Press, Caribou, Francis Todd, Incident Report, ADM Comics, My Sugar Baby: Adventure in Ukraine (Until the War), Lizzie Parsec Episode 4: Back in Aelia, Hugh Newell, Penultimate Quest, Lars Brown

An Audio Moment Of Cerebus
Please Hold For Dave Sim 3/2025

Mar 7, 2025 · 106:02


Cerebus, glamourpuss, and The Strange Death of Alex Raymond creator Dave Sim and A Moment of Cerebus Interim Editor Manly Matt Dow are back with another one of these things... This time: Dave isn't entertaining visitors Rob... Matt remembers Cerebus fan Jeff Seiler. Dave remembers Cerebus fan Jeff Seiler. Dave does another episode for April C. AKA: creative_fey of the Off The Spinner Rack podcast. Then, straight from Alfie's Fish & Chips (610 N H St, Lompoc, CA 93436, (805) 736-0154), mention A Moment of Cerebus AND Please Hold For Dave Sim ANNNDDD “swordfish” for NINETY, I say NINETY percent off strange looks from the waitstaff… MJ Sewall asks about the Origin of the Cerebus Phone Book Collections. James [insert funny pseudonyms here] Smith's colours for the Spawn #10 Script book, and Dave's notes. From the Home Office in Easton, PA, Michael (#26) R. asks about the future of Aardvark-Vanaheim's collaborations with The Waverly Press. Christon asks about Cerebus circulation numbers, Cerebus #1, and the 1000ish crappily printed covers. And how to draw Cerebus. Oh what joyous fun for all! Tune in next month when we find out what was in the box..? Join the fun by sending questions/comments/promises of large piles of currency to momentofcerebus@gmail.com before April 3rd when we do this again...

Art Hounds
Art Hounds recommend art by museum staffers, mental health professionals and prisoners

Feb 20, 2025 · 4:11


From MPR News, Art Hounds are members of the Minnesota arts community who look beyond their own work to highlight what's exciting in local art. Their recommendations are lightly edited from the audio heard in the player above. Want to be an Art Hound? Submit here. Artists at work: Diane Richard of St. Paul worked for 21 years at the Minneapolis Institute of Art (Mia), and she wants people to know about “Artists at Work: the Mia Staff Art Show.” It's tucked away in the community commons area just past the cafe and the family center (pro tip: you can bring your lunch with you to the exhibit!). The show runs through April 13. Diane explains: You might never have thought about it, but the people who work in museums are often artists themselves — and good ones, too. They work as security guards, and they create public programs, hang art on the walls, help you figure out where you're going, and sell you stuff in the shop. And they work in everything from oil painting to watercolor and prints, ceramic sculpture to embroidery, video and collage. There's even a tarot card created from crop seeds. One work waves from the wall: the menacing loon flag was security guard Rob McBroom's official entry into the state's flag contest. As I strolled around, Cara O'Connell's portrait of Myrna drew me over. It's from O'Connell's series on caregivers. Myrna is a beatific presence under a halo of robins. For me, the showstopper was Adam White's “It Came with the Room.” White's triptych collage is layered with thousands of cartoon bubbles filled with intriguing messages, many about the hellhound Cerebus. You could spend hours in front of it searching for meaning. Overall, the show gives insight into the mostly unseen hands responsible for Mia's daily operations. What comes through is their passion for art. — Diane Richard. The art of mental health: Carla Mansoni is the director of Arts and Cultural Engagement at CLUES, one of the largest and oldest Latino organizations in Minnesota. She wants people to know about “The Art of Mental Health,” a group show of art created by people who work in the mental health field, curated by Kasia Chojan-Cymerman and Thrace Soryn. The exhibit at the Vine Arts Center in Minneapolis opens this Saturday, Feb. 22, with an artist reception from 6 p.m. to 9 p.m. featuring a performance by psychologist/musician Mindy Benowitz. The show runs on Saturdays through March. There is a performance by bluegrass Americana trio Echo Trail on March 15. Carla says: The idea is to focus on the mental health professionals who also use art to heal themselves. This is a wonderful opportunity to showcase the diversity of art forms and how art and culture also heal the healer, elevating the humanity of those working in mental health spaces. — Carla Mansoni. SEEN: Jennifer Bowen, founder and director of the Minnesota Prison Writing Workshop, was deeply moved by the exhibit “SEEN,” currently on display at the Weisman Art Museum on the University of Minnesota campus in Minneapolis. Curated by Emily Baxter of We Are All Criminals, this show is half a decade in the making. Seven artists partnered with seven incarcerated artists to create installations. The show runs through May 18, with a panel conversation planned for Wednesday, Feb. 26 at 6 p.m. Some installations respond to incarcerated life, such as work by Sarith Peou and Carl Flink, which reflects the steps of traditional Cambodian dance Peou used to keep himself active and healthy while on COVID lockdown in his cell.
Jennifer says: There's another exhibit of a poet named Brian, who's got a massive chandelier of bird cages hanging from the ceiling with some of his poetry being read and voiced over by himself and other folks that he lives with. And I think the title of the poem is “We Can't Hear Ourselves Sing,” and it's about the kind of chaos and cacophony of life inside a prison. It was the first thing I saw when I walked into the exhibit. And it literally took my breath away, the way that it speaks metaphorically not just to the pain that incarceration causes, but to the kind of human need to still find beauty in the midst of that pain. But then there are other artists who chose to think about what the future would look like, or what healing might look like. There's an artist named Ronald who has a garden reminiscent of the garden his grandfather grew when he was in Detroit that's meant to be this kind of healing look forward. It's a really heavy but beautiful exhibit.  And one thing this exhibit does is offers the community, not only a chance to listen on phones to the artists' voices and to see interviews, but it also gives the public a chance to write notes to them that will go back to them. — Jennifer Bowen

An Audio Moment Of Cerebus
Please Hold For Dave Sim 2/2025

Feb 7, 2025 · 122:44


Cerebus creator and Aardvark-Vanaheim President Dave Sim, and A Moment of Cerebus Interim Editor Manly Matt Dow are back with the SECOND P.H.F.D.S. (that's Please Hold For Dave Sim for the acronym deficient amongst you...) of 2025! How's everybody holding up? Great. I don't care. Anywho, this month Dave and Manly have a lively and fun discussion of: Manly remembers Cerebus fan Jeff Seiler in a story that touches on Dave's latest Kickstarter, and paying royalties to eight guys when the profits aren't that big to begin with. Dave's thoughts/feelings about what is happening in Gaza. Dave's thoughts on how current creators can overcome the problem of distribution as the industry contracts, and how Diamond's bankruptcy will affect AV. Then we got our first Podcaster, since Dave instituted his Podcaster Policy, April C. AKA: @creative_fey, host of the Off the Spinner Rack Podcast, who got an exclusive (which means I didn't slot the audio in, you can see it on her channel) from Dave, which can be viewed here. MJ Sewall, proprietor of Alfie's Fish & Chips (stop in and tell your server you heard of Alfie's on Please Hold For Dave Sim for COMPLIMENTARY water, and a hug...), asks Dave "Where did your love of black and white art begin?" Does Dave just randomly reread Cerebus? The backstory on Cerebus being on the cover of the Mile High Comics catalog in 1982. Dave answers: "How would Dave rank each phonebook in terms of his personal enjoyment/satisfaction?" and "How would Dave rank each phonebook in terms of sales?" Will there ever be a remastered Cerebus issue 3? Dave's ideas for the Cerebus Archive Portfolios after the first sixteen are done. (We're on number 11 for those playing along at home...) How Dave and Gerhard split their original art between themselves when Ger left AV. How Cerebus fan Jeff Seiler's untimely death led to a page of original art from the end of Church & State being lost to the wild. Dave's thoughts on Josh Even's custom 3D-printed Cerebus action figure. And is Dave doing commissions? Fernando H. Ramirez did a twelve-page Cerebus story. Dave remembers the SPIRITS OF INDEPENDENCE tour stop in Austin, TX, and how he would try and help young upcomers like he was helped by a 1978 signing with John Byrne. All this and Bernie Wrightson too!

All the Pouches: An Image Comics Podcast
11 Turtle Power Hour — Wingnut, Screwloose, and Cerebus, oh my!

Jan 19, 2025 · 90:44


Clinton, Layne, and Jon take a look at issue 8 from both Archie's Teenage Mutant Ninja Turtles Adventures and Mirage's Teenage Mutant Ninja Turtles series!

An Audio Moment Of Cerebus
Please Hold For Dave Sim 1/2025

Jan 3, 2025 · 175:59


The Year? 2025. Which is when Stephen King's novel The Running Man (published as written by Richard Bachman) is set. People aren't currently being hunted for television, but we're only three days into this crapfest, so “there's still time”? This time out, Dave Sim and A Moment of Cerebus's Manly Matt Dow discuss: Dave remembers Cerebus fan Jeff Seiler, which leads to Dave announcing Aardvark-Vanaheim's plans for 2025, first up (if Donald Trump doesn't crap all over Canada) CAN11 and Narutobus on the Kickstarter! Did You Know?: Dave was in discussions to license/sell Cerebus to DC Comics in 1985. Dave reads the thirty-nine-year-old proposal (spoilers: he turned down $100,000. Then he self-distributed the collected High Society and made $150,000 a year later.) Now it CAN be told! DAVE screwed up your bookshelf... Cerebus continuity Matt's Steamboat Willie/Blade Runner mash-up (prints are coming soon?) Neal Adams and Creator's Rights 2025 Dave's thoughts on Deni Loubert and Cerebus in 1977 What does a normal day look like for Dave Sim? Why an Aardvark? Does Dave celebrate New Year's? Anniversary tours? A Cerebus bi-monthly spin-off book by other creators? Dave's thoughts on modern-day Canadian life. The location and condition of Dave's Albatross notebooks (used in the creation of Cerebus from issues 20 through 300) (Trigger Warning for Margaret: it...it's not good...not good...) Who was Dave's model for F. Stop Kennedy in Going Home? (spoilers: it was F. Scott Fitzgerald.) Hoo-boy, it's two hours fifty-six minutes of distraction from the fact that they're gonna hunt you down for May sweeps!

Eating the Fantastic
Episode 243: Tom Brevoort

Dec 18, 2024 · 114:28


Settle in for a steak dinner with Marvel's Tom Brevoort as we discuss how a guy whose first love was DC Comics ended up at Marvel, why he hated his early exposure to Marvel so much he'd tell his parents not to buy them because "they're bad," the pluses and minuses of comic book subscriptions (and the horror when issues arrived folded), how Cerebus the Aardvark inspired him to believe he could build a career in indie comics, the most unbelievable thing he ever read in a Flash comic, how he might never have worked at Marvel had I not gone to school with Bob Budiansky, the prevailing Marvel ethos he disagreed with from the moment he was hired, what it takes to last 35 years at the same company without either walking off in disgust or getting fired, the differing ways Marvel and DC reused their Golden Age characters, how to prevent yourself from being pedantic when you own an encyclopedic knowledge of the history of comics, and much more.

An Audio Moment Of Cerebus
Please Hold For Dave Sim 12/2024

Dec 6, 2024 · 130:50


Dave Sim and A Moment of Cerebus' Manly Matt Dow return for the last Please Hold For Dave Sim of 2024. This time out: Matt remembers Cerebus Fan Jeff Seiler Dave sets some Policy (this goes on for quite a bit.) Dave and Matt discuss the Cerebus Facebook Group Dave would like his Humble Bundle money A surprise for a lucky Cerebus fan, and maybe a remastered Guys for the rest of us. Why didn't Dave learn to drive? Where's the third printing of The Last Day The Friesens Remastered Edition Cerebus volume #1 What was the highest print run of an issue of Cerebus? Does Dave ever get burnt out? (Matt does. Every damn month when it's time to deal with this stupid website...) Praise for Cancel America Inking Jack Kirby (well, kinda) It's two hours and ten minutes of your life you'll never get back, but heck, you weren't doing anything anyway... What? Don't give me that look. Search your feelings, you know it to be true... --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Versian Chronicles
Mass Effect (Season 4 Episode 14)

Nov 9, 2024 · 58:46


After a brief hiatus for sickness and Halloween shenanigans, we return to the Normandy and check out Cerebus's mess created by the Thorian disaster on Feros. If you like the cast, please do us a favor and comment or follow the cast on your favorite podcatcher, or follow us on YouTube at https://www.youtube.com/@TheVersian If you would like to follow me on Threads, you can find me at https://www.threads.net/@the_versian. You can follow us on Discord at the invite code "8CtCNCj3fC" Be good, give your mother a hug if you can, and I'll catch you next time! Kailee Cain

Raging Bullets
Raging Bullets S2 E25 : A Comics Fan Podcast

Oct 20, 2024 · 60:07


Season 2 Episode 25 Interview with Robert Jeschonek: We were lucky enough to have an opportunity to have a conversation with author Bob Jeschonek about his Kickstarter, Legends of Indie Comics Words Only. This features some amazing and legendary talent revisiting classic characters and concepts such as Badger, Cerebus, Concrete, El Muerto, Flaming Carrot, Grimjack, Jon Sable, Luther Strode, Megaton Man, Michael Mauser, Mr. Monster, The American, The Desert Peach, Those Annoying Post Bros, and Whisper. We also talked about his novel Piggyback, which you can find as part of the currently running StoryBundle. Finally, we get a chance to geek out about classic comics and creators in a fun casual discussion. Please consider supporting and sharing Bob's work! Podcast Legacy Number 744 Bob Jeschonek https://www.kickstarter.com/projects/planetbob/legends-of-indie-comics-words-only?ref=project_link https://storybundle.com/monsters https://bobscribe.com We are on Threads! https://www.threads.net/@ragingbulletspodcast Sean is a cohost on “Is it Jaws?” Check it out here: https://twotruefreaks.com/podcast/qt-series/is-it-jaws-movie-reviews/ Upcoming: Absolute Power Finale, Star Wars 1-6 (2015) Future Topics: Batman Lonely Place of Dying, Miles Morales, Ultimate Marvel, Miracleman, The Boys, Radiant Black, the Bat-man First Knight, Marvel Super-Heroes Secret Wars, Emperor Joker and much much more because we are in constant planning. Contact Info (Social Media and Gaming) Updated 9/23: https://ragingbullets.com/about/ Facebook Group: https://www.facebook.com/groups/401332833597062/ Show Notes: 0:00 Show opening, http://www.heroinitiative.org, http://cbldf.org/, http://www.DCBService.com, http://www.Instocktrades.com, show voicemail line 1-440-388-4434 or drnorge on Skype, and more. 1:50 Bob Jeschonek interview 56:10 Closing We'll be back in a week with more content. Check our website, Twitter and our Facebook group for regular updates.

Assault Of The 2-Headed Space Mules!
Episode 100 - The Ken Reid Returns! Talking about the indie comics boom with comedian Ken Reid!

Oct 18, 2024 · 81:36


Host Douglas Arthur is joined by special guest Ken Reid for his 100th episode to talk about the indie comics boom of the 1980s, including a history that goes back to the underground comics of the '60s and '70s, as well as its influence into the '90s! Cerebus, Teenage Mutant Ninja Turtles, Concrete, Madman, Evan Dorkin, Stephen Bissette, Rick Veitch, Frank Miller and so much more are discussed in this free-wheeling stream-of-consciousness run through comics history!

An Audio Moment Of Cerebus
Please Hold For Dave Sim 10/2024

Oct 5, 2024 · 138:33


Cerebus Creator Dave Sim and A Moment of Cerebus Interim Editor "Manly" Matt Dow have been doing this for a few years now. If this is your first time, welcome. But we're reasonably certain that you're a returning customer, and know the score. Dave and Matt talk about: The Continuity of TMNT #8 and Spawn #10 Steve Peters and his Sparky: Cosmic Delinquent Kickstarter Who used the Lectratone, and when? Dave's favorite Cerebus covers What are the beliefs one has to hold to be a Marxist? A correction regarding last month's discussion of Chester Brown's Mary Wept Over the Feet of Jesus Why the earliest Cerebus Trade collections are "spineless" Does Dave do commissions? A discussion of Monotheism A discussion of Jim Valentino Dave's thoughts on Manga The amazing price a slabbed Cerebus #1 went for The two different covers of Swords of Cerebus #1 The 1982 Tour book It's the total package. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 9/2024

Sep 6, 2024 · 129:11


Manly Matt Dow and Dave Sim do this again: Dave remembers Cerebus Fan Jeff Seiler Dave's tips on world building in a comic strip “Dave Sim's 3 Minute Guide to Understanding Steve Ditko” We don't REALLY talk about Neil Gaiman and his ongoing headache MJ Sewall's questions about Albatross #1, Dave's first notebook used in the production of Cerebus. You can get your own replica of it here Dave explains how he was inspired to furnish his apartment (it's actually a neat story...) More about Paul Anka's "My Way" than you would expect to find in a podcast about Cerebus the Aardvark. (Thanks Wikipedia!) Chester Brown's MARY WEPT OVER THE FEET OF JESUS Friend to the Blog: James Banderas-Smith has a new Kickstarter. Other friend to the Blog: Steve Peters' new Kickstarter, with a new 8-page story by Dave, launches next week. Man, what a way to spend two hours and nine minutes. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 8/2024

Aug 2, 2024 · 182:33


For the EIGHTH time in 2024, Dave Sim and A Moment of Cerebus' Manly Matt Dow come together to answer questions from listeners and generally shoot the bull for 'bout three hours. This time: It's Matt's turn to remember Cerebus Fan Jeff Seiler The Upcoming TMNT/Naruto crossover and Cerebus' Naruto parody The difference between stealing, aping, copying, homage, influence, and spoof The cover to Cerebus Volume #8: WOMEN How much Aardvark-Vanaheim made of the recent Cerebus Humble Bundle The chances of Dave doing a collection of his early Fanzine interviews with Famous Comics Creators The differences between the eleventh and twelfth printings of Cerebus Volume #2: High Society Is Dave getting Canadian Pension and Old Age Security payments (the Canadian version of Social Security) Would Dave write and draw any more original Cerebus stories Dave and Matt discuss Neil Gaiman's recent troubles It's three hours of your life you weren't gonna spend productively anyway... --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Epic Tales From the Sewers
Epic Tales with Joshua Even the Cerebus TMNT crossover King!

Jul 23, 2024 · 84:19


In this episode Justin and Eric are speaking to Cerebus and TMNT collector and super fan Josh Even! After meeting Josh at Granite State 2023, we knew that he had some very interesting stories about TMNT and especially his artwork collection. Josh has well over 300 pieces of commissioned artwork that feature Cerebus the Aardvark and a Barbarian Turtle! Take a look at the Gallery here: Savage Cerebus & the TMNT. A pizza that's worthy of Savanti Romero himself: the Quattro Stagioni Pizza!

An Audio Moment Of Cerebus
Please Hold For Dave Sim 7/2024

Jul 6, 2024 · 50:56


Dave Sim and A Moment of Cerebus' Manly Matt Dow return for the Please Hold For Dave Sim 2024 Fourth of July Spectacular! In the shortest Please Hold they've ever recorded, Dave answers questions while Matt stays relatively quiet. Dave: Remembers Cerebus Fan Jeff Seiler Then a NON-Question from the STAR of Steve Peters Week, Steve Peters, whose latest Kickstarter has a nifty piece of Dave Sim Original Art that could be YOURS Will Eisner's The Octopus and the strange Rip Kirby connection How Dave writes Cerebus in Hell? How the Cerebus Humble Bundle came to be The Cerebus "The Hell it's yours, put it back" bookplate pricing and when it will be next available The Original script to Spawn #10 has been found, and is being returned to the Off-White House How Al Nickerson inks All that and a Musical Interlude too! --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Kirby's Kids
The Kids Talk Cerebus

Jun 12, 2024 · 56:23


Doc and Angus celebrate the independent spirit that is Cerebus and the movement it launched in comics! Cerebus, Volume 1 https://www.amazon.com/Cerebus-1-Dave-Sim/dp/0919359086/ Welcome to Estarcion, the wildly absurd and funny world of Cerebus the Aardvark. This initial volume collects the first two years of stories from Dave Sim's 300-issue magnum opus (still in progress after 20 years). Don't be discouraged by the initially crude artwork or the silliness of the stories. It gets better--even noticeably within this volume. This first installment is the most valuable in preparing for the larger stories ahead. When we first meet Cerebus--a small, gray, and chronically ill-tempered aardvark--he is making his living as a barbarian. In 1977, when the Cerebus comic book series began, Sim initially conceived of it as a parody of such popular series as Conan, Red Sonja, and Elric but quickly mined that material and transformed the scope of the series into much more. Even by the end of this volume, the Cerebus story begins to transform beyond "funny animal" humor into something much more complex and interesting. Leave a message at kirbyskidspodcast@gmail.com Join the Community Discussions ⁠⁠⁠⁠⁠⁠https://mewe.com/join/kirbyskids  ⁠⁠⁠⁠⁠⁠ Please join us down on the Comics Reading Trail in 2024 ⁠⁠⁠⁠⁠⁠https://www.kirbyskids.com/2023/11/holiday-special-kirbys-kids-giving.html⁠⁠⁠⁠⁠ For detailed show notes and past episodes please visit ⁠www.kirbyskids.com

An Audio Moment Of Cerebus
Please Hold For Dave Sim 6/2024

Jun 7, 2024 · 168:45


Dave Sim and A Moment of Cerebus' Manly Matt Dow return for the June edition of Please Hold For Dave Sim. This time out, Dave and Matt discuss: Cerebus fan Jeff Seiler Al Nickerson and his graphic novel Sword of Eden and its sequel, Sword of Eden Shinobi The change in size of Cerebus comics starting with issue #21 MJ/Mike/Dodger/Shirley(?) Sewall makes Dave take a trip down Memory Lane to the FIRST Cerebus double issue: 112/113 Whatever happened to the idea of a collected Following Cerebus? Dave's enjoyment of the Santos Sisters When did Cerebus in Hell? one-shots become "Cerebus in Hell? Presents"? It's the most wonderful way you can spend two hours and forty-eight minutes. With our PATENTED double-your-money-back guarantee: If not FULLY satisfied with your Please Hold For Dave Sim experience, return the unused portion for double the purchase price. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Comic Book Syndicate
Flea Market Fantasy #250 | Cerebus 57 (1983)

Jun 4, 2024 · 43:54


After a night of drunken mischief, Cerebus wakes up married.

SILENCE!
SILENCE! #315

May 19, 2024 · 116:15


IT'S LIKE A WHIRLPOOL AND IT NEVER ENDS. At last! It's time for The Big Pivot! Welcome to the SILENCE! Sportscast! All the Action! All the balls! More kicks and hits than you could reasonably kick or hit! Who's winning? YOU! Unfortunately the sports chat gets somewhat derailed by talk about gigs and the 90s before Gary Lactus and The Beast Must Die get sucked into The Reviewniverse where they find Joe Wilkinson: My Autobiography, Elf Quest, Cerebus, Viz (Gary's got a strip in it!) and Proustian comics in general. Following that, there's a lovely bit of SILENCE! (Because The Film's Started) in which The Beast Must Die has seen The Flash. All sport is finally forgotten as the hosts reckymend Comfort Blanket, Allan Quartermain and the Spear of Destiny, Blood & Flesh: The Reel Life & Ghastly Death of Al Adamson and Gary just won't stop going on about his Patreon. At this point in the blurb we usually say, "AND MORE!". @frasergeesin @thebeastmustdie silencepodcast@gmail.com You can support us using Patreon if you like. SILENCE! has not been sponsored by the greatest comics shops on the planet, DAVE'S COMICS of Brighton and GOSH! Comics of London for years but we still love them. Hosted on Acast. See acast.com/privacy for more information.

THE AWESOME COMICS PODCAST
Episode 463 - How Jeff Smith Started Self Publishing Comics!

May 13, 2024 · 108:42


This week the legendary self-published comics creator Jeff Smith (Bone, Rasl, Tuki) joins the ACP crew to talk about his journey into creating comics, his upcoming book releases, the importance of retailers and libraries and so much more! It's a wonderful chat and it will be impossible to not be excited about comics afterwards. Plus there's plenty of great indie comics chat, fun and regular awesome indie comics laughs you've come to expect from the ACP! Great stuff to check out this week: Jeff Smith, Bone, Thorn, Rasl, Tuki, Cartoon Books, CXC Expo 2024, Scholastic, Neil Gaiman, Dave Sim, Cerebus, Armored, Clover Press, InkBlot Festival, Nick Bryan, Schism, Mark Abnett, Kill the Bride, The Truck Mutts, Clear Run 2, 1900, Kaijus and Cowboys, Aliens: The Original Years

An Audio Moment Of Cerebus
Please Hold For Dave Sim 5/2024

May 3, 2024 · 119:44


In this, the fifth Please Hold For Dave Sim of 2024, Aardvark-Vanaheim President Dave Sim and A Moment of Cerebus' Interim Editor Manly Matt Dow answer/discuss: Dave remembers Cerebus Fan Jeff Seiler "Anything done for the first time unleashes a demon" Graft: Boon or Blessing? Dave's birthday present from MJ Sewall A visit from the Home Office in Easton, Pennsylvania A NEW "Swordfish" offer. Did the First Fifth prints solve Aardvark-Vanaheim's Cerebus the Animated Portfolio cashflow problems? Will Dave ever "do" a collected glamourpuss? While you wait, email Matt, and get a FREE digital glamourpuss collection Dave offers custom Cerebus Archive Portfolios: $10USD a page for any original art page in the Cerebus Archive (minimum order of ten pages at a time). ANY page: Cerebus the Aardvark, glamourpuss, Strange Death of Alex Raymond (including the pages Dave has drawn in the past year), Cerebus in Hell? (I would suggest sending in a list of pages you're interested in, and include alternates in case Dave sold the page you REALLY want years ago.) Dave's thoughts on Todd McFarlane's "breaking" Dave's 300-issue record. What Dave wants for the Cerebus Archive from Steve Peters' "Ghosts of Rabbit Hell" Kickstarter campaign. All this and the usual sidetracks, asides, and general tomfoolery you get from Matt & Dave. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

SILENCE!
Episode 318: SILENCE! #315

Apr 24, 2024 · 116:15


IT'S LIKE A WHIRLPOOL AND IT NEVER ENDS. At last! It's time for The Big Pivot! Welcome to the SILENCE! Sportscast! All the Action! All the balls! More kicks and hits than you could reasonably kick or hit! Who's winning? YOU! Unfortunately the sports chat gets somewhat derailed by talk about gigs and the 90s before Gary Lactus and The Beast Must Die get sucked into The Reviewniverse where they find Joe Wilkinson: My Autobiography, Elf Quest, Cerebus, Viz (Gary's got a strip in it!) and Proustian comics in general. Following that, there's a lovely bit of SILENCE! (Because The Film's Started) in which The Beast Must Die has seen The Flash. All sport is finally forgotten as the hosts reckymend Comfort Blanket, Allan Quartermain and the Spear of Destiny, Blood & Flesh: The Reel Life & Ghastly Death of Al Adamson and Gary just won't stop going on about his Patreon. At this point in the blurb we usually say, "AND MORE!". @frasergeesin @thebeastmustdie silencepodcast@gmail.com You can support us using Patreon if you like. SILENCE! has not been sponsored by the greatest comics shops on the planet, DAVE'S COMICS of Brighton and GOSH! Comics of London for years but we still love them.

An Audio Moment Of Cerebus
Please Hold For Dave Sim 4/2024

Apr 5, 2024 · 129:02


In this, the April 2024 Please Hold For Dave Sim, Dave and A Moment of Cerebus' (https://momentofcerebus.blogspot.com/) Manly Matt Dow discuss: March's cliffhanger Dave's Collected Letters Digital Cerebus in Hell? Dave's first notebook, Albatross #1, and how you can get a Facsimile of it Matt remembers Cerebus Fan Jeff Seiler Fantastic Four #252 and its eerie similarity to Cerebus #44-50 The Remastered The Last Day Idyl by Jeff Jones The return of the $900* tour of the Off-White House Dave's take on the most pivotal/crucial part of Cerebus A page from the "debris field" and how it may become a swordfish Dave's three-page Howard the Duck story that NEVER was (well, it is, but it's not finished) Dave's thoughts on Ed Piskor's suicide All that and Matt's usual bullshit too... *adjusted for 2024 inflation --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

The Comics Canon
Episode 207: Return of The Dark Knight Returns Part 1

Mar 27, 2024 · 77:42


Explicit (a few swears scattered throughout) and content warning (brief mentions of death by suicide and attempted sexual assault) This isn't a podcast—it's an operating table, and we're the surgeons! This time, we revisit one of our most controversial episodes (okay, pretty much our only “controversial” episode) with the first installment of a two-part, in-depth reexamination of Frank Miller's 1986 landmark miniseries Batman: The Dark Knight Returns! Does this paradigm-shifting, game-changing series hold up to scrutiny nearly 40 years since its publication? And what are its chances of finally gaining entry into that Home for the Emotionally Troubled known as ... The Comics Canon? In This Episode: · Howard Chaykin's American Flagg · Buy-ins you have to make when reading a Batman comic · What's the deal with the Mutant gang? · Curt gets something off his chest · Late Night With the Devil · Aard Labor, Tom Ewing's breakdown of Cerebus, on Freaky Trigger · Christine by Stephen King · The unique terror of calling girls back in the landline era Join us in two weeks as we conclude this two-part look back and once again render judgment on The Dark Knight Returns! Until then: Impress your friends with our Comics Canon merchandise! Rate us on Apple Podcasts! Send us an email! Hit us up on Facebook, Bluesky or The Platform Formerly Known as Twitter! And as always, thanks for listening!

An Audio Moment Of Cerebus
Please Hold For Dave Sim 2/2024

Feb 2, 2024 · 140:59


It's the SECOND Please Hold For Dave Sim of 2024! Dave Sim and Manly Matt Dow discuss/answer: Steve Peters' latest Kickstarter Steve Peters remembrance of Cerebus Fan Jeff Seiler (R.I.P) Dave's latest Kickstarter Why Dave originally planned Cerebus to run for EXACTLY 26 years Taylor Swift wait, #TaylorSwift Taylor! Swift! Michael R. (of Easton, PA)'s question, about digital Collected Letters volumes, and Dave has an offer for anybody who wants more... Michael R. (of Easton, PA)'s comment about Catherine Zeta-Jones and Gheeta. Strange Death of Alex Raymond and Jen DiGiacomo's GoFundMe efforts to raise cash to keep Dave working on it. Dave offers a "Swordfish" deal to get a reproduction of his first notebook for $100 (US). A message for Eddie Khanna, Dave's chosen successor as President of Aardvark-Vanaheim. Dave remembers the ONE time he met Jeff Jones. There's a bunch of Visual aspects, so you're gonna wanna swing by A Moment of Cerebus for the videos... --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Comics Who Love Comic Books
New Ultimate Spider-Man and More

Feb 2, 2024 · 58:48


Returning to the podcast is comedian Erik Marino! What is Cerebus about? How was it created? How was The Death of Captain Marvel published? What's wrong with the Fantastic Four movies? What did they do in the Fantastic Four comics that was maybe the first of its kind? Who might have leaked the Deadpool 3 footage? What is The Ballad of Beta Ray Bill? How do you say the name of Thor's hammer? What was Thor's secret identity back in the day? Why did kids torture Erik's brother in first grade? What is Big Numbers? What are the rumors about recasting Kang in the MCU? What happened in Crisis on Infinite Earths? What happened on the different Earths in DC comics? What is the new Ultimate Spider-Man about? What happens in the DC event Beast World? What comic book bummed Brett out? Reading list: Cerebus, Early Fantastic Four (free on Comixology Unlimited), The Death of Captain Marvel, The Ballad of Beta Ray Bill, Original Secret Wars, Hickman's Secret Wars, Wolverine mini series, Big Numbers by Alan Moore and Bill Sienkiewicz, Dark Phoenix Saga (free on Comixology Unlimited), Secret Empire (free on Kindle Unlimited), World War Hulk (free on Kindle Unlimited), Infinity Gauntlet, Hickman's Fantastic Four, Truth: Red, White & Black, Secret Invasion (free on Comixology Unlimited), Gotham Central (free on Comixology Unlimited), Teen Titans, New Teen Titans, JLA: World Without Grown-Ups, Crisis on Infinite Earths (free on Kindle Unlimited), new Hickman Ultimate Spider-Man, DCeased, Marvel Zombies, Ultimatum by Jeph Loeb (free on Kindle Unlimited), Batman: Hush, Batman: The Long Halloween. Watch list: Reservation Dogs, Captain America: The First Avenger, Thor: The Dark World, Guardians of the Galaxy, The Falcon and the Winter Soldier, Ms. Marvel, The Marvels, Daredevil, Jessica Jones, Luke Cage, WandaVision. Recorded live at Everyone Comics on 1-24-24

An Audio Moment Of Cerebus
Please Hold For Dave Sim 1/2024

Jan 5, 2024 · 108:46


Cerebus creator Dave Sim returns to once again talk to A Moment Of Cerebus Interim Editor, Manly Matt Dow for the FIRST (of we hope twelve) Please Hold For Dave Sims of 2024. What da fuq we talk about this time? Steve Peters has a new Kickstarter. So I got him to remember Cerebus fan Jeff Seiler for me. Dave has some tips for would-be creators. M-I-C-K-E-Y IN-THE-PUBLIC-DOMAIN! Steve asked, Dave answers. Lee Thacker should be involved in a Kickstarter soon, and Dave REALLY likes the art. (Possible Spoilers) Dodger asks about Stan Sakai. Dave asks how to pronounce "Usagi Yojimbo"? Dodger gets the REALLY. BIG. PRIZE! Zipper has an answer and a guess, he's not even close... Michael Grabowski has an answer. Fernando H. Ramirez found something Dave didn't know... And Matt talks a bunch about...not...Cerebus... Good times. Good times... --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Mid-Valley Mutations
It's A Very Cerebus Christmas!

Dec 13, 2023


It’s A Very Cerebus Christmas! Here are all six episodes of the short-lived “Cerebus” radio show, as it was originally heard on Shockwave Radio Theater, on KFAI, in the ’80s. Playlist & Live WFMU Chat. Enjoy! https://ia902809.us.archive.org/15/items/mutation-332/Mutation332.mp3

Kirby's Kids
Holiday Special - Kirby's Kids Giving Thanks & 2024 Graphic Novel Reading List Announcement

Nov 23, 2023 · 20:06


HAPPY THANKSGIVING!!! Angus gives thanks for all our community members, listeners, and fellow kids. He is also joined by Comics Cowboy Troy who performs a dramatic reading of the 2024 Kirby's Kids Graphic Novel Reading List selections. KIRBY'S KIDS GRAPHIC NOVEL READING LIST FOR 2024 January - VERTIGO MONTH - PREACHER Book I https://www.amazon.com/Preacher-Book-One-GARTH-ENNIS-ebook/dp/B00C2IHYPW/ February - Teenage Mutant Ninja Turtles: The Last Ronin https://www.amazon.com/Teenage-Mutant-Ninja-Turtles-Ronin-ebook/dp/B09XZK3Y98/ March - APPENDIX N MONTH - Elric Vol. 3: The White Wolf & Elric Vol. 4: The Dreaming City https://www.amazon.com/Elric-White-Wolf-Vol-1-ebook/dp/B07H5S9P1Q/ https://www.amazon.com/gp/product/B08WHHJMPT April - Cerebus, Volume 1 https://www.amazon.com/Cerebus-1-Dave-Sim/dp/0919359086/ May - IMAGE COMICS MONTH - Savage Dragon Archives Vol. 1 https://www.amazon.com/Savage-Dragon-Archives-Vol-1-ebook/dp/B01GGBBWZI/ June - Wonder Woman: War of the Gods https://www.amazon.com/Wonder-Woman-War-Gods-1987-2006-ebook/dp/B01BLZX1P8/ July - DARK HORSE COMICS MONTH - The Complete COLDER Omnibus https://www.amazon.com/Complete-Colder-Omnibus-Paul-Tobin-ebook/dp/B076B8BV91/ August - JACK ‘KING' KIRBY MONTH - Fantastic Four Fantastic Four Omnibus Vol. 1 https://www.amazon.com/Fantastic-Four-Omnibus-Vol-1961-1996-ebook/dp/B0B35DZ33F/ Fantastic Four Visionaries: John Byrne https://www.amazon.com/Fantastic-Four-Visionaries-Byrne-1961-1996-ebook/dp/B07C84T8ZC/ September - MANGA MONTH - Akira, Vol. 1 https://www.amazon.com/Akira-Vol-1-Katsuhiro-Otomo/dp/1935429000/ October - HALLOWEEN HORROR MONTH - 30 Days of Night Vol. 1 https://www.amazon.com/30-Days-Night-Steve-Niles-ebook/dp/B008O7T890/ November - ALAN MOORE MONTH - V for Vendetta https://www.amazon.com/V-Vendetta-Alan-Moore-ebook/dp/B08LW524ZD/ December - Mouse Guard Vol. 2: Winter 1152 https://www.amazon.com/Mouse-Guard-Vol-Winter-1152-ebook/dp/B011QY3QU4/ _____________ KUDOS KIRBY - Forever People https://www.amazon.com/Forever-People-Jack-Kirby-1971-1972-ebook/dp/B08L9R66QV/ For detailed show notes and past episodes please visit ⁠www.kirbyskids.com

An Audio Moment Of Cerebus
Please Hold For Dave Sim 11/2023

Nov 3, 2023 · 140:08


Holy crow! It's the FIFTH anniversary episode of Please Hold For Dave Sim! Five years of holding. Five years of two dudes talking about a cartoon aardvark. Over sixty hours of AMOC's Manly Matt Dow trying to make Dave Sim laugh. Five years with a lot of answers to a lot of questions. Like these: Matt remembers Jeff Seiler's car. Dave skips Lank Stephens because, oh god, the questions... VGDC Maroro asks about: Judith Bradford a sort of "list of exercises" for one to learn how to ink lines How does one figure out how far one can push the expression of a character until it becomes off-model How did you go about making the rock-climbing scene from Church & State so heart-poundingly thrilling that it took me all the way to 5 paragraphs of the brick wall that is Chasing YHWH in like 2 days If you had the ability to draw without any pain in any bit of your arms tomorrow, would you attempt a second magnum opus like Cerebus was?  What is your take on the ethics of artists using an AI trained nonconsensually on the work of other artists as a tool to help them? Does Glenn Vilppu's technique work for cartoon characters that were constructed two-dimensionally like Fred Flintstone rather than ones made of 3D forms like Mickey Mouse? Dodger wants to know about Dave's thoughts on Marvel & DC's B/W reprint lines. Steve has inquiries about Jeff Seiler's copy of The Cerebus Guide to Self-Publishing. Margaret has one of them notebook questions. Zipper wants to know if he has a rare misprint. Christon has questions about Cerebus #1. Philip Fry has questions about two unnamed guys from the first 13 issues. Mike Sewall wants to know how long SDOAR is gonna be. James Windsor-Smith has questions about Matt's favorite cartoon. Wayne Thomas has questions about Deni Sim's signature. Aaron Wood makes us hungry... And has questions about Superman. It's another two hours and twenty minutes added to the pile of content we provide you...for free. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 8/2023

Aug 4, 2023 · 137:20


In this, the eighth month of Two Thousand Twenty-Three (16th of Av, 5783 if you're Jewish...) Dave Sim and Manly Matt Dow discuss/answer: Walter Winchell *trigger warning: Cerebus in Hell? Promos* Dave remembers Cerebus Fan Jeff Seiler *trigger warning: Covid lockdowns* *trigger warning: Karaoke* Twitter/"X" *trigger warning: Twitter* *trigger warning: "X"* glamourpuss #1 editions *trigger warning: exclusivity* *trigger warning: Cerebus ephemera in the garbage* Dave selling the original art to page 9 of glamourpuss #25 *trigger warning: I can't afford it* *trigger warning: off-model cartoon beverage* Dave selling prints of page 9 of glamourpuss #25 *trigger warning: Abortion joke* *trigger warning: shameless hucksterism* Does Aardvark-Vanaheim have any Guys Party Packs? (Answer: Yes, 21 of them) *trigger warning: "toxic" masculinity* *trigger warning: Popeye jokes* Dave remembers the beginnings of the Small Press "movement" *trigger warning: the word "Movement" (Jeff Smith only) *trigger warning: multiple uses of the word "Bone"* *trigger warning: 90s nostalgia* Will Dave, Emo Dave, or Jack appear in future Strange Death of Alex Raymond pages *trigger warning: discussion of human mortality* It's a (to quote a Very Famous and Very old song) "gay ol' time! WILMA!!!!" *trigger warning: Honeymooners rip-off* --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Four Color Rolled Spine
Amazing Heroes Podcast: The Origins of “Independent” Comics by Charles Meyerson

Aug 2, 2023 · 46:11


Episode #7 00:00:12 Preamble 00:03:53 Part 1: Wally Wood and Witzend 00:10:09 Part 2: Mike Friedrich and Star*Reach 00:16:34 Part 3: Jack Katz and The First Kingdom 00:22:26 Part 4: Wendy and Richard Pini and Elfquest 00:29:03 Part 5: Dave Sim and Cerebus 00:35:56 Part 6: The future of independent comics 00:40:58 Postscript / Amazing Listeners Twitter Facebook tumblr ♞#дɱдŻİŊƓĤƐƦʘƐʂ♘ rolledspinepodcasts@gmail.com Wordpress The Origins of Independent Comics [1983] By Charlie Meyerson --- Send in a voice message: https://podcasters.spotify.com/pod/show/diabolu-frank/message

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI

Jul 26, 2023 · 54:31


FlashAttention was first published by Tri Dao in May 2022 and it had a deep impact in the large language model space. Most open models you've heard of (RedPajama, MPT, LLaMA, Falcon, etc.) all leverage it for faster inference. Tri came on the podcast to chat about FlashAttention, the newly released FlashAttention-2, the research process at Hazy Lab, and more. This is the first episode of our “Papers Explained” series, which will cover some of the foundational research in this space. Our Discord also hosts a weekly Paper Club, which you can sign up for here. How does FlashAttention work? The paper is titled “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. There are a couple of keywords to call out: * “Memory Efficient”: standard attention memory usage is quadratic with sequence length (i.e. O(N^2)). FlashAttention is sub-quadratic at O(N). * “Exact”: the opposite of “exact” in this case is “sparse”, as in “sparse networks” (see our episode with Jonathan Frankle for more). This means that you're not giving up any precision. * The “IO” in “IO-Awareness” stands for “Input/Output” and hints at a write/read related bottleneck. Before we dive in, look at this simple GPU architecture diagram: The GPU has access to three memory stores at runtime: * SRAM: this is on-chip memory co-located with the actual execution core. It's limited in size (~20MB on an A100 card) but extremely fast (19TB/s total bandwidth). * HBM: this is off-chip but on-card memory, meaning it's in the GPU but not co-located with the core itself. An A100 has 40GB of HBM, but only a 1.5TB/s bandwidth. * DRAM: this is your traditional CPU RAM. You can have TBs of this, but you can only get ~12.8GB/s bandwidth, which is way too slow. Now that you know what HBM is, look at how the standard Attention algorithm is implemented: As you can see, all 3 steps include a “write X to HBM” step and a “read from HBM” step. The core idea behind FlashAttention boils down to this: instead of storing each intermediate result, why don't we use kernel fusion and run every operation in a single kernel in order to avoid memory read/write overhead? (We also talked about kernel fusion in our episode with George Hotz and how PyTorch / tinygrad take different approaches here.) The result is much faster, but much harder to read: As you can see, FlashAttention is a very meaningful speed improvement on traditional Attention, and it's easy to understand why it's becoming the standard for most models. This should be enough of a primer before you dive into our episode!
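If you want to see the contrast in code, here is a minimal PyTorch sketch of our own (not the flash-attn library itself, and naive_attention is just a hypothetical helper name): the naive version spells out the three-step algorithm above, each step materializing an N x N intermediate that round-trips through HBM on a GPU, while torch.nn.functional.scaled_dot_product_attention is PyTorch 2.x's fused entry point, which can dispatch to a FlashAttention kernel on supported GPUs and falls back to a plain math implementation on CPU.

```python
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    """Textbook attention: each step materializes an O(N^2) intermediate
    that, on a GPU, would round-trip through HBM as described above."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # write S, read S back
    probs = torch.softmax(scores, dim=-1)                    # write P, read P back
    return probs @ v                                         # write O

# Toy shapes: (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(1, 8, 256, 64) for _ in range(3))

# One fused call: on a supported GPU with fp16/bf16 inputs this can use a
# FlashAttention kernel, so the N x N matrices never hit HBM.
out_fused = F.scaled_dot_product_attention(q, k, v)
out_naive = naive_attention(q, k, v)
print(torch.allclose(out_naive, out_fused, atol=1e-5))  # same values, different memory traffic
```

The numerical result is identical either way; the difference is entirely in how much reading and writing of large intermediates the hardware has to do.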
We talked about FlashAttention-2, how Hazy Research Group works, and some of the research being done in Transformer alternatives.Show Notes:* FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (arXiv)* FlashAttention-2* Together AI* From Deep Learning to Long Learning* The Hardware Lottery by Sara Hooker* Hazy Research* Is Attention All You Need?* Nvidia CUTLASS 3* SRAM scaling slows* Transformer alternatives:* S4* Hyena* Recurrent Neural Networks (RNNs)Timestamps:* Tri's background [00:00:00]* FlashAttention's deep dive [00:02:18]* How the Hazy Research group collaborates across theory, systems, and applications [00:17:21]* Evaluating models beyond raw performance [00:25:00]* FlashAttention-2 [00:27:00]* CUDA and The Hardware Lottery [00:30:00]* Researching in a fast-changing market [00:35:00]* Promising transformer alternatives like state space models and RNNs [00:37:30]* The spectrum of openness in AI models [00:43:00]* Practical impact of models like LLAMA2 despite restrictions [00:47:12]* Incentives for releasing open training datasets [00:49:43]* Lightning Round [00:53:22]Transcript:Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. Today we have no Swyx, because he's in Singapore, so it's a one-on-one discussion with Tri Dao. Welcome! [00:00:24]Tri: Hi everyone. I'm Tri Dao, excited to be here. [00:00:27]Alessio: Tri just completed his PhD at Stanford a month ago. You might not remember his name, but he's one of the main authors in the FlashAttention paper, which is one of the seminal work in the Transformers era. He's got a lot of interest from efficient transformer training and inference, long range sequence model, a lot of interesting stuff. And now you're going to be an assistant professor in CS at Princeton next year. [00:00:51]Tri: Yeah, that's right. [00:00:52]Alessio: Yeah. And in the meantime, just to get, you know, a low pressure thing, you're Chief Scientist at Together as well, which is the company behind RedPajama. [00:01:01]Tri: Yeah. So I just joined this week actually, and it's been really exciting. [00:01:04]Alessio: So what's something that is not on the internet that people should know about you? [00:01:09]Tri: Let's see. When I started college, I was going to be an economist, so I was fully on board. I was going to major in economics, but the first week I was at Stanford undergrad, I took a few math classes and I immediately decided that I was going to be a math major. And that kind of changed the course of my career. So now I'm doing math, computer science, AI research. [00:01:32]Alessio: I had a similar thing. I started with physics and then I took like a programming course and I was like, I got to do computer science. I don't want to do physics. So FlashAttention is definitely, everybody's using this. Everybody loves it. You just released FlashAttention 2 last week. [00:01:48]Tri: Yeah. Early this week on Monday. Yeah. [00:01:53]Alessio: You know, AI time. Things move fast. So maybe let's run through some of the FlashAttention highlights, some of the innovation there, and then we can dive into FlashAttention 2. So the core improvement in FlashAttention is that traditional attention is a quadratic sequence length. And to the two, FlashAttention is linear, which obviously helps with scaling some of these models. [00:02:18]Tri: There are two factors there. So of course the goal has been to make attention go faster or more memory efficient. 
And ever since attention became popular in 2017 with the Transformer paper, lots and lots of folks have been working on this. And a lot of approaches has been focusing on approximating attention. The goal is you want to scale to longer sequences. There are tons of applications where you want to do that. But scaling to longer sequences is difficult because attention scales quadratically in sequence length on both runtime and memory, as you mentioned. So instead of trying to approximate attention, we were trying to figure out, can we do the same computation and maybe be more memory efficient? So in the end, we ended up being the memory is linear in sequence length. In terms of computation, it's still quadratic, but we managed to make it much more hardware friendly. And as a result, we do get wall clock speed up on the order of 2 to 4x, which really helps because that just means that you'll be able to train with 2 to 4x longer sequence length for the same cost without doing any approximations. As a result, lots of folks have been using this. The thing is available in a lot of libraries that do language model training or fine tuning. [00:03:32]Alessio: And the approximation thing is important because this is an exact thing versus a sparse. So maybe explain a little bit the difference there. [00:03:40]Tri: For sure. So in addition, essentially you compute pairwise similarity between every single element in a sequence against each other. So there's been other approaches where instead of doing all that pairwise computation, you only compute similarity for some pairs of elements in the sequence. So you don't do quadratic number of comparison. And this can be seen as some form of sparsity. Essentially you're ignoring some of the elements. When you write down the matrix, you essentially say, OK, I'm going to pretend there's zero. So that has some benefits in terms of runtime and memory. But the trade-off is that it tends to do worse in terms of quality because you're essentially approximating or ignoring some elements. And I personally have worked on this as well for a few years. But when we talk to practitioners who actually train models, especially at large scale, they say, tend not to use these approximate attention methods. Because it turns out, this was surprising to me at the time, was that these approximation methods, even though they perform fewer computation, they tend to not be faster in walk-on time. So this was pretty surprising because back then, I think my background was more on the theoretical side. So I was thinking of, oh, how many flops or floating point operations are you performing? And hopefully that correlates well with walk-on time. But I realized that I was missing a bunch of ideas from the system side where flops or floating point operations don't necessarily correlate with runtime. There are other factors like memory reading and writing, parallelism, and so on. So I learned a ton from just talking to systems people because they kind of figured this stuff out a while ago. So that was really eye-opening. And then we ended up focusing a lot more on memory reading and writing because that turned out to be the majority of the time when you're doing attention is reading and writing memory. [00:05:34]Alessio: Yeah, the I.O. awareness is probably one of the biggest innovations here. And the idea behind it is, like you mentioned, the FLOPS growth of the cards have been going up, but the memory bandwidth, not as much. 
So I think maybe that was one of the assumptions that the original attention paper had. So talk a bit about how that came to be as an idea. It's one of those things that like in insight, it's like, obviously, why are we like rewriting to like HBM every time, you know, and like once you change it, it's clear. But what was that discovery process? [00:06:08]Tri: Yeah, in hindsight, a lot of the ideas have already been there in the literature. And I would say is it was somehow at the intersection of both machine learning and systems. And you kind of needed ideas from both sides. So on one hand, on the system side, so lots of systems folks have known that, oh, you know, kernel fusion is great. Kernel fusion just means that instead of performing, you know, loading the same element, instead of performing an operation, write it down, load it back up and perform the second operation, you just load it once, perform two operations and then write it down again. So that saves you kind of memory read and write in the middle there. So kernel fusion has been a classic. There's been other techniques from the system side, like tiling, where you perform things in the form of computations in block, again, so that you can load it into a really fast memory. Think of it as a cache. And this is, again, classical computer science ideas, right? You want to use the cache. So the system folks have been thinking about these ideas for a long time, and they apply to attention as well. But there were certain things in attention that made it difficult to do a complete kernel fusion. One of which is there is this softmax operation in the middle, which requires you to essentially sum across the row of the attention matrix. So it makes it difficult to kind of break it, because there's this dependency. So it makes it difficult to break things into a block. So on the system side, people have been thinking about these ideas, but it's been difficult to kind of do kernel fusion for the entire operation. On the machine learning side, people have been thinking more algorithmically. They say, okay, either we can approximate attention, or there's this trick called the online softmax trick, which says that because of softmax, the way it's written mathematically, you can actually break it up into smaller pieces, do some rescaling, and still get the right answer. So this online softmax trick has been around for a while. I think there was a paper from NVIDIA folks back in 2018 about this. And then there was a paper from Google. So Marcus, Rob, and Stats wrote a paper late 2021 on using this online softmax trick to break attention up into smaller pieces. So a lot of the ideas were already there. But it turns out, you kind of need to combine ideas from both sides. So you need to understand that, hey, we want to do kernel fusion to reduce memory written writes. But we also need this online softmax trick to be able to break the softmax into smaller pieces so that a lot of the systems tricks kind of carry through. We saw that, and it was kind of a natural idea that we ended up using ideas from both sides, and it ended up working pretty well. Yeah. [00:08:57]Alessio: Are there any downsides to kernel fusion? If I think about databases and the reasons why we have atomic operations, you know, it's like, you have observability and fallback in between them. How does that work with attention? Is there anything that we lose by fusing the operations? 
[00:09:13]Tri: Yeah, I think mostly on the practical side is that you lose a little bit of flexibility in the sense that, hey, now you have, for example, faster attention, it's just a subroutine that you would call to do attention. But as a researcher, let's say you don't want that exact thing, right? You don't want just attention, let's say you want some modification to attention. You want to do, hey, I'm going to multiply the query and key, but then I'm going to do this extra thing before I carry on. So kernel fusion just means that, okay, we have a subroutine that does the entire thing. But if you want to experiment with things, you won't be able to use that fused kernel. And the answer is, can we have a compiler that then automatically does a lot of this kernel fusion? Lots of compiler folks are thinking about this, either with a new language or you can embed it in PyTorch. PyTorch folks have been working on this as well. So if you write just your code in PyTorch and they can capture the graph, can they generate code that will fuse everything together? That's still ongoing, and it works for some cases. But for attention, because of this kind of softmax rewriting stuff, it's been a little bit more difficult. So maybe in a year or two, we'll have compilers that are able to do a lot of these optimizations for you. And you don't have to, for example, spend a couple months writing CUDA to get this stuff to work. Awesome. [00:10:41]Alessio: And just to make it clear for listeners, when we say we're not writing it to memory, we are storing it, but just in a faster memory. So instead of the HBM, we're putting it in the SRAM. Yeah. [00:10:53]Tri: Yeah. [00:10:54]Alessio: Maybe explain just a little bit the difference there. [00:10:56]Tri: Yeah, for sure. This is kind of a caricature of how you think about accelerators or GPUs in particular, is that they have a large pool of memory, usually called HBM, or high bandwidth memory. So this is what you think of as GPU memory. So if you're using A100 and you list the GPU memory, it's like 40 gigs or 80 gigs. So that's the HBM. And then when you perform any operation, you need to move data from the HBM to the compute unit. So the actual hardware unit that does the computation. And next to these compute units, there are on-chip memory or SRAM, which are much, much smaller than HBM, but much faster. So the analogy there is if you're familiar with, say, CPU and RAM and so on. So you have a large pool of RAM, and then you have the CPU performing the computation. But next to the CPU, you have L1 cache and L2 cache, which are much smaller than DRAM, but much faster. So you can think of SRAM as the small, fast cache that stays close to the compute unit. Physically, it's closer. There is some kind of asymmetry here. So HBM is much larger, and SRAM is much smaller, but much faster. One way of thinking about it is, how can we design algorithms that take advantage of this asymmetric memory hierarchy? And of course, lots of folks have been thinking about this. These ideas are pretty old. I think back in the 1980s, the primary concerns were sorting. How can we sort numbers as efficiently as possible? And the motivating example was banks were trying to sort their transactions, and that needs to happen overnight so that the next day they can be ready. And so the same idea applies, which is that they have slow memory, which was hard disk, and they have fast memory, which was DRAM. And people had to design sorting algorithms that take advantage of this asymmetry. 
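As a usage-level illustration of "attention as one fused subroutine": recent PyTorch releases expose a scaled_dot_product_attention function that can dispatch to FlashAttention-style fused kernels when the hardware and dtype allow, so you call one black-box routine instead of writing the matmul, softmax, matmul yourself. The CUDA device and fp16 dtype below are assumptions for the sketch, not details from the conversation.

```python
# Hedged sketch: calling attention as a single fused subroutine in PyTorch 2.x.
# You gain speed, but you can no longer poke at the intermediate N x N matrix.
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # one fused call
```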
And it turns out, these same ideas can apply today, which is different kinds of memory. [00:13:00]Alessio: In your paper, you have the pyramid of memory. Just to give people an idea, when he says smaller, it's like HBM is like 40 gig, and then SRAM is like 20 megabytes. So it's not a little smaller, it's much smaller. But the throughput on card is like 1.5 terabytes a second for HBM and like 19 terabytes a second for SRAM, which is a lot larger. How do you think that evolves? So TSMC said they hit the scaling limits for SRAM, they just cannot grow that much more. HBM keeps growing, HBM3 is going to be 2x faster than HBM2, I think the latest NVIDIA thing has HBM3. How do you think about the future of FlashAttention? Do you think HBM is going to get fast enough when maybe it's not as useful to use the SRAM? [00:13:49]Tri: That's right. I think it comes down to physics. When you design hardware, literally SRAM stays very close to compute units. And so you don't have that much area to essentially put the transistors. And you can't shrink these things too much. So just physics, in terms of area, you don't have that much area for the SRAM. HBM is off-chip, so there is some kind of bus that essentially transfers data from HBM to the compute unit. So you have more area to essentially put these memory units. And so yeah, I think in the future SRAM probably won't get that much larger, because you don't have that much area. HBM will get larger and faster. And so I think it becomes more important to design algorithms that take advantage of this memory asymmetry. It's the same thing in CPU, where the cache is really small, the DRAM is growing larger and larger. DRAM could get to, I don't know, two terabytes, six terabytes, or something, whereas the cache stays at, I don't know, 15 megabytes or something like that. I think maybe the algorithm design becomes more and more important. There's still ways to take advantage of this, I think. So in the future, I think flash attention right now is being used. I don't know if in the next couple of years, some new architecture will come in and whatnot, but attention seems to be still important. For the next couple of years, I still expect some of these ideas to be useful. Not necessarily the exact code that's out there, but I think these ideas have kind of stood the test of time. New ideas like IO awareness from back in the 1980s, ideas like kernel fusions, tiling. These are classical ideas that have stood the test of time. So I think in the future, these ideas will become more and more important as we scale models to be larger, as we have more kinds of devices, where performance and efficiency become much, much more important. [00:15:40]Alessio: Yeah, and we had Jonathan Frankle on the podcast, and if you go to issattentionallyouneed.com, he has an outstanding bet, and he does believe that attention will be the state of the art architecture still in a few years. Did you think flash attention would be this popular? I'm always curious on the research side, you publish a paper, and obviously you know it's great work, but sometimes it just kind of falls flat in the industry. Could you see everybody just starting to use this, or was that a surprise to you? [00:16:11]Tri: Certainly, I didn't anticipate the level of popularity. Of course, we were extremely happy to have people using this stuff and giving us feedback and so on, and help us improve things. 
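For a rough sense of the bandwidth figures quoted above, here is a back-of-the-envelope calculation of the time it takes just to move one head's N-by-N attention matrix through HBM. The numbers are illustrative, not spec-sheet values, and real kernels overlap compute with memory traffic.

```python
# Back-of-the-envelope HBM traffic for one head's (N x N) attention matrix in fp16.
N = 8192                  # sequence length
bytes_per_elem = 2        # fp16
hbm_bw = 1.5e12           # ~1.5 TB/s, the HBM figure mentioned above

matrix_bytes = N * N * bytes_per_elem   # one score matrix
traffic = 2 * matrix_bytes              # write it once, read it back once
print(f"{matrix_bytes / 1e6:.0f} MB per head, ~{traffic / hbm_bw * 1e3:.2f} ms of pure HBM traffic")
# ~134 MB and ~0.18 ms per head, per layer, just to move the scores -- before the
# softmax and the second matmul touch that matrix again. Keeping blocks in SRAM
# is what removes this round trip.
```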
I think when we were writing the paper, I remember sending an email to one of my advisors, and like, hey, I'm excited about this paper, but I think the most important thing will be the artifact, which is the code. So I knew that the code will be valuable. So we kind of focus a lot on the code and make sure that the code is usable and as fast as can be. Of course, the idea, the paper presents the ideas and explain it and have experiments that validate the idea, but I knew that the artifact or the code was also pretty important. And that turned out to be the right focus, which is, you know, we put out the paper, we release the code and continue working on the code. So it's a team effort with my co-authors as well. [00:17:07]Alessio: We mentioned Hazy Research a bunch of times on the podcast before. I would love for you to spend five minutes just talking about how does the group work? How do people get together? How do you bounce ideas off of each other? Yeah. [00:17:21]Tri: So Hazy Research is a research group at Stanford led by one of my advisors, Chris Re. I love the people there. It was one of the best experiences I had. They've made my PhD so much more enjoyable. And I think there are a couple of ways that the group has been working pretty well. So one is, I think there's a diverse pool of people who either, you know, some of them focus on algorithms and theory, some of them focus on building systems, some of them focus on applications. And as a result, there is this flow of idea. So as an example, some of us were working on like more algorithms and theory, and then we can talk to the folks building systems and say, hey, let's try it out and let's put it in the systems and see how it is. And there you will get feedback from systems folks. They will say, hey, we implemented this, or we tried this and this is where it doesn't work, something like that. And once we put it in the systems, the application folks can use the algorithm or new methods or new models. And we again get great feedback from them because the application folks, for example, some of my good friends, they focus on medical imaging or seizure detection. And that is the problem they care about. And if your method doesn't work on the task they care about, they will tell you. Whereas I think a lot of people in machine learning, they're a little bit more flexible. So they will be like, hey, it doesn't work on seizure detection. Let's try some other task, right? But having that direct feedback of like, hey, it doesn't work there, let's figure out why. I think that that feedback allows us to do better work. And I think that kind of process of exchanging ideas, validating it in a real system so that applications folks can try it out and give you feedback. That cycle has been very, very useful. And so that's one, having a diverse group of people. The other one is, and this is something I really appreciate from advice from Chris was try to understand the fundamental, right? And he's happy letting me go off and read some textbooks and playing with things because I think a lot of research ideas come from understanding the old literature and see how it fits with the new landscape. And so if you just new archive papers every day, that's great, but you also need to read textbooks. And that's one advice I got from Chris, which is understand the fundamentals. And I think that allows us to do more impactful work. [00:19:46]Alessio: How do you think about academia versus industry? 
I feel like AI / Machine Learning has been an area where up until three, four years ago, most of the cutting edge work was being done in academia. And now there's all these big industry research labs. You're obviously going to Princeton, so you're an academia believer. How should people think about where to go? Say I'm doing my master's, I have to decide between doing a PhD and going into OpenAI or Anthropic. How should I decide? [00:20:15]Tri: I think they kind of play a complementary role, in my opinion. Of course, I also was considering different paths as well. So I think right now, scaling matters a lot, especially when you talk about language models and AI and so on. Scaling matters a lot. And that means that you need compute resources and you need infrastructure and you need engineers' time. And so industry tends to have an advantage when it comes to scaling things. But a lot of the ideas actually came from academia. So let's take attention, which got popular with the Transformer in 2017. Attention actually has been around for a while. So I think the first mention was in 2014, a paper from Bahdanau and others with Yoshua Bengio, which came from academia. A lot of ideas did come from academia. And scaling things up, of course, I think OpenAI has been great at scaling things up. That was the bet that they made after, I think, GPT-2. So they saw that scaling these things up to, back then, 1.5 billion parameters seemed to give you amazing capabilities. So they really committed to that. They really committed to scaling things. And that turned out to be a pretty successful bet. I think for academia, we're still trying to figure out exactly what we're doing in this shifting landscape. And so lots of folks have been focusing on, for example, evaluation. So I know the Stanford Center for Research on Foundation Models, led by Percy Liang, has this benchmark called HELM, which is this holistic benchmark. So trying to figure out, okay, characterizing the landscape of different kinds of models, what people should evaluate, what people should measure, and things like that. So evaluation is one role. The other one is understanding. So this has happened historically where there's been some development in the industry and academia can play a role in explaining, understanding. They have the luxury to slow down trying to understand stuff, right? So lots of papers on understanding what's really going on, probing these models, and so on. I think I'm not as familiar with the NLP literature, but my impression is there's a lot of that going on in the NLP conferences, which is understanding what these models are doing, what capabilities they have, and so on. And the third one I could see is that academia can take more risky bets in the sense that we can work on stuff that is quite different from industry. I think industry, my impression is you have some objective. You're trying to say, hey, for this quarter, we want to scale the model in this particular way. Next quarter, we want the model to have these capabilities. You're trying to hit objectives that maybe, I don't know, 70% of the time will work out, because it's important for the company's direction. I think for academia, the way things work is you have many, many researchers or PhD students, and they're kind of pursuing independent directions. And they have a little bit more flexibility on, hey, I'm going to try out this seemingly crazy idea and see, let's say there's a 30% chance of success or something.
And however you define success, for academia, a lot of the time, success just means like, hey, we found something interesting. That could eventually go into industry through collaboration and so on. So I do see academia and industry kind of playing complementary roles. And as for someone choosing a career, I think just more and more generally, industry would be probably better in terms of compensation, in terms of probably work-life balance. But my biased perspective is that maybe academia gives you a little bit more freedom to think and understand things. So it probably comes down to personal choice. I ended up choosing to be a professor next year at Princeton. But of course, I want to maintain a relationship with industry folks. I think industry folks can provide very valuable feedback to what we're doing in academia so that we understand where the field is moving because some of the directions are very much influenced by what, for example, OpenAI or Google is doing. So we want to understand where the field is moving. What are some promising applications? And try to anticipate, okay, if the field is moving like this, these applications are going to be popular. What problems will be important in two, three years? And then we try to start thinking about those problems so that hopefully in two, three years, we have some of the answers to some of these problems. Sometimes it works out, sometimes it doesn't. But as long as we do interesting things in academia, that's the goal. [00:25:03]Alessio: And you mentioned the eval side. So we did a Benchmarks 101 episode. And one of the things we were seeing is sometimes the benchmarks really influence the model development. Because obviously, if you don't score well on the benchmarks, you're not going to get published and you're not going to get funded. How do you think about that? How do you think that's going to change now that a lot of the applications of these models, again, are in more narrow industry use cases? Do you think the goal of the academic eval system is to be very broad and then industry can do their own evals? Or what's the relationship there? [00:25:40]Tri: Yeah, so I think evaluation is important and often a little bit underrated. So it's not as flashy as, oh, we have a new model that can do such and such. But I think evaluation, what you don't measure, you can't make progress on, essentially. So I think industry folks, of course, they have specific use cases that their models need to do well on. And that's what they care about. Not just academia, but other groups as well. People do understand what are some of the emerging use cases. So for example, now one of the most popular use cases is chatbots. And then I think folks from Berkeley, or some of them are from Berkeley, the LMSYS group, set up this kind of Chatbot Arena to essentially benchmark different models. So people do understand what are some of the emerging use cases. People do contribute to evaluation and measurement. And as a whole, I think people try to contribute to the field and move the field forward, albeit in maybe slightly different directions. But we're making progress and definitely evaluation and measurement is one of the ways you make progress. So I think going forward, there's still going to be just more models, more evaluation. We'll just have better understanding of what these models are doing and what capabilities they have.
[00:26:56]Alessio: I like that your work has been focused on not making benchmarks better, but it's like, let's just make everything faster. So it's very horizontal. So FlashAttention 2, you just released that on Monday. I read in the blog post that a lot of the work was also related to some of the NVIDIA library updates. Yeah, maybe run us through some of those changes and some of the innovations there. Yeah, for sure. [00:27:19]Tri: So FlashAttention 2 is something I've been working on for the past couple of months. So the story is the NVIDIA CUTLASS team, they released a new version of their library, which contains all these primitives to allow you to do matrix multiply or memory loading on GPU efficiently. So it's a great library and I built on that. So they released their version 3 back in January and I got really excited and I wanted to play with that library. So as an excuse, I was just like, okay, I'm going to refactor my code and use this library. So that was kind of the start of the project. By the end, I just ended up working with the code a whole lot more and I realized that, hey, there are these inefficiencies still in Flash Attention. We could change this way or that way and make it, in the end, twice as fast. But of course, building on the library that the NVIDIA folks released. So that was kind of a really fun exercise. I was starting out, it's just an excuse for myself to play with the new library. What ended up was several months of improvement, improving Flash Attention, discovering new ideas. And in the end, we managed to make it 2x faster and now it's pretty close to probably the efficiency of things like matrix multiply, which is probably the most optimized subroutine on the planet. So we're really happy about it. The NVIDIA Cutlass team has been very supportive and hopefully in the future, we're going to collaborate more. [00:28:46]Alessio: And since it's an NVIDIA library, can you only run this on CUDA runtimes? Or could you use this and then run it on an AMD GPU? [00:28:56]Tri: Yeah, so it's an NVIDIA library. So right now, the code we release runs on NVIDIA GPUs, which is what most people are using to train models. Of course, there are emerging other hardware as well. So the AMD folks did implement a version of Flash Attention, I think last year as well, and that's also available. I think there's some implementation on CPU as well. For example, there's this library, ggml, where they implemented the same idea running on Mac and CPU. So I think that kind of broadly, the idea would apply. The current implementation ended up using NVIDIA's library or primitives, but I expect these ideas to be broadly applicable to different hardware. I think the main idea is you have asymmetry in memory hierarchy, which tends to be everywhere in a lot of accelerators. [00:29:46]Alessio: Yeah, it kind of reminds me of Sara Hooker's post, like the hardware lottery. There could be all these things that are much better, like architectures that are better, but they're not better on NVIDIA. So we're never going to know if they're actually improved. How does that play into some of the research that you all do too? [00:30:04]Tri: Yeah, so absolutely. Yeah, I think Sara Hooker, she wrote this piece on hardware lottery, and I think she captured really well of what a lot of people have been thinking about this. 
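As a usage footnote to the FlashAttention-2 release discussed above, the flash-attn package exposes a function along roughly these lines. This is a hedged sketch from memory: the module path, tensor layout, and keyword names may differ between versions, so check the project's own documentation before relying on them.

```python
# Hedged sketch of calling the released FlashAttention kernels directly.
# Assumes an NVIDIA GPU, half precision, and the flash-attn package installed.
import torch
from flash_attn import flash_attn_func  # pip install flash-attn (assumed import path)

# flash-attn expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on CUDA
q = torch.randn(2, 4096, 16, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
out = flash_attn_func(q, k, v, causal=True)  # same shape as q, no N x N matrix stored
```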
And I certainly think about hardware lottery quite a bit, given that I do some of the work that's kind of really low level at the level of, hey, we're optimizing for GPUs or NVIDIA GPUs and optimizing for attention itself. And at the same time, I also work on algorithms and methods and transformer alternatives. And we do see this effect in play, not just hardware lottery, but also kind of software framework lottery. You know, attention has been popular for six years now. And so many engineer hours have been spent on making it as easy and efficient as possible to run transformers, right? And there's libraries to do all kinds of tensor parallel, pipeline parallel, if you use transformers. Let's say someone else developed alternatives, or let's just take recurrent neural nets, like LSTM, GRU. If we want to do that and run that efficiently on current hardware with current software frameworks, that's quite a bit harder. So in some sense, there is this feedback loop where somehow the model architectures that take advantage of hardware become popular. And the hardware will also kind of evolve to optimize a little bit for that kind of architecture and software frameworks will also evolve to optimize for that particular architecture. Right now, transformer is the dominant architecture. So yeah, I'm not sure if there is a good way out of this. Of course, there's a lot of development. Things like, I think compilers will play a role because compilers allow you to maybe still be much more efficient across different kinds of hardware because essentially you write the same code and the compiler will be able to make it run efficiently on different kinds of hardware. So for example, there's this language Mojo, they're compiler experts, right? And their bet is AI models will be running on different kinds of devices. So let's make sure that we have really good compilers with a good language that then the compiler can do a good job optimizing for all kinds of devices. So that's maybe one way that you can get out of this cycle. But yeah, I'm not sure of a good way. In my own research, I have to think about both the algorithm or new model and how it maps to hardware. So there are crazy ideas that seem really good, but will be really, really difficult to run efficiently. And so as a result, for example, we can't really scale some of the architectures up simply because they're not hardware friendly. I have to think about both sides when I'm working on new models. [00:32:50]Alessio: Yeah. Have you spent any time looking at some of the new kind of like AI chips companies, so to speak, like the Cerebras of the world? Like one of their innovations is co-locating everything on the chip. So you remove some of this memory bandwidth issue. How do you think about that? [00:33:07]Tri: Yeah, I think that's an interesting bet. I think Tesla also has this Dojo supercomputer where they try to have essentially as fast on-chip memory as possible and removing some of this data transfer back and forth. I think that's a promising direction. The issues I could see, you know, I'm definitely not a hardware expert. One issue is the on-chip memory tends to be really expensive to manufacture, much more expensive per gigabyte compared to off-chip memory. So I talked to, you know, some of my friends at Cerebras and, you know, they have their own stack and compiler and so on, and they can make it work. The other kind of obstacle is, again, with compiler and software framework and so on.
For example, if you can run PyTorch on this stuff, lots of people will be using it. But supporting all the operations in PyTorch will take a long time to implement. Of course, people are working on this. So I think, yeah, we kind of need these different bets on the hardware side as well. Hardware has, my understanding is, has a kind of a longer time scale. So you need to design hardware, you need to manufacture it, you know, maybe on the order of three to five years or something like that. So people are taking different bets, but the AI landscape is changing so fast that it's hard to predict, okay, what kind of models will be dominant in, let's say, three or five years. Or thinking back five years ago, would we have known that Transformer would have been the dominant architecture? Maybe, maybe not, right? And so different people will make different bets on the hardware side. [00:34:39]Alessio: Does the pace of the industry and the research also influence the PhD research itself? For example, in your case, you're working on improving attention. It probably took you quite a while to write the paper and everything, but in the meantime, you could have had a new model architecture come out and then it's like nobody cares about attention anymore. How do people balance that? [00:35:02]Tri: Yeah, so I think it's tough. It's definitely tough for PhD students, for researchers. Given that the field is moving really, really fast, I think it comes down to understanding fundamental. Because that's essentially, for example, what the PhD allows you to do. It's been a couple of years understanding the fundamentals. So for example, when I started my PhD, I was working on understanding matrix vector multiply, which has been a concept that's been around for hundreds of years. We were trying to characterize what kind of matrices would have theoretically fast multiplication algorithm. That seems to have nothing to do with AI or anything. But I think that was a time when I developed mathematical maturity and research taste and research skill. The research topic at that point didn't have to be super trendy or anything, as long as I'm developing skills as a researcher, I'm making progress. And eventually, I've gotten quite a bit better in terms of research skills. And that allows, for example, PhD students later in their career to quickly develop solutions to whatever problems they're facing. So I think that's just the natural arc of how you're being trained as a researcher. For a lot of PhD students, I think given the pace is so fast, maybe it's harder to justify spending a lot of time on the fundamental. And it's tough. What is this kind of explore, exploit kind of dilemma? And I don't think there's a universal answer. So I personally spend some time doing this kind of exploration, reading random textbooks or lecture notes. And I spend some time keeping up with the latest architecture or methods and so on. I don't know if there's a right balance. It varies from person to person. But if you only spend 100% on one, either you only do exploration or only do exploitation, I think it probably won't work in the long term. It's probably going to have to be a mix and you have to just experiment and kind of be introspective and say, hey, I tried this kind of mixture of, I don't know, one exploration paper and one exploitation paper. How did that work out for me? Should I, you know, having conversation with, for example, my advisor about like, hey, did that work out? You know, should I shift? 
I focus more on one or the other. I think quickly adjusting and focusing on the process. I think that's probably the right way. I don't have like a specific recommendation that, hey, you focus, I don't know, 60% on lecture notes and 40% on arXiv papers or anything like that. [00:37:35]Alessio: Let's talk about some Transformer alternatives. You know, say Jonathan Frankle loses his bet and Transformer is not the state of the art architecture. What are some of the candidates to take over? [00:37:49]Tri: Yeah, so this bet is quite fun. So my understanding is this bet between Jonathan Frankle and Sasha Rush, right? I've talked to Sasha a bunch and I think he recently gave an excellent tutorial on Transformer alternatives as well. So I would recommend that. So just to quickly recap, I think there's been quite a bit of development more recently about Transformer alternatives. So architectures that are not Transformer, right? And the question is, can they do well on, for example, language modeling, which is kind of the application that a lot of people care about these days. So there are methods based on state space methods that came out in 2021 from Albert Gu, Karan Goel, and Chris Ré that presumably could do much better in terms of capturing long range information while not scaling quadratically. They scale sub-quadratically in terms of sequence length. So potentially you could have a much more efficient architecture when sequence length gets really long. The other ones have been focusing more on recurrent neural nets, which is, again, an old idea, but adapting to the new landscape. So things like RWKV, I've also personally worked in this space as well. So there's been some promising results. So there's been some results here and there that show that, hey, these alternatives, either RNN or state space methods, can match the performance of Transformer on language modeling. So that's really exciting. And we're starting to understand on the academic research side, we want to understand, do we really need attention? I think that's a valuable kind of intellectual thing to understand. And maybe we do, maybe we don't. If we want to know, we need to spend serious effort on trying the alternatives. And there's been folks pushing on this direction. I think RWKV has scaled up to, they have a model at 14 billion that seems pretty competitive with Transformer. So that's really exciting. That's kind of an intellectual thing. We want to figure out if attention is necessary. So that's one motivation. The other motivation is that Transformer alternatives could have an advantage in practice in some of the use cases. So one use case is really long sequences. The other is really high throughput of generation. So for really long sequences, when you train with Transformer, with flash attention and so on, the computation is still quadratic in the sequence length. So if your sequence length is on the order of, I don't know, 16K, 32K, 100K or something, which some of these models have sequence length 100K, then you do get significantly slower in terms of training, also in terms of inference. So maybe these alternative architectures could scale better in terms of sequence length. I haven't seen actual validation on this. Let's say an RNN model released with context length, I don't know, 100K or something. I haven't really seen that. But the hope could be that as we scale to long sequences, these alternative architectures could be more well-suited.
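The scaling contrast behind these recurrent and state-space alternatives can be seen in a toy form, shown below: a fixed-size state is folded forward one token at a time, so a full pass is linear in sequence length and there is no N-by-N pairwise interaction. This is a generic sketch, not RWKV or any particular state space model.

```python
# Toy fixed-size-state recurrence (generic; not RWKV or any specific SSM).
# Each step folds the new token into a constant-size state vector, so a pass
# over N tokens is O(N) in time and O(1) in state memory.
import torch

def recurrent_pass(x, A, B):
    # x: (N, d_in), A: (d_state, d_state), B: (d_state, d_in)
    state = torch.zeros(A.shape[0])
    outputs = []
    for t in range(x.shape[0]):
        state = torch.tanh(A @ state + B @ x[t])  # state size never grows with t
        outputs.append(state)
    return torch.stack(outputs)

x = torch.randn(1000, 32)
A, B = 0.1 * torch.randn(64, 64), 0.1 * torch.randn(64, 32)
ys = recurrent_pass(x, A, B)  # (1000, 64); memory independent of history length
```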
Not just text, but things like high resolution images, audio, video, and so on, which are emerging applications. So that's one, long sequences. Number two is high throughput generation, where I can imagine scenarios where the application isn't like an interactive chatbot, but let's say a company wants to batch as many requests as possible on their server, or they're doing offline processing, they're generating stuff based on their internal documents, that you need to process in batch. And the issue with Transformer is that during generation, it essentially needs to keep around all the previous history. It's called the KV cache. And that could take a significant amount of memory, so you can't really batch too much because you run out of memory. I am personally bullish on RNNs. I think RNNs, they essentially summarize the past into a state vector that has fixed size, so the size doesn't grow with the history. So that means that you don't need as much memory to keep around all the previous tokens. And as a result, I think you can scale to much higher batch sizes. And as a result, you can make much more efficient use of the GPUs or the accelerator, and you could have much higher generation throughput. Now, this, I don't think, has been validated at scale. So as a researcher, I'm bullish on this stuff because I think in the next couple of years, these are use cases where these alternatives could have an advantage. We'll just kind of have to wait and see to see if these things will happen. I am personally bullish on this stuff. At the same time, I also spend a bunch of time making attention as fast as possible. So maybe hedging and playing both sides. Ultimately, we want to understand, as researchers, we want to understand what works, why do the models have these capabilities? And one way is, let's push attention to be as efficient as possible. On the other hand, let's push other alternatives to be as efficient at scale, as big as possible, so that we can kind of compare them and understand. Yeah, awesome. [00:43:01]Alessio: And I think as long as all of this work happens in the open, it's a net positive for everybody to explore all the paths. Yeah, let's talk about open-source AI. Obviously, Together, when RedPajama came out, which was an open clone of the LLAMA1 pre-training dataset, it was a big thing in the industry. LLAMA2 came out on Tuesday, I forget. And this week, there's been a lot of things going on, which they call open-source, but it's not really open-source. Actually, we wrote a post about it that was on the front page of Hacker News before this podcast, so I was frantically responding. How do you think about what open-source AI really is? In my mind, in open-source software, we have different levels of open. So there's free software, that's like the GPL license. There's open-source, which is Apache, MIT. And then there's kind of restricted open-source, which is the SSPL and some of these other licenses. In AI, you have the open models. So RedPajama is an open model because you have the pre-training dataset, you have the training runs and everything. And then there's obviously randomness that doesn't make it one-to-one if you retrain it. Then you have the open-weights model that's kind of like StableLM, where the weights are open, but the dataset is not open. And then you have LLAMA2, where the dataset is not open and the weights are restricted. It's kind of like not really open-source, but open enough.
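To put a rough number on the KV-cache point above: during generation a decoder keeps a key and a value tensor per layer for every token seen so far, so the cache grows linearly with context length and batch size. The hyperparameters below are illustrative, roughly 7B-class, and not any specific model's exact figures.

```python
# Rough KV-cache size: 2 tensors (K and V) per layer, each (seq_len, n_heads, head_dim),
# kept for every sequence in the batch. Illustrative hyperparameters only.
layers, heads, head_dim = 32, 32, 128
seq_len, batch, bytes_per_elem = 4096, 32, 2   # fp16

kv_bytes = 2 * layers * seq_len * heads * head_dim * bytes_per_elem  # per sequence
total = kv_bytes * batch
print(f"{kv_bytes / 2**30:.2f} GiB per sequence, {total / 2**30:.0f} GiB for the batch")
# ~2 GiB per sequence and ~64 GiB for a batch of 32: the cache, not the weights, is
# what caps batch size, while an RNN-style fixed state sidesteps this growth entirely.
```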
I think it's net positive because it's like $3 million of flops donated to the public. [00:44:32]Tri: How do you think about that? [00:44:34]Alessio: And also, as you work at Together, what is your philosophy with open-source AI? Right, right. [00:44:40]Tri: Yeah, I think that's a great question. And I think about it on maybe more practical terms. So of course, Meta has done an amazing job training LLAMA1, LLAMA2. And for LLAMA2, they make it much less restrictive compared to LLAMA1. Now you can use it for businesses, unless you have, I think, more than 700 million monthly active users or something like that. I think just this change will have a very significant impact in the kind of landscape of open-source AI, where now lots of businesses, lots of companies will be using, I expect will be using things like LLAMA2. They will fine-tune on their own dataset. They will be serving variants or derivatives of LLAMA2. Whereas before, with LLAMA1, it was also a really good model, but businesses weren't allowed to do that. So I think on a more practical term, it's kind of shifting the balance between closed-source models like OpenAI and Anthropic and Google, where you're making API calls, right? And maybe you don't understand as much of what the model is doing, how the model is changing, and so on. Versus now, we have a model with open weights that is pretty competitive from what I've seen in terms of benchmarks, pretty competitive with GPT 3.5, right? And if you fine-tune it on your own data, maybe it's more well-suited for your own data. And I do see that's going to shift the balance of it. More and more folks are going to be using, let's say, derivatives of LLAMA2. More and more folks are going to fine-tune and serve their own model instead of calling an API. So that shifting of balance is important because in one way, we don't want just a concentration of decision-making power in the hands of a few companies. So I think that's a really positive development from Meta. Of course, training the model takes a couple of millions of dollars, but the engineers they have, I'm sure, spent tons of time trying many, many different things. So the actual cost is probably way more than that. And they make the weights available, and probably a lot of companies are going to be using this. So I think that's a really positive development. And we've also seen amazing progress from the open source community where they would take these models and they either fine-tune on different kinds of data sets or even make changes to the model. So as an example, I think for LLAMA1, the context length was limited to 2K. Like a bunch of folks figured out some really simple methods to scale up to like 8K. [00:47:12]Alessio: Like the RoPE. [00:47:13]Tri: Yes. I think the open source community is very creative, right? And lots of people. LLAMA2 will, again, kind of accelerate this where more people will try it out. More people will make tweaks to it and make a contribution and then so on. So overall, I think I see that as still a very positive development for the field. And there's been lots of libraries that will allow you to host or fine-tune these models, like even with quantization and so on. Just a couple of hours after LLAMA2 was released, tons of companies announced that, hey, it's on our API or hosting and so on, and Together did the same. So it's a very fast-paced development and just kind of a model with available weights that businesses are allowed to use. I think that alone is already a very positive development.
At the same time, yeah, we can do much better in terms of releasing data sets. Data sets tend to be... Somehow people are not incentivized to release data sets. So philosophically, yeah, you want to be as open as possible. But on a practical term, I think it's a little bit harder for companies to release data sets. Legal issues. The data sets released tend to be not as eye-catchy as the model release. So maybe people are less incentivized to do that. We've seen quite a few companies releasing data sets. Together released the RedPajama data set. I think Cerebras then worked on that and deduplicated and cleaned it up and released SlimPajama and so on. So we're also seeing positive development on that front, kind of on the pre-training data set. So I do expect that to continue. And then on the fine-tuning data set or instruction tuning data set, I think we now have quite a few open data sets on instruction tuning and fine-tuning. But these companies do pay for human labelers to annotate these instruction tuning data sets. And that is expensive. And maybe they will see that as their competitive advantage. And so it's harder to incentivize these companies to release these data sets. So I think on a practical term, we're still going to make a lot of progress on open source AI, on both the model development, on both model hosting, on pre-training data sets and fine-tuning data sets. Right now, maybe we don't have the perfect open source model where all the data sets are available. Maybe we don't have such a thing yet, but we've seen very fast development on the open source side. I think just maybe this time last year, there weren't as many models that are competitive with, let's say, ChatGPT. [00:49:43]Alessio: Yeah, I think the open data sets have so much more impact than open models. If you think about EleutherAI and the work that they've done, GPT-J was great, and the Pythia models are great, but the Pile and the Stack, everybody uses them. So hopefully we get more people to contribute time to work on data sets instead of doing the 100th open model that performs worse than all the other ones, but they want to say they released the model. [00:50:14]Tri: Yeah, maybe the question is, how do we figure out an incentive structure so that companies are willing to release open data sets? And for example, it could be like, I think some of the organizations are now doing this where they are asking volunteers to annotate and so on. And maybe the Wikipedia model of data sets, especially for instruction tuning, could be interesting where people actually volunteer their time and instead of editing Wikipedia, add annotation. And somehow they acknowledge and feel incentivized to do so. Hopefully we get to that kind of level of, in terms of data, it would be kind of like Wikipedia. And in terms of model development, it's kind of like Linux where people are contributing patches and improving the model in some way. I don't know exactly how that's going to happen, but based on history, I think there is a way to get there. [00:51:05]Alessio: Yeah, I think the Dolly-15K data set is a good example of a company saying, let's do this smaller thing, just make sure we make it open. We had Mike Conover from Databricks on the podcast, and he was like, people just bought into it and leadership was bought into it. You have companies out there with 200,000, 300,000 employees. It's like, just put some of them to label some data. It's going to be helpful. So I'm curious to see how that evolves. What made you decide to join Together?
[00:51:35]Tri: For Together, the focus has been a lot on open source models. And I think that aligns quite well with what I care about, of course. I also know a bunch of people there that I know and trust, and I'm excited to work with them. Philosophically, the way they've been really open with data set and model releases, I like that a lot. Personally, for the stuff, for example, the research that I've developed, like we also try to make code available, free to use and modify and so on, contributing to the community. That has given us really valuable feedback from the community and improved our work. So philosophically, I like the way Together has been focusing on open source models. And the nice thing is we're also going to be at the forefront of research, and the kind of research areas that I'm really excited about, things like efficient training and inference, align quite well with what the company is doing. We'll try our best to make things open and available to everyone. Yeah, but it's going to be fun being at the company, leading a team, doing research on the topic that I really care about, and hopefully we'll make things open to benefit the community. [00:52:45]Alessio: Awesome. Let's jump into the lightning round. Usually, I have two questions. So one is on acceleration, one on exploration, and then a takeaway. So the first one is, what's something that already happened in AI machine learning that you thought would take much longer than it has? [00:53:01]Tri: I think understanding jokes. I didn't expect that to happen, but it turns out, scaling models up and training on lots of data, the model can now understand jokes. Maybe it's a small thing, but that was amazing to me. [00:53:16]Alessio: What about the exploration side? What are some of the most interesting unsolved questions in the space? [00:53:22]Tri: I would say reasoning in the broad term. We don't really know how these models do it. Essentially, they do something that looks like reasoning. We don't know how they're doing it. We have some ideas. And in the future, I think we will need to design architectures that explicitly have some kind of reasoning module in them if we want to have much more capable models. [00:53:43]Alessio: What's one message you want everyone to remember today? [00:53:47]Tri: I would say try to understand both the algorithm and the systems that these algorithms run on. I think the intersection of machine learning and systems has been really exciting, and there's been a lot of amazing results at this intersection. And then when you scale models to large scale, both the machine learning side and the system side really matter. [00:54:06]Alessio: Awesome. Well, thank you so much for coming on, Tri. [00:54:09]Tri: This was great. Yeah, this has been really fun. [00:54:11] Get full access to Latent Space at www.latent.space/subscribe

An Audio Moment Of Cerebus
Please Hold For Dave Sim 7/2023

An Audio Moment Of Cerebus

Play Episode Listen Later Jul 7, 2023 122:28


The July twenty twenty-three Please Hold For Dave Sim has all the answers to all the Mike's questions. And even some Non-Mike questions. It's a hoot! Matt remembers Jeff Seiler, but not to print out his bit. Dave answers the last of Adam J. Elkhadem (R.I.P.)'s questions about making comics. Dave answers Mike's question about marketing your work...by reading one of Mike's stories. Dave answers Mike's question about Dave's drawing hand. Dave answers Mike's question about Jack Kamen's method of stylized realism in comics art. Dave answers Mike's question about...well, it's more a comment, but we discuss it... Dave answers NOT Mike's (oh thank god, I was beginning to think I was stuck in a Kafkaesque nightmare of Mikes...) question about Rodney Schroeter's book, Mission of Benevolent Greed. Dave answers another NOT Mike question, this time about where the Cerebus warehouse books ended up, and offers a new "Swordfish" deal to all the listeners who find themselves in Kitchener, Ontario, Canada. Dave answers Tony "Michael" Dunlop's question about Cerebus Archive Portfolios. What I say? A "hoot"! (If you find yourself "hoot-less", contact me for a full refund.) --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Comics for Fun and Profit
Episode 835: Episode 835 - Jason Interviews Andrew Pepoy - Simone & Ajax: What's Black & White and Read All Over?

Comics for Fun and Profit

Play Episode Listen Later Jun 26, 2023 67:37


Episode 835 - Jason Interviews Andrew Pepoy - Simone & Ajax: What's Black & White and Read All Over? https://www.pepoy.com Simone, a fun-loving, 20-something girl, and her best pal, Ajax, a small, green dinosaur, first appeared over 30 years ago and have since run rampant in many comics and been nominated for the Harvey Award for Special Award for Humor. They've been described by a reviewer as "...like the best issues of Cerebus, and a mood that harkens Bone," by a reader as "...the look of an Archie comic but the sensibilities of a Marx Brothers movie," and compared to Carl Barks' Uncle Scrooge comics that inspired Duck Tales. My true comics love is Simone & Ajax, and I previously Kickstarted The Adventures of Simone & Ajax: Lemmings and Tigers and Bears! Oh My! And I'm back, working like mad to give you more fun Simone & Ajax comics. Some of you noticed that the spine of that previous book of all-new comics had a 3 on it and asked: What about books 1 and 2? Well, you can get Book One in my new campaign for The Adventures of Simone & Ajax: What's Black and White and Read All Over? Buy it: https://www.kickstarter.com/projects/pepoy/simone-and-ajax-whats-black-and-white-and-read-all-over Patreon: https://www.patreon.com/comicsfunprofit Our Merch: https://comicsfunprofit.threadless.com Donations Keep Our Show Going, Please Give: https://bit.ly/36s7YeL Thank you so much for listening and spreading the word about our little comic book podcast. All the C4FaP links you could ever need in one place https://beacons.ai/comicsfunprofit Listen To the Episode Here: https://comcsforfunandprofit.podomatic.com/

An Audio Moment Of Cerebus
Please Hold For Dave Sim 6/2023

An Audio Moment Of Cerebus

Play Episode Listen Later Jun 2, 2023 136:19


It's June! It's 2023! It's Please Hold For Dave Sim! It's the Please Hold For Dave Sim of June 2023! This time Dave Sim and A Moment of Cerebus' Manly Matt Dow discuss/answer/talk about or around: "What is Jaka's inseam?" Dave remembers Jeff Seiler Dave answers the next question in the pile from Adam J. Elkhadem, creator of Octave the Artist. This time it's about dialogue. Dave answers Dodger's question about sales numbers (grab a calculator and add along!), and if you wanna see/hear more, we need to raise $45 to pay for it... Dave answers Jeff Stoltman's question about the The Last Day spine labels. And Matt starts watching for the mail, anxiously... Zipper has a question about the 2023 Strange Death of Alex Raymond GoFundMe, and Dave has an answer. Zipper has an answer about Lone Star Comics, and Matt whiffs the question. "anonymous" asks a question, and against policy, Matt sent it up, and Dave answered it. Dave answers a question about his "Lord Rodney" parody of Rodney Dangerfield in High Society. And then a “Things You May Not Know About the Beatles” gets Comics Art Metaphysical... and somewhere in there Manly talks about his upcoming "shakedown" of Cerebus fans... It's a good time. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 5/2023

An Audio Moment Of Cerebus

Play Episode Listen Later May 5, 2023 123:49


Dave Sim and Manly Matt Dow return, once again, for their monthly discussion we call Please Hold For Dave Sim This time Dave and Matt talk about: Star Wars Free Comic Book Day Why Matt lives with Jeff Seiler's wife Dave answers the next question from Adam J. Elkhadem, creator of Octave the Artist. Dave answers Dodger's question about Cerebus covers Dave answers Zipper's question about Mike Allred Dave answers Matt's question about a flash sale for the International buyers of the Remastered The Last Day (signed by Dave & Gerhard, with a Remarqued Cerebus head by Dave) Dave answers Leo's question about Small Press cameos in Guys Dave answers Charles' question about Cerebus circulation numbers With Matt's bullsh*t liberally sprinkled throughout. Because can it BE Please Hold For Dave Sim without Matt's Bullsh*t??? --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 4/2023

An Audio Moment Of Cerebus

Play Episode Listen Later Apr 7, 2023 163:09


"Dapper" Dave Sim and Manly Matt Dow return, once again, for their Monthly phone chat that Manly records and edits together and posts here. We call it Please Hold For Dave Sim. In the April 2023 Please Hold, Dave and Matt discuss: Can Matt refer to a Strong. Independent. Female. As a... "broad" (Manly sez: "Dude, she invites four underage dudes to move in with her within hours of first meeting them. Broad is one of the NICEST terms I could use...") Dave remembers Cerebus Fan, Jeff Seiler. A BIG "Thank You!" to the 608 people who gave $88,508 to fund The Waverly Press & Aardvark-Vanaheim's Remaster of Eastman & Laird's Teenage Mutant Ninja Turtles Number Eight Featuring Dave Sim's Cerebus the Aardvark. Thank you, again. Manly let's YOU, yes, YOU! know that if you missed out on The Waverly Press & Aardvark-Vanaheim's Remaster of Eastman & Laird's Teenage Mutant Ninja Turtles Number Eight Featuring Dave Sim's Cerebus the Aardvark, you gots the 2nd chance thanks to the The Waverly Press & Aardvark-Vanaheim's Remaster of Eastman & Laird's Teenage Mutant Ninja Turtles Number Eight Featuring Dave Sim's Cerebus the Aardvark Indiegogo campaign. An announcement that "Mistakes were made." regarding the Remastered The Last Day. And Manly christens the new version "The Last Void". (See, the spine reads "Form & Void", even though it's the Remastered The Last Day, as we said, "Mistakes were made." It's either "The Last Void", or "Form & Day"...) Dave announces the beginnings of a plan to sell copies of the Remastered The Last Day, Signed by Dave AND Gerhard, with a Last Day Cerebus head sketch Remarque in silver on the cover by Dave, on CerebusDownloads.com for the low low price of $XXXCAD (we're still working out the price. BUT, the first Day will have a discount for loyal listeners/readers. More details as Dave and the Cerebus Braintrust make them up...) We take a "moment" (Heh.) to remind everyone that James Windsor-Smith has a Kickstarter for Papa Balloon and Cactus #2 WITH a Dave Sim Variant Cover. And JWS had me send stuff to Dave, and we talk around that... Dave answers the second (in a series of questions) from Adam J. Elkhadem, creator of Octave the Artist. Then Dave discusses Frank Thorne at the request of Travis H. Mike Sewall wants to know about Remastered Diamondback decks. Dave and Matt got answers...and memories. Dodger wants to know about CerebusTV. Dave and Matt have answers...and memories. John Gilsmann's question about the number 7 in The Strange Death of Alex Raymond. And somewhere in there Dave answers why he and Mark Gruenwald signed a copy of The Amazing Spider-Man #59 to "Jim". Boy howdy, that's some good Holdin'... See ya next month when we do it all again... (If YOU want YOUR question asked, feel free to send whatever ya wanna know to momentofcerebus@gmail.com.) ((And be sure to check out A Moment of Cerebus daily for all sorts of Cerebus/Dave Sim/Gerhard/whathaveya updates every. dang. day. (Including Dave's Weekly Update EVERY Friday.) (Margaret Liss' selections from Dave Sim's Cerebus Production Notebooks EVERY Thursday.) (Benjamin Hobbs Cerebus & Hobbs EVERY Wednesday.) (Jen DiGiacomo's Strange Death of Alex Raymond 2023 GoFundMe Updates EVERY Tuesday.) (And Manly Matt Dow's Bull$#!* the rest of the week. Gah-daymn there's a lot of it...)) --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 3/2023

An Audio Moment Of Cerebus

Play Episode Listen Later Mar 3, 2023 166:47


Once again, Aardvark-Vanaheim President Dave Sim and A Moment of Cerebus Interim Editor Manly Matt Dow take three-ish hours out of their busy days to answer reader questions, drop some news, and generally "shoot the $#!*" in a lil' sumthin'-sumthin' we call Please Hold For Dave Sim. In this, the THIRD installment for 2023, Dave and Manly discuss: Matt remembers Cerebus fan, and Dave Sim Superfan, Jeff Seiler. Specifically that time Jeff nearly cancelled Judenhass and glamourpuss. Good times. Dave answers the first (in what will be a series of questions) from Adam J. Elkhadem, creator of Octave the Artist Then we head to the Big Board to see how the Remastered Teenage Mutant Ninja Turtles #8 Kickstarter is doing. As of this writing, it's at $35,775 with 250 backers. What music Matt used for the TMNT #8 countdown video he made. Then a deep dive into the can of worms Darrell Cook opened in the Cerebus Faceybookee group, wherein he talks about Jaka's molestation as a child, and his suspicions that Lord Julius was the culprit. This goes on for a while... MJ Sewall asks about the Cerebus Radio Show. Matt & Dave do a bit for Escape Pod Comics. Michael R., of the Easton, Pennsylvania Rs, asks about the "apposite biblical reference" of Jaka and the red wine at the end of Rick's Story. Michael Grabowski asks about the Cirinists adulation of Jaka in Going Home. And then three questions about the Remastered Teenage Mutant Ninja Turtles #8 Kickstarter all came in AT THE SAME TIME. What. A. Coincidence. And more! (Maybe. Probably. You got the time, listen and find out for yourself...) --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

An Audio Moment Of Cerebus
Please Hold For Dave Sim 2/2023

An Audio Moment Of Cerebus

Play Episode Listen Later Feb 4, 2023 157:01


Dave Sim and Manly Matt Dow return once again to answer submitted questions from listeners and generally just BS for two and a half hours. This time: Dave remembers Jeff Seiler with a prod from Matt and MJ Sewall Dave remembers Phil Seuling Dave remembers Bob Burden and the making of Cerebus #104 Dave remembers MJ Sewall's book Where Monsters Dance About Dave remembers who was on the Cerebus Freebies list circa 1996 Dave remembers being a "letter hack" in the 1960s Dave remembers how to write a cliffhanger Dave announces that a year from now, February 2024, you'll be able to buy a ALL NEW Cerebus comic that he's writing AND DRAWING! Dave learns there was a SF novel in 1953 titled Pirates of Cerebus Dave remembers the Cerebus: 6 Deadly Sins portfolio And Matt spoils his Spawn Cerebus in Hell parody Matt spoils his surprise to the AMOC bullpen Matt decides to send his cheap P. Craig Russell art book to Dave Matt generally says the quiet part out loud more than a person should...like normal. --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-dow/support

Rolled Spine Podcasts
The Under Guides Graphic Novel Podcast Vol VIII: Deathmatch

Rolled Spine Podcasts

Play Episode Listen Later Nov 16, 2022 50:49


Frank and José go deep into the vaults for more of our earliest unreleased recordings, dating back to 2015, with the 2012-2013 Boom! Studios maxi-series Deathmatch by Paul Jenkins & Carlos Magno. Then, Frank goes solo to talk Paul Grist's 2011 Image Comics series Mudman, Dark Horse's 2014 Tomb Raider reboot with Gail Simone & Nicolás Daniel Selma, and the late-life Vertigo title Coffin Hill by Caitlin Kittredge and Inaki Miranda. Given the amount of corporate-owned IP, clearly the format of the show was still being sorted out. Afterwards, Frank has managed to outlast the actual "Swords of Cerebus" reprint series by entering his second "telephone book" collection, High Society, starting with #26-29 (1981). Mac is present but mostly passive for this, another reason why we sat on the material for nearly eight years. Friend us on Facebook. Thumb through #UnderGuides. Roll over our tumblr. Email us at rolledspinepodcasts@gmail.com. Tweet us as a group @rolledspine, or individually as Diabolu Frank & Illegal Machine. Fixit don't tweet.

Indie Film Review
Episode 190: The Absurd, Surreal, Metaphysical and Fractured Destiny of Cerebus the Aardvark

Indie Film Review

Play Episode Listen Later Nov 15, 2022 27:55


Film: The Absurd, Surreal, Metaphysical and Fractured Destiny of Cerebus the Aardvark (2022) This week, Jared and Dan watched the world's first completely no-budget CGI film, made by over 200 volunteers worldwide. Check out this crazy aardvark adventure! Subscribe to us on Apple Podcasts and leave us a 5-star review! Twitter: @IndieFilmPod Instagram: @IndieFilmReviewPod email: theindiefilmreview@gmail.com

Party Advantage!
A Conspiracy of Ravens, Part 3: Toasty

Party Advantage!

Play Episode Listen Later Aug 31, 2022 53:18


Welcome back to part 3 of our Legends of Arias special! Lothlaris, Vakurr, Alavira, Nettle, Delmar, Irene, Cerebus, Voltanis and Bedivere continue their quest through Morte Keep in search of the Lich, Sarees. Half of the group discover a long hallway that appears to lead further into the citadel; however, they find things may get a little toasty... Find us on all our Social Media! https://linktr.ee/PartyAdvantage Don't forget to check out our Sponsors: Elderwood Academy and Awesome Dice! Check out 5E homebrew creator Nim ToastHaster on Twitter, Discord, DMs Guild, and Patreon! Additional music featured with the use of the Soncraft music platform. Audio & music by Cassie Derby. Audio editing by Kyle Voisine.

Bags & Boards Podcast
X10 High Value Sales 8.8.22

Bags & Boards Podcast

Play Episode Listen Later Aug 15, 2022 11:43


Some comics are on the verge of hitting the Top 10 list and others just won't make it. Which of the Key Collector runner-up books has the power? Tom and Gem are here to break it all down for you, so it's time to listen up! Black Panther key characters, Phantom Stranger Swamp Thing prototypes, Saga keys from Image Comics, Hercules keys in Avengers books, early Batman DC keys, counterfeit covers, Secret Wars comics from the 80s, Golden Age Crime SuspenStories, Vampirella, and can Cerebus the Aardvark crack the top 5?? ❤️ Mystery Mail Call (our comic book subscription service) https://www.comictom101.com/ (US ONLY) ❤️ Follow us on Whatnot!: https://www.whatnot.com/invite/comictom101 ❤️ Subscribe to our YouTube Channel: https://bit.ly/2PfSSSY

Bronze and Modern Gods
Our 2022 Comic Collecting Goals...so far! Plus, Secret Wars, Cerebus & more!

Bronze and Modern Gods

Play Episode Listen Later Aug 2, 2022 44:26


T-shirts & more are finally available!! http://tee.pub/lic/BAMG John & Richard set their 2022 comic book collecting goals on an episode back in January - so, how are they doing? Join them for a cool show & tell! Our Hot Book of the Week showcases Doctor Doom in the various Secret Wars, and we wrap it all up with our Underrated Books of the Week, which include the controversial Cerebus and another milestone Marvel Comics annual! Sign up for the NearMintNFT Whitelist here - https://nearmintnft.com/ Bronze and Modern Gods is the channel dedicated to the Bronze, Copper and Modern Ages of comics and comic book collecting! Follow us on Facebook - https://www.facebook.com/BronzeAndModernGods Follow us on Instagram - https://www.instagram.com/bronzeandmoderngods #comics #comicbooks #comiccollecting #nfts #comicnfts --- Support this podcast: https://anchor.fm/bronzeandmoderngods/support

Geek Shock
Geekshock #645 - Teenage Mutant 900 Pound Elvis

Geek Shock

Play Episode Listen Later Jul 2, 2022 97:51


On this episode we do a double secret giveaway for tier 2 (Quinoa Salad) members (with AMAZING theme music composed by Fact Check Andy), and discuss the James Bond movies, Jumanji: Welcome to the Jungle, Roku, the Mystic Aquarium in Connecticut, Rule of Wolves by Leigh Bardugo, Book 4 of Rebirth of the Fallen by JR Konkol, Everything Everywhere All at Once, Knights of the Dinner Table, Cerebus the Aardvark, Eastman & Laird's TMNT, buying stuff on Kindle, Obi-Wan Kenobi, the Elvis movie, the Geekshock crew's real-life Elvis stories, Smuggler's Cove by Martin Cate, haggling at the Meepleville Board Game Flea Market, why you don't bring Vlarg to a car dealership, the Merchants of the Dark Road board game, Cameron Diaz apparently being retired, our problematic IMDB, a Star Wars home theater that comes with a free mansion at Disney, The Electric State, the final season of See, Harley Quinn season 3, Top Gun: Maverick, James Rado, and whatever the hell you want us to doodle for you. It's time for a gloriously Torgo-less Geekshock!

Overcome with Justin Wren
#32 - Justin Melnick

Overcome with Justin Wren

Play Episode Listen Later May 10, 2022 78:04


Justin Melnick is a police officer and former combat photographer who served in Afghanistan, best known for playing Brock Reynolds on CBS' "SEAL Team". He is also the handler of Pepper, a Belgian Malinois who appears as the K9 dog on the series and joins us in studio today, and the real-life owner of the now-retired Dita, Pepper's sister, who appeared as Cerebus on the show. If you enjoyed this episode, check out Justin's interview with Tim Kennedy! Don't forget to leave a review of the show, wherever you're listening! Join the Fight for the Forgotten Fight Club: https://fightfortheforgotten.org/fight-club Thanks to Onnit for sponsoring this episode! Visit https://www.onnit.com/overcome or use code “Overcome” for 10% off! See omnystudio.com/listener for privacy information.

Cartoonist Kayfabe
Comparing Dave Sim's Cerebus Issue 1 and Issue 300 After 25 Years of Honing His Craft

Cartoonist Kayfabe

Play Episode Listen Later Mar 6, 2022 36:56


Ed's Links (Order RED ROOM!, Patreon, etc): https://linktr.ee/edpiskor
Jim's Links (Patreon, Store, social media): https://linktr.ee/jimrugg
-------------------------
E-NEWSLETTER: Keep up with all things Cartoonist Kayfabe through our newsletter! News, appearances, special offers, and more - sign up here for free: https://cartoonistkayfabe.substack.com/
---------------------
SNAIL MAIL! Cartoonist Kayfabe, PO Box 3071, Munhall, Pa 15120
---------------------
T-SHIRTS and MERCH: https://shop.spreadshirt.com/cartoonist-kayfabe
---------------------
Connect with us:
Instagram: https://www.instagram.com/cartoonist.kayfabe/
Twitter: https://twitter.com/CartoonKayfabe
Facebook: https://www.facebook.com/Cartoonist.Kayfabe
Ed's Contact info:
https://Patreon.com/edpiskor
https://www.instagram.com/ed_piskor
https://www.twitter.com/edpiskor
https://www.amazon.com/Ed-Piskor/e/B00LDURW7A/ref=dp_byline_cont_book_1
Jim's contact info:
https://www.patreon.com/jimrugg
https://www.jimrugg.com/shop
https://www.instagram.com/jimruggart
https://www.twitter.com/jimruggart
https://www.amazon.com/Jim-Rugg/e/B0034Q8PH2/ref=sr_tc_2_0?qid=1543440388&sr=1-2-ent