Or How to Lose a Ship in 10 Days. WE'RE BACK, lil onions! After a long hiatus, we're back with some adventure films for you in the month of March. We're kicking off with Pirates of the Caribbean: The Curse of the Black Pearl, and the swashbuckling story of Captain Jack Sparrow's journey to reclaim his infamous ship from a crew of undead pirates. Somehow, despite our best efforts to avoid Disney, we've accidentally hit the bullseye. Ross struggles with calling the Isla de Muerta by its name, Carie is gobsmacked by the strength of the CGI 23 years later, and the siblings reflect on how lucky they were to grow up when movies like this came out. SUPPORT US ON PATREON!
(ORIGINAL AIRDATE: February 26, 2021) **CONTENT WARNING: SALTY LANGUAGE** It's time to check the overhead compartments and prepare for turbulence, as Lucas and Will board the infamous CGI series Jay Jay The Jet Plane! The guys explore Jay Jay's origins on a forgotten kids' programming block and its surprising connection to Thomas The Tank Engine, Theodore Tugboat, and VeggieTales, as well as the 2 small things Will absolutely HATES about the show, its relation to the Kevin Costner film "Pushing Tin"(?), and exactly how the guys feel about those darn faces!
MONSTER PARTY MAKES A PARDO PROGNOSIS! JAMES GONIS, SHAWN SHERIDAN, LARRY STROTHE, and MATT WEINHOLD make a virtual housecall to check on the status of a past guest's geek standing. It's an episode that begs the question… ARE YOU A BIGGER NERD NOW?!!! Approximately ten years ago, the hosts of MONSTER PARTY made a very special visit to a friend and fellow podcaster's recording studio. The main reason was that this podcaster, who had been a guest on our show, was severely allergic to the cats that roamed the carpets of our own studio. But since this was still in the relatively early days of MONSTER PARTY, this situation also provided us a glimpse at the setup of an A-LIST podcast. The topic of the episode we recorded was, "WHY AREN'T YOU A BIGGER NERD?" and the guest was the seismically hilarious comedian and host of the groundbreaking/award-winning podcast NEVER NOT FUNNY, JIMMY PARDO! It was a fantastic experience—Mr. Pardo was nothing short of delightful—and we feel we made significant headway in his journey toward healthy nerdism. But after the episode was posted, we lost track of Jimmy's progress. That is, until now! In this singular life-affirming event, Jimmy Pardo returns to catch us up on how much his world of genre entertainment has opened up since he last appeared on our show. And what he has to say might surprise you. We really don't want to give away any spoilers, but if you struggle with middle-aged action figure impulse buys, prefer practical apes to CGI ones, and enjoy the happening sounds of Chicago, Journey, and REO Speedwagon, this episode is the audio intervention you've been clamoring for. WHAT ARE THE ODDS THAT JIMMY PARDO IS A BIGGER NERD NOW? WE'D GUESS IT'S 25 OR 6 TO 4.
In this unmissable conversation, Madá talks with Demi Getschko — engineer, board member of CGI.br, and one of the minds fundamental to the implementation of the internet in Brazil. We explore the trajectory from the first data packets sent in the '80s to today's dilemmas: social networks, artificial intelligence, and the preservation of net neutrality. If you want to understand not just how the technology works, but how it shapes our society, this conversation is essential. Support independent journalism. Subscribe to O Antagonista and Crusoé with 10% off via Pix or Google Pay: https://assine.oantagonista.com.br/ If you're looking for credible information, subscribe now so you don't miss any updates!
Welcome back to Lez Hang Out, the podcast that would lovingly hand-feed you a churro. This week, Leigh (@lshfoster) and Ellie (@elliebrigida) hang out and talk about why the 2024 box-office smash hit, Challengers, Should've Been Gay(er). Honestly, we went into this movie assuming it was going to be so much gayer than it actually was. All of the promos hyped the threesome and made the story sound super queer, which made us all the more disappointed when it just did not deliver. Luckily, we have plenty of ideas on how to fix Challengers so that it can be the gay movie we were advertised. For a film by the same director as Call Me By Your Name, the queerness is way too subtle. Sure, there's a threesome; but the film doubles down on both boys being into Zendaya's character, Tashi, rather than each other. And beyond that one intense make-out, Art and Patrick never get together. There's not so much a love triangle as there is a beard triangle, with each character equally in-the-closet. Between cringing at the poorly crafted CGI tennis balls and overly on-the-nose suggestive subtext, we couldn't really understand why this movie got so much gay buzz. We get that everyone is desperate for queer crumbs; but the characters are not even likeable, there's barely any on-screen queerness and they somehow made Zendaya unattractive! While the film does provide an interesting take on sex and power (and on paper looks to be the tennis-equivalent of Heated Rivalry), it falls short on delivering any resolution to the sexual tension between Art and Patrick and never touches on Tashi's sexuality at all. We know one thing for sure, Challengers Should've Been Gay(er). Join our Patreon to unlock 25+ full-length bonus episodes, ad-free weekly episodes, mp3 downloads of our original songs, exclusive Discord access, and more! You can also support the show by shopping small at bit.ly/lezmerch & picking up Lez-ssentials songs on Bandcamp. Give us your own answers to our Q & Gay on Instagram and follow along on Facebook, TikTok, YouTube and BlueSky @lezhangoutpod. Email us @lezhangoutpod@gmail.com. Connect with us individually: Ellie Brigida (@elliebrigida). Leigh Holmes Foster (@lshfoster). Learn more about your ad choices. Visit megaphone.fm/adchoices
Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week!

Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI Summer, and is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first-ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a datacenter-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. We also dive into Jensen's "SOL" (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full video pod on YouTube

Timestamps

00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons And Autonomy Dreams
01:10:26 Local GPUs And Scaling Inference
01:15:31 Long Running Agents And SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You should really only let an agent do two of those three things. If it can access your files and write custom code, you don't want to give it internet access, because that's where you get the full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing. Otherwise, it can get injected or something can happen. So that's a lot of what we've been thinking about: how do we enable this, because it's clearly the future, but also, what are the enforcement points we can put in place to protect people?

swyx: All right.
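Nader's two-of-three rule is easy to make concrete. Here is a minimal sketch in Python — our own illustration, not NVIDIA's or Brev's actual enforcement code, and every name in it is hypothetical — of a capability gate that refuses to grant files, internet, and code execution simultaneously:

```python
from enum import Flag, auto

class Capability(Flag):
    FILES = auto()      # read/write the local filesystem
    INTERNET = auto()   # make outbound network requests
    EXECUTE = auto()    # write and run arbitrary code

ALL_THREE = Capability.FILES | Capability.INTERNET | Capability.EXECUTE

def grant(requested: Capability) -> Capability:
    """Allow at most two of the three capabilities.

    Holding all three is the dangerous combination: untrusted web content
    can prompt-inject the agent, which can then read local files and
    execute code to exfiltrate them.
    """
    if requested == ALL_THREE:
        raise PermissionError(
            "files + internet + execute together enable injection and "
            "exfiltration; drop one capability"
        )
    return requested

# Example: a coding agent gets files + execution, but no network access.
sandbox_caps = grant(Capability.FILES | Capability.EXECUTE)
print(sandbox_caps)
```

Real enforcement points live in the sandbox (network policy, filesystem mounts, syscall filters), but the policy decision itself is exactly this small.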
Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast. Welcome to all the guests here. We are back with our guest host Vibhu — welcome, good to have you back — and our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Yeah, thank you. Actually, I don't even know your titles. I know you're like architect-something of Dynamo.

Kyle: Yeah, I'm one of the engineering leaders [00:01:00] and an architect of Dynamo.

swyx: And you're director of something — developers, developer tech?

Nader: Yeah.

swyx: You're the developers, developers, developers guy at NVIDIA.

Nader: Open source, agent marketing, Brev—

swyx: And like—

Nader: DevRel tools and stuff.

swyx: Yeah.

Nader: That's been the focus.

swyx: And we're kind of recording this ahead of NVIDIA GTC, which is coming to town again — or taking over town — which we'll all be at. And we'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories of Nader: you always do marketing stunts, and while you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah. Our logo was a shaka. We were always just trying to keep true to who we were. With some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like—

swyx: Previous guest. Yeah.

Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in a room. Why are you [00:02:00] pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC, and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth—

Nader: Oh, that's so funny.

Kyle: —and no one else's, just from very far away.

Nader: Oh, so you remember it back then?

Kyle: Yeah, I remember it pre-acquisition. I was like, oh, those guys look cool.

Nader: That makes sense. 'Cause we signed up really last minute, and so we had the last booth, all the way in the corner. And so I was worried that no one was gonna come. So that's why we had the palm trees, and we really came in with the surfboards. We even had one of our investors bring her dog, and then she was just walking the dog around to try to bring energy towards our booth.

swyx: Steph.

Kyle: Yeah, she's the best.

swyx: You know, as a conference organizer, I love that. Everyone who sponsors a conference comes, does their booth, and they're like, "we are changing the future of AI" or some generic b******t. No — actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you wanna add it [00:03:00] in, but my wife — at the time my fiancée — was in medical school, and she came to help us, 'cause it was a big moment for us. And so we bought this Cricut — it's like a vinyl printer — 'cause how else are we gonna label the surfboard? So we got a surfboard — luckily I was able to purchase that on the company card — we got a Cricut, and it was just like "fine-tuning for enterprises" or something like that that we put on the surfboard. And it's 1:00 AM the day before we go to GTC, and she's helping me put these vinyl stickers on. And she goes, "You son of—" She's like, "If you pull this off, you son of a b***h."
Pretty much right after the acquisition, I stitched that clip together with some music and sent it to our family group chat.

swyx: Oh yeah. Well, she made a good choice there. Was that basically the origin story for Launchables? And maybe we should explain what Brev is.

Nader: Yeah. I mean, Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources, so the basics of it is: how quickly can we SSH you into a GPU? Whenever we would talk to users, they wanted a GPU — they wanted an A100. And if you go to any cloud [00:04:00] provisioning page, usually it's three pages of forms, or somewhere in the forms there's a dropdown, and in the dropdown there's some weird code that you have to know how to translate into "A100." And I remember just thinking: every time someone says they want an A100, the piece of text that says what they want is stuffed away in a corner. So we were like, what if the biggest piece of text was the thing the user's asking for? And so when you go to Brev, it's just big GPU chips with the type that you want.

swyx: With beautiful animations that you worked on. Now you can just prompt it, but back in the day, those were handcrafted — artisanal code.

Nader: Yeah, I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma into React. So what it actually is, is just an SVG. I have all the styles, and when you change the chip — whether it's active or not — it changes the SVG code, and that renders in a way that looks like it's animating. We just made the transition slow; it's a JavaScript function that changes the underlying SVG. That was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal. [00:05:00]

Kyle: Speaking of marketing stunts, though, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: Like a GPU gift card that he handed out everywhere. That was actually my first impression of that one.

swyx: I think I still have one of them.

Nader: They look great. I have a ton of them still, actually, in our garage — they just don't have labels. We should honestly bring them back. But I found this old printing press here, actually, just around the corner on Van Ness. It's a third-generation San Francisco shop. So I come in, an excited startup founder, and they have this crazy old machinery, and I'm in awe, 'cause the whole building is so physical. You're seeing these machines — they have pedals to move the saws and whatever; I don't know what the machinery is — but I saw all three generations. There's the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because I just took the same SVG and we printed it. It's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it [00:06:00] and press it into the paper. And I remember, once we got them, he was like, "Hey, don't forget about us."
Nader: You know, I guess early Apple's and Cisco's first business cards were all made there. And he was like, yeah, we get the startup businesses, but then as they mature, they kind of go somewhere else. I think we were talking with marketing about using them for something — we should go back and make some cards.

swyx: Yeah. You know, I remember, as a very, very small Brev investor, I was like, why are we spending time doing these stunts for GPUs? As a typical cloud hardware person, you go into AWS, you pick some instance type from a dropdown, whatever, and you look at the specs. Why animate this GPU? And I do think it just shows the level of care that goes throughout Brev.

Nader: And NVIDIA. That's the thing that struck me most when we first came in: the amount of passion that everyone has. You talk to Kyle, you talk to — every VP that I've met at NVIDIA goes so close to the metal. I remember, almost a year ago, my VP asked me, "Hey, [00:07:00] what's Cursor? Are you using it? And if so, why?" I was surprised at this, and he downloaded Cursor and was asking me to help him use it — or just show him why we were using it. So, the amount of care that everyone has, and the passion and appreciation for the moment. This is a very unique time, so it's really cool to see everyone appreciate that.

swyx: Yeah.

Acquisition and DevEx Shift

swyx: One thing I wanted to do before we move over to research topics and the stuff that Kyle's working on is just tell the story of the acquisition. Not many people have been through an acquisition with NVIDIA. What's it like?

Nader: It's a crazy experience. The thing that was the most exciting for us: our goal was just to make it easier for developers. We wanted to make it easier to find access to GPUs. And — oh, actually, your question about Launchables: a Launchable is just a one-click deploy for any software on top of the GPU. What we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice [00:08:00] synergy there. When it comes to an acquisition, I think the degree to which the souls of the products align is going to speak to the success of the acquisition. And so in many ways, it feels like we're home. This is a really great outcome for us. I love brev.nvidia.com — you should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs, you go there.

swyx: And internally it's growing very quickly. I don't remember — you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally, it's been growing really quickly.
We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on the GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you're doing things and you just want a sandbox or something to run on — like OpenClaw: huge moment, super exciting, and we'll get into it more, but internally, people wanna run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was: hey, [00:09:00] run this on Brev — it's a VM, it's sitting in the cloud, it's off the corporate network, it's isolated. And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were also almost the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it — UX, or, I don't know, software. Obviously NVIDIA has always invested in software, but this is a different audience.

Nader: It's a wider—

Kyle: Developer base.

swyx: Yeah. Right. So what is it called internally? What is this thing that people should be aware is going on there?

Nader: What, like developer experience?

swyx: Yeah. Is it just called developer experience, or is there a broader strategy here at NVIDIA?

Nader: NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The thing that's been really growing — AI is having a huge moment, not [00:10:00] because, say, data scientists who were quiet in 2018 are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she can do with it. My sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society — everyone's learning how to code; there isn't really an excuse anymore. And building a good UX means you really understand who your end user is. And when your end user becomes such a wide variety of people, you have to almost reinvent the practice, right?

Kyle: You have to, and actually build more developer UX, right? Because there are tiers of the developer base that were just added. The hackers building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something.

Nader: Yeah.

Kyle: You need new UX that is not just "hey, how do you program something in CUDA and run it?" When deep learning was getting big, we built Torch, and— but recently, the amount of [00:11:00] layers added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction. Vertical, horizontal.
Vibhu: Yeah. You even take it down to hardware, like the DGX Spark. It's basically the same system as just throwing it up on a big GPU cluster.

Nader: Yeah, it's amazing. Blackwell.

swyx: Yeah. We saw the preview at last year's GTC, and that was one of the better-performing videos so far — NVIDIA coverage so far. This will beat it.

Nader: Fingers crossed.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell — when DGX Spark was first coming out — getting to be involved in that from the beginning of the developer experience... and it just comes back to what you—

swyx: You were involved.

Nader: Yeah. I mean, I got an email; we just got thrown into the loop. It was actually really funny, 'cause I'm still pretty fresh from the acquisition and I'm getting an email from a bunch of the engineering VPs about the new hardware — GPU chip... or not chip, but GPU system — that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. What am I gonna do [00:12:00] here? I remember the first meeting, I was just kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And one of the first ideas — I think the quote was, "The first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them." And I was like, oh, I think I know why I'm here. I said: the first thing we're doing is easy SSH into the machine. And just kind of scoping it down — once you can do that, everything else follows. The person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now, right? If you make sure that that's as effortless as possible, the rest becomes easy. So there's a tool called NVIDIA Sync; it just makes the SSH connection really simple. If you think about it: if you have a Mac or a PC or whatever — if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud, right? But there's all this friction around how you actually get into it. That's part of [00:13:00] Brev's value proposition: there's a CLI that wraps SSH and makes it simple. So our goal is just to get you into that machine really easily. And one thing we just launched at CES — it's still in early access, we're ironing out some kinks, but it should be ready by GTC — is that you can register your Spark on Brev.

swyx: So, like, remote-managed local hardware. Single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah. And you can use the Spark on Brev as well, right?

Nader: Yeah, exactly. You set it up at home, you run the command on it, and essentially it'll appear in your Brev account. And then you can take your laptop to a Starbucks or a cafe, and you can continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.
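To make "a CLI that wraps SSH" concrete: the simplest possible version of registering a box is writing an SSH `Host` entry so the machine gets a memorable alias. This is our own minimal sketch of the idea, not how NVIDIA Sync or Brev actually implement it — real products also handle NAT traversal, tunneling, and auth — and all names here are hypothetical:

```python
from pathlib import Path

SSH_CONFIG = Path.home() / ".ssh" / "config"

def register_node(alias: str, host: str, user: str = "ubuntu",
                  key: str = "~/.ssh/id_ed25519") -> None:
    """Append a Host entry so `ssh <alias>` just works.

    A real CLI would dedupe existing entries and verify reachability;
    this only shows the core convenience being described.
    """
    entry = (
        f"\nHost {alias}\n"
        f"    HostName {host}\n"
        f"    User {user}\n"
        f"    IdentityFile {key}\n"
        f"    ServerAliveInterval 30\n"
    )
    SSH_CONFIG.parent.mkdir(mode=0o700, exist_ok=True)
    with SSH_CONFIG.open("a") as f:
        f.write(entry)

# After this, `ssh spark` reaches the box from anywhere, assuming the
# Spark is reachable from your network (e.g. via a tunnel or VPN).
register_node("spark", "spark.local")
```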
Vibhu: Tiny little data center.

Nader: Tiny little — the size of your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. You have so many Jensen stories, and I just love mining Jensen stories. My favorite so far is SOL. What is S-O-L?

Nader: SOL is actually — of all the lessons I've learned, that one's definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. You know, in a startup, everything's existential, right? We've run out of money; we were at risk of missing payroll; we've had to contract our team because we ran out of money. And because of that, you're always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a "no" just because. And as you start to introduce more layers, as you start to become a much larger organization — SOL is essentially: what is the physics? The speed of light moves at a certain speed, so if light's moving slower than that, something's in the way. Before trying to layer reality back in — why can't this be delivered by some date? — let's just understand the physics. What is the theoretical limit on how fast this can go? Then start telling me why we're slower. 'Cause otherwise, people will start telling you why something can't be done. But I actually think any great leader's goal is just to create urgency. [00:15:00]

Kyle: Create compelling events, right? SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done — how do we get there? What is the minimum — as much as necessary, as little as possible — thing that it takes for us to get exactly here? It helps you break through a bunch of noise instantly.

swyx: One thing I'm unclear about is: can only Jensen play the SOL card? Obviously Jensen can get the b******t out, but can someone else?

Kyle: Oh, no — frontline engineers use it.

Nader: Yeah. I think it's not so much about "get the b******t out"; it's "give me the root understanding," right? If you tell me something takes three weeks — well, what are the first principles? Why is it three weeks? What's the actual limit on why this takes three weeks? Say you wanted to buy a new computer and someone told you it's gonna be here in five days. What's the SOL? Well, the SOL is: I could walk into a Best Buy and pick it up for you, right? Then anything beyond that — and is that practical? Is that how we're gonna, say, give everyone in the [00:16:00] company a laptop? Obviously not. So that's the SOL, and then it's like, okay, if we have to get more than ten, suddenly there might be some constraints. And now we can piece reality back in.

swyx: So this is the Paul Graham "do things that don't scale." And this is also what people would now call high agency.
Kyle: It's actually really interesting, because there's a second, hardware angle to SOL that doesn't come up for the rest of the org. SOL is used culturally at NVIDIA for everything.

swyx: I'm also mining for — I think that can be annoying sometimes. Someone keeps pulling SOL on you, and you're like, guys, we have to be stable. We have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah. I encounter that actually just with Alec, right? We have a new conference coming, so we need to launch; we have goals for what we wanna launch by the conference.

swyx: Where is this — GTC?

Nader: Well, we did it for CES, we did it for GTC DC before that, and we're doing it for GTC San Jose. Every time we have a new moment, we want to launch something — and we want to do so at SOL. And that does mean there's some level of prioritization that needs [00:17:00] to happen. So it is difficult. You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just "build everything and let it break" — that's part of the conversation. As you're layering in all the details, one of them might be: hey, we could build this, but then it's not gonna be stable for X, Y, Z reasons. That was one of our conversations for CES: we can get registering your Spark with Brev into early access, but there are a lot of things we need to do in order to feel really comfortable from a security perspective — there's a lot of networking involved — before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it. We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. That can come later. That was the way we layered that back in.

Kyle: It's not really about saying you don't have to do the maintenance or operational work. It's more that it [00:18:00] highlights how progress is incremental, right? What is the minimum thing we can get to? And then there's an SOL for every component after that. But there's the SOL to get you to the starting line — that's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator or the GPU at basically full speed, with no other constraints, how fast would we be able to make a program go?

swyx: Yeah. So in training, you then work back to some percentage of, say, MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.
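That hardware sense of SOL is easy to put numbers on. A minimal sketch, with illustrative figures rather than measured benchmarks — the peak-FLOPs value is a rough H100-class number, and the 6-FLOPs-per-parameter-per-token rule is the standard training approximation:

```python
# Speed-of-light (SOL) framing for training throughput: compare what the
# hardware could theoretically do against what you actually achieved.

PEAK_FLOPS = 989e12       # ~BF16 dense peak of one H100-class GPU, FLOP/s
N_GPUS = 8

params = 70e9             # 70B-parameter model
tokens_per_sec = 3200.0   # measured training throughput (made-up number)

# Classic approximation: ~6 FLOPs per parameter per trained token
# (forward + backward pass combined).
achieved_flops = 6 * params * tokens_per_sec

mfu = achieved_flops / (PEAK_FLOPS * N_GPUS)
print(f"MFU: {mfu:.1%}")   # fraction of SOL actually achieved

sol_tokens_per_sec = PEAK_FLOPS * N_GPUS / (6 * params)
print(f"SOL throughput: {sol_tokens_per_sec:,.0f} tokens/s")
```

Here the SOL throughput is the "speed of light"; the gap between it and your measured tokens/s is exactly the "tell me why we're slower" conversation Nader describes.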
swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. Whenever I meet someone who's worked in tabular stuff, graph neural networks, time series — when I go to NeurIPS, or I go to ICML, I walk the back halls, and there's always a small group of graph people.

Kyle: Yes.

swyx: An absolutely small group of tabular people. [00:19:00] And no one else there. It's important, interesting work if you care about solving the problems they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs, all the time.

Kyle: Yeah. It's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Yeah, sure. I took a different path to NVIDIA. I joined six years ago — seven, if you count when I was an intern. So I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during my internship, which was stuff for autonomous vehicles, like heavyweight object detection. I jumped into recommenders — that was popular.

swyx: Yeah, he did RecSys as well.

Kyle: Yeah, RecSys. That was the tabular data at the time, right? You have tables of audience qualities and item qualities, and you're trying to figure out which member of [00:20:00] the audience matches which item — or, more practically, which item matches which member of the audience. At the time, we were really trying to turn recommenders, which had historically been a bit of a CPU-based workflow, into something that ran really well on GPUs. And it's since been done: there are a bunch of libraries for RecSys that run on GPUs. The common models — the Deep Learning Recommendation Model, which came out of Meta, and the Wide & Deep model, which was released by Google — were heavily accelerated by GPUs, using the fast HBM on the chips especially to do vector lookups. It was very interesting at the time, and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time. I then transitioned a bit toward graph neural networks when I discovered them, because you can use graph neural networks to represent relationships between people, items, and concepts, and that interested me. So I jumped into that at [00:21:00] NVIDIA and got really involved for about two years.

swyx: And something I learned from Bryan Catanzaro is that you can just kind of choose your own path at NVIDIA.

Kyle: Oh my God, yeah.

swyx: Which is not a normal big-corp thing. Usually you have a lane and you stay in your lane.

Nader: That's probably the reason why I enjoy being at a big company — the mission is the boss. Coming from a startup guy.

swyx: The mission is the boss.

Nader: Yeah. It feels like a big game of pickup basketball. If you wanna play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, that's what every new initiative feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain.
Like, okay, you expect foundation models with Nemotron, and then — voice, just randomly: Parakeet. Parakeet just comes out, another one, voice.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. In every other domain there's always a paper that comes out, a dataset that comes out. I mean, it also stems back to what NVIDIA has to do, right? You have to make chips years before they're actually produced. So you need to really [00:22:00] focus.

Kyle: The design process starts—

Vibhu: Exactly.

Kyle: —like three to five years before the chip gets to market.

Vibhu: Yeah. I'm curious what that's like. You have specialist teams — is it just, people find an interest, you go in, you go deep on whatever, and that feeds back into the predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah. It's really interesting. There are two things NVIDIA does that I think are quite interesting. One is, we really index on passion. There's a big organizational, top-down push to ensure that people are working on the things they're passionate about. So if someone proposes something interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I worked at a big company for a couple of years before starting on my startup journey, and it felt very weird there if you emailed out of your chain, if that makes [00:23:00] sense. The emails at NVIDIA are like mosh pits.

swyx: Shoot.

Nader: It's just like 60 people, just whatever.

swyx: They get messy, like reply-all.

Nader: Oh, it's insane.

Kyle: They just help maximize the context.

Nader: But that's actually — this is a weird thing, where I used to be like, why would we send emails? We have Slack. I am now the exact opposite. I feel so bad for anyone who's messaging me on Slack, 'cause I'm so unresponsive.

swyx: You're email-maxing.

Nader: I'm email-maxing now. Email is perfect, because important threads get bumped back up, right? Slack doesn't do that. So I just have this casino going off on the right or on the left, and I don't know which thread was from where or what. But in email, the threads get bumped, and there's also the subject line, so you can have working threads. I think what's difficult is when you're small — if you're not 40,000 people, I think Slack will work fine, but I don't know what the inflection point is. There is gonna be a point where it becomes really messy and you'll actually prefer email, 'cause you can have working threads. You can CC more than nine people in a thread.

Kyle: You can fork stuff.

Nader: You can [00:24:00] fork stuff, which is super nice. And that is part of how you can propose a plan. Or you can also just start. Honestly, momentum's the only authority, right?
If you can just start, make a little bit of progress, and show someone something, then they can try it. That's, I think, the most effective way to push anything forward — both at NVIDIA and just generally.

Kyle: There's another concept that gets explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue; it's important for us to know about this market. We think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but I'll give an example: NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car.

Kyle: No, they've—

Vibhu: They've used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out; now they're starting to be used quite a [00:25:00] bit. For 10 years you've been seeing Mercedes with NVIDIA logos driving around.

Kyle: If you're in South Santa Clara, yeah. So zero-billion-dollar markets are a thing. You know, Jensen—

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging that it's zero today. Or even internally, right? An org doesn't have to ruthlessly find revenue very quickly to justify its existence — a lot of the important research, a lot of the important technology being developed—

Kyle: That's kind of where research — research is very ideologically free at NVIDIA. They can pursue the things they want.

swyx: Were you in research, officially?

Kyle: I was never in research officially; I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series: this week Google put out the TimesFM paper, a new time series paper. Semantic IDs [00:26:00] started applying transformers and LLMs to rec systems. And when you think of the scale of companies deploying these — Amazon recommendations, Google web search — it's huge scale, and—

Kyle: Yeah.

Vibhu: You want fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they discussed using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Amazing. And let's go right into Dynamo. Maybe introduce it top-down.

Kyle: Sure. I think at this point a lot of people are familiar with the term "inference." Funnily enough, I went from inference being a really niche topic to being something that's discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah. Very, very strange.
Driving, seeing just an inference ad on the 101. Inference at scale is becoming a lot more important. We have these moments like OpenClaw, where you have these [00:27:00] agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, where you can use more inference to generate a better result than if you were to use a small amount of inference: there's reasoning, there's querying, there's adding agency to the model — allowing it to call tools and use skills. Dynamo came about at NVIDIA because I and a couple of others were talking about these concepts: you have inference engines like vLLM, SGLang, and TensorRT-LLM, and they sort of think about things as one single copy — one replica, right?

Why Scale Out Wins

Kyle: Like one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems — there's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We realized there was a lot of potential optimization in scaling out and building systems for datacenter-scale [00:28:00] inference. So Dynamo is this datacenter-scale inference engine that sits on top of frameworks like vLLM, SGLang, and TensorRT-LLM and just makes things go faster, because you can leverage the economies of scale — the fact that you have KV cache (which we can define a little later) on all these machines, which is unique, and you wanna figure out ways to maximize your cache hits. Or you want to employ new techniques in inference like disaggregation, which Dynamo introduced to the world in March — well, not introduced, there was academic work beforehand, but we were one of the first frameworks to support it. And we wanna combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at NVIDIA, and I've always loved it, 'cause he always teaches me new things.

swyx: By the way, this is why I wanted to put the two of you together. I was like, yeah, this is gonna be good.

Kyle: It's very different, you know — we've talked to each other a bunch. [00:29:00] Actually, you asked: why can't we scale up?

Scale Up Limits Explained

Nader: Yeah — you said model replicas.

Kyle: Yeah. So scale up means assigning more—

swyx: Heavier?

Kyle: Yeah, heavier — making things heavier: adding more GPUs, adding more CPUs. Scale out is having a barrier and saying: I'm gonna duplicate my representation of the model, or of this microservice, and replicate it many times to handle load. And the reason you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling. I'll give you a good example that's very trivial. Let's say you're on an H100.
The maximum NVLink domain for H100 — for most DGX H100s — is eight GPUs, right? So if you scaled up past that, you're gonna have to handle the fact that now, for the GPUs to communicate, they have to do it over InfiniBand, which is still very fast, but not as fast as NVLink.

swyx: Is it like one order of magnitude? Like hundreds, or—

Kyle: It's about an order of magnitude, yeah.

swyx: So, not terrible.

Kyle: [00:30:00] Yeah. I need to remember the data sheet here — I think it's about 500 gigabytes per second unidirectional for NVLink, and about 50 gigabytes per second unidirectional for InfiniBand. It depends on the generation.

swyx: I just wanna set this up for people who are not familiar with these kinds of layers and the relative speeds.

Vibhu: Of course.
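To make the order-of-magnitude point tangible, here is a back-of-envelope sketch using the round unidirectional numbers Kyle quotes (both are generation-dependent, so treat them as illustrative):

```python
# Rough feel for the NVLink vs. InfiniBand gap, using the quoted
# unidirectional bandwidths. Blob size is arbitrary but KV-cache-like.

GB = 1e9
NVLINK_BW = 500 * GB   # ~500 GB/s unidirectional, intra-node
IB_BW = 50 * GB        # ~50 GB/s unidirectional, inter-node

blob_bytes = 20 * GB   # e.g. KV cache for a long-context batch

for name, bw in [("NVLink", NVLINK_BW), ("InfiniBand", IB_BW)]:
    print(f"{name}: {blob_bytes / bw * 1e3:.0f} ms to move 20 GB")

# NVLink: ~40 ms; InfiniBand: ~400 ms. This is why tensor parallelism,
# which communicates at every layer, wants to stay inside one NVLink
# domain, while scale-out across nodes should communicate rarely.
```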
From Laptop to Multi Node

Vibhu: Also, maybe going a few steps back before that: most people are very familiar with what you can use on your laptop — these small LLMs, you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: You can run it on a laptop. Then, okay, models got pretty big, right? GLM-5 — they doubled the size. So what do you do when you have to go from "I can get 128 gigs of memory, I can run it on a Spark" to multi-GPU? Okay, multi-GPU, there's some support there. Now, if I'm a company and I'm not hiring the best researchers for this, but I need to go [00:31:00] multi-node — I have a lot of servers — now there are efficiency problems. You can have multiple 8xH100 nodes, but how do you do that efficiently?

Kyle: Yeah. How do you represent them? How do you choose how to represent the model? Exactly — that's a hard question. Everyone asks: how do you size it? Oh, I wanna run GLM-5, which just came out — new model. There have been like four new models in the past week, by the way.

swyx: You know why, right? DeepSeek.

Kyle: No comment. But GLM-5, right — we have this new model, it's a large size, and you have to figure out how to both scale up and scale out, because you have to find the right representation for what you care about. Everyone does this differently — let's be very clear, everyone figures this out along their own path.

Nader: I feel like a lot of AI, or even ML, is like this. There was some tweet a few months ago that was like, why hasn't fine-tuning as a service taken off? That might have been me, or it might have been you. People want it to be such an easy recipe to follow. But even if you look at an ML model—

Kyle: It's specific to you.

Nader: Yeah — and to the [00:32:00] model, the situation. There's just so much tinkering, right? When you see a model that has however many experts in the MoE, it's like, why that many experts? They tried a bunch of things and that one seemed to do better. When it comes to how you're serving inference, you have a bunch of decisions to make, and you can always argue you can take something and make it more optimal. But I think it's this internal calibration, and an appetite for continued calibration.

Vibhu: Yeah. And that doesn't mean people aren't taking a shot at this — like Tinker from Thinking Machines, you know? RL as a service. Totally. It also gets even harder when you try to do big model training, right? We're not the best at training MoEs from pre-training. We saw this with Llama: they're trained in such a sparse way, and Meta knows there's gonna be a bunch of inference done on these, right? They'll open-source it, but it's very much trained for what Meta's infrastructure wants — they wanna inference it a lot. Now the question to think about is: okay, say you wanna serve a chat application or a coding copilot. You're doing a layer of RL, you're serving a model for X amount of people. Is it a chat model, a coding model? Dynamo — you know, back to that—

Kyle: Yeah, sorry — we sort of jumped off that [00:33:00] topic. Everyone has their own journey.

Cost Quality Latency Tradeoffs

Kyle: I like to think of it as defined by: what is the model you need, and what is the accuracy you need? Actually, I talked to Nader about this earlier — there are three axes you care about. There's the quality you're able to produce: are you accurate enough, can you complete the task with high enough performance? There's cost: can you serve the model — or serve your workflow, because it's not just the model anymore, it's the workflow, the multi-turn with an agent — cheaply enough? And then: can you serve it fast enough? We're seeing all three of these play out. We saw new models from OpenAI that are faster — you have these new fast versions of models. You can change the amount of thinking to change the quality: produce more tokens, but at a higher cost and a higher latency. And really, when you start this journey of figuring out how you wanna host a model, you think about three things. What is the model I need to serve? How many times do I need to call it — what is the input sequence length, what [00:34:00] does the workflow look like on top of it? And what is the latency SLA I need to achieve? The SLA is usually a constant: you know the SLA you need to hit, and then you try to find the lowest-cost version that satisfies all of these constraints. Usually you start there and do a bit of experimentation across some common configurations — you change the tensor-parallel size, which is a form of parallelism—

Vibhu: I take it it goes even deeper. First you gotta think: what model?

Kyle: Yes, of course. It's a multi-step design process, because, as you said, you can choose a smaller model and then do more test-time scaling, and it'll equal the quality of a larger model, because you're doing the test-time scaling or adding a harness or something. So yes, it goes way deeper than that. But from the performance perspective, once you get to the model you need to host, you look at it and say: hey, [00:35:00] I have this model, I need to serve it at this speed. What is the right configuration for that?
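Kyle's recipe — hold quality fixed by choosing the model, treat the latency SLA as a hard constraint, then minimize cost across a sweep of configurations — is mechanical enough to sketch. The numbers below are made up for illustration; in practice each row comes from benchmarking your model on your hardware:

```python
from dataclasses import dataclass

@dataclass
class Config:
    tensor_parallel: int
    tokens_per_sec: float   # measured per-replica throughput
    p99_latency_ms: float   # measured inter-token latency
    gpus: int

# Hypothetical sweep results for one model on one GPU type.
sweep = [
    Config(tensor_parallel=1, tokens_per_sec=900,  p99_latency_ms=95, gpus=1),
    Config(tensor_parallel=2, tokens_per_sec=1600, p99_latency_ms=60, gpus=2),
    Config(tensor_parallel=4, tokens_per_sec=2600, p99_latency_ms=38, gpus=4),
    Config(tensor_parallel=8, tokens_per_sec=3800, p99_latency_ms=27, gpus=8),
]

SLA_MS = 50.0            # latency is the constraint...
GPU_HOUR_USD = 2.5

def cost_per_million_tokens(c: Config) -> float:
    return c.gpus * GPU_HOUR_USD / (c.tokens_per_sec * 3600) * 1e6

# ...and cost is what you minimize among configs that meet it.
feasible = [c for c in sweep if c.p99_latency_ms <= SLA_MS]
best = min(feasible, key=cost_per_million_tokens)
print(best, f"${cost_per_million_tokens(best):.2f}/M tokens")
```

With these made-up numbers, TP=8 is fastest but TP=4 is the cheapest config that still meets the 50 ms SLA — which is the whole point of framing it as constrained optimization rather than "make it as fast as possible."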
Nader: Did you guys see the recent — there was a paper I saw just a few days ago that said if you run the same prompt twice, you're getting, like, double—

swyx: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is that you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. Did you try again?

Nader: All advice in life.

Vibhu: It's a paper from Google, if I'm not mistaken. It's a nice little short paper; the title's very cute. And it's just like: yeah, just try again — give it the failed context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, try, and fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have past failures, and that gives some signal.
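The trick being described — resample, but feed the failed attempt back in rather than retrying blind — fits in a few lines. A minimal sketch of the idea, our own illustration rather than the paper's exact method; `call_model` and `passes` are hypothetical stand-ins for your LLM call and your verifier (unit tests, a checker, etc.):

```python
def solve_with_retries(task: str, call_model, passes, max_tries: int = 3) -> str:
    """Retry a task, feeding each failed attempt back as context."""
    prompt = task
    answer = ""
    for _ in range(max_tries):
        answer = call_model(prompt)
        if passes(answer):
            return answer
        # The key idea from the discussion above: don't just resample —
        # show the model its own failed attempt and ask it to correct it.
        prompt = (
            f"{task}\n\nA previous attempt failed:\n{answer}\n\n"
            "Identify what went wrong and produce a corrected answer."
        )
    return answer  # best effort after max_tries
```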
swyx: For listeners who've made it this far: Vibhu and I actually run a second YouTube channel for our paper club, where...
Nader: Oh, that's awesome.
swyx: ...Vibhu just covered this. Self-distillation [00:36:00] and all that. That's why he's up to speed on it.
Nader: I'll have to check it out.
swyx: Yeah. It's just a good practice. Everyone needs a paper club, where you just read papers together and the social pressure kind of forces you to keep up.
Nader: There's a big inference reading group at NVIDIA. I feel so bad every time. He shared it on our...
swyx: One of your guys is big in that. I forget... Eshan?
Kyle: Yeah, Eshan's on my team, actually. Funny, there was an employee transfer between us: Eshan worked for Nader at Brev, and now he's on my team.
Nader: He was our head of AI. And then, yeah, once we got in...
swyx: Because I'm always looking for, like, okay, can I start another podcast that only does that thing? I was trying to nudge Eshan into: is there something here? I mean, there's new inference techniques every day.
Kyle: You would actually be surprised at the number of blog posts you see.
swyx: There was a period where it was like Medusa, Hydra, EAGLE...
Kyle: And now we have new forms of speculative decoding.
swyx: What are you excited about?
Vibhu: It's exciting when you guys put out something like Nemotron. I remember the Nemotron 3 paper, [00:37:00] the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state space model, right?
Kyle: Yeah. It's co-designed for the hardware.
Vibhu: Yeah, co-designed for the hardware. And one of the things was always, you know, that state space models don't scale as well when you do a conversion, or the performance suffers. And you guys were like, no, just keep training. And Nemotron shows a lot of that.
Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes for how to do it are released. The model itself is released. It's the full model; you just benefit from us turning on the GPUs. And there are companies, like ServiceNow, that took the dataset and trained their own model, and we were super excited and celebrated that work.
Vibhu: Zoom is different. Zoom is CGI, I think. You know, also just to add: a lot of models don't put out base models, and if there's that, why has fine-tuning not taken off? You can do your own training.
Kyle: Yeah, sure.
Vibhu: You guys put out base models. [00:38:00] I think you put out everything.
Nader: I believe so.
swyx: Basically, without base models... base can be cancelable.
Vibhu: Yeah, base can be cancelable. Safety training.
swyx: Did we get a full picture of Dynamo?
Nader: What I'd love is, you mentioned the three axes. Break it down: what's prefill, what's decode, and what are the optimizations we can get with Dynamo?
Kyle: Yeah, that's a great point. So, to summarize on that three-axis problem: there are three things that determine whether or not something can be done with inference: cost, quality, latency. Dynamo is supposed to be there to provide you the runtime that lets you pull levers, to mix it up and move around the Pareto frontier, or the Pareto surface, that determines whether this is actually possible with inference and AI today.
Nader: It gives you the knobs.
Kyle: Yeah, exactly. It gives you the knobs.
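(To make the "knobs" idea concrete, here is a hypothetical knob set for a disaggregated deployment; the field names are illustrative, not Dynamo's actual configuration schema, and the comments note which axis each knob mainly trades against.)

```python
# Illustrative serving knobs (invented names, not a real schema).
serving_config = {
    "prefill_workers": 4,          # more prefill capacity: better time-to-first-token (latency)
    "decode_workers": 8,           # more decode capacity: higher token throughput (cost)
    "max_batch_size": 64,          # larger batches: cheaper per token, slower per request
    "kv_cache_dtype": "fp8",       # quantized KV cache: less memory, a small quality risk
    "speculative_decoding": True,  # draft-model decoding: faster output, some quality risk
}
```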
Disaggregation: Prefill vs. Decode
Kyle: One thing that we use a lot in contemporary inference, and that's starting to pick up in general knowledge, is this concept of disaggregation. Historically, models would be hosted with a single inference engine, and that inference engine [00:39:00] would ping-pong between two phases. There's prefill, where you're reading the sequence and generating KV cache, which is basically just a set of vectors that represent the sequence. And then there's using that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, essentially made the realization that if you separate these two phases, you actually gain some benefits. Those benefits are, basically: you don't have to worry about step-synchronous scheduling. The way an inference engine works is, you do one step, you finish it, and then you start scheduling the next step. It's not fully asynchronous. And the problem with that is that prefill and decode are actually very different, in terms of both their resource requirements and, sometimes, their runtime. So you would have prefill blocking decode steps, because you'd still be prefilling and you couldn't schedule, because the step has to end. So you remove that scheduling issue. And then it also allows you to [00:40:00] split the work into two different types of pools. Prefill (and this changes as model architecture changes) is, right now, compute-bound most of the time: when the sequence is sufficiently long, it's compute-bound. On the decode side, because you're doing a full pass over all the weights and the entire sequence every time you do a decode step, and you don't have the quadratic computation of KV cache, it's usually memory-bound: you're retrieving a linear amount of memory and doing a linear amount of compute, as opposed to prefill, where you retrieve a linear amount of memory and then do a quadratic amount of compute.
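(A back-of-envelope sketch of the asymmetry described above, under standard dense-transformer approximations: roughly 2 FLOPs per parameter per token, bf16 weights and KV cache. The constants are rough and the function names are ours, but the shape of the result is the point.)

```python
# Roofline-style contrast: prefill does quadratic compute over the sequence;
# decode re-reads all the weights (and the KV cache) to emit one token.

def prefill_intensity(n_params, n_layers, d_model, seq_len):
    flops = 2 * n_params * seq_len                  # weights applied to every token
    flops += 4 * n_layers * d_model * seq_len ** 2  # attention term, quadratic in S
    bytes_moved = 2 * n_params                      # read bf16 weights once
    return flops / bytes_moved                      # arithmetic intensity (FLOPs/byte)

def decode_step_intensity(n_params, n_layers, d_model, seq_len):
    flops = 2 * n_params                            # one token through all weights
    flops += 4 * n_layers * d_model * seq_len       # attend over the cache, linear in S
    bytes_moved = 2 * n_params                      # re-read all weights every step
    bytes_moved += 4 * n_layers * d_model * seq_len # plus bf16 K and V for the cache
    return flops / bytes_moved

# A 70B-parameter model at 32k context: prefill lands in the tens of thousands
# of FLOPs per byte (compute-bound); a decode step sits near 1 (memory-bound).
print(prefill_intensity(70e9, 80, 8192, 32768))      # ~5e4
print(decode_step_intensity(70e9, 80, 8192, 32768))  # ~1
```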
Nader: You know, it's funny, someone at EXO Labs did a really cool demo where, since the DGX Spark has a lot more compute, you do the compute-hungry prefill on a DGX Spark and then do the decode on a Mac.
Vibhu: And that's faster.
Nader: Yeah.
Kyle: So you can do that. You can do machine stratification. And with our future generations of hardware, we actually announced, [00:41:00] with Rubin, this new accelerator that is prefill-specific. It's called Rubin CPX.
Kubernetes Scaling with Grove
Nader: So I have a question. When you do the scale-out, is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either prefill or decode.
Kyle: Yeah. Dynamo actually has a Kubernetes component in it called Grove that allows you to do this crazy scaling specialization. I don't want to go too deep into Kubernetes here, but there was a previous way you would launch multi-node work. It's called LeaderWorkerSet; it's in the Kubernetes standard. And LeaderWorkerSet is great; it served a lot of people super well for a long period of time. But one of the things it struggles with is representing cases where you have a multi-node replica that has a pair, right? You know, prefill and decode. Or it's not paired, but it has a second stage with a ratio that changes over time. And prefill and decode are two different things: as your workload changes, the amount of prefill you'll need to do may change, [00:42:00] and the amount of decode you'll need to do might change. Let's say you start getting insanely long queries. That probably means your prefill scales harder, because you're hitting this quadratic scaling growth.
swyx: Yeah. And for listeners: prefill would be long input, decode would be long output, for example, right?
Kyle: Yeah. Though decode is funny, because the number of tokens you produce scales with the output length, but the amount of work you do per step scales with the number of tokens in the context.
swyx: Yes.
Kyle: So it scales with both the input and the output.
swyx: That's true.
Kyle: But on the prefill-versus-decode side: if, suddenly, the amount of work you're doing on the decode side stays about the same, or scales a little bit, and the prefill side jumps up a lot, you actually don't want that ratio to stay the same. You want it to change over time. So Dynamo has a set of components that, one, tell you how to scale: how many prefill workers and decode workers it thinks you should have. And, two, it provides a scheduling API for Kubernetes that allows you to actually represent and effect this scheduling on your actual [00:43:00] hardware, on your compute infrastructure.
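(A toy sketch of the ratio-based planning just described; the per-worker capacity numbers are made up, and a real planner, Grove's included, weighs far more signals than this. The point is only that the prefill:decode split is recomputed as load shifts rather than scaling both pools in lockstep.)

```python
import math

# Recompute the prefill:decode worker split from observed token rates.
def desired_worker_counts(prefill_tok_per_s, decode_tok_per_s,
                          prefill_cap=50_000,  # tokens/s one prefill worker absorbs (invented)
                          decode_cap=8_000):   # tokens/s one decode worker sustains (invented)
    n_prefill = max(1, math.ceil(prefill_tok_per_s / prefill_cap))
    n_decode = max(1, math.ceil(decode_tok_per_s / decode_cap))
    return n_prefill, n_decode

# Long-input traffic arrives: prefill demand jumps, decode barely moves.
print(desired_worker_counts(80_000, 20_000))   # (2, 3)
print(desired_worker_counts(400_000, 24_000))  # (8, 3)
```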
Nader: Not gonna lie, I feel a little embarrassed for being proud of my SVG function earlier.
swyx: No, it was really cute.
Kyle: I liked it.
swyx: It's all engineering. It's all engineering. That's where I'm technical. One thing I'm kind of just curious about, with everything you see at a systems level, and we're scaling it up in distributed systems...
Context Length and Co-Design
swyx: I think one thing that's kind of of-the-moment right now is people asking: is there any sort of upper bound? In terms of, let's just call it context length, for want of a better word, but you can break it down however you like.
Nader: Yeah.
swyx: I just think, well, clearly you can engage in hybrid architectures and throw in some state space models all you want, but it still looks very attention-heavy.
Kyle: Yes. Long context is attention-heavy. I mean, we have these hybrid models...
swyx: And most models cap out at a million tokens of context, and that's it. For the last two years, that's been it.
Kyle: Yeah. The model-hardware-context co-design thing that we're seeing these days is actually super [00:44:00] interesting. It's my secret side passion. We see models like Kimi or GPT-OSS; I use these because I know specific things about these models. So Kimi K2 comes out, right? And it's an interesting model. It's a DeepSeek-style architecture; it's MLA. It's basically DeepSeek, scaled a little bit differently and obviously trained differently as well. But they talked about why they made the design choices for context. Kimi has more experts but fewer attention heads, and, I believe, a slightly smaller attention dimension; I'd need to check that. It doesn't matter. But they discussed this at length in a blog post on Zhihu, which is like [00:45:00] Chinese Reddit. It's actually an incredible blog post; all the ML people I've seen on there are very brilliant. The creators of Kimi K2 talked about it there in the blog post. And they say: we actually did an experiment, right? Attention scales with the number of heads, obviously. If you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific sort of barter in their architecture. They basically said: hey, what if we gave it more experts, so we're going to use more memory capacity, but we keep the number of activated experts the same. We increase the expert sparsity, so the ratio of experts activated to the total number of experts is smaller, and we decrease the number of attention heads.
Vibhu: And for context, what we had been seeing was: you make models sparser instead. No one was really touching heads.
Kyle: Well, they did; they implicitly made it sparser.
Vibhu: Yeah, for Kimi, they did. They also made it sparser. But basically, what we were seeing was people were at the level of: okay, there's a sparsity ratio. You want more total parameters, fewer active, and that's sparsity. [00:46:00] But what you see from labs like Moonshot and DeepSeek is that they go to the level of: beyond just the number of experts, you can also change how many attention heads there are, fewer attention layers, more attention layers. And that all ties back together as hardware-model co-design, which is...
Kyle: Hardware-model-context co-design. Right. If you were training a model that's really good at super-short-context tasks, you might design it in a way such that you don't care about attention scaling, because it never hits the turning point where the quadratic curve takes over.
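(The Kimi-style barter can be checked with toy arithmetic; the dimensions below are illustrative stand-ins, not Kimi K2's real configuration.)

```python
# Halving attention heads halves attention FLOPs (still quadratic in S),
# while adding experts at a fixed activated count leaves per-token compute
# flat and shrinks the activated-to-total sparsity ratio.

def attention_flops(seq_len, n_layers, n_heads, d_head):
    return 4 * n_layers * n_heads * d_head * seq_len ** 2

def moe_per_token(n_experts, n_active, params_per_expert):
    active_params = n_active * params_per_expert  # drives per-token compute
    sparsity = n_active / n_experts               # smaller ratio = sparser model
    return active_params, sparsity

base = attention_flops(128_000, 60, 64, 128)
halved = attention_flops(128_000, 60, 32, 128)
print(halved / base)               # 0.5: half the attention work, same quadratic shape
print(moe_per_token(256, 8, 1e8))  # (8e8, 0.03125)
print(moe_per_token(384, 8, 1e8))  # (8e8, ~0.0208): sparser, same active compute
```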
Nader: How do you consider attention, or context, as a separate part of the co-design? The way I would have thought of it, hardware-model co-design would already be hardware-model-context co-design.
Kyle: Because the harness, and the context that is produced by the harness, is a part of the model once it's trained in.
Vibhu: Even though towards the end you'll do long context, you're not changing the architecture through [00:47:00] training.
Nader: I see.
Kyle: I mean, you can try.
swyx: You're saying everyone's training the harness into the model.
Kyle: I would say to some degree.
swyx: There's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.
Kyle: I think it's important to internalize the harness that you think the model will be running in into the model.
swyx: Yeah. Interesting. Okay. Bash is like the universal harness.
Kyle: Right. I'll give an example here; it's an easy proof, right? If you can train against a harness, and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of it?
swyx: Well, I can provide a counter-argument, which is that you want to provide a generally useful model for other people to plug into their harnesses, right?
Kyle: Yeah. Harnesses can be open source, right?
swyx: Yeah. I mean, that's effectively what's happening with Codex. But you may want a different search tool, and then you may have to name it differently.
Nader: I don't know how much people have pushed on this, but have people compared training a model for the harness versus [00:48:00] post-training for it?
swyx: I think it's the same thing. It's just extra post-training.
Nader: I see.
swyx: Cognition does this, of course, where, if your tool is slightly different, you either force your tool to be like the tool that they trained for, or you undo their training for their tool and then retrain. It's really annoying.
Kyle: I would hope that eventually we hit a certain level of generality with respect to training new tools.
swyx: This is not AGI. This is a really stupid "learn my tool, b***h" situation. I don't know if I can say that. But I think what my point kind of is, is that I look at the slopes of the scaling laws, and this slope is not working, man. We're at a million-token context…
Mickey's March Madness kicks off with a doozy! MORTAL KOMBAT!!!... Annihilation. I know, you may be asking yourself “Why do the sequel?”, well, because it's MADNESS, THAT'S WHY! We all know this one took a turn for the worse: No Christopher Lambert. No Linden Ashby. No Cary-Hiroyuki Tagawa… BUT, we do get an upgrade for Sonya Blade, as well as some badass Lin Kuei cyborg assassins and a number of other badass characters! Yes, the silly factor is cranked up, and yes, the acting is terrible, and yes, the CGI is awful, but dudes… it's MORTAL KOMBAT! Don't just watch Action, B-Action!!!
In this Terminator 2: Judgment Day Review, the Born to Watch crew dives headfirst into what many consider the greatest sequel ever made. James Cameron didn't just follow up the original Terminator… he reinvented the blockbuster. Released in 1991, Terminator 2: Judgment Day changed action movies forever with groundbreaking visual effects, unforgettable characters, and one of Arnold Schwarzenegger's most iconic roles.
This week the full team is back, and the discussion kicks off with a simple but loaded question: is Terminator 2 the greatest sequel of all time? From the opening future-war battlefield to the legendary showdown between the T-800 and the liquid-metal T-1000, the boys break down why this film still holds up more than three decades later.
Arnold Schwarzenegger returns as the Terminator, but this time the formula is flipped. Instead of hunting Sarah Connor, he's protecting her son, John Connor, the future leader of the human resistance. It's a twist that audiences in 1991 didn't see coming, and it gives the film its emotional core.
The crew digs into Schwarzenegger at the absolute peak of his powers. After dominating the 80s with films like Predator, The Running Man and the original Terminator, Arnie was arguably the biggest movie star on the planet when T2 arrived. The famous bar scene, the sunglasses moment, and of course the immortal line "Hasta la vista, baby" all get the Born to Watch treatment.
Linda Hamilton also gets her flowers in this episode. Her transformation from the vulnerable Sarah Connor of the first film into the hardened warrior of Judgment Day is one of the most dramatic character evolutions in action movie history. The boys discuss her intense performance, the physical transformation she underwent, and why her portrayal still feels authentic today.
Edward Furlong's debut as John Connor sparks plenty of debate, too. Some love his rebellious street-kid energy, others question whether he's the most annoying teenager ever put in charge of humanity's future. Either way, he plays a crucial role in the film's emotional arc, and the developing bond between John and the T-800 is one of the movie's biggest surprises.
Then there's Robert Patrick's T-1000. With his cold stare, relentless pursuit, and shape-shifting liquid metal body, he created one of the most terrifying villains of the 1990s. The guys break down why the T-1000 works so well and how the visual effects still look incredible today.
Of course, no discussion of Terminator 2 would be complete without talking about the action set pieces. The LA River chase, the motorcycle-and-truck pursuit, the hospital escape, and the steel mill finale are all analysed in classic Born to Watch fashion. These scenes helped redefine what audiences expected from blockbuster filmmaking.
The episode also dives into the film's massive cultural footprint.
From the Guns N' Roses track "You Could Be Mine" to the revolutionary CGI that brought the T-1000 to life, Terminator 2 pushed cinema technology forward and influenced action movies for decades.
But the big question remains: does Terminator 2 actually surpass the original? That's the debate the Born to Watch crew finally settles.
So slide into your leathers, fire up the Harley, and join the boys as they revisit one of the biggest and most influential action films ever made.
JOIN THE CONVERSATION
Is Terminator 2 the greatest sequel of all time?
T-800 or T-1000 — which Terminator wins the showdown?
Does Judgment Day beat the original Terminator?
Drop us a voicemail at https://www.borntowatch.com.au and be part of the show!
Listen now on Spotify, Apple Podcasts, or wherever you get your podcasts.
#BornToWatch #Terminator2 #JudgmentDay #ArnoldSchwarzenegger #JamesCameron #90sAction #MoviePodcast #SciFiMovies #T1000 #HastaLaVistaBaby
Superhero Fatigue Isn't Real — Bad Writing Is
Is superhero fatigue actually real… or are audiences just tired of lazy storytelling?
In this episode of Your Nerdy Girlfriend Podcast, we break down why the superhero genre isn't dying — it's suffering from overreliance on CGI, weak emotional stakes, and multiverse shortcuts.
From the emotional weight of The Dark Knight to the nostalgia-driven spectacle of Spider-Man: No Way Home, we've seen what happens when superhero movies get it right.
So why do some recent releases feel hollow?
We discuss:
• Why CGI-heavy final battles feel weightless
• How the lack of lasting consequences lowers emotional investment
• Whether the multiverse trend has become a creative crutch
• The role of studios like Marvel Studios in shaping modern superhero storytelling
Are we truly experiencing superhero fatigue — or are we just demanding better writing?
If you love sharp nerd culture analysis, movie breakdowns, and pop culture commentary from a Black nerd perspective, this one's for you.
New episodes weekly.
We're back talking about latter-day Moonbeam movies, when Canadian/Romanian concern Canarom took over the brand, and this time we're focusing on some teens conducting some sorcery in, appropriately enough, TEEN SORCERY! Becky Darke is officially anointed a Freaky on her third appearance as we talk 90s fashions, CGI dragons and of course Ernest Borgnine. Hosted by Jarrod Hornbeck and Steve Guntli Theme song by Kyle Hornbeck Logo by Doug McCambridge Email: puppetmasterscastlefreaks@gmail.com Instagram/Threads: @puppetmasters_castlefreaks YouTube: @PuppetMastersCastleFreaks Next week's episode: The Killer Eye/Killer Eye: Halloween Haunt
This week we go fully corporate: Top 5 Corporate & Tech Jargon — the phrases designed to sound like progress while delivering absolutely nothing. We're talking circle back, take it offline, pivot, blue-sky thinking, synergy, and the whole “results-driven ecosystem” dialect spoken exclusively by people who describe themselves as “thought leaders” on LinkedIn.
Then we hit the main feature: Thunderbolts* — Marvel's surprisingly sincere group-therapy movie disguised as an action film. Think The Breakfast Club, but everyone's a government assassin and the villain is basically existential depression with god-tier powers.
Standard warning: we spoil. A lot. With confidence.
What we talked about
Top 5 Corporate / Techno-babble
• Why corporate language exists: credibility theatre, hiding behind vague phrasing, and “sounding senior” without committing to anything.
• The difference between useful technical language vs bullshit camouflage (and why “take it offline” can mean “I'm seething”).
• Techno-babble in movies: Back to the Future's flux capacitor, Avengers physics word-salad (quantum tunneling, heavy ion fusion, Coulomb barrier), and classic Star Trek “modulate the phase variance” nonsense.
• The “pivot” moment that sneaks into real life: Friends and the cursed sofa stairwell.
Thunderbolts*
• Why this one lands better than recent Marvel: less quippy noise, more consistent tone, and a third act that's actually about something.
• The set-up: a clean-up operation that becomes a trap, plus Marvel's best “oh, we're definitely all going to die” elevator pitch.
• Bob / Sentry / The Void: a superhuman project gone wrong, and a villain that manifests as the darkest version of yourself.
• The big swing: a finale that avoids sky-portals and CGI armies and instead goes for inner trauma + solidarity (yes, basically an emotional intervention).
• The asterisk explained: the film's marketing payoff and the “New Avengers” naming chaos.
• The rough edges: runtime bloat, plot convenience, and the return of accents that should've stayed retired.
Bonus life admin
• Walking football cup semi-final madness (knees sacrificed, glory secured).
• Random watches: Tarot (not recommended), “Lords of Metal” (unexpectedly wholesome), and a bit of hype for upcoming Peaky Blinders and Baz Luhrmann's Elvis project.
If you're even slightly Marvel'd-out but still want something that tries to have a heart, Thunderbolts* is one of the more watchable recent entries — and if you've ever died inside hearing “let's circle back,” the Top 5 segment is basically free therapy.
You can now text us anonymously to leave feedback, suggest future content or simply hurl abuse at us. We'll read out any texts we receive on the show. Click here to try it out!
We love to hear from our listeners! By which I mean we tolerate it. If it hasn't been completely destroyed yet you can usually find us on twitter @dads_film, on Facebook Bad Dads Film Review, on email at baddadsjsy@gmail.com or on our website baddadsfilm.com. Until next time, we remain... Bad Dads
In Part 1 of this CLOC Talk series, host Janessa Nelson sits down with Education Advisory Council members LaResa Young, Matt Wheatley, and John Esposito to explore what inspired them to join the EAC and what excites them most about advancing legal operations education. From webinars and CGI to LMS innovation and global skill-building, the panel discusses how learning is evolving across the profession. Tune in for an insightful conversation about where legal ops education stands today — and where it needs to go in the next 3–5 years.
"After seeing robots flipping and performing Kung Fu at the Spring Festival Gala, many asked: Is this real or CGI?"In this episode of Da Shu Mandarin, we dive deep into the rapid evolution of Chinese robotics—from the stage of the CCTV Gala to our own living rooms. But this isn't just about tech; it's about what it means to be human in an era of AI.We tackle the questions that keep us up at night:The "Uncanny Valley": Why do some people find humanoid robots terrifying while others want to date them?Robot Companionship: Richard confesses why he'd prefer a robot partner over a human one in his old age.Digital Immortality: Can (and should) we use AI to "bring back" lost loved ones? We discuss the heartbreaking case of a mother and her virtual daughter.The Death of Work: If robots do everything, what happens to human creativity and the meaning of life? Is "pain" necessary for civilization?War & Ethics: The scary reality of robot military competition.
Welcome back to The Kristian Harloff Show, your daily source for the biggest movie news, TV updates, and pop culture discussions! On today's episode, Kristian Harloff breaks down the biggest headlines in entertainment—from DC Studios and HBO to major upcoming films hitting theaters in 2026. First up, the LANTERNS trailer has finally dropped, giving fans their first look at the upcoming HBO Max DC series featuring Hal Jordan and John Stewart. The grounded tone and detective-style story have sparked mixed reactions online, with some fans loving the gritty approach while others question the direction of the DCU. Is the reaction justified, or are fans overreacting to an early teaser? The series is expected to arrive on HBO and HBO Max in 2026 and will follow the two Green Lanterns investigating a mysterious murder tied to the larger DC Universe. Next, we discuss the fascinating production detail behind Project Hail Mary, the upcoming sci-fi film starring Ryan Gosling. According to the filmmakers, the movie was shot without using a single green screen, with massive practical sets built to create the spacecraft environment. This ambitious approach could give the film a more realistic look compared to typical CGI-heavy space movies. We also talk about The Bride!, the new monster film inspired by Bride of Frankenstein, which is receiving very mixed early reviews from critics. Is this a bold reinterpretation of a classic horror story, or another divisive remake? Finally, there's an update on DC projects Man of Tomorrow and The Batman Part II, as reported shooting start dates give fans a timeline for when production could begin. However, if you were hoping for a crossover between the two worlds, the current reports suggest you shouldn't count on it. Join Kristian as he breaks down all of today's stories, reacts to the latest trailers, and gives his take on where the future of DC, sci-fi films, and major studio releases might be headed. Subscribe to The Kristian Harloff Show for daily movie news, trailer reactions, and deep dives into Marvel, DC, Star Wars, and everything happening in Hollywood. SPONSOR: RUGIET: For a limited time only, head to https://www.Rugiet.com/KRISTIAN to get 15% off your order.
AI influencers are no longer science fiction—they're already in your feed, landing brand deals, and reshaping digital marketing strategy in real time. This week on the Mike & Blaine Podcast, we unpack the rapid rise of virtual creators, CGI personalities, and fully synthetic brand ambassadors that never sleep, never age, and never go off-script. From sponsored posts created by generative AI to entirely fictional personalities building massive followings, the line between human and machine is getting blurry fast—and most people don't even realize it.
We dig into why brands love AI influencers (total control, 24/7 scalability, no scandals, predictable messaging) and why audiences feel uneasy about it (authenticity gaps, trust erosion, emotional disconnect). Is this the next evolution of the creator economy—or the beginning of synthetic saturation? We explore what happens when automation meets influence, how platforms reward engagement over authenticity, and where the trust equation could start to crack.
Then we flip it into the business lesson. Automation scales efficiency, but authenticity still drives loyalty. If you're a founder, marketer, or executive thinking about AI content strategy, this episode breaks down the real tactical questions:
• When does AI increase margin—and when does it dilute brand equity?
• Can synthetic creators outperform human influencers in ROI?
• What happens to customer trust when people don't know what's real anymore?
• How should businesses balance automation with human storytelling?
We also look at how major brands and platforms are navigating this shift, from experimental AI brand ambassadors to algorithm-driven content at scale. Whether you're building a personal brand, managing a marketing team, or just trying to understand where digital culture is heading, this conversation gives you practical insight into how AI, branding, and business strategy intersect in 2026.
If you've ever wondered whether the future of marketing is human—or synthetic—this episode is for you.
Visit http://mikeandblaine.com to buy us a beer and support the show
Mostly Film — Creature Feature Mayhem: Snakes in the System
This week, we're diving headfirst into snake-infested chaos with a double feature: the ‘90s jungle pulp classic Anaconda (1997) and the internet-era spectacle Snakes on a Plane (2006).
We break down what makes snake horror so uniquely unsettling: constriction, venom, cold-blooded calculation. And debate the big question: is absurd sincerity better than full-blown meme chaos?
From Jon Voight going completely off the rails to Samuel L. Jackson holding the cabin together at 30,000 feet, we compare jungle vs. airplane, singular monster vs. swarm, practical effects vs. CGI, and which film truly understands what makes snakes scary.
Plus: creature rankings, worst possible deaths, survival odds, and our final verdict on which movie wins the double feature.
Listen wherever you get your podcasts, and follow us on Letterboxd to see what we're watching next.
Business unplugged - People, companies, and aspects of digitalization
In This Episode: Apple announcements. Coding with AI. Copyright and AI.
This week the TEH Podcast is hosted by Leo Notenboom, the “Chief Question Answerer” at Ask Leo!, and Gary Rosenzweig, the host and producer of MacMost, and mobile game developer at Clever Media. (You’ll find longer Bios on the Hosts page.)
Top Stories
0:00 GR: New Apple Stuff. Might be a new low-end Macbook.
8:00 Lightweight?
13:00 GR: Vibe Coding Limits. Coding with AI.
20:00 LN: Learning how to ask for what you want. Different possibilities for the future.
30:00 GR & LN: Let's talk about why we use AI art again in light of https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright
38:00 Faking real
40:00 How will it apply to video? How about using AI as a CGI tool?
46:00 LN: Server moves. Very geeky, but kinda interesting maybe. And how I took 50 sites offline with a single click. Oops.
Ain’t it Cool
55:00 LN: The Collected Stories of Arthur C. Clarke – (Audio)
56:00 GR: Rental Family (Disney+)
BSP: Blatant Self-Promotion
58:00 LN: How Does Cloud Storage Work? – https://askleo.com/75658
59:00 GR: 10 Ways To Start Recording Video With Your iPhone Fast https://macmost.com/10-ways-to-start-recording-video-with-your-iphone-fast.html
Transcript: teh_262
Video: https://youtu.be/E5OqViJQREk
In the latest instalment of The Commentary Booth, Jamie Apps and Corrina Mabey continue their epic 50th Anniversary Star Wars rewatch with the epic conclusion to the original trilogy, Star Wars: Episode VI - Return of the Jedi.
Fresh off the heels of The Empire Strikes Back's jaw-dropping revelation, our favourite rebels are back to take down the Empire once and for all.
But this isn't just any rewatch. Corrina has brought her beloved Wicket backpack along for the ride, and she's ready to die on the hill that Return of the Jedi is the best Star Wars movie ever made. (Jamie strongly disagrees. He's entitled to be wrong.)
In this action-packed episode, the duo dives deep into Jabba's palace (and its surprisingly moist inhabitants), the guerrilla warfare tactics of everyone's favourite space bears, and the emotional gut-punch of Vader's redemption arc. They also tackle the elephant in the room: those controversial special edition changes that added CGI singers, a very different Force ghost, and enough digital fluff to make purists weep.
From Steadicam speeder bike trivia to David Lynch almost directing this thing, Jamie and Corrina leave no stone unturned. Plus, Corrina makes a compelling case for why Ewoks are actual geniuses, not just adorable fluffballs.
Highlights Breakdown:
I went to my first HOA meeting… and I think I time-traveled to 1400.This week on Seven Minutes in Evan, I officially entered my mid-30s villain arc: caring about my community, arguing about fences, and almost getting nominated as HOA president against my will. We audit missing HOA money, cosplay medieval peasants, and somehow end up on Shark Week with Michael Phelps racing a CGI shark for custody of his kids.
“Who produced this? The mob.” This week's scariest movie is... The Final Destination (2009). This film has everything: Hubcapped halved hillbillies, 4th floor hot tubs, and struggling to pay attention when someone is trauma dumping. If you love late-2000s CGI chaos, racetrack carnage, and a franchise that keeps finding new ways to turn everyday errands into a homicide puzzle box, this episode's for you! Please Subscribe, Rate, and Review The Horror Virgin to help more people discover our community. What did you think of our episode on The Final Destination? Tell us on social media @HorrorVirgin FB/IG, @HorrorVirginPod Twitter. Up Next: Snakes on a Plane
This week on Our Taste is Trash, Josh and Jade head into hostile territory with a spoiler-filled discussion of Predator: Badlands, the latest entry in the Predator franchise.
Directed by Dan Trachtenberg, the filmmaker behind Prey, Predator: Badlands drops viewers into a brutal new environment, following a young Predator on a dangerous rite of passage. The film stars Elle Fanning and Dimitrius Schuster-Koloamatangi.
Josh and Jade break down what works and what absolutely does not. They talk CGI, Predator lore, and whether Badlands earns its place alongside Predator, Predator 2, and Prey or belongs on the franchise scrap heap.
Listen now! Taste is subjective. Ours is trash.
Events
Video of the webinar The energetic challenges of fault-tolerant quantum computers, with Olivier Ezratty and Yasser Omar of PQI and Quantum Green Computing: https://www.youtube.com/watch?v=NnfffiJYuvk
Quantum conference at the Ecole Navale: the naval academy held its Journée des Sciences Navales on its campus in Lanvéoc, south of the Brest roadstead. https://fr.wikipedia.org/wiki/Le_Vigilant_(S618)
France in the Emirates: a visit to Abu Dhabi as part of the France-Emirates days. https://sorbonne.ae/fr/national-strategies-and-international-cooperation-in-the-quantum-domain https://www.linkedin.com/posts/barbaresco_france-uae-quantum-collaboration-national-activity-7424442854778867712-sfDo/
Pascale Senellart's chair at the Collège de France:
https://www.college-de-france.fr/fr/agenda/cours/technologies-quantiques-emergentes/processeurs-quantiques-photoniques
https://www.college-de-france.fr/fr/agenda/cours/technologies-quantiques-emergentes/processeurs-quantiques-bases-atomes
https://www.college-de-france.fr/fr/agenda/seminaire/technologies-quantiques-emergentes/assembler-la-matiere-quantique-atome-par-atome
https://www.college-de-france.fr/fr/agenda/cours/technologies-quantiques-emergentes/vibrations-et-technologies-quantiques
https://www.college-de-france.fr/fr/agenda/seminaire/technologies-quantiques-emergentes/how-does-quantum-object-gravitate
The chair will conclude with a one-day colloquium on quantum photonics on April 16. https://www.college-de-france.fr/fr/agenda/colloque/technologies-quantiques-base-de-lumiere
MEDEF Lyon seminar: organized in Lyon by MEDEF Lyon-Rhône and the CEA's Quantum Hub.
Coming up
Talk by Daniel Esteve in Bordeaux on March 2: https://www.sfphysique.fr/evenement/prix-nobel-de-physique-2025-aux-debuts-du-domaine-des-circuits-electriques-quantiques-supraconducteurs/
Devoxx event in April 2026 with Fanny, Olivier, Sébastien Marie of Matmut, and Alice&Bob. https://www.devoxx.fr/
Nvidia developer conference on March 16, 2026. https://www.nvidia.com/gtc/sessions/quantum-computing/
Nuit du quantique at the Cité des Sciences on March 31 at 6 pm. https://www.sfphysique.fr/evenement/nuit-du-quantique-a-paris/
France
Quobly and Sealsq: the exclusive discussion period between the two companies ended in mid-February. https://www-sealsq-com.cdn.ampproject.org/c/s/www.sealsq.com/investors/news-releases/sealsq-strengthens-its-quantum-made-in-usa-strategy-with-an-additional-strategic-investment-in-eeroq
Plus a partnership with Entropica Labs in Singapore. https://quobly.io/news/quobly-and-entropica-labs-sign-strategic-mou-in-singapore-to-advance-fault-tolerant-quantum-computing/
Pasqal: an in-progress €200M fundraising round is announced, along with the installation of an analog QPU at Cineca in Italy. https://thequantuminsider.com/2026/02/19/pasqal-in-talks-to-raise-e200m-at-unicorn-valuation-bloomberg-reports/ https://thequantuminsider.com/2026/02/17/pasqal-neutral-atom-qpu-italy/
Scaleway and AQT: the Iliad group's cloud operator offers access to a 12-qubit QPU from Austria's Alpine Quantum Technologies via Qiskit. https://thequantuminsider.com/2026/02/19/aqt-and-scaleway-launch-european-quantum-cloud-access/ https://www.scaleway.com/en/docs/quantum-computing/how-to/use-aqt-qpus/
Quantum chair in Clermont-Ferrand: ISIMA Clermont-Ferrand, led by Philippe Lacomme, whom we mentioned on the previous podcast, is launching a chair on quantum computing with support from CGI, Bull, and Michelin.
The chair aims to bridge teaching and research. It is supported by two laboratories, LIMOS and LPCA, and by ISIMA Clermont-Ferrand. https://www.linkedin.com/posts/philippe-lacomme-616a2b130_accueil-chaire-quantique-activity-7429178723154894848-2zAn
A quantum feasibility preserving modeling for the min cut problem, by Ali Abbassi, Yann Dujardin, Eric Gourdin, Philippe Lacomme, and Caroline Prodhon, arXiv, February 2026.
Video by Sabine Hossenfelder: “The Quantum Computer Dream is Falling Apart”. https://youtu.be/N-9muK0mv5w
Quantonation completes its fundraising
Subscribe today for access to our full catalog of bonus episodes, including 2+ new episodes every month! www.patreon.com/boysbiblestudy Sensationalist Christian filmmaker Danny Carrales has a talent for conveying the urgency of Jesus's teachings. It's one thing to hear the words that those who don't accept God's grace will be doomed to a life of eternal torture; it's another thing to actually see this happen to a guy who seemed like a pretty decent person but unfortunately dropped dead before he decided to accept Jesus. In an instant, he's floating above his body, traveling through a pillar of light, until the direction suddenly switches and he screams in terror as the sky around him turns to fire. This is an example of the high drama level of ESCAPE FROM HELL, cowritten by Danny Carrales and Michael Martin, the team who gave us the high-octane rapture movie THE GATHERING and also HEAVEN'S WAR, a CGI-heavy story of the eternal spiritual struggle unseen by humans. ESCAPE FROM HELL inhabits a couple of well-traveled Christian movie tropes. It takes place in a hospital, a frequent setting for faith-based movies, since it's a hub for souls entering and leaving Earth. Also, similar to Australian thriller TABERNACLE 101, the film's action largely concerns a scientifically minded explorer inducing a near-death experience in himself to prove the idea of life beyond death. Both films are like FLATLINERS for Christians, and both show that taking such a risk comes at the cost of opening the door between worlds in very uncomfortable ways. ESCAPE FROM HELL is probably the more successful of the two for its tight structure, extremely laid-on-thick melodrama of family members crying while their loved ones' souls are experiencing eternal damnation, and weird, campy experimental techniques, like putting a sign saying "Ducks Be Not Proud" on the hospital roof so dying souls can read it before getting to heaven. Every collector of Christian films should have a copy of this VHS on their Bible study shelf, along with THE GATHERING and FINAL EXIT, the Carrales film we plan to watch next. View our full episode list and subscribe to any of our public feeds: http://boysbiblestudy.com Unlock 2+ bonus episodes per month: http://patreon.com/boysbiblestudy Subscribe to our Twitch for livestreams: http://twitch.tv/boysbiblestudy Follow us on Instagram: http://instagram.com/boysbiblestudy Follow us on Twitter: http://twitter.com/boysbiblestudy
From Užice to an Emmy nomination: find out how one Serbian woman "dressed" the world's biggest stars! In episode 358 of the Pojačalo podcast, Ivan hosts Jovana Gospavić, a prominent Serbian costume designer who has spent more than a decade building a highly successful career in London in the global film and TV industry. The conversation traces her unusual path: from growing up in Užice and her first steps in the art world, through studies at the Faculty of Applied Arts in Belgrade and the DAMU academy in Prague, to her move to the United Kingdom. Jovana speaks openly about breaking into a demanding foreign market, starting with the unusual job of designing uniforms for the British Fencing association, through to work on major period film and television projects such as the film Emma and the series Queen Charlotte: A Bridgerton Story and Mary & George, which earned her two nominations for the prestigious Emmy award. What we talked about: - Opening - Start of the conversation - When I grow up, I'll be... - Preparing for university - Student days - Studying in Prague - From Czechia to London - Designing for fencing - The first big project - Working on the series The Third Day - Other interesting projects - The impact of global events - CGI and AI in the film industry - Closing thoughts Support us on BuyMeACoffee: https://bit.ly/3uSBmoa Read this episode's transcript: https://bit.ly/4sfWXEl Visit our website and sign up for our mailing list: http://bit.ly/2LUKSBG Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o Follow Pojačalo on social media: FB: https://www.facebook.com/PojacaloRS/ IG: https://www.instagram.com/pojacalo.rs/ X: https://x.com/PojacaloRS LN: https://www.linkedin.com/company/pojacalo TikTok: https://www.tiktok.com/@pojacalo.rs
Bobswinkles, it's a fizz whizzer! Yes, you heard that right, the time has come for us to climb into a giant bag and be transported to Giant Country, where we'll sample the finest snozzcumbers and frobscottle (though you'll forgive us if there's a whizzpopper or two). We're joined by the BFI Southbank's Head of Cinema Programme, Justin Johnson, for this wide-ranging discussion of Steven Spielberg's 2016 adaptation of Roald Dahl's THE BFG. It's an episode as giant as Fleshlumpeater but far less mean, taking in everything from the film's long gestation to its struggle to make an impact, from the mo-cap performances and CGI compositing to the wide array of accents (some more explicable than others). Whoopsie scrumpers!
Follow the podcast on Twitter (@RamblinAmblin), Instagram (@ramblinamblinpod) and Bluesky (@ramblinamblin.bsky.social). Be sure to like and subscribe so you don't miss an episode! Get in touch with us either via our socials or email rambinaboutamblin@gmail.com. Please feel free to give us a 5-star review, share your favourite Amblin movies and tell us if ET makes you cry.
Ramblin is created and produced by Andrew Gaudion and Joshua Glenn. A special thanks as always to Emily Tatham for the artwork, and Robert J. Hunter & Greg Sheffield for the theme music.
The Dean Von Music Podcast Show Coming to you Live from Las Vegas, Nevada
Watch and Listen to The Dean Von Music Podcast Show, Live out of Las Vegas, Nevada, Every Thursday Night at 6PM PST. Music Podcast from around the world.
Take a cinematic journey through emotion, soul, and inner transformation with her new release, “Lake of Dreams” — a musical fantasy project and a vision beyond sound, created by Karina Sher and newly released by Ragebreed Records.
KARINA SHER CHAPTERS
0:00:00 Intro
0:01:39 Dean welcomes Karina Sher to the stage.
0:03:25 Q' Do you do your own artwork and digital photography?
0:04:03 Q' When did you create Ragebreed Records?
0:06:40 Q' Tell me a little bit about your music career?
0:08:43 Q' Tell me about your diverse style of songwriting?
0:09:57 Q' Who are your influences of ‘female singers'?
0:11:48 Karina talks more about ‘Lake of Dreams'…
0:15:59 Q' What band is playing on ‘Rage of Masses'?
0:17:45 [MUSIC VIDEO] - LAKE OF DREAMS “HOLD ME” by Karina Sher
0:23:01 Q' What inspired you to start ‘Lake of Dreams'?
0:26:09 Q' Tell me a little bit about where you grew up?
0:28:16 Q' How did you meet Brian Mohr?
0:31:32 [CHAT ROOM SHOUT-OUT]
0:33:40 Q' Is the film ‘Lake of Dreams' going to include CGI and real actors?
0:36:49 Q' Tell me more about your music influences?
0:42:54 Q' Tell me more about ‘Patterns of Non-Existence' Lake of Dreams?
0:43:56 [MUSIC VIDEO] - LAKE OF DREAMS - PATTERNS OF NON-EXISTENCE
0:48:12 Q' Tell me more about yourself and your many talents?
0:51:14 Q' Does being a ‘Life Coach' get in the way of running RAGEBREED RECORDS?
0:53:59 Q' How many bands are on your label now?
0:56:52 Q' What is your relationship with Earache Distribution?
0:58:20 Q' How many DJ's do you have on your ‘Radio Program'?
1:04:28 Karina Sher talks about the new release of the ‘Mindlapse Album' exclusive to ‘Lake of Dreams'
1:05:55 [MUSIC VIDEO] - LAKE OF DREAMS - THE DAY AFTER - BY RAGEBREED RECORDS
1:13:16 Q' Tell me a little bit more about that song, The Day After from ‘Lake of Dreams'?
1:18:16 Q' Do you have a Podcast show?
1:20:20 [TALK TO THE BAND] STARTS NOW! 1ST CALLER IN AND WINNER OF THE PODCAST MUG!
1:35:52 Q' Is there anything you want to tell the audience and your fans what's coming up next?
1:37:10 “Thank You” Karina for showing up tonight and “Thank You” to the audience, Good Night!
================================
KARINA SHER LINKS
https://ragebreed.bigcartel.com/
https://www.ragebreed.biz/
https://www.karinasher.net/
Jay is back, and he's done a lot in the last couple of years. From vagina aliens in "Video Shop Tales of Terror II" to award-winning face rips for "The Domestication of Vampires in Essex" and gruesome appliances in "Darner", we have lots of fun things to talk about, as well as dipping into movies out now with a good mix of CGI and practical effects, and the new He-Man movie. Stop in and prepare for the wild and bloody, as he did bring some of his work!
Remember Cliffhanger? The movie where the Hero fights gravity, bad guys, and basic OSHA regulations? Starring Sylvester Stallone as a man whose primary skill set is hanging from things dramatically… and John Lithgow as a villain who woke up and chose theatrical villainy. You probably remember mountains. Snow. A helicopter doing things helicopters absolutely should not be doing. And Lithgow is just chewing scenery like he hasn't eaten in days. The plot? Oh, it's simple. Suitcases full of money fall out of the sky, a bunch of criminals decide mountain climbing is now part of their career path, and Stallone spends two hours dangling from things that were never meant to hold a human being. So join us as we head back to the '90s — when action heroes didn't need CGI, villains had accents for no reason, and the solution to every problem was either punching it… shooting it… or letting it fall off a cliff.
IP Fridays - your intellectual property podcast about trademarks, patents, designs and much more
I am Rolf Claessen and together with my co-host Ken Suzan I welcome you to Episode 172 of our podcast IP Fridays. Today's interview guests are Co-Founder & CEO of Inception Point AI, Jeanine Wright, and Mark Stignani, who is Partner & Chair of the Analytics Practice at Barnes & Thornburg LLP. https://www.linkedin.com/in/jeaninepercivalwright https://www.linkedin.com/in/markstignani Inception Point AI
But before the interview I have news for you: The Unified Patent Court (UPC) ruled on Feb 19, 2026, that specialized insurance can cover security for legal costs. This is vital for firms, as it eases litigation financing and lowers financial hurdles for patent lawsuits by removing the need for high liquid assets to enforce rights at the UPC.
On Feb 12, 2026, the WIPO Coordination Committee nominated Daren Tang for a second six-year term as Director General. Tang continues modernizing the global IP system, focusing on SMEs, women, and digital transformation. His confirmation in April is considered certain.
An AAFA study from Feb 4 reveals 41% of tested fakes (clothing/shoes) failed safety standards. Many contained toxic chemicals like phthalates, BPA, or lead. The study highlights that counterfeiters increasingly use Meta platforms to sell unsafe imitations directly to consumers.
China's CNIPA 2026 report announced a crackdown on bad-faith patent and trademark filings. Beyond better examination quality, the agency will sanction shady IP firms and stop strategies violating “good faith” to make China’s IP system more ethical and innovation-friendly.
Now, let's hear the interview with Jeanine Wright and Mark Stignani!
How AI Is Rewiring Media & Entertainment: Key Takeaways from Ken Suzan's Conversation with Jeanine Wright and Mark Stignani
In this IP Fridays interview, Ken Suzan speaks with two repeat guests who look at the same phenomenon from two angles: Jeanine Wright, Co-Founder & CEO of Inception Point AI, as a builder of AI-native entertainment, and Mark Stignani, Partner and Chair of the Analytics Practice at Barnes & Thornburg LLP, as a lawyer advising clients who are trying to use AI without stepping into a legal (or ethical) crater. What emerges is a clear picture: generative AI is not just “another tool.” It is rapidly becoming the default infrastructure for creative work—while the rules around ownership, consent, and accountability lag behind.
1) What “AI-generated personalities” really are (and why that matters)
Jeanine's company is not primarily “cloning” real people. Instead, Inception Point AI creates original, fictional personalities—characters with backstories, ambitions, and evolving arcs—then deploys them into the world as podcast hosts and content creators (and eventually actors and musicians). Her key point: the creative work still starts with humans. Writers and creators define the concept, tone, audience, and story engine. What AI changes is speed, cost, and iteration—and therefore what is economically feasible to produce.
2) The “generative content pipeline” isn't a magic button
A recurring misconception Ken raises is the idea that someone “pushes a button” and content pops out. Jeanine explains that real production looks more like a hybrid studio: A creative team defines character, voice, format, and storyline. A technical team builds what she calls an “AI orchestration layer” that combines multiple models and tools. The “stack” differs by format: the workflow for a long-form audio drama is different from a short-form beauty clip.
This matters because it reframes AI content not as a single output, but as a pipeline decision: which tools, which data sources, which QA, and which governance steps are used—and where human review happens.
3) The biggest legal questions: origin, liability, ownership, and contracts
Mark doesn't name a single “top issue.” He describes a cluster of problems that repeatedly show up in client conversations:
Training data and “origin story”: Clients keep asking: Can I legally use AI output if the tool was trained on copyrighted works? Even if the output looks new, the unease is about whether the tool's capabilities are built on unlicensed inputs.
Liability for unintended harm: Mark flags risk from AI content that inadvertently infringes, defames, or carries bias. The legal exposure may not match the creator's intent.
Ownership and protectability: He points to a big gap: many jurisdictions are still reluctant to grant classic IP rights (copyright or patent-style protection) to purely AI-generated material. That creates uncertainty around whether businesses can truly “own” what they produce.
Old contracts weren't written for AI: A final, practical point: many agreements—talent contracts, author clauses, data licenses—predate generative AI and simply don't address it. That leads to disputes about scope, permissions, and—crucially—indemnities.
4) Are we at a tipping point? The “gold rush” vs. “next creative era” views
Jeanine frames AI as “the world's most powerful creative tool”—comparable to previous step-changes like animation, special effects, and CGI. For her, the strategic implication is simple: creators who learn to use AI well will expand what they can build and test, faster than ever.
Mark's metaphor is more cautionary: he calls the moment a “gold rush” where technology is sprinting ahead of law. Courts are getting flooded with foundational disputes, while legislation is fragmented—he notes that states may move faster than federal frameworks, and that labor agreements (e.g., union protections) will be a key pressure point.
5) Democratization: more creators, more niche content, more experimentation
One of the most concrete themes is access. Jeanine argues AI will: Lower production barriers for independent filmmakers and storytellers. Reduce the need for “hit-making only” economics that dominate Hollywood. Make micro-audience content commercially viable.
Her example is intentionally niche: highly localized, specialized content (like a “pollen report” for many markets) that would never have made financial sense before can now exist—and thrive—because the production cost drops and personalization scales.
6) Likeness, consent, and “digital performers”: what happens when AI resembles a real actor?
Ken pushes into a sensitive area: what if someone generates a performance that closely resembles a living actor without consent? Mark outlines the current (imperfect) toolbox—because, as he emphasizes, most laws weren't built for this scenario. He points to practical claims that may come into play in the U.S., such as rights of publicity and false endorsement-type theories, and notes that whether something is parody or “too close” can become a major fault line.
Jeanine explains her company's operational approach: They focus on original personalities, designed “from scratch.” They build internal checks to avoid misappropriating known names, likenesses, or recognizable identities. If they ever work with real people, the model would be licensing their likeness/voice.
A subtle but important business point also appears here: Jeanine expects AI-native characters themselves to become licensable assets—meaning the entertainment economy may expand to include “celebrity rights” for fully synthetic personalities.
7) Ethics: the real line is “deception,” not “AI vs. human”
The ethical core of the conversation is not “AI is bad” or “AI is good.” It's how AI is used—especially whether audiences are misled.
Mark highlights several ethical risks: Misuse of tools to manipulate faces and content (“AI slop” and political misuse). Displacement of creative workers without adequate transition support. A concern that AI often optimizes toward “statistical averages,” potentially flattening originality.
Jeanine agrees ethics must be designed into the system. She describes regular discussions with an ethicist and emphasizes a principle: transparency. Her company discloses when content or personalities are AI-generated. She argues that if people understand what they're engaging with and choose it knowingly, the ethical problem shifts from “AI exists” to “Are we tricking people?”
Mark adds a real-world warning: deepfakes are now credible enough to enable serious fraud—he references a case-like scenario where a synthetic video meeting deceived an employee into authorizing a payment. The point is clear: authenticity and verification are no longer optional.
8) The “dead actor” hypothetical: legal permission vs. moral intent
Ken raises a provocative scenario: an actor's estate authorizes an AI-generated new performance, but the actor opposed such technology while alive. Neither guest offers a simplistic answer. Jeanine suggests that even if the estate holds legal rights, a company might choose to avoid such content out of respect and because the ethical “overhang” could damage the storytelling outcome. She also notes the harder question: people who died before today's capabilities may never have been able to meaningfully consent to what AI can now do—raising questions about how we interpret legacy intent. Mark underscores the practical contract problem: many rights are drafted “in perpetuity,” but that doesn't automatically settle the ethical question.
9) Five-year forecast: “AI everywhere,” but audiences may stratify
Ken closes with a prediction question: in five years, how much entertainment content will significantly involve AI—and will audiences care? Jeanine predicts AI becomes the default creative layer for most content creation. Mark is slightly more conservative on the percentage, but adds an important nuance: the market will likely stratify. Low-cost, high-volume content may become saturated with AI, while premium segments may emphasize “human-made” as a differentiator—especially if disclosure norms become standard.
Bottom line for business leaders and creators
This interview lands on a pragmatic conclusion: AI will change how content is made at scale, and the competitive edge will go to teams that combine creative taste, operational discipline, and legal/ethical governance. If you're building, commissioning, or distributing content, the questions you can't dodge anymore are: What's the provenance of the tools and data you rely on? Who is responsible when output harms, infringes, or misleads? What rights can you actually claim in AI-assisted work? Do your contracts and disclosures match the new reality?
Ken Suzan: Thank you, Rolf. We have two returning guests to the IP Fridays podcast. Joining me today is Jeanine Wright and Mark Stignani.
Our topic for discussion: how is AI transforming the media and entertainment industries today? We look at the issues from differing perspectives. A bit about our guests: Jeanine Wright is a seasoned board member, CEO, global COO and CFO. She's led organizations from startup to a $475 million plus revenue subsidiary of a public company. She excels in growth strategy, adopting innovative technologies, scaling operations and financial management. Jeanine is a media and entertainment attorney and trial litigator turned technologist and qualified financial expert. She is the co-founder and CEO of Inception Point AI, a growing company that is paving new ground with AI-generated personalities and content through developing technology and story. Mark Stignani is a partner with Barnes & Thornburg LLP and is based in Minneapolis, Minnesota. He is the chair of the data analytics department with a particular emphasis on artificial intelligence, machine learning, cryptocurrency and ESG. Mark combines the power of artificial intelligence and machine learning with his skills as a corporate and IP counsel to deliver unparalleled insights and strategies to his clients. Welcome, Jeanine and Mark, to the IP Fridays podcast.
Jeanine Wright: Thank you. Thank you. Thank you so much for having me, and fun to be back. It feels nostalgic to be here.
Ken Suzan: That's right. And you both were on the program. So it's fantastic that you're both back again. So our format: I'm going to ask a question to Jeanine and/or Mark, and sometimes to both of you. So that's going to be how we proceed. Let's jump right in. Jeanine, your company creates AI-generated actors. For listeners who may not be familiar, can you briefly explain what that means and what's now possible that wasn't even two years ago?
Jeanine Wright: Sure. Yeah, we are creating AI-generated personalities. So new characters, new personalities from scratch. We design who these personalities are and will be, how they will evolve. So we give them complex backstories. We give them hopes and dreams and aspirations. We design every aspect of them: their families, how they're going to evolve. And in the same way that, say, you know, Disney designs the character for its next animated feature or, you know, Electronic Arts designs a character for its next major video game, we are doing that for these personalities, and then we are launching them into the world as podcast hosts, content creators on social platforms like YouTube, Instagram and TikTok. And even, in the future, you know, actors in feature-length films, musicians, etc.
Ken Suzan: Very fascinating. Mark, from your practice, what's the single biggest legal question or dispute you're seeing clients wrestle with when it comes to AI and media creation?
Mark Stignani: Well, I think that, you know, it's not just one thing, it's like four things. But most of them tend to be kind of the origin story of AI data or AI tools that they use, because, you know, but for the use of AI tools trained on copyrighted materials, the tools wouldn't really exist in their current form. So a lot of my clients are wondering about, you know, can I legally use this output if it's built upon somebody else's IP? The second ask, the second flavor of that, is really: is there liability being created if I take AI content that inadvertently infringes or defames or carries bias? So there's the whole notion of training bias from the training materials that comes out. The third phase is really, you know, can I really own this?
Because much of the world does not really give IP rights to AI-generated inventions, copyrighted materials. It's still kind of a big gray area. Then at the end of the day, you know, if it's an existing relationship, does my contract even contemplate this? So everything from authors' contracts on up to just use of data rights that predate AI.
Ken Suzan: And Jeanine and Mark, a question to both of you. How would you describe where we are right now in the AI revolution in media and entertainment? Are we approaching a tipping point? And if so, what are the things we need to watch for?
Jeanine Wright: Yeah, I definitely think that we're at a phase where people are starting to come to the realization that AI is the world's most powerful creative tool. But that, you know, storytelling and point of view is what creates demand and audiences. And AI doesn't threaten or change that. But it does mean that as people evolve in this medium, they're very likely going to need to adopt, utilize and figure out how to hone their craft with this AI-generated content and this AI-generated tooling. So this is, you know, something that people have done certainly in the past in all sorts of ways in using new tools. And we've seen that make a significant change in the industry. So you look at, you know, the dawn of animation as a medium. You look at the use of special effects, computer-generated imagery in the likes of Pixar. And this is certainly the next phase of that evolution. But because of the power of the tool and what will become the ubiquity of the tool, I think that it's pretty revolutionary and all the more necessary for people to figure out how to embrace this as part of their creative process.
Ken Suzan: Thank you, Jeanine. Mark, your thoughts?
Mark Stignani: Yeah, I mean, I liken this historically to the California gold rush right now, because, you know, the technology has so far outpaced any of the legal frameworks that are available. And so we're just trying to shoehorn things in left and right here. So, I mean, the courts are beginning to start to engage with the foundational questions. I don't think they're quite there yet. I just noticed Anthropic got sued again by another group of people, a big music group, because of the downloaded works they've done. I mean, so the courts are, you know, the courts are certainly inundated with, you know, too many of these foundational questions. Legislatively, hard to tell. I mean, federal law, the federal government is not moving uniformly on this, other than to let the gold rush continue without much check and balance to it. Whereas states are now probably moving a lot faster. Colorado, Illinois, even Minnesota is attempting to craft legislation and limitations on what you can do with content and where to go with it. So, I mean, the things we need to watch for are any of the fair use decisions coming out here, you know, some of the SAG-AFTRA contract clauses. And, you know, again, the federal government, I just, you know, I got a big shrug going as to what they're actually going to come up with here in the next 90 to 100 days. So, but, you know, I think they'll be forced into doing something sooner than later.
Ken Suzan: Okay, let's jump into the topic of the rise of generative content pipelines. My first question is to Jeanine. Studios and production companies are now building what some call generative content pipelines. This is where AI systems produce everything from scripts to visual effects to voice performances.
What efficiencies and creative possibilities does this unlock for the industry?
Jeanine Wright: Yeah, so this is quite a bit of what we do. And if I could help pull the curtain back and explain a little bit.
Ken Suzan: That'd be great.
Jeanine Wright: Yeah, there's this assumption that, you know, somebody is just sitting behind a machine pushing a button and out pops, you know, what it is that we're producing. There's actually quite a bit of humans still in the loop in the process. You know, we have half my team as creators. The other half of my team is the technologists. And those creators are working largely at what we describe as the tip of the spear. So they're, of course, coming up with the concepts of who are these personalities, what are these personalities' characters and backgrounds going to be; a lot of, like, rich personality development. And then they're creating, like, what are the formats? What are the kinds of story arcs? What are the kinds of content that this character wants to tell? And what are the audiences they're desiring to reach, and what's most going to resonate with them? And then what we built internally is what we refer to as an AI orchestration layer. So that allows us to pull from basically all of the different models and then all of these different really cool AI tools, and put those together in such a way and combine those in such a way that we can have the kind of output that our creative team envisions for what they want it to be. And at the end of the day, what the stack looks like for, say, a long-form audio drama, like the combination of LLMs that we're going to use in different parts of scripting and production and, you know, ideating and all of that, and the kinds of tooling that we use to actually make it and get it to sound good and have the kinds of personality characteristics that we want to be in an authentic voice for a podcast, is going to be different than the tech stack and the tool stack that we might use for a short-form Instagram beauty tip reel. And so there's a lot of art in being able to pull all of these tools together to get them to do exactly what you want them to do. But I think the second part of your question is just as interesting as the first. I mean, what possibilities is this unlocking? So of course you're finding efficiencies in the creative production process. You can move faster. You can do things less expensively, perhaps, than you were able to before. But on the creator side, I think one thing that hasn't been talked about enough is how it has really, like, blown wide the aperture of what creators can do and can envision. Traditionally, you know, Hollywood, podcasting, many of these businesses that become big businesses have become hit-making businesses, where they need to focus on a very narrow band of wide gen-pop content that they think is going to get tens of millions, hundreds of millions in, you know, fans and dollars in revenue for every piece of content that they make. So the problem with that is that it really narrows the kinds of things that ultimately get made, which is why you see things happening in Hollywood like the Blacklist, which is, you know, this famous list of really exceptional content that remains unproduced, or why you see things like, you know, 70 to 80% of the top 100 movies being based on pre-existing IP, right? Because these are such huge bets that you need to feel very confident that you're going to be able to get big, big audiences and big, big dollars from it.
But with AI, and really lowering the barrier to entry, lowering the costs of production and marketing, the experimentation that you can do is really, really phenomenal. So, you know, my creative team, if they have an idea, they make it. You know, they don't have to wring their hands through, like, a greenlighting process of, you know, should we, shouldn't we. Like, we can make and experiment with lots of different things, we can do various different versions of something. We can see what would this look like if I placed it in the 1800s, or what if I gave this character an Australian accent. And it's just the power of being able to have this creative partner that can ideate with you and experiment with you at rocket speed. With the creators that are embracing it, you can see how it is really fun for them to be able to have this wide of a range of possibility.
Ken Suzan: Mark, when you hear about these generative pipelines, what are the immediate red flags or concerns that come to mind from a legal standpoint? How about ethics underlying all of this?
Mark Stignani: Well, that was not… that's the number one red flag. Because, I mean, we are seeing not just that in the entertainment industry, but literally at political levels, and, to turn a phrase, "AI slop" being generated. We're seeing, you know, people's facial expressions altered. In some cases, we're seeing AI tools being misused to exploit various groups of individuals and genders and age groups. So, I mean, there's a whole lot of things ethically that people are using AI for that just don't quite cover it. Especially in the entertainment industry, I mean, we're looking at a fair amount of displacement of human workers without adequate transition support, devaluation of the creative labor. I mean, the thing, though, that I always come back to from a technical standpoint is: AI is simply a statistical average of most everything. So it kind of devalues the benefit of having a human creator, a human contribution to it. That's the ethical side. But on the legal side, I see chain-of-title issues. I mean, because these are built on very questionable IP ownership stages, I mean, in most of these tools, there has been some large copying, training and taking of copyrighted materials. Is it transformational? Maybe. But there's certainly not a chain of title, nor is there permission granted for that training. I mentioned SAG-AFTRA earlier; I think there's a potential set of union contract aspects to this, in that, you know, many of these agreements and use sub-licenses for authors and actor agreements weren't written with AI in mind. So that's another red flag. And also, I just think, indemnification. So if we ultimately get to a point where groups are liable for using content without previous license, then who's liable? Is the tool maker the liable group, or the actual end user? So those are probably my top four red flags. But I think ethics is probably my biggest place, because just because we can do something, from an ethical standpoint, doesn't mean we should.
Jeanine Wright: Yeah, if I can respond to both of those points. I mean, one, from a legal perspective, just to be very clear: I mean, we are always pulling from multiple different models and always pulling from multiple different sources. And we even have data sources that we license or use for a single source of truth on certain pieces of information. So we're always pulling things together from multiple different sources.
We also have built into our process, you know, internal QAing and checking to make sure that we're not misappropriating the name or likeness of any existing known personality or character. We are creating original personalities there. We design their voice from scratch. We design their look from scratch. So on our personality side, we're not pulling or even taking inspiration from existing intellectual property that's already out there in creating these personalities. On the ethical side, I agree. I mean, when we came out of stealth, we came out of stealth in September, there was certainly quite a bit of backlash from folks in my—I previously co-founded a company in the audio space. I mean, there's been many rounds of layoffs in audio and in many other parts of the entertainment industry. So I'm very sensitive to the feedback around, like, is this job displacement? I mean, I do think that the CEO of NVIDIA said it right when he said: you're likely not going to lose your job to AI, but you will lose your job to somebody who knows how to use AI. I think these tools are transforming the way that content is made, and that the faster that people can embrace this tooling, the more likely they're going to be having the kinds of roles that they want in, you know, content creation and storytelling in the future. And we are hiring. I'm hiring AI video creators, AI audio creators. I'm hiring AI developers. So people who are looking for those roles, I mean, please reach out to me; we would love to work with you and we'd love to grow with you. We also take the ethics very seriously. For the last few months or so, I've met regularly with an ethicist; we talk about all sorts of issues around, you know, is designing AI-generated people, you know, good for humanity? And what about authenticity and transparency and deception, and how are we, in building in this space, going to avoid some of the problems that we've seen with things like social media and other forms of technology? So we keep that very top of mind, and we try to build on our own internal values-based system and, you know, continue to elevate and include the humanity as part of the conversation.
Ken Suzan: Thank you, Jeanine. Jeanine, some argue that AI content pipelines will level the field for filmmaking, giving independent creators access to tools that were once available only to major studios. Is that the future you envision?
Jeanine Wright: I do think that with AI you will see an incredible democratization of access to technology and access to these capabilities. So I do think, you know, rise of independent filmmakers: you won't have as many people who are sitting on a brilliant idea for the next fantastic script or movie that just cannot get it made, because they will be able, with these tools, to get something made and out there, at least to get the attention of somebody who could then decide that they want to invest in it at a studio kind of level in the future. The other thing that I think is really interesting is that I think, you know, AI will empower more niche content and more creators who can thrive in micro-communities. So it used to be, because of this hit-generation business model, everything needed to be made for the masses, and a lot of content for niche audiences and micro-communities was neglected because there was just no way to make that content commercially viable.
But now, if you can leverage AI—we make a pollen report podcast in 300 markets, you know; nobody would have ever made that before, but it is very valuable information, a very valuable piece of content for people who really care about the pollen in their local community. So there's all sorts of ways that being able to leverage AI is making it more accessible, both to the creator and to the audience that is looking for content that truly resonates with them.
Ken Suzan: Mark, let's talk about the legal landscape right now. If someone creates an AI-generated performance that closely resembles a living actor without their consent, what legal recourse does that actor have?
Mark Stignani: Well, I mean, I think we can go back to the OpenAI Scarlett Johansson thing where, you know, if it's simply—well, the "walks like a duck, quacks like a duck" type of aspect there. You know, I think it's pretty straightforward that they need to walk it back. I mean, the US doesn't have moral rights, really, but there's a public visage right, if you will. And so, one of the things that I find predominantly useful here is that these actors likely have rights of publicity there, we probably have a Lanham Act false endorsement claim, and, you know, again, if the performance is not parody, and it's so close to the original performance, we probably have a copyright discussion. But again, all of these laws predate the use of AI, so we're going to probably see new sets of law. I mean, we're probably going to see "resurrection" frameworks, we'll probably have frameworks for synthetic actors and likenesses, but the rules just aren't there yet. So, unfortunately, your question is largely predictive versus well-settled at this point.
Ken Suzan: Jeanine, your company works with AI actors. How do you navigate the questions of consent and likeness compensation when creating digital performers?
Jeanine Wright: I mean, if we—so first of all, if we were to work with a person who is an existing real-life person, or was an existing real-life person, then we would work with them to license their name and likeness or their voice or whatever aspects of it we were going to use in creating content in partnership with them. That's not typically our business model; we are, as I said, designing all of our personalities from scratch and making all of our content originally. So we've not had to do that historically. Now, you know, the flip side is: can I license my characters as if they're similar to living characters? Like, will I be able to license the name and likeness and voice of my AI-generated personalities? I think the answer is yes, and we're already starting to do that.
Ken Suzan: Let's just switch gears into ethics and AI, because I find this to be a really fascinating issue. I want to look at a hypothetical. And this is to both of you, Jeanine and Mark: an AI system creates a new performance by a beloved actor who passed away decades ago, and the actor's estate authorizes it, but the actor was known to have expressed opposition to such technology during their lifetime. Is this ethical?
Jeanine Wright: This feels like a Gifts, Wills, and Trusts exam question.
Ken Suzan: It sounds like it, that's right.
Jeanine Wright: Throwing me back to my law school days, exactly. What are your thoughts? It'd be interesting to see, like, who has the rights there. I mean, I think if you have the legal rights, the question is around, you know, is it ethical to go against what you knew was somebody's wishes at the time? I guess the honest answer is: I don't know.
It would depend a lot on the circumstances of the case. I mean, if we were faced with a situation like that where there was a discrepancy, we would probably move away from doing that content, out of respect for the deceased and out of a feeling that, you know, if this person felt strongly against it, then it would be less likely that you could make that storytelling exceptional in some way—it would color it in a way that you wouldn't want in the outcome. And I feel like there's—I mean, certainly going forward, and it's already happening—there are plenty of people, I think, who have name, likeness, and voice rights that they are ready to license that wouldn't have this overhang.
Ken Suzan: Mark, your thoughts?
Mark Stignani: Yeah, I mean, again, I have to kind of go back to our property law—the Rule Against Perpetuities. You know, from a property standpoint to AI rights and likenesses, since most of the digital replica contracts that I've reviewed generally do talk about things in perpetuity. But if it's not written down for that actor and the estate is doing this—is it ethical? You know, that is the debate.
Jeanine Wright: Well, gold star to you, Mark, for bringing up the Rule Against Perpetuities. There's another one that I haven't heard for many years. This is really taking me back to my law school days.
Ken Suzan: It's a throwback.
Jeanine Wright: The other thing that's really interesting is that this technology is really so revolutionary and new that it's hard to even contemplate now what it is going to be in a decade, much less for people who have passed away to have contemplated what the potential for it could be today. So you could have somebody who is, perhaps, a deceased musician who expressed concerns about digital representations of themselves or digital music while they were alive. But now, the possibility is that you could recreate—certainly I could use my technology to recreate—that musician from scratch in a very detailed way, trained on tons of different available data. Not just like a digital twin or a moving image of them, but to really rebuild their personality from scratch, so that they and their music could be reintroduced to totally new generations in a way that is very respectful and authentic to them. It's hard to know, with the understanding that that is possible, whether or not somebody who is deceased today would or would not agree to something like that. I mean, many of them might want, under those circumstances, for their music to live on. These deceased actors and musicians could live forever with the power of AI technology.
Mark Stignani: Yeah, I really just kind of go to the whole—is deep-faking a famous actor the best way to preserve them or keep them alive? Again, that's a bit more of an ethical question, because the deepfakes are getting good enough right now to create huge problems. Even Zoom meetings in Hong Kong, where a CFO was on a call with five synthetic actors who all looked like his coworkers, and they sent a big check out based upon that. So again, the technology is getting good enough to fool people.
Jeanine Wright: I think that's right, Mark, but I guess I would just highlight, the same way that it always has been: the ethical line isn't AI versus human, the ethical line is about deception. Like, are you deceiving people? And if people know what it is that they're getting, and they're choosing to engage with it, then I think it isn't about the power of the technology. In our business, we have elected—not everybody has—but we have elected to be AI transparent.
So we tell people when they listen to our show, we include it in our show notes, we include it on our socials. Even when we're designing our characters to be very photo-realistic, we make an extra point to make sure that people know that this is AI-generated content or an AI personality. Like, our intention is not to deceive. And, to be candid, from a business model perspective, we don't need to. I mean, there's already people who know and understand that it is AI, and AI is different than people. Because it is AI, there's all sorts of things that you can do with it that you would not be able to do with a real person. You know, we get people who ask us on the podcast side, we get all sorts of crazy funny requests. You know, people who say, "Can I text with this personality? Can I talk to them on the phone? Can they help me cook in the kitchen? Can they sing me Happy Birthday? Can they show up at my Zoom meeting today, because I think my boss would love it?" You know, all sorts of different ways that people are wanting to engage with these characters. And now we're in the process of rolling out real-time personalities, so people will be able to engage with our personalities live. It is a totally different way that people are able to engage with content, and people can, as they choose, decide what kind of content they want to engage with.
Ken Suzan: Jeanine and Mark, we're coming to the end of this podcast. I would love to keep talking for hours, but we have to stick to our timetable here. Last question: five years from now, what percentage of entertainment content do you predict will involve significant AI generation, and will audiences care about that percentage? Jeanine?
Jeanine Wright: I mean, I would say 99.9%. I mean, already you're seeing—I think YouTube did a survey—that it was like 90% of its top creators said that they're using AI as material components of their content creation process. So, I think this will be the default way that content is created. And content that is not made with AI, you know, there'll be special film festivals for non-AI-generated content, and that will be a special, separate thing from the thing that everybody is doing now.
Ken Suzan: Mark, your thoughts?
Mark Stignani: Yeah, I go a little lower. I mean, I think Jeanine is right that we're seeing, especially in the low-quality content creation, like the YouTube shorts and things like that, you know, there's so much AI being pushed forward that the FTC even applied an "AI slop" title to it. I do think that disclosure will become normalized, that the industries will be pushed to say when something is AI and what is not. And I think it's very much like, you know, do you care about quality or not? If you value the human input or the human factor in this, there will be an upper tier where it's "AI-free" or low AI assistance. I think that it's going to stratify, because the stuff coming through the social media platforms right now—I can't be on it right now, just because there's so much nonsense. Even my children, who are without much AI training at all, find it just too unbelievable for them. So, I think it will become normalized, but I think that we're going to see a bunch of tiers.
Ken Suzan: Well, Jeanine and Mark, this has been a fantastic discussion of an ever-evolving field in IP law. Thank you to both of you for spending time with us today on the IP Fridays podcast.
Jeanine Wright: Thank you so much for having me.
Mark Stignani: Appreciate your time. Thank you again.
Creature Feature Mayhem: Attack of the Spiders
This week on Mostly Film, we're tackling the spider movie — and asking a simple question: What's scarier… giant mutant spiders, or the normal-sized ones living in your house right now?
We're pairing Arachnophobia (1990) with Eight Legged Freaks (2002) and breaking down classic restraint vs. early-2000s CGI chaos. One goes for quiet, domestic dread — shoes, bathtubs, coffee mugs. The other? Big, loud, B-movie swarm insanity.
We talk:
Exaggeration vs. realism
Visibility vs. suggestion
Camp vs. tension
Why spiders hit differently than sharks or lions
And which approach actually lingers longer
Plus: worst ways to die, terrible survival decisions, and whether we'd last 24 hours.
If you're new here, welcome to Mostly Film — find us wherever you get your podcasts (like or subscribe)
MONSTER PARTY WANTS YOU TO LAY BACK AND CUDDLE WITH A GOOD FILM. AND YES, WE MEAN THAT LITERALLY. JAMES GONIS, SHAWN SHERIDAN, LARRY STROTHE, and MATT WEINHOLD, share a banquet of horror, sci-fi, and fantasy films that always give them a case of the warm and fuzzies. Get ready to dig in as we present… MONSTER PARTY'S GENRE COMFORT FOOD!!! The film and TV tastes of the various hosts of MONSTER PARTY are as wide-ranging as the colors of the stars in Starcrash! Between the four of us, we go from high brow to low brow, mainstream to B movie, practical effects to CGI (within reason), and any other cool monster kid viewing we can lay our claws on. But there is a very unique class of cinematic serotonin that, during a stressful time or a lazy afternoon, can unfurrow our brows like a wet compress soaked in Romulan Ale. In this intimate and revelatory episode of MONSTER PARTY, the guys reveal what special films have the ability to calm their nerves like a dog anxiety vest. But don't expect just the usual easy-on-the-system classics like Forbidden Planet or Willy Wonka And The Chocolate Factory. Because for some, comfort can come from killer robots, murderous "children," or even some hills that have eyes. And speaking of which, what's the mystery meat in your comfort food? Joining us for this poo-poo platter of pleasing pictures is a guest making his MONSTER PARTY debut. And it's long overdue! He's not only a knowledgeable genre expert, but also the owner of one of the most amazing collectible stores in the country, BLAST FROM THE PAST! Please welcome… LARRY ROSS! Larry is a wonderful guy, and his shop is located at 3117 West Magnolia Blvd in historic Burbank, California. BLAST FROM THE PAST offers a stunning collection of new and vintage toys, comics, statues, movie posters, artwork, books, cosplay items, clothing, cards, and so much more. The store is also known for hosting a regular stand-up comedy show, as well as a challenging horror trivia night that always packs the house! Oh, and the staff is pretty outstanding as well! LISTEN TO MONSTER PARTY'S GENRE COMFORT FOOD! AND WHEN YOU'RE DONE, GO BACK FOR ANOTHER HELPING!
Join the HG101 gang as they discuss and rank a Game Boy Advance game based off a hybrid live-action/CGI kids television show. Then stick around for Arctic Eggs, a futuristic physics-based frying game! This weekend's Patreon Bonus Get episode will be STARSEED PILGRIM — a mysteriously minimalistic puzzle-platforming garden experience! Donate at Patreon to get this bonus content and much, much more! Follow the show on Bluesky to get the latest and straightest dope. Check out what games we've already ranked on the Big Damn List, then nominate a game of your own via five-star review on Apple Podcasts! Take a screenshot and show it to us on our Discord server! Intro music by NORM. 2026 © Hardcore Gaming 101, all rights reserved. No portion of this or any other Hardcore Gaming 101 ("HG101") content/data shall be included, referenced, or otherwise used in any model, resource, or collection of data.
Saddle up your ostrich (or punch it) for a quick and wacky sequel to the sequel to JUMANJI! After the '90s version was dusted off the shelf for the massive hit JUMANJI: WELCOME TO THE JUNGLE, 2019's THE NEXT LEVEL was rushed into production. Will excessive character swapping, an infusion of stars and a zoo's worth of CGI beasts be enough to make this one work?
Plus, thoughts on RELATIONSHIP GOALS, GOOD FORTUNE, HOW TO MAKE A KILLING, NIRVANNA THE BAND THE SHOW THE MOVIE, the TOP MODEL docuseries and more!
Logan (2017) Review kicks off this week's episode of Born to Watch, and boys… this is not your usual superhero movie.
Whitey, Gow and Damo head into the wasteland of 2029 to talk about the final outing for Wolverine, and right from the start the big question is asked: is this actually a superhero movie at all… or is it a western wearing claws?
After nearly two decades of Hugh Jackman playing Logan, the X-Men universe throws away the colourful costumes, the CGI sky beams and the multiverse nonsense, and replaces it with dust, silence and a dying hero who just wants it all to end.
This week, the boys dive into:
• Why Logan feels closer to a Clint Eastwood western than a Marvel film
• The emotional weight of Professor X and Logan's relationship
• Laura (X-23) stealing the movie without saying much at all
• The brutality and why the R-rating actually matters
• Whether this is the greatest superhero film ever made
Whitey argues that this is the natural evolution of comic book movies, a character study about regret and aging rather than saving the world. Gow admits he expected CGI chaos and instead got a real film. Damo questions the timeline, the X-Men continuity and whether the emotional ending works if it doesn't match the earlier movies.
The discussion also covers how Logan was clearly inspired by classic westerns, especially Shane, and why the movie works best when it forgets it's part of a franchise entirely.
Hugh Jackman delivers possibly his best performance as a broken warrior who no longer heals, drinks too much, hurts constantly and carries decades of guilt. Patrick Stewart's Professor X adds heart and tragedy, while the road-trip structure slowly turns the film into something surprisingly intimate.
And then… there's the ending.
No big sky battle. No final speech. Just consequences.
The boys debate whether Logan's death lands emotionally, if Laura is the future of the character, and why this film changed how studios approached superhero movies afterwards.
Is Logan the peak of comic-book cinema? Or just a really good western accidentally starring a superhero?
JOIN THE CONVERSATION
Is Logan the best comic book movie ever made? Does the R-rating improve superhero films? Is this secretly just a western?
Drop us a voicemail at https://www.borntowatch.com.au and be part of the show.
Listen now on Spotify, Apple Podcasts, or wherever you get your podcasts.
Don't forget to LIKE, SUBSCRIBE and follow Born to Watch for your weekly dose of nostalgia, arguments and completely unnecessary movie rankings.
#Logan #BornToWatch #MoviePodcast #Wolverine #HughJackman #XMen #FilmReview #WesternMovies #SuperheroMovies #MovieDebate
You can't revisit the '90s without calling Jonathan Lipnicki! Between Jerry Maguire and Stuart Little, he defined two of the most iconic characters of the decade, and now he's ready to share some memories with Danielle, Will and Rider. Jonathan talks about launching his acting career at just five years old, learning how to read screenplays (while also learning how to read in general) and the lengths he went to in order to protect his valuable face while playing sports in school. And when looking for Tom Cruise, Jonathan does NOT disappoint. I mean, do you even know about the "Tom Cruise cake"? So catch up with the only actor in Hollywood who co-starred with a CGI mouse AND has a black belt in Brazilian Jiu Jitsu - right here on Pod Meets World!
See omnystudio.com/listener for privacy information.
Tracey and Sally are back from their break and ready to continue forging a path through Hallmark's 2026 Winter Escape series. Tune in to hear our thoughts on books in suitcases and the first CGI moment that Tracey and Sally can actually support!
In this episode, first-time guest Derek Schlender ("Schlender 5 Productions") joins the podcast. He brings along the movie Dragonheart. Marty and Clif give Derek the movie Rollerball to watch.
Season 4 kicks off with a guest, and a double feature that couldn't be more different. Marty and Clif are joined by Derek Schlender to break down two films connected by spectacle, ambition, and wildly different results: Dragonheart (1996) and Rollerball (1975).
First up is Dragonheart, Rob Cohen's mid-'90s fantasy epic starring Dennis Quaid and featuring Sean Connery as the voice of Draco, the last dragon. The crew digs into peak-'90s CGI, bad medieval wigs and whether nostalgia is the only thing keeping this cult fantasy afloat. Is it a charming kids' adventure? A tonal mess? A necessary stepping stone between Jurassic Park and Lord of the Rings? All of the above?
Then it's on to Norman Jewison's Rollerball (1975), a dystopian sports thriller about corporate control, violent spectacle, and individualism crushed under the system. What starts as futuristic world-building turns into a surprisingly talky meditation on power, celebrity, and manufactured entertainment.
#TalkingPondo #Dragonheart #Rollerball #FilmPodcast #90sMovies #CultCinema #FantasyFilm #DystopianFilm
End of show plug by Marty
Support the show
Find our films here:
The Love Song of William H Shaw
Revenge of Zoe
Writing Fren-Zee
Making Pondo on Facebook
X (formerly Twitter): @MakingPondo
Instagram
Making Pondo on Letterboxd: Season One | Season Two | Season Three | Season Four
Theme Song "The Rain" by Russ Pace
Photos by Geoffrey Notkin
Join Diandre, CJ, and Josh as we rank the 25 WORST Superhero Movies Of All Time in this hilarious and heated episode. We kick things off with a chaotic round of "Lyrically Correct: 80s Edition" before getting straight into the cinematic disasters that comic book fans wish they could forget.
The crew holds nothing back as they roast the biggest flops in history. We break down the disappointment of the Galactus cloud in *Rise of the Silver Surfer*, the controversial CGI in *The Flash*, and the absolute betrayal of *Dragonball Evolution*. The debate gets intense over the fake Mandarin twist in *Iron Man 3*, the "Bat-nipples" in *Batman & Robin*, and the wasted potential of *X-Men Apocalypse*. From Shaq's performance in *Steel* to the confusing plot of *Glass* and the musical elements of *Joker 2*, we cover every cringeworthy moment. Tune in to see which movie earns the shameful title of the number one worst superhero film ever made.
#morbiusreview #badcomicmovies #batmanandrobin #dcmoviefails #marveldcflops
CHAPTERS:
00:00 - Podcast Intro & Welcome
0:22 - Lyrically Correct Game Segment
4:19 - Ranking Worst Superhero Movies
9:58 - X-Men Apocalypse Movie Review
10:47 - Iron Man 3 Critique
14:28 - Spider-Man 3 Disappointment
15:28 - Howard the Duck (1986)
17:05 - Shaq's Steel Movie Review
17:55 - Venom Let There Be Carnage
19:16 - Batman and Robin (1997)
21:35 - TMNT 3 Movie Review
22:18 - Morbius Marvel Movie Critique
23:51 - Jonah Hex DC Movie
25:20 - Thor The Dark World Review
28:55 - Turbo A Power Rangers Movie
35:45 - Ben Affleck Daredevil Movie
39:45 - Joker Folie à Deux Discussion
42:20 - Fantastic Four (2015) Review
45:35 - Elektra Marvel Movie Review
49:00 - M. Night Shyamalan Glass
53:06 - Wonder Woman 1984 Critique
56:44 - Worst Superhero Movie #1
58:27 - Worst Superhero Movie #2
1:00:18 - Worst Superhero Movie #3
1:02:50 - Recap and Future Episodes
1:04:10 - Final Thoughts and Goodbyes
1:05:14 - Shili E Segment
1:05:30 - Thank You to Listeners
1:05:46 - Upcoming Podcast Episodes
1:06:00 - Podcast Outro
BAFTA-winning filmmaker Bart Layton (The Imposter, American Animals) joins Giles Alderson and Dom Lenoir to discuss his massive new crime thriller, Crime 101 (2026). In this deep-dive interview, Bart reveals how he assembled a powerhouse cast including Chris Hemsworth, Mark Ruffalo, Halle Berry, and Barry Keoghan. We discuss his process for adapting Don Winslow's novella, why he prefers 'character-driven tension' over CGI spectacle, and the technical challenges of filming high-speed car chases on the L.A. freeways.
Whether you're a fan of Michael Mann-style thrillers or looking to learn how to bridge the gap between documentary and fiction, this is a masterclass you cannot miss.
FOOD FOR THOUGHT documentary out NOW | Watch it FREE HERE. A documentary exploring the rapid growth and uptake of the vegan lifestyle around the world. And if you enjoyed the film, please take a moment to share & rate it on your favourite platforms. Every review & every comment helps us share the film's important message with more people. Your support makes a difference!
Help us out and Subscribe, listen and review us on iTunes, Spotify, or wherever you get your podcasts but more importantly, tell your pals about this podcast. Thank you!
PODCAST MERCH
Get your very own Tees, Hoodies, on-set water bottles, mugs and more MERCH. https://my-store-11604768.creator-spring.com/
COURSES
Want to learn how to finish your film? Take our POST PRODUCTION COURSE https://cuttingroom.info/post-production-demystified/
PATREON
Big thank you to: Serena Gardner, Mark Hammett, Lee Hutchings, Marli J Monroe, Karen Newman
Want your name in the show notes or some great bonus material on filmmaking? Join our Patreon for bonus episodes, industry survival guides, and feedback on your film projects!
SUPPORT THE PODCAST
Check out our full episode archive on how to make films at TheFilmmakersPodcast.com
CREDITS
The Filmmakers Podcast is written and produced by Giles Alderson @gilesalderson
Edited by @tobiasvees
Logo and Banner Art by Lois Creative
Theme Music by John J. Harvey
Learn more about your ad choices. Visit megaphone.fm/adchoices
After a brief hiatus, the boys are back!
With Midjourney v8 expected next week, Drew and Rory zoom out and ask the bigger question: does v8 even matter as much as we think?
Because while everyone waits for v8, Kling 3.0 and Seedance 2.0 are raising the bar, Claude Code and Claude Cowork are quietly changing how builders operate, and Claude Agents are turning workflows into autonomous systems.
Meanwhile, Higgsfield is melting down in public, Hollywood is panic-maxxing, and creators are realizing that building "skills" inside LLMs might matter more than generating prettier images.
This episode breaks down:
• Why Midjourney v8's native 2K and edit models matter
• Why personalization could be the real differentiator
• How Claude Code is quietly enabling operator-level leverage
• Why skill-building beats agent hype
• What Kling 3.0 and Seedance 2.0 signal about video AI
• The real lesson behind Higgsfield's fallout
• Why the creative skill gap is widening right now
This episode moves from Midjourney roadmap analysis to AI workflow engineering to business survival strategy.
If you care about Midjourney v8, Claude Agents, Kling 3.0, Seedance 2.0, system prompts, autonomous workflows, or where creative leverage is actually going… this one isn't optional.
---
⏱️ Midjourney Fast Hour
00:00 – Winter chaos & NYC survival
03:59 – AI's quantum leap moment
06:51 – Radio vs podcasts analogy
08:55 – AI series vs Hollywood model
10:59 – Game of Thrones AI sequel
14:14 – CGI patchwork & filmmaking
16:14 – AI replacing exec decisions
18:03 – Seedance & model hype
19:48 – Midjourney v8 timeline
20:15 – Rating party (Round 2 + beyond)
23:18 – 2K native resolution talk
24:26 – Batch-four replacement
25:48 – Edit model improvements
26:14 – v8 text rendering progress
27:03 – Arbitrary resolution support
29:06 – Personalization in v8
30:36 – Mood boards as leverage
32:18 – AI overwhelm & X fatigue
34:12 – Claude agents & automation
36:04 – "Something is happening"
39:45 – AI skill-building strategy
45:47 – Pattern matching workflows
48:54 – Silicon Valley middle-out
50:24 – Claude comedy experiment
53:24 – Word clouds & ad thinking
55:14 – Hollywood recycling IP
58:41 – Marketing narrative engine
01:03:57 – Higgsfield controversy
01:11:36 – Final thoughts & sign-off
The game is afoot as we rush into this week's episode, as Darren tries very, very hard to convince us all he went to see The Housemaid for the plot (he also reads Playboy for the articles), while Lee watches Predator: Badlands and Crime 101 aka Temu Heat, and, in a moment of blind Valentine's Day induced terror and panic, tries to make a Victoria sponge while missing ingredients, equipment and basic skills... After which, icing sugar based disasters lead us on to this week's film... A groundbreaking moment in CGI history and a painfully 80s movie: the guys don their deerstalkers and stride off down the dark Victorian streets to learn about Young Sherlock Holmes.
Media Discussed This Week
Crime 101 - Theatrical Release
The Housemaid - Theatrical Release (also abandoned in the bushes next to readers wives circa 1989)
Predator Badlands - Disney+
Skyrim - PC, Xbox, Playstation, Switch, Digital Watch, Fridge Thermostat
Young Sherlock Holmes - VOD rental (YouTube, Apple+, Amazon Prime, Paramount+)
This week on The VHS Strikes Back, we dive into Erotic Ghost Story (1990), chosen by friend of the show John Hammond — a man clearly unafraid of wandering into the "back shelf" section of the video rental store. Directed by Lam Ngai Kai during the golden age of Hong Kong Category III cinema, this supernatural fantasy blends martial arts, folklore, horror and soft-focus seduction into one uniquely 90s experience.
Released at a time when Hong Kong cinema was pushing boundaries with stylised ghost stories and adult-themed genre hybrids, Erotic Ghost Story arrived as part of a wave inspired by the success of films like A Chinese Ghost Story. With elaborate costumes, theatrical lighting, wire-work action and unapologetic late-night cable energy, this is exactly the kind of film you'd discover at 11:47pm with the volume turned suspiciously low. It's mystical. It's chaotic. It's extremely 1990.
Trailer Guy Plot Summary
In a world… where fox spirits walk among us…
Three beautiful spirits descend from the mountains with one mission: to experience mortal life. But when a power-hungry sorcerer discovers their presence, desire turns to danger… and temptation awakens forces beyond control.
Magic will clash. Loyalties will be tested. Floorboards will creak ominously.
This summer… seduction has a supernatural side.
Erotic Ghost Story. You were watching it for the kung fu.
Fun Facts
Erotic Ghost Story was released in 1990 during the boom of Hong Kong Category III cinema, the local rating equivalent of adults-only material.
The film blends Chinese folklore about fox spirits (huli jing) with martial arts choreography and supernatural horror.
Director Lam Ngai Kai became known for stylised fantasy films that leaned heavily into theatrical lighting and elaborate costume design.
The movie was part of a wave of sensual supernatural films following the commercial success of romantic fantasy ghost stories in late-80s Hong Kong cinema.
Practical wire-work and in-camera effects were used for levitation and fight sequences — no CGI safety net here.
The English export title was intentionally provocative to help the film stand out in international VHS markets.
Category III films often relied on strong box office from late-night screenings and overseas rental sales.
The combination of eroticism and traditional mythology made the film controversial but commercially viable at the time.
The film developed a cult following among collectors of 90s Hong Kong fantasy cinema.
Its blend of supernatural themes and martial arts action makes it a frequent inclusion in "so-bad-it's-fascinating" VHS-era discussions.
Support the Show
If you enjoy the show and would like to support us, we have a Patreon here.
If you're listening on Apple Podcasts or Spotify, leaving us a 5-star review (and a short comment) really helps more people discover the show. It's quick, free, and makes a huge difference.
Referral links also help out the show if you were going to sign up:
NordVPN
NordPass
thevhsstrikesback@gmail.com
https://linktr.ee/vhsstrikesback
All right, you nerds - step into the dripping, wet Deep Dark Lounge, grab a drink and listen to our chat about 1983's "The Deadly Spawn". This episode is for folks who can enjoy a movie shot on 16mm film, made largely by amateurs on a $25,000 budget in the pre-CGI age, chock-full of fake blood and surprisingly good creature design and practical effects.
Sushi dinners with Rachel McAdams on a tropical island? That sounds like heaven to us, so we're not sure what Dylan O'Brien is complaining about in Sam Raimi's new horror comedy, Send Help (2026). On our latest Spooky Tuesday, we're getting to the bottom of it, though, as we revisit our Survivor days, push Teen Wolf on Monica, and break down all that CGI gore. Whether you saw this flick in 4DX 3D or just regular stylez sans piped-in tuna smell, there's no denying that it makes for a fun ride, so join us as we hop on a plane and head out to sea to get our hands dirty.
References:
Rachel McAdams, Dylan O'Brien, and Sam Raimi Break Down a Survival Scene from 'Send Help'
Sam Raimi Gushes About How Stephen King Gave Him His First Chance In The Film Business
Sam Raimi On Putting Rachel McAdams And Dylan O'Brien In Brutal Scenarios For 'Send Help'
Dylan O'Brien Goes ROGUE Interviewing Send Help Co-Star Rachel McAdams
Mean Girls "One Way or Another"
Rugrats The Movie (2000) - One Way Or Another
Can spectacle replace substance? Jeff Haecker, Patrick Mason, and Rob Leonardi weigh Thorin's fall and redemption, Bilbo's brave choice with the Arkenstone, and a battle bursting with CGI. Does The Battle of the Five Armies honor Tolkien—or bury his themes beneath excess? The post The Hobbit: The Battle of Five Armies appeared first on StarQuest Media.
Design is about more than just how something looks—it's about how it works for the people using it. On this episode of On Brand, I'm joined by Lee Hoddy, Executive Creative Director at Conran Design Group, to discuss how experience-led design can solve complex brand problems. We'll dive into how he leads multi-disciplinary teams to create meaningful work for global names like Sofitel and AstraZeneca, and why every great brand starts with a deep understanding of human needs, wants, and motivations.
What You'll Learn in This Episode
- How to map emotional friction points to find the gold in a brand experience
- Why the pursuit of human endeavor is the key to branding functional industries like pharma
- The reason storytelling acts as a sticky DNA thread across physical and digital touchpoints
- How to conduct a multidisciplinary orchestra by surfacing the ambition in every brief
- Why original ideas are the only way to escape the sea of sameness in an AI-driven world
Episode Chapters
(00:00) Intro
(01:22) Getting to the heart of human motivations
(02:43) Mapping emotional micro-moments
(04:54) Humanizing corporate and functional brands
(06:39) Using storytelling as a brand DNA thread
(10:53) Leading multidisciplinary creative teams
(14:35) Creating the Brief 2.0
(17:31) AI and the currency of original ideas
(24:14) A brand that made him smile
(27:41) Outro
About Lee Hoddy
Lee Hoddy is the Executive Creative Director at Conran Design Group, where he is responsible for maintaining creative standards and solving brand problems through experience-led design. With a career spanning decades, Lee has lived through major industry shifts, enabling him to lead diverse teams of designers, strategists, and experience experts like a conducted orchestra. He has spearheaded major rebranding programs for global names such as Sofitel, AstraZeneca, and Bicester Motion, always focusing on the deep understanding of human needs to create meaningful, strategically grounded work.
What Brand Has Made Lee Smile Recently?
Lee recently found joy in the "Venture Beyond" campaign by Hermes, noting its use of evocative illustrations and artisanal craft that respects the audience's intelligence. He also highlighted Apple's "Critter Carol" for its charming, deeply human approach to technology, using puppets and physical craft rather than CGI to celebrate creativity.
Resources & Links
Check out the Conran Design Group website.
Connect with Lee on LinkedIn.
Listen & Support the Show
Watch or listen on Apple Podcasts, Spotify, YouTube, Amazon/Audible, TuneIn, and iHeart. Rate and review on Apple Podcasts and Spotify to help others find the show. Share this episode — email a friend or colleague this episode. Sign up for my free Story Strategies newsletter for branding and storytelling tips.
On Brand is a part of the Marketing Podcast Network. Until next week, I'll see you on the Internet!
Learn more about your ad choices. Visit megaphone.fm/adchoices
We are continuing Black History Month with the film that kick-started the success of Marvel movies at the box office! BLADE (1998) - Wesley Snipes, in a trench coat, killing vampires + 90's fashion (& CGI, lol) = LET'S GOOOO!
The VIDEO versions of our episodes can be found on our YouTube - New episodes go up every Monday at Noon Eastern!
https://www.youtube.com/@ChainsawGirlsPod
You can support us on our PATREON and receive early, bonus, and extended episodes! (& more!)
https://www.patreon.com/chainsawgirlspod
We have our very first Chainsaw Girls t-shirt available!! Click here to get yours!
Follow us on our socials!
Instagram: https://www.instagram.com/chainsawgirlspod
TikTok: https://www.tiktok.com/@chainsawgirlspod
Hosted on Acast. See acast.com/privacy for more information.
POP 3:
Salley Carson says Nick Viall and Natalie Joy privately apologized to Austen Kroll after his tense appearance on The Viall Files, but she believes a public apology is still warranted. The FBI has released surveillance images in the disappearance of Nancy Guthrie, showing a masked individual outside her Tucson home; a person was detained and released as Savannah Guthrie and her family continue pleading for answers. And in MomTok chaos, Layla Taylor and Mason McWhorter's breakup has already shifted from "no bad blood" to public shade in true TikTok fashion.
DEEP DIVE:
The Super Bowl reaction somehow became more dramatic than the game itself, so we're unpacking the outrage over Bad Bunny's halftime show, the Facebook think pieces, and why not everything has to be personally curated to your taste. Which brings us directly to Jill Zarin. After posting her own controversial halftime rant, Jill was reportedly fired from E!'s upcoming The Golden Life before it even began filming, and we break down what she said, why the cast had allegedly been warned about controversy, her response after being dropped, and how this feels eerily similar to past contract drama that stalled RHONY Legacy. Plus Andy Cohen's very pointed "Call E!" response.
FINAL THOUGHTS:
First look at The Bachelorette starring Taylor Frankie Paul! The Barbie box imagery, the MomTok scandal references, the Crocs reveal, the CGI debate, and how this promo stacks up against past leads like Tayshia and Hannah Brown.
JOIN THE PATREON: www.patreon.com/MorgansPopTalks
You'll get early & ad-free episodes of Morgan's Pop Talks plus a bonus, longer podcast episode where we talk all things reality TV in one place. Think full episode recaps, breaking news, deeper context, and more unfiltered commentary on everything happening across Bravo, Bachelor Nation, and the wider entertainment world.
In episode 2001, Jack and Miles are joined by comedian and producer of the monthly Facial Recognition Comedy show, Pallavi Gunalan, to discuss… Why Was Lindsey Graham Drunk On Fox News Twice Over The Weekend? Nancy Mace Is Not Okay, Philly DA Larry Krasner Is Talking That Sh*t, The Jurassic Park-Themed Super Bowl Ad Really Missed The Point Of Jurassic Park and more!
Why Was Lindsey Graham "Drunk" On Fox News Twice Over The Weekend?
I'm not going to say Senator Graham is drunk because that would be unprofessional
Lindsey Graham was slurring his words again on "Fox News Sunday" this morning... Is he spiraling? Sad!
Nancy Mace Is Not Okay: "Something's broken. The motherboard's fried. We're short-circuiting somewhere."
'A CGI Embalming' — Xfinity's Jurassic Park Super Bowl Ad Features Digitally De-Aged Sam Neil, Laura Dern, and Jeff Goldblum
Xfinity's Jurassic Park advert is a digital de-aging nightmare. So who made it?
Jurassic Park Super Bowl commercial's de-aged actors, ranked from least to most bizarre-looking
What If Jurassic Park Worked Out Great? Comcast Xfinity's Super Bowl Ad Takes a Guess
Original Jurassic Park Stars Return to Solve the Sci-Fi Masterpiece's Entire Plot in Seconds for Super Bowl Commercial
The Jurassic Park Xfinity Super Bowl Commercial Is A Nostalgia Play Gone Nightmarishly Wrong
Nedry Really Wasn't The Jurassic Park Villain You Remember
Welcome to Jurassic Park. Now powered by Xfinity.
Xfinity hack affects nearly 36 million customers. Here's what to know.
Thousands of Comcast workers win $7.5 million settlement in wage and hour lawsuit
Judge rejects $7.5M Comcast settlement resolving 'systemic' FLSA violations
The biggest star of Super Bowl LVII commercials? Nostalgia.
Honda 2012 Super Bowl Commercial, Matthew's Day Off
Hellmann's mayonnaise, Meg Ryan and the allure of 'nostalgia marketing'
LISTEN: Deli Kan by Melike Şahin
See omnystudio.com/listener for privacy information.