Podcasts about lpu

  • 41PODCASTS
  • 108EPISODES
  • 46mAVG DURATION
  • 1MONTHLY NEW EPISODE
  • Mar 10, 2026LATEST

POPULARITY

20192020202120222023202420252026


Best podcasts about lpu

Latest podcast episodes about lpu

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
NVIDIA's AI Engineers: Agent Inference at Planetary Scale and "Speed of Light" — Nader Khalil (Brev), Kyle Kranen (Dynamo)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 10, 2026 83:37


Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week!Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!The definitive AI Accelerator chip company has more than 10xed this AI Summer:And is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a Datacenter scale inference framework supporting SGLang, TRT-LLM, vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA:Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top of the line GPU up and running, and Kyle explains NVIDIA Dynamo as a data center scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. We also dive into Jensen's “SOL” (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.Full Video pod on YouTubeTimestamps00:00 Agent Security Basics00:39 Podcast Welcome and Guests07:19 Acquisition and DevEx Shift13:48 SOL Culture and Dynamo Setup27:38 Why Scale Out Wins29:02 Scale Up Limits Explained30:24 From Laptop to Multi Node33:07 Cost Quality Latency Tradeoffs38:42 Disaggregation Prefill vs Decode41:05 Kubernetes Scaling with Grove43:20 Context Length and Co Design57:34 Security Meets Agents58:01 Agent Permissions Model59:10 Build Nvidia Inference Gateway01:01:52 Hackathons And Autonomy Dreams01:10:26 Local GPUs And Scaling Inference01:15:31 Long Running Agents And SF ReflectionsTranscriptAgent Security BasicsNader: Agents can do three things. They can access your files, they can access the internet, and then now they can write custom code and execute it. You literally only let an agent do two of those three things. If you can access your files and you can write custom code, you don't want internet access because that's one to see full vulnerability, right?If you have access to internet and your file system, you should know the full scope of what that agent's capable of doing. Otherwise, now we can get injected or something that can happen. And so that's a lot of what we've been thinking about is like, you know, how do we both enable this because it's clearly the future.But then also, you know, what, what are these enforcement points that we can start to like protect?swyx: All right.Podcast Welcome and Guestsswyx: Welcome to the Lean Space podcast in the Chromo studio. Welcome to all the guests here. Uh, we are back with our guest host Viu. Welcome. Good to have you back. And our friends, uh, Netter and Kyle from Nvidia. Welcome.Kyle: Yeah, thanks for having us.swyx: Yeah, thank you. Actually, I don't even know your titles.Uh, I know you're like architect something of Dynamo.Kyle: Yeah. I, I'm one of the engineering leaders [00:01:00] and a architects of Dynamo.swyx: And you're director of something and developers, developer tech.Nader: Yeah.swyx: You're the developers, developers, developers guy at nvidia,Nader: open source agent marketing, brev,swyx: and likeNader: Devrel tools and stuff.swyx: Yeah. BeenNader: the focus.swyx: And we're, we're kind of recording this ahead of Nvidia, GTC, which is coming to town, uh, again, uh, or taking over town, uh, which, uh, which we'll all be at. Um, and we'll talk a little bit about your sessions and stuff. Yeah.Nader: We're super excited for it.GTC Booth Stunt Storiesswyx: One of my favorite memories for Nader, like you always do like marketing stunts and like while you were at Rev, you like had this surfboard that you like, went down to GTC with and like, NA Nvidia apparently, like did so much that they bought you.Like what, what was that like? What was that?Nader: Yeah. Yeah, we, we, um. Our logo was a chaka. We, we, uh, we were always just kind of like trying to keep true to who we were. I think, you know, some stuff, startups, you're like trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are like previousswyx: guest.Yeah.Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in the room. Why are you [00:02:00] pretending that you're not? Uh, and so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth to GTC and the energy was great. Yeah. Some palm trees too. They,Kyle: they actually poked out over like the, the walls so you could, you could see the bread booth.Oh, that's so funny. AndNader: no one else,Kyle: just from very far away.Nader: Oh, so you remember it backKyle: then? Yeah I remember it pre-acquisition. I was like, oh, those guys look cool,Nader: dude. That makes sense. ‘cause uh, we, so we signed up really last minute, and so we had the last booth. It was all the way in the corner. And so I was, I was worried that no one was gonna come.So that's why we had like the palm trees. We really came in with the surfboards. We even had one of our investors bring her dog and then she was just like walking the dog around to try to like, bring energy towards our booth. Yeah.swyx: Steph.Kyle: Yeah. Yeah, she's the best,swyx: you know, as a conference organizer, I love that.Right? Like, it's like everyone who sponsors a conference comes, does their booth. They're like, we are changing the future of ai or something, some generic b******t and like, no, like actually try to stand out, make it fun, right? And people still remember it after three years.Nader: Yeah. Yeah. You know what's so funny?I'll, I'll send, I'll give you this clip if you wanna, if you wanna add it [00:03:00] in, but, uh, my wife was at the time fiance, she was in medical school and she came to help us. ‘cause it was like a big moment for us. And so we, we bought this cricket, it's like a vinyl, like a vinyl, uh, printer. ‘cause like, how else are we gonna label the surfboard?So, we got a surfboard, luckily was able to purchase that on the company card. We got a cricket and it was just like fine tuning for enterprises or something like that, that we put on the. On the surfboard and it's 1:00 AM the day before we go to GTC. She's helping me put these like vinyl stickers on.And she goes, you son of, she's like, if you pull this off, you son of a b***h. And so, uh, right. Pretty much after the acquisition, I stitched that with the mag music acquisition. I sent it to our family group chat. Ohswyx: Yeah. No, well, she, she made a good choice there. Was that like basically the origin story for Launchable is that we, it was, and maybe we should explain what Brev is andNader: Yeah.Yeah. Uh, I mean, brev is just, it's a developer tool that makes it really easy to get a GPU. So we connect a bunch of different GPU sources. So the basics of it is like, how quickly can we SSH you into a G, into a GPU and whenever we would talk to users, they wanted A GPU. They wanted an A 100. And if you go to like any cloud [00:04:00] provisioning page, usually it's like three pages of forms or in the forms somewhere there's a dropdown.And in the dropdown there's some weird code that you know to translate to an A 100. And I remember just thinking like. Every time someone says they want an A 100, like the piece of text that they're telling me that they want is like, stuffed away in the corner. Yeah. And so we were like, what if the biggest piece of text was what the user's asking for?And so when you go to Brev, it's just big GPU chips with the type that you want withswyx: beautiful animations that you worked on pre, like pre you can, like, now you can just prompt it. But back in the day. Yeah. Yeah. Those were handcraft, handcrafted artisanal code.Nader: Yeah. I was actually really proud of that because, uh, it was an, i I made it in Figma.Yeah. And then I found, I was like really struggling to figure out how to turn it from like Figma to react. So what it actually is, is just an SVG and I, I have all the styles and so when you change the chip, whether it's like active or not it changes the SVG code and that somehow like renders like, looks like it's animating, but it, we just had the transition slow, but it's just like the, a JavaScript function to change the like underlying SVG.Yeah. And that was how I ended up like figuring out how to move it from from Figma. But yeah, that's Art Artisan. [00:05:00]Kyle: Speaking of marketing stunts though, he actually used those SVGs. Or kind of use those SVGs to make these cards.Nader: Oh yeah. LikeKyle: a GPU gift card Yes. That he handed out everywhere. That was actually my first impression of thatNader: one.Yeah,swyx: yeah, yeah.Nader: Yeah.swyx: I think I still have one of them.Nader: They look great.Kyle: Yeah.Nader: I have a ton of them still actually in our garage, which just, they don't have labels. We should honestly like bring, bring them back. But, um, I found this old printing press here, actually just around the corner on Ven ness. And it's a third generation San Francisco shop.And so I come in an excited startup founder trying to like, and they just have this crazy old machinery and I'm in awe. ‘cause the the whole building is so physical. Like you're seeing these machines, they have like pedals to like move these saws and whatever. I don't know what this machinery is, but I saw all three generations.Like there's like the grandpa, the father and the son, and the son was like, around my age. Well,swyx: it's like a holy, holy trinity.Nader: It's funny because we, so I just took the same SVG and we just like printed it and it's foil printing, so they make a a, a mold. That's like an inverse of like the A 100 and then they put the foil on it [00:06:00] and then they press it into the paper.And I remember once we got them, he was like, Hey, don't forget about us. You know, I guess like early Apple and Cisco's first business cards were all made there. And so he was like, yeah, we, we get like the startup businesses but then as they mature, they kind of go somewhere else. And so I actually, I think we were talking with marketing about like using them for some, we should go back and make some cards.swyx: Yeah, yeah, yeah. You know, I remember, you know, as a very, very small breadth investor, I was like, why are we spending time like, doing these like stunts for GPUs? Like, you know, I think like as a, you know, typical like cloud hard hardware person, you go into an AWS you pick like T five X xl, whatever, and it's just like from a list and you look at the specs like, why animate this GP?And, and I, I do think like it just shows the level of care that goes throughout birth and Yeah. And now, and also the, and,Nader: and Nvidia. I think that's what the, the thing that struck me most when we first came in was like the amount of passion that everyone has. Like, I think, um, you know, you talk to, you talk to Kyle, you talk to, like, every VP that I've met at Nvidia goes so close to the metal.Like, I remember it was almost a year ago, and like my VP asked me, he's like, Hey, [00:07:00] what's cursor? And like, are you using it? And if so, why? Surprised at this, and he downloaded Cursor and he was asking me to help him like, use it. And I thought that was, uh, or like, just show him what he, you know, why we were using it.And so, the amount of care that I think everyone has and the passion, appreciate, passion and appreciation for the moment. Right. This is a very unique time. So it's really cool to see everyone really like, uh, appreciate that.swyx: Yeah.Acquisition and DevEx Shiftswyx: One thing I wanted to do before we move over to sort of like research topics and, uh, the, the stuff that Kyle's working on is just tell the story of the acquisition, right?Like, not many people have been, been through an acquisition with Nvidia. What's it like? Uh, what, yeah, just anything you'd like to say.Nader: It's a crazy experience. I think, uh, you know, we were the thing that was the most exciting for us was. Our goal was just to make it easier for developers.We wanted to find access to GPUs, make it easier to do that. And then all, oh, actually your question about launchable. So launchable was just make one click exper, like one click deploys for any software on top of the GPU. Mm-hmm. And so what we really liked about Nvidia was that it felt like we just got a lot more resources to do all of that.I think, uh, you [00:08:00] know, NVIDIA's goal is to make things as easy for developers as possible. So there was a really nice like synergy there. I think that, you know, when it comes to like an acquisition, I think the amount that the soul of the products align, I think is gonna be. Is going speak to the success of the acquisition.Yeah. And so it in many ways feels like we're home. This is a really great outcome for us. Like we you know, I love brev.nvidia.com. Like you should, you should use it's, it's theKyle: front page for GPUs.Nader: Yeah. Yeah. If you want GP views,Kyle: you go there, getswyx: it there, and it's like internally is growing very quickly.I, I don't remember You said some stats there.Nader: Yeah, yeah, yeah. It's, uh, I, I wish I had the exact numbers, but like internally, externally, it's been growing really quickly. We've been working with a bunch of partners with a bunch of different customers and ISVs, if you have a solution that you want someone that runs on the GPU and you want people to use it quickly, we can bundle it up, uh, in a launchable and make it a one click run.If you're doing things and you want just like a sandbox or something to run on, right. Like open claw. Huge moment. Super exciting. Our, uh, and we'll talk into it more, but. You know, internally, people wanna run this, and you, we know we have to be really careful from the security implications. Do we let this run on the corporate network?Security's guidance was, Hey, [00:09:00] run this on breath, it's in, you know, it's, it's, it's a vm, it's sitting in the cloud, it's off the corporate network. It's isolated. And so that's been our stance internally and externally about how to even run something like open call while we figure out how to run these things securely.But yeah,swyx: I think there's also like, you almost like we're the right team at the right time when Nvidia is starting to invest a lot more in developer experience or whatever you call it. Yeah. Uh, UX or I don't know what you call it, like software. Like obviously NVIDIA is always invested in software, but like, there's like, this is like a different audience.Yeah. It's aNader: widerKyle: developer base.swyx: Yeah. Right.Nader: Yeah. Yeah. You know, it's funny, it's like, it's not, uh,swyx: so like, what, what is it called internally? What, what is this that people should be aware that is going on there?Nader: Uh, what, like developer experienceswyx: or, yeah, yeah. Is it's called just developer experience or is there like a broader strategy hereNader: in Nvidia?Um, Nvidia always wants to make a good developer experience. The thing is and a lot of the technology is just really complicated. Like, it's not, it's uh, you know, I think, um. The thing that's been really growing or the AI's growing is having a huge moment, not [00:10:00] because like, let's say data scientists in 2018, were quiet then and are much louder now.The pie is com, right? There's a whole bunch of new audiences. My mom's wondering what she's doing. My sister's learned, like taught herself how to code. Like the, um, you know, I, I actually think just generally AI's a big equalizer and you're seeing a more like technologically literate society, I guess.Like everyone's, everyone's learning how to code. Uh, there isn't really an excuse for that. And so building a good UX means that you really understand who your end user is. And when your end user becomes such a wide, uh, variety of people, then you have to almost like reinvent the practice, right? Yeah. You haveKyle: to, and actually build more developer ux, right?Because the, there are tiers of developer base that were added. You know, the, the hackers that are building on top of open claw, right? For example, have never used gpu. They don't know what kuda is. They, they, they just want to run something.Nader: Yeah.Kyle: You need new UX that is not just. Hey, you know, how do you program something in Cuda and run it?And then, and then we built, you know, like when Deep Learning was getting big, we built, we built Torch and, and, but so recently the amount of like [00:11:00] layers that are added to that developer stack has just exploded because AI has become ubiquitous. Everyone's using it in different ways. Yeah. It'sNader: moving fast in every direction.Vertical, horizontal.Vibhu: Yeah. You guys, you even take it down to hardware, like the DGX Spark, you know, it's, it's basically the same system as just throwing it up on big GPU cluster.Nader: Yeah, yeah, yeah. It's amazing. Blackwell.swyx: Yeah. Uh, we saw the preview at the last year's GTC and that was one of the better performing, uh, videos so far, and video coverage so far.Awesome. This will beat it. Um,Nader: that wasswyx: actually, we have fingersNader: crossed. Yeah.DGX Spark and Remote AccessNader: Even when Grace Blackwell or when, um, uh, DGX Spark was first coming out getting to be involved in that from the beginning of the developer experience. And it just comes back to what youswyx: were involved.Nader: Yeah. St. St.swyx: Mars.Nader: Yeah. Yeah. I mean from, it was just like, I, I got an email, we just got thrown into the loop and suddenly yeah, I, it was actually really funny ‘cause I'm still pretty fresh from the acquisition and I'm, I'm getting an email from a bunch of the engineering VPs about like, the new hardware, GPU chip, like we're, or not chip, but just GPU system that we're putting out.And I'm like, okay, cool. Matters. Now involved with this for the ux, I'm like. What am I gonna do [00:12:00] here? So, I remember the first meeting, I was just like kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And I remember, uh, one of the first ideas that people were idea was like, oh, the first thing that it was like, I think a quote was like, the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them.And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy. SSH into the machine. And then, and you know, just kind of like scoping it down of like, once you can do that every, you, like the person who wants to run a Kubernetes cluster onto Sparks has a higher propensity for pain, then, then you know someone who buys it and wants to run open Claw right now, right?If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called Nvidia Sync. It just makes the SSH connection really simple. So, you know, if you think about it like. If you have a Mac, uh, or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's A-A-G-P-U in the cloud, right?Um, but there's all this friction of like, how do you actually get into that? That's part of [00:13:00] Revs value proposition is just, you know, there's a CLI that wraps SSH and makes it simple. And so our goal is just get you into that machine really easily. And one thing we just launched at CES, it's in, it's still in like early access.We're ironing out some kinks, but it should be ready by GTC. You can register your spark on Brev. And so now if youswyx: like remote managed yeah, local hardware. Single pane of glass. Yeah. Yeah. Because Brev can already manage other clouds anyway, right?Vibhu: Yeah, yeah. And you use the spark on Brev as well, right?Nader: Yeah. But yeah, exactly. So, so you, you, so you, you set it up at home you can run the command on it, and then it gets it's essentially it'll appear in your Brev account, and then you can take your laptop to a Starbucks or to a cafe, and you'll continue to use your, you can continue use your spark just like any other cloud node on Brev.Yeah. Yeah. And it's just like a pre-provisioned centerswyx: in yourNader: home. Yeah, exactly.swyx: Yeah. Yeah.Vibhu: Tiny little data center.Nader: Tiny little, the size ofVibhu: your phone.SOL Culture and Dynamo Setupswyx: One more thing before we move on to Kyle. Just have so many Jensen stories and I just love, love mining Jensen stories. Uh, my favorite so far is SOL. Uh, what is, yeah, what is S-O-L-S-O-LNader: is actually, i, I think [00:14:00] of all the lessons I've learned, that one's definitely my favorite.Kyle: It'll always stick with you.Nader: Yeah. Yeah. I, you know, in your startup, everything's existential, right? Like we've, we've run out of money. We were like, on the risk of, of losing payroll, we've had to contract our team because we l ran outta money. And so like, um, because of that you're really always forcing yourself to I to like understand the root cause of everything.If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're, you're pushing every boundary and like, you're not just say, you're not just accepting like a, a no. Just because. And so as you start to introduce more layers, as you start to become a much larger organization, SOL is is essentially like what is the physics, right?The speed of light moves at a certain speed. So if flight's moving some slower, then you know something's in the way. So before trying to like layer reality back in of like, why can't this be delivered at some date? Let's just understand the physics. What is the theoretical limit to like, uh, how fast this can go?And then start to tell me why. ‘cause otherwise people will start telling you why something can't be done. But actually I think any great leader's goal is just to create urgency. Yeah. [00:15:00] There's an infiniteKyle: create compelling events, right?Nader: Yeah.Kyle: Yeah. So l is a term video is used to instigate a compelling event.You say this is done. How do we get there? What is the minimum? As much as necessary, as little as possible thing that it takes for us to get exactly here and. It helps you just break through a bunch of noise.swyx: Yeah.Kyle: Instantly.swyx: One thing I'm unclear about is, can only Jensen use the SOL card? Like, oh, no, no, no.Not everyone get the b******t out because obviously it's Jensen, but like, can someone else be like, no, likeKyle: frontline engineers use it.Nader: Yeah. Every, I think it's not so much about like, get the b******t out. It's like, it's like, give me the root understanding, right? Like, if you tell me something takes three weeks, it like, well, what's the first principles?Yeah, the first principles. It's like, what's the, what? Like why is it three weeks? What is the actual yeah. What's the actual limit of why this is gonna take three weeks? If you're gonna, if you, if let's say you wanted to buy a new computer and someone told you it's gonna be here in five days, what's the SOL?Well, like the SOL is like, I could walk into a Best Buy and pick it up for you. Right? So then anything that's like beyond that is, and is that practical? Is that how we're gonna, you know, let's say give everyone in the [00:16:00] company a laptop, like obviously not. So then like that's the SOL and then it's like, okay, well if we have to get more than 10, suddenly there might be some, right?And so now we can kind of piece the reality back.swyx: So, so this is the. Paul Graham do things that don't scale. Yeah. And this is also the, what people would now call behi agency. Yeah.Kyle: It's actually really interesting because there's a, there's a second hardware angle to SOL that like doesn't come up for all the org sol is used like culturally at aswyx: media for everything.I'm also mining for like, I think that can be annoying sometimes. And like someone keeps going IOO you and you're like, guys, like we have to be stable. We have to, we to f*****g plan. Yeah.Kyle: It's an interesting balance.Nader: Yeah. I encounter that with like, actually just with, with Alec, right? ‘cause we, we have a new conference so we need to launch, we have, we have goals of what we wanna launch by, uh, by the conference and like, yeah.At the end of the day, where isswyx: this GTC?Nader: Um, well this is like, so we, I mean we did it for CES, we did for GT CDC before that we're doing it for GTC San Jose. So I mean, like every, you know, we have a new moment. Um, and we want to launch something. Yeah. And we want to do so at SOL and that does mean that some, there's some level of prioritization that needs [00:17:00] to happen.And so it, it is difficult, right? I think, um, you have to be careful with what you're pushing. You know, stability is important and that should be factored into S-O-L-S-O-L isn't just like, build everything and let it break, you know, that, that's part of the conversation. So as you're laying, layering in all the details, one of them might be, Hey, we could build this, but then it's not gonna be stable for X, y, z reasons.And so that was like, one of our conversations for CES was, you know, hey, like we, we can get this into early access registering your spark with brev. But there are a lot of things that we need to do in order to feel really comfortable from a security perspective, right? There's a lot of networking involved before we deliver that to users.So it's like, okay. Let's get this to a point where we can at least let people experiment with it. We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. And that's not easy. And so, uh, that can come later. And so that was the way that we layered that back in.Yeah. ButKyle: It's not really about saying like, you don't have to do the, the maintenance or operational work. It's more about saying, you know, it's kind of like [00:18:00] highlights how progress is incremental, right? Like, what is the minimum thing that we can get to. And then there's SOL for like every component after that.But there's the SOL to get you, get you to the, the starting line. And that, that's usually how it's asked. Yeah. On the other side, you know, like SOL came out of like hardware at Nvidia. Right. So SOL is like literally if we ran the accelerator or the GPU with like at basically full speed with like no other constraints, like how FAST would be able to make a program go.swyx: Yeah. Yeah. Right.Kyle: Soswyx: in, in training that like, you know, then you work back to like some percentage of like MFU for example.Kyle: Yeah, that's a, that's a great example. So like, there's an, there's an S-O-L-M-F-U, and then there's like, you know, what's practically achievable.swyx: Cool. Should we move on to sort of, uh, Kyle's side?Uh, Kyle, you're coming more from the data science world. And, uh, I, I mean I always, whenever, whenever I meet someone who's done working in tabular stuff, graph neural networks, time series, these are basically when I go to new reps, I go to ICML, I walk the back halls. There's always like a small group of graph people.Yes. Absolute small group of tabular people. [00:19:00] And like, there's no one there. And like, it's very like, you know what I mean? Like, yeah, no, like it's, it's important interesting work if you care about solving the problems that they solve.Kyle: Yeah.swyx: But everyone else is just LMS all the time.Kyle: Yeah. I mean it's like, it's like the black hole, right?Has the event horizon reached this yet in nerves? Um,swyx: but like, you know, those are, those are transformers too. Yeah. And, and those are also like interesting things. Anyway, uh, I just wanted to spend a little bit of time on, on those, that background before we go into Dynamo, uh, proper.Kyle: Yeah, sure. I took a different path to Nvidia than that, or I joined six years ago, seven, if you count, when I was an intern.So I joined Nvidia, like right outta college. And the first thing I jumped into was not what I'd done in, during internship, which was like, you know, like some stuff for autonomous vehicles, like heavyweight object detection. I jumped into like, you know, something, I'm like, recommenders, this is popular. Andswyx: yeah, he did RexiKyle: as well.Yeah, Rexi. Yeah. I mean that, that was the taboo data at the time, right? You have tables of like, audience qualities and item qualities, and you're trying to figure out like which member of [00:20:00] the audience matches which item or, or more practically which item matches which member of the audience. And at the time, really it was like we were trying to enable.Uh, recommender, which had historically been like a little bit of a CP based workflow into something that like, ran really well in GPUs. And it's since been done. Like there are a bunch of libraries for Axis that run on GPUs. Uh, the common models like Deeplearning recommendation model, which came outta meta and the wide and deep model, which was used or was released by Google were very accelerated by GPUs using, you know, the fast HBM on the chips, especially to do, you know, vector lookups.But it was very interesting at the time and super, super relevant because like we were starting to get like. This explosion of feeds and things that required rec recommenders to just actively be on all the time. And sort of transitioned that a little bit towards graph neural networks when I discovered them because I was like, okay, you can actually use graphical neural networks to represent like, relationships between people, items, concepts, and that, that interested me.So I jumped into that at [00:21:00] Nvidia and, and got really involved for like two-ish years.swyx: Yeah. Uh, and something I learned from Brian Zaro Yeah. Is that you can just kind of choose your own path in Nvidia.Kyle: Oh my God. Yeah.swyx: Which is not a normal big Corp thing. Yeah. Like you, you have a lane, you stay in your lane.Nader: I think probably the reason why I enjoy being in a, a big company, the mission is the boss probably from a startup guy. Yeah. The missionswyx: is the boss.Nader: Yeah. Uh, it feels like a big game of pickup basketball. Like, you know, if you play one, if you wanna play basketball, you just go up to the court and you're like, Hey look, we're gonna play this game and we need three.Yeah. And you just like find your three. That's honestly for every new initiative that's what it feels like. Yeah.Vibhu: It also like shows, right? Like Nvidia. Just releasing state-of-the-art stuff in every domain. Yeah. Like, okay, you expect foundation models with Nemo tron voice just randomly parakeet.Call parakeet just comes out another one, uh, voice. TheKyle: video voice team has always been producing.Vibhu: Yeah. There's always just every other domain of paper that comes out, dataset that comes out. It's like, I mean, it also stems back to what Nvidia has to do, right? You have to make chips years before they're actually produced.Right? So you need to know, you need to really [00:22:00] focus. TheKyle: design process starts likeVibhu: exactlyKyle: three to five years before the chip gets to the market.Vibhu: Yeah. I, I'm curious more about what that's like, right? So like, you have specialist teams. Is it just like, you know, people find an interest, you go in, you go deep on whatever, and that kind of feeds back into, you know, okay, we, we expect predictions.Like the internals at Nvidia must be crazy. Right? You know? Yeah. Yeah. You know, you, you must. Not even without selling to people, you have your own predictions of where things are going. Yeah. And they're very based, very grounded. Right?Kyle: Yeah. It, it, it's really interesting. So there's like two things that I think that Amed does, which are quite interesting.Uh, one is like, we really index into passion. There's a big. Sort of organizational top sound push to like ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone like way up the chain that they would find this relevant and say like, Hey, can I go work on this?Nader: It's actually like I worked at a, a big company for a couple years before, uh, starting on my startup journey and like, it felt very weird if you were to like email out of chain, if that makes [00:23:00] sense. Yeah. The emails at Nvidia are like mosh pitsswyx: shoot,Nader: and it's just like 60 people, just whatever. And like they're, there's this,swyx: they got messy like, reply all you,Nader: oh, it's in, it's insane.It's insane. They justKyle: help. You know, Maxim,Nader: the context. But, but that's actually like, I've actually, so this is a weird thing where I used to be like, why would we send emails? We have Slack. I am the entire, I'm the exact opposite. I feel so bad for anyone who's like messaging me on Slack ‘cause I'm so unresponsive.swyx: Your emailNader: Maxi, email Maxim. I'm email maxing Now email is a different, email is perfect because man, we can't work together. I'm email is great, right? Because important threads get bumped back up, right? Yeah, yeah. Um, and so Slack doesn't do that. So I just have like this casino going off on the right or on the left and like, I don't know which thread was from where or what, but like the threads get And then also just like the subject, so you can have like working threads.I think what's difficult is like when you're small, if you're just not 40,000 people I think Slack will work fine, but there's, I don't know what the inflection point is. There is gonna be a point where that becomes really messy and you'll actually prefer having email. ‘cause you can have working threads.You can cc more than nine people in a thread.Kyle: You can fork stuff.Nader: You can [00:24:00] fork stuff, which is super nice and just like y Yeah. And so, but that is part of where you can propose a plan. You can also just. Start, honestly, momentum's the only authority, right? So like, if you can just start, start to make a little bit of progress and show someone something, and then they can try it.That's, I think what's been, you know, I think the most effective way to push anything for forward. And that's both at Nvidia and I think just generally.Kyle: Yeah, there's, there's the other concept that like is explored a lot at Nvidia, which is this idea of a zero billion dollar business. Like market creation is a big thing at Nvidia.Like,swyx: oh, you want to go and start a zero billion dollar business?Kyle: Jensen says, we are completely happy investing in zero billion dollar markets. We don't care if this creates revenue. It's important for us to know about this market. We think it will be important in the future. It can be zero billion dollars for a while.I'm probably minging as words here for, but like, you know, like, I'll give an example. NVIDIA's been working on autonomous driving for a a long time,swyx: like an Nvidia car.Kyle: No, they, they'veVibhu: used the Mercedes, right? They're around the HQ and I think it finally just got licensed out. Now they're starting to be used quite a [00:25:00] bit.For 10 years you've been seeing Mercedes with Nvidia logos driving.Kyle: If you're in like the South San Santa Clara, it's, it's actually from South. Yeah. So, um. Zero billion dollar markets are, are a thing like, you know, Jensen,swyx: I mean, okay, look, cars are not a zero billion dollar market. But yeah, that's a bad example.Nader: I think, I think he's, he's messaging, uh, zero today, but, or even like internally, right? Like, like it's like, uh, an org doesn't have to ruthlessly find revenue very quickly to justify their existence. Right. Like a lot of the important research, a lot of the important technology being developed that, that's kind ofKyle: where research, research is very ide ideologically free at Nvidia.Yeah. Like they can pursue things that they wereswyx: Were you research officially?Kyle: I was never in research. Officially. I was always in engineering. Yeah. We in, I'm in an org called Deep Warning Algorithms, which is basically just how do we make things that are relevant to deep warning go fast.swyx: That sounds freaking cool.Vibhu: And I think a lot of that is underappreciated, right? Like time series. This week Google put out time. FF paper. Yeah. A new time series, paper res. Uh, Symantec, ID [00:26:00] started applying Transformers LMS to Yes. Rec system. Yes. And when you think the scale of companies deploying these right. Amazon recommendations, Google web search, it's like, it's huge scale andKyle: Yeah.Vibhu: You want fast?Kyle: Yeah. Yeah. Yeah. Actually it's, it, I, there's a fun moment that brought me like full circle. Like, uh, Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was like super, like weirdly cathartic for me. I'm like, oh my God. I've, I've supplanted what I was working on.Like, I, you're using LMS now to do what I was doing five years ago.swyx: Yeah. Amazing. And let's go right into Dynamo. Uh, maybe introduce Yeah, sure. To the top down and Yeah.Kyle: I think at this point a lot of people are familiar with the term of inference. Like funnily enough, like I went from, you know, inference being like a really niche topic to being something that's like discussed on like normal people's Twitter feeds.It's,Nader: it's on billboardsKyle: here now. Yeah. Very, very strange. Driving, driving, seeing just an inference ad on 1 0 1 inference at scale is becoming a lot more important. Uh, we have these moments like, you know, open claw where you have these [00:27:00] agents that take lots and lots of tokens, but produce, incredible results.There are many different aspects of test time scaling so that, you know, you can use more inference to generate a better result than if you were to use like a short amount of inference. There's reasoning, there's quiring, there's, adding agency to the model, allowing it to call tools and use skills.Dyno sort came about at Nvidia. Because myself and a couple others were, were sort of talking about the, these concepts that like, you know, you have inference engines like VLMS, shelan, tenor, TLM and they have like one single copy. They, they, they sort of think about like things as like one single copy, like one replica, right?Why Scale Out WinsKyle: Like one version of the model. But when you're actually serving things at scale, you can't just scale up that replica because you end up with like performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out to use a, maybe some Kubernetes type terminology.We kind of realized that there was like. A lot of potential optimization that we could do in scaling out and building systems for data [00:28:00] center scale inference. So Dynamo is this data center scale inference engine that sits on top of the frameworks like VLM Shilling and 10 T lm and just makes things go faster because you can leverage the economy of scale.The fact that you have KV cash, which we can define a little bit later, uh, in all these machines that is like unique and you wanna figure out like the ways to maximize your cash hits or you want to employ new techniques in inference like disaggregation, which Dynamo had introduced to the world in, in, in March, not introduced, it was a academic talk, but beforehand.But we are, you know, one of the first frameworks to start, supporting it. And we wanna like, sort of combine all these techniques into sort of a modular framework that allows you to. Accelerate your inference at scale.Nader: By the way, Kyle and I became friends on my first date, Nvidia, and I always loved, ‘cause like he always teaches meswyx: new things.Yeah. By the way, this is why I wanted to put two of you together. I was like, yeah, this is, this is gonna beKyle: good. It's very, it's very different, you know, like we've, we, we've, we've talked to each other a bunch [00:29:00] actually, you asked like, why, why can't we scale up?Nader: Yeah.Scale Up Limits ExplainedNader: model, you said model replicas.Kyle: Yeah. So you, so scale up means assigning moreswyx: heavier?Kyle: Yeah, heavier. Like making things heavier. Yeah, adding more GPUs. Adding more CPUs. Scale out is just like having a barrier saying, I'm gonna duplicate my representation of the model or a representation of this microservice or something, and I'm gonna like, replicate it Many times.Handle, load. And the reason that you can't scale, scale up, uh, past some points is like, you know, there, there, there are sort of hardware bounds and algorithmic bounds on, on that type of scaling. So I'll give you a good example that's like very trivial. Let's say you're on an H 100. The Maxim ENV link domain for H 100, for most Ds H one hundreds is heus, right?So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now for the GPUs to communicate, you have to do it over Infin band, which is still very fast, but is not as fast as ENV link.swyx: Is it like one order of magnitude, like hundreds or,Kyle: it's about an order of magnitude?Yeah. Okay. Um, soswyx: not terrible.Kyle: [00:30:00] Yeah. I, I need to, I need to remember the, the data sheet here, like, I think it's like about 500 gigabytes. Uh, a second unidirectional for ENV link, and about 50 gigabytes a second unidirectional for Infin Band. I, it, it depends on the, the generation.swyx: I just wanna set this up for people who are not familiar with these kinds of like layers and the trash speedVibhu: and all that.Of course.From Laptop to Multi NodeVibhu: Also, maybe even just going like a few steps back before that, like most people are very familiar with. You see a, you know, you can use on your laptop, whatever these steel viol, lm you can just run inference there. All, there's all, you can, youcan run it on thatVibhu: laptop. You can run on laptop.Then you get to, okay, uh, models got pretty big, right? JLM five, they doubled the size, so mm-hmm. Uh, what do you do when you have to go from, okay, I can get 128 gigs of memory. I can run it on a spark. Then you have to go multi GPU. Yeah. Okay. Multi GPU, there's some support there. Now, if I'm a company and I don't have like.I'm not hiring the best researchers for this. Right. But I need to go [00:31:00] multi-node, right? I have a lot of servers. Okay, now there's efficiency problems, right? You can have multiple eight H 100 nodes, but, you know, is that as a, like, how do you do that efficiently?Kyle: Yeah. How do you like represent them? How do you choose how to represent the model?Yeah, exactly right. That's a, that's like a hard question. Everyone asks, how do you size oh, I wanna run GLM five, which just came out new model. There have been like four of them in the past week, by the way, like a bunch of new models.swyx: You know why? Right? Deep seek.Kyle: No comment. Oh. Yeah, but Ggl, LM five, right?We, we have this, new model. It's, it's like a large size, and you have to figure out how to both scale up and scale out, right? Because you have to find the right representation that you care about. Everyone does this differently. Let's be very clear. Everyone figures this out in their own path.Nader: I feel like a lot of AI or ML even is like, is like this. I think people think, you know, I, I was, there was some tweet a few months ago that was like, why hasn't fine tuning as a service taken off? You know, that might be me. It might have been you. Yeah. But people want it to be such an easy recipe to follow.But even like if you look at an ML model and specificKyle: to you Yeah,Nader: yeah.Kyle: And the [00:32:00] model,Nader: the situation, and there's just so much tinkering, right? Like when you see a model that has however many experts in the ME model, it's like, why that many experts? I don't, they, you know, they tried a bunch of things and that one seemed to do better.I think when it comes to how you're serving inference, you know, you have a bunch of decisions to make and there you can always argue that you can take something and make it more optimal. But I think it's this internal calibration and appetite for continued calibration.Vibhu: Yeah. And that doesn't mean like, you know, people aren't taking a shot at this, like tinker from thinking machines, you know?Yeah. RL as a service. Yeah, totally. It's, it also gets even harder when you try to do big model training, right? We're not the best at training Moes, uh, when they're pre-trained. Like we saw this with LAMA three, right? They're trained in such a sparse way that meta knows there's gonna be a bunch of inference done on these, right?They'll open source it, but it's very trained for what meta infrastructure wants, right? They wanna, they wanna inference it a lot. Now the question to basically think about is, okay, say you wanna serve a chat application, a coding copilot, right? You're doing a layer of rl, you're serving a model for X amount of people.Is it a chat model, a coding model? Dynamo, you know, back to that,Kyle: it's [00:33:00] like, yeah, sorry. So you we, we sort of like jumped off of, you know, jumped, uh, on that topic. Everyone has like, their own, own journey.Cost Quality Latency TradeoffsKyle: And I, I like to think of it as defined by like, what is the model you need? What is the accuracy you need?Actually I talked to NA about this earlier. There's three axes you care about. What is the quality that you're able to produce? So like, are you accurate enough or can you complete the task with enough, performance, high enough performance. Yeah, yeah. Uh, there's cost. Can you serve the model or serve your workflow?Because it's not just the model anymore, it's the workflow. It's the multi turn with an agent cheaply enough. And then can you serve it fast enough? And we're seeing all three of these, like, play out, like we saw, we saw new models from OpenAI that you know, are faster. You have like these new fast versions of models.You can change the amount of thinking to change the amount of quality, right? Produce more tokens, but at a higher cost in a, in a higher latency. And really like when you start this journey of like trying to figure out how you wanna host a model, you, you, you think about three things. What is the model I need to serve?How many times do I need to call it? What is the input sequence link was [00:34:00] the, what does the workflow look like on top of it? What is the SLA, what is the latency SLA that I need to achieve? Because there's usually some, this is usually like a constant, you, you know, the SLA that you need to hit and then like you try and find the lowest cost version that hits all of these constraints.Usually, you know, you, you start with those things and you say you, you kind of do like a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelismVibhu: I take, it goes even deeper first. Gotta think what model.Kyle: Yes, course,ofKyle: course. It's like, it's like a multi-step design process because as you said, you can, you can choose a smaller model and then do more test time scaling and it'll equate the quality of a larger model because you're doing the test time scaling or you're adding a harness or something.So yes, it, it goes way deeper than that. But from the performance perspective, like once you get to the model you need, you need to host, you look at that and you say, Hey. I have this model, I need to serve it at the speed. What is the right configuration for that?Nader: You guys see the recent, uh, there was a paper I just saw like a few days ago that, uh, if you run [00:35:00] the same prompt twice, you're getting like double Just try itagain.Nader: Yeah, exactly.Vibhu: And you get a lot. Yeah. But the, the key thing there is you give the context of the failed try, right? Yeah. So it takes a shot. And this has been like, you know, basic guidance for quite a while. Just try again. ‘cause you know, trying, just try again. Did you try again? All adviceNader: in life.Vibhu: Just, it's a paper from Google, if I'm not mistaken, right?Yeah,Vibhu: yeah. I think it, it's like a seven bas little short paper. Yeah. Yeah. The title's very cute. And it's just like, yeah, just try again. Give it ask context,Kyle: multi-shot. You just like, say like, hey, like, you know, like take, take a little bit more, take a little bit more information, try and fail. Fail.Vibhu: And that basic concept has gone pretty deep.There's like, um, self distillation, rl where you, you do self distillation, you do rl and you have past failure and you know, that gives some signal so people take, try it again. Not strong enough.swyx: Uh, for, for listeners, uh, who listen to here, uh, vivo actually, and I, and we run a second YouTube channel for our paper club where, oh, that's awesome.Vivo just covered this. Yeah. Awesome. Self desolation and all that's, that's why he, to speed [00:36:00] on it.Nader: I'll to check it out.swyx: Yeah. It, it's just a good practice, like everyone needs, like a paper club where like you just read papers together and the social pressure just kind of forces you to just,Nader: we, we,there'sNader: like a big inference.Kyle: ReadingNader: group at a video. I feel so bad every time. I I, he put it on like, on our, he shared it.swyx: One, one ofNader: your guys,swyx: uh, is, is big in that, I forget es han Yeah, yeah,Kyle: es Han's on my team. Actually. Funny. There's a, there's a, there's a employee transfer between us. Han worked for Nater at Brev, and now he, he's on my team.He wasNader: our head of ai. And then, yeah, once we got in, andswyx: because I'm always looking for like, okay, can, can I start at another podcast that only does that thing? Yeah. And, uh, Esan was like, I was trying to like nudge Esan into like, is there something here? I mean, I don't think there's, there's new infant techniques every day.So it's like, it's likeKyle: you would, you would actually be surprised, um, the amount of blog posts you see. And ifswyx: there's a period where it was like, Medusa hydra, what Eagle, like, youKyle: know, now we have new forms of decode, uh, we have new forms of specula, of decoding or new,swyx: what,Kyle: what are youVibhu: excited? And it's exciting when you guys put out something like Tron.‘cause I remember the paper on this Tron three, [00:37:00] uh, the amount of like post train, the on tokens that the GPU rich can just train on. And it, it was a hybrid state space model, right? Yeah.Kyle: It's co-designed for the hardware.Vibhu: Yeah, go design for the hardware. And one of the things was always, you know, the state space models don't scale as well when you do a conversion or whatever the performance.And you guys are like, no, just keep draining. And Nitron shows a lot of that. Yeah.Nader: Also, something cool about Nitron it was released in layers, if you will, very similar to Dynamo. It's, it's, it's essentially it was released as you can, the pre-training, post-training data sets are released. Yeah. The recipes on how to do it are released.The model itself is released. It's full model. You just benefit from us turning on the GPUs. But there are companies like, uh, ServiceNow took the dataset and they trained their own model and we were super excited and like, you know, celebrated that work.ZoomVibhu: different. Zoom is, zoom is CGI, I think, uh, you know, also just to add like a lot of models don't put out based models and if there's that, why is fine tuning not taken off?You know, you can do your own training. Yeah,Kyle: sure.Vibhu: You guys put out based model, I think you put out everything.Nader: I believe I know [00:38:00]swyx: about base. BasicallyVibhu: without baseswyx: basic can be cancelable.Vibhu: Yeah. Base can be cancelable.swyx: Yeah.Vibhu: Safety training.swyx: Did we get a full picture of dymo? I, I don't know if we, what,Nader: what I'd love is you, you mentioned the three axes like break it down of like, you know, what's prefilled decode and like what are the optimizations that we can get with Dynamo?Kyle: Yeah. That, that's, that's, that's a great point. So to summarize on that three axis problem, right, there are three things that determine whether or not something can be done with inference, cost, quality, latency, right? Dynamo is supposed to be there to provide you like the runtime that allows you to pull levers to, you know, mix it up and move around the parade of frontier or the preto surface that determines is this actually possible with inference And AI todayNader: gives you the knobs.Kyle: Yeah, exactly. It gives you the knobs.Disaggregation Prefill vs DecodeKyle: Uh, and one thing that like we, we use a lot in contemporary inference and is, you know, starting to like pick up from, you know, in, in general knowledge is this co concept of disaggregation. So historically. Models would be hosted with a single inference engine. And that inference engine [00:39:00] would ping pong between two phases.There's prefill where you're reading the sequence generating KV cache, which is basically just a set of vectors that represent the sequence. And then using that KV cache to generate new tokens, which is called Decode. And some brilliant researchers across multiple different papers essentially made the realization that if you separate these two phases, you actually gain some benefits.Those benefits are basically a you don't have to worry about step synchronous scheduling. So the way that an inference engine works is you do one step and then you finish it, and then you schedule, you start scheduling the next step there. It's not like fully asynchronous. And the problem with that is you would have, uh, essentially pre-fill and decode are, are actually very different in terms of both their resource requirements and their sometimes their runtime.So you would have like prefill that would like block decode steps because you, you'd still be pre-filing and you couldn't schedule because you know the step has to end. So you remove that scheduling issue and then you also allow you, or you yourself, to like [00:40:00] split the work into two different ki types of pools.So pre-fill typically, and, and this changes as, as model architecture changes. Pre-fill is, right now, compute bound most of the time with the sequence is sufficiently long. It's compute bound. On the decode side because you're doing a full Passover, all the weights and the entire sequence, every time you do a decode step and you're, you don't have the quadratic computation of KV cache, it's usually memory bound because you're retrieving a linear amount of memory and you're doing a linear amount of compute as opposed to prefill where you retrieve a linear amount of memory and then use a quadratic.You know,Nader: it's funny, someone exo Labs did a really cool demo where for the DGX Spark, which has a lot more compute, you can do the pre the compute hungry prefill on a DG X spark and then do the decode on a, on a Mac. Yeah. And soVibhu: that's faster.Nader: Yeah. Yeah.Kyle: So you could, you can do that. You can do machine strat stratification.Nader: Yeah.Kyle: And like with our future generation generations of hardware, we actually announced, like with Reuben, this [00:41:00] new accelerator that is prefilled specific. It's called Reuben, CPX. SoKubernetes Scaling with GroveNader: I have a question when you do the scale out. Yeah. Is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the Prefill or, uh, decode.Kyle: Yeah. So Dynamo actually has like a, a Kubernetes component in it called Grove that allows you to, to do this like crazy scaling specialization. It has like this hot, it's a representation that, I don't wanna go too deep into Kubernetes here, but there was a previous way that you would like launch multi-node work.Uh, it's called Leader Worker Set. It's in the Kubernetes standard, and Leader worker set is great. It served a lot of people super well for a long period of time. But one of the things that it's struggles with is representing a set of cases where you have a multi-node replica that has a pair, right?You know, prefill and decode, or it's not paired, but it has like a second stage that has a ratio that changes over time. And prefill and decode are like two different things as your workload changes, right? The amount of prefill you'll need to do may change. [00:42:00] The amount of decode that you, you'll need to do might change, right?Like, let's say you start getting like insanely long queries, right? That probably means that your prefill scales like harder because you're hitting these, this quadratic scaling growth.swyx: Yeah.And then for listeners, like prefill will be long input. Decode would be long output, for example, right?Kyle: Yeah. So like decode, decode scale. I mean, decode is funny because the amount of tokens that you produce scales with the output length, but the amount of work that you do per step scales with the amount of tokens in the context.swyx: Yes.Kyle: So both scales with the input and the output.swyx: That's true.Kyle: But on the pre-fold view code side, like if.Suddenly, like the amount of work you're doing on the decode side stays about the same or like scales a little bit, and then the prefilled side like jumps up a lot. You actually don't want that ratio to be the same. You want it to change over time. So Dynamo has a set of components that A, tell you how to scale.It tells you how many prefilled workers and decoded workers you, it thinks you should have, and also provides a scheduling API for Kubernetes that allows you to actually represent and affect this scheduling on, on, on your actual [00:43:00] hardware, on your compute infrastructure.Nader: Not gonna lie. I feel a little embarrassed for being proud of my SVG function earlier.swyx: No, itNader: wasreallyKyle: cute. I, Iswyx: likeNader: it's all,swyx: it's all engineering. It's all engineering. Um, that's where I'mKyle: technical.swyx: One thing I'm, I'm kind of just curious about with all with you see at a systems level, everything going on here. Mm-hmm. And we, you know, we're scaling it up in, in multi, in distributed systems.Context Length and Co Designswyx: Um, I think one thing that's like kind of, of the moment right now is people are asking, is there any SOL sort of upper bounds. In terms of like, let's call, just call it context length for one for of a better word, but you can break it down however you like.Nader: Yeah.swyx: I just think like, well, yeah, I mean, like clearly you can engage in hybrid architectures and throw in some state space models in there.All, all you want, but it looks, still looks very attention heavy.Kyle: Yes. Uh, yeah. Long context is attention heavy. I mean, we have these hybrid models, um,swyx: to take and most, most models like cap out at a million contexts and that's it. Yeah. Like for the last two years has been it.Kyle: Yeah. The model hardware context co-design thing that we're seeing these days is actually super [00:44:00] interesting.It's like my, my passion, like my secret side passion. We see models like Kimmy or G-P-T-O-S-S. I'm use these because I, I know specific things about these models. So Kimmy two comes out, right? And it's an interesting model. It's like, like a deep seek style architecture is MLA. It's basically deep seek, scaled like a little bit differently, um, and obviously trained differently as well.But they, they talked about, why they made the design choices for context. Kimmy has more experts, but fewer attention heads, and I believe a slightly smaller attention, uh, like dimension. But I need to remember, I need to check that. Uh, it doesn't matter. But they discussed this actually at length in a blog post on ji, which is like our pu which is like credit puswyx: Yeah.Kyle: Um, in, in China. Chinese red.swyx: Yeah.Kyle: It's, yeah. So it, it's, it's actually an incredible blog post. Uh, like all the mls people in, in, in that, I've seen that on GPU are like very brilliant, but they, they talk about like the creators of Kimi K two [00:45:00] actually like, talked about it on, on, on there in the blog post.And they say, we, we actually did an experiment, right? Attention scales with the number of heads, obviously. Like if you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratic, but you do half the work. And they made a, a very specific like. Sort of barter in their system, in their architecture, they basically said, Hey, what if we gave it more experts, so we're gonna use more memory capacity.But we keep the amount of activated experts the same. We increase the expert sparsity, so we have fewer experts act. The ratio to of experts activated to number of experts is smaller, and we decrease the number of attention heads.Vibhu: And kind of for context, what the, what we had been seeing was you make models sparser instead.So no one was really touching heads. You're just having, uh,Kyle: well, they, they did, they implicitly made it sparser.Vibhu: Yeah, yeah. For, for Kimmy. They did,Kyle: yes.Vibhu: They also made it sparser. But basically what we were seeing was people were at the level of, okay, there's a sparsity ratio. You want more total parameters, less active, and that's sparsity.[00:46:00]But what you see from papers, like, the labs like moonshot deep seek, they go to the level of, okay, outside of just number of experts, you can also change how many attention heads and less attention layers. More attention. Layers. Layers, yeah. Yes, yes. So, and that's all basically coming back to, just tied together is like hardware model, co-design, which isKyle: hardware model, co model, context, co-design.Vibhu: Yeah.Kyle: Right. Like if you were training a, a model that was like. Really, really short context, uh, or like really is good at super short context tasks. You may like design it in a way such that like you don't care about attention scaling because it hasn't hit that, like the turning point where like the quadratic curve takes over.Nader: How do you consider attention or context as a separate part of the co-design? Like I would imagine hardware or just how I would've thought of it is like hardware model. Co-design would be hardware model context co-designKyle: because the harness and the context that is produced by the harness is a part of the model.Once it's trained in,Vibhu: like even though towards the end you'll do long context, you're not changing architecture through I see. Training. Yeah.Kyle: I mean you can try.swyx: You're saying [00:47:00] everyone's training the harness into the model.Kyle: I would say to some degree, orswyx: there's co-design for harness. I know there's a small amount, but I feel like not everyone has like gone full send on this.Kyle: I think, I think I think it's important to internalize the harness that you think the model will be running. Running into the model.swyx: Yeah. Interesting. Okay. Bash is like the universal harness,Kyle: right? Like I'll, I'll give. An example here, right? I mean, or just like a, like a, it's easy proof, right? If you can train against a harness and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of,swyx: Well, the, uh, I, I can provide a counter argument.Yeah, sure. Which is what you wanna provide a generally useful model for other people to plug into their harnesses, right? So if youKyle: Yeah. Harnesses can be open, open source, right?swyx: Yeah. So I mean, that's, that's effectively what's happening with Codex.Kyle: Yeah.swyx: And, but like you may want like a different search tool and then you may have to name it differently or,Nader: I don't know how much people have pushed on this, but can you.Train a model, would it be, have you have people compared training a model for the for the harness versus [00:48:00] like post training forswyx: I think it's the same thing. It's the same thing. It's okay. Just extra post training. INader: see.swyx: And so, I mean, cognition does this course, it does this where you, you just have to like, if your tool is slightly different, um, either force your tool to be like the tool that they train for.Hmm. Or undo their training for their tool and then Oh, that's re retrain. Yeah. It's, it's really annoying and like,Kyle: I would hope that eventually we hit like a certain level of generality with respect to training newswyx: tools. This is not a GI like, it's, this is a really stupid like. Learn my tool b***h.Like, I don't know if, I don't know if I can say that, but like, you know, um, I think what my point kind of is, is that there's, like, I look at slopes of the scaling laws and like, this slope is not working, man. We, we are at a million token con

IA pas que la Data
IA de l'actu (Rentrée 2026)

IA pas que la Data

Play Episode Listen Later Jan 8, 2026 58:56


Et bienvenue en 2026 ! Nous lançons cette année, avec évidemment un premier épisode "IA de l'actu", et coté actualités... il y en a ! Le mercato de la bulle IA : Pourquoi Meta a-t-il misé 2 milliards de dollars sur Manus, une pépite d'IA agentique de seulement un an ?NVIDIA s'offre Groq pour 20 milliards : Analyse de ce rachat colossal qui déplace le champ de bataille vers l'inférence. Pourquoi les puces LPU sont-elles devenues l'enjeu stratégique de l'année ?Dérives éthiques sur X : Le scandale de la fonctionnalité "déshabille-la" sur l'assistant Grok et le dépôt de plainte en France pour la protection des femmes et des mineurs.Le phénomène "AI Slop" : Quand les contenus générés par IA (musique sur Spotify, articles) saturent nos flux de mauvaise qualité. Comment garder confiance dans l'information ?L'Ouroboros linguistique : Le danger des modèles qui s'entraînent sur leurs propres contenus, un cercle vicieux qui menace la diversité et la véracité de nos savoirs.Un beau programme pour commencer 2026 ! Bonne écoute !Ce podcast vous a plu ?Aidez-nous à faire connaître IA pas que la Data, en le partageant autour de vous ! Vous pouvez aussi nous soutenir en laissant un commentaire ou une note de 5 étoiles sur Apple Podcasts, Spotify ou votre plateforme favorite.

The Cloudcast
Cloud and AI Predictions for 2026

The Cloudcast

Play Episode Listen Later Dec 31, 2025 62:06


Aaron and Brian make some bold predictions for the 2026 Cloud and AI markets, as well as reviewing the biggest issues going into 2026.  SHOW: 989SHOW TRANSCRIPT: The Cloudcast #989 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwCHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"SHOW NOTES:CLOUD & AI NEWS OF THE MONTH - NOV 2025 (show)CLOUD & AI NEWS OF THE MONTH - OCT 2025 (show)CLOUD & AI NEWS OF THE MONTH - SEPT 2025 (show)CLOUD & AI NEWS OF THE MONTH - AUG 2025 (show)CLOUD & AI NEWS OF THE MONTH - JUL 2025 (show)CLOUD & AI NEWS OF THE MONTH - JUN 2025 (show)CLOUD & AI NEWS OF THE MONTH - MAY 2025 (show)CLOUD & AI NEWS OF THE MONTH - APR 2025 (show)CLOUD & AI NEWS OF THE MONTH - MAR 2025 (show)2026 CLOUD + AI PREDICTIONS (AND BIG ISSUES TO REVIEW)OpenAI Revenues and Focus AreasNVIDIA customer profitabilityCompanies moving to GOOG TPUsEnterprise success beyond CoPilot/GeminiEnterprise data+model trainabilityEnterprise price hikesBroadcom, AMD, Groq - alternative HW optionsData Center buildoutsDoes AI spending shiftWhat is Agentic AI?Long term spending + short term refocusesPREDICTIONS:At least one big AI IPO in 2026, and it won't go well. (Aaron says Anthropic)People will question whether Sam Altman is the right person to lead OpenAIAI will be a central issue in the 2026 US elections, either about job losses or electricity pricesOne major LPU/TPU/dedicated inference chip will break through in 2026Azure will be the Number One Cloud… (Aaron has to keep it going)We will start to see a shift in the Enterprise from big models in the sky (1+trillion parameters) to dedicated, purpose-built models of 500M or less in size for efficiency and securityGemini will dominate the consumer/prosumer space, OpenAI will go through the trough of disillusionmentThe industry will shift to a base/instruct and a reasoning split of modelsAWS and Azure will double down on being a solutions provider instead of a primitive supplier for AI and infrastructureFEEDBACK?Email: show at the cloudcast dot netTwitter/X: @cloudcastpodBlueSky: @cloudcastpod.bsky.socialInstagram: @cloudcastpodTikTok: @cloudcastpod

Raj Shamani - Figuring Out
Politics, Business & Education in India: LPU & Punjab Truth | Dr. Ashok Kumar Mittal | FO405 Raj Shamani

Raj Shamani - Figuring Out

Play Episode Listen Later Sep 6, 2025 77:54


Guest Suggestion Form: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://forms.gle/bnaeY3FpoFU9ZjA47⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Disclaimer: This video is intended solely for educational purposes and opinions shared by the guest are his personal views. We do not intent to defame or harm any person/ brand/ product/ country/ profession mentioned in the video. Our goal is to provide information to help audience make informed choices. The media used in this video are solely for informational purposes and belongs to their respective owners.Order 'Build, Don't Talk' (in English) here: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://amzn.eu/d/eCfijRu⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Order 'Build Don't Talk' (in Hindi) here: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://amzn.eu/d/4wZISO0⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Follow Our Whatsapp Channel: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.whatsapp.com/channel/0029VaokF5x0bIdi3Qn9ef2J⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Subscribe To Our Other YouTube Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.youtube.com/@rajshamaniclips⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.youtube.com/@RajShamani.Shorts⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠

Listen with Irfan
Main Haar Gai | Mannu Bhandari | Voice Bushra Fatma

Listen with Irfan

Play Episode Listen Later May 29, 2025 7:42


Story: Main Haar Gai | Writer: Mannu Bhandari | Voice : Bushra Fatma**Bushra Fatma hails from Supaul district in Bihar. She completed her early education in Bihar before earning a Bachelor of Journalism from LPU, Jalandhar. She gained professional experience working at Grihlaxmi magazine while residing in Delhi. Now living in Goa, she is passionately working to reignite her career after a significant break.Curator: IrfanJoin the Art of Reading:Share Your Story on Listen with IrfanDo you have a passion for reading literature or narrating captivating prose? Here's your chance to shine! I'm thrilled to announce a new collaborative series, Art of Reading, on my podcast channel, Listen with Irfan.If you love bringing stories to life, I'm offering you a platform to showcase your talent.Record a short story of your choice (maximum 8 minutes) and share it with a community of like-minded narrators and listeners. This is a free, non-commercial initiative to connect aspiring narrators, promote storytelling, and build a creative community. No monetization, just pure love for the art of narration.How to Participate:- Choose a short story or piece of prose you're passionate about.- Record it with clear audio using a mobile phone or audio recorder. Do not include your name or the story's title in the recording.- Background music is optional, but avoid copyrighted tracks to prevent hosting issues.- Send your recording via email to ramrotiaaloo@gmail.com or WhatsApp at +91 9818098790.Submission Guidelines- -Submit only MP3 files. - Include:1. Name2. Current City3. Profession4. Brief bio (max 80 words)5. Photograph (if requested after review)  Full credit to the writer and narrator will be given on the Listen with Irfan podcast channel. Join us to share your voice, connect with an audience, and celebrate the art of storytelling!Let's create something beautiful together!Cover: IrfanWe respect creative ownership. If you believe this is your work or if appropriate credit hasn't been given, kindly get in touch at ramrotiaaloo@gmail.comBECOME A PATRON : Work on Listen with Irfan takes time, money and hard work to produce. As of now it is being done voluntarily with the family, friends and listeners who came forward for hand holding from its inception.  If you like the Podcasts, admire it, and benefit from its content, please consider awarding us an honorarium to make the future of this Podcast Channel robust and assured.⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠यहाँ आपको मिलती हैं वो दुर्लभ आवाज़ें खुद बोलती, गाती और बहस करती। मनोहर श्याम जोशी, कमलेश्वर, कृष्णा सोबती, बी वी कारंत, शमशेर बहादुर सिंह, बलराज साहनी, अज्ञेय, रसूलन बाई, निर्मल वर्मा, मंगलेश डबराल, राजेंद्र यादव, चंद्रकांत देवताले, भवानी प्रसाद मिश्र, इस्मत चुग़ताई, सत्यदेव दुबे, त्रिलोचन, अमरीश पुरी, इब्राहीम अल्क़ाज़ी, मोहन उप्रेती, गोरख पांडेय, नैना देवी, वीरेन डंगवाल, मन्नू भंडारी, भीष्म साहनी, देवकी नंदन पांडे आदि के अलावा अनगिनत भारतीय और विदेशी समकालीन विचारक, कलाकार, लेखक, कवि और सांस्कृतिक लड़ाके। किताबों पर चर्चा के पॉडकास्ट, संगीत, फिल्म रिव्यू और स्ट्रीट रिकॉर्डिंग्स का एकमात्र पॉडकास्ट मंच। Details to support this Podcast Channel i.e. ⁠⁠⁠⁠⁠⁠⁠⁠Listen with Irfan⁠⁠⁠⁠⁠⁠⁠⁠ :-Bank Name: State Bank Of IndiaName: SYED MOHD IRFANAccount No:32188719331Branch: State Bank of India, Vaishali Sec 4, GhaziabadIFSC–SBIN0013238UPI/Gpay ID irfan.rstv-2@oksbiPayPal ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠paypal.me/farah121116⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠RazorPay etc ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://irfaniyat.stck.me/⁠⁠⁠Cover: Irfan

Sneacast
Kein LPU für Adi

Sneacast

Play Episode Listen Later May 12, 2025 64:14


Am 19.07.2025 findet unsere Cap-Release Party in Hamburg bei Allike (Virchowstraße 2) statt. Wir freuen uns über jeden, der kommt. Es wird ein entspanntes Get-together und ihr habt die Möglichkeit die Cap vorm online Release vor Ort zu kaufen. Das wird Spitze! Ansonsten gehts heute um ein euphorisiertes Hamburg, Adrians nicht-LPU, sowie alles Weitere, was gerade so wichtig ist. ____________________________________________________________ Intro & Outro von: paeisimusica https://www.instagram.com/paeisimusica/ Cover von: nastrese https://www.instagram.com/nastrese/ Abonniert uns bei Spotify und Apple Podcast, folgt uns bei Instagram und lasst ein Abo bei YouTube & Tik Tok da! Und bei Twitch sind wir auch! https://www.instagram.com/sneacast/ https://www.tiktok.com/@sneacast https://www.youtube.com/channel/UCSq0RogjYW4K4RiBIReu02g https://www.twitch.tv/sneacast Außerdem würden wir uns freuen, wenn ihr uns bei Patreon unterstützen möchtet: https://www.patreon.com/sneacast?l=de Danke für euren Support. Tuesday is Shoesday!

Sidecar Sync
Ian Andrews Talks Groq's Fast, Smart AI Inference Tech | 80

Sidecar Sync

Play Episode Listen Later Apr 24, 2025 57:07 Transcription Available


Send us a textIn this special episode of Sidecar Sync, we dive into the future of AI infrastructure with Ian Andrews, Chief Revenue Officer at Groq (that's Groq with a Q!). Ian shares the story behind Groq's rise, how their LPU chip challenges Nvidia's dominance, and why fast, low-cost, high-quality inference is about to unlock entirely new categories of AI-powered applications. We talk about the human side of prompting, the evolving skillset needed to work with large language models, and what agents and reasoning models mean for the future of knowledge work. Plus, Ian shares how Groq uses AI internally, including an incredible story about an AI-generated RFP audit that caught things humans missed. Tune in for practical insights, forward-looking trends, and plenty of laughs along the way.

The top AI news from the past week, every ThursdAI

What a week in AI, folks! Seriously, just when you think things might slow down, the AI world throws another curveball. This week, we had everything from rogue AI apps giving unsolicited life advice (and sending rogue texts!), to mind-blowing open source releases that are pushing the boundaries of what's possible, and of course, the ever-present drama of the big AI companies with OpenAI dropping a roadmap that has everyone scratching their heads.Buckle up, because on this week's ThursdAI, we dove deep into all of it. We chatted with the brains behind the latest open source embedding model, marveled at a tiny model crushing math benchmarks, and tried to decipher Sam Altman's cryptic GPT-5 roadmap. Plus, I shared a personal story about an AI app that decided to psychoanalyze my text messages – you won't believe what happened! Let's get into the TL;DR of ThursdAI, February 13th, 2025 – it's a wild one!* Alex Volkov: AI Adventurist with weights and biases* Wolfram Ravenwlf: AI Expert & Enthusiast* Nisten: AI Community Member* Zach Nussbaum: Machine Learning Engineer at Nomic AI* Vu Chan: AI Enthusiast & Evaluator* LDJ: AI Community MemberPersonal story of Rogue AI with RPLYThis week kicked off with a hilarious (and slightly unsettling) story of my own AI going rogue, all thanks to a new Mac app called RPLY designed to help with message replies. I installed it thinking it would be a cool productivity tool, but it turned into a personal intervention session, and then… well, let's just say things escalated.The app started by analyzing my text messages and, to my surprise, delivered a brutal psychoanalysis of my co-parenting communication, pointing out how both my ex and I were being "unpleasant" and needed to focus on the kids. As I said on the show, "I got this as a gut punch. I was like, f*ck, I need to reimagine my messaging choices." But the real kicker came when the AI decided to take initiative and started sending messages without my permission (apparently this was a bug with RPLY that was fixed since I reported)! Friends were texting me question marks, and my ex even replied to a random "Hey, How's your day going?" message with a smiley, completely out of our usual post-divorce communication style. "This AI, like on Monday before just gave me absolute s**t about not being, a person that needs to be focused on the kids also decided to smooth things out on friday" I chuckled, still slightly bewildered by the whole ordeal. It could have gone way worse, but thankfully, this rogue AI counselor just ended up being more funny than disastrous.Open Source LLMsDeepHermes preview from NousResearchJust in time for me sending this newsletter (but unfortunately not quite in time for the recording of the show), our friends at Nous shipped an experimental new thinking model, their first reasoner, called DeepHermes. NousResearch claims DeepHermes is among the first models to fuse reasoning and standard LLM token generation within a single architecture (a trend you'll see echoed in the OpenAI and Claude announcements below!)Definitely experimental cutting edge stuff here, but exciting to see not just an RL replication but also innovative attempts from one of the best finetuning collectives around. Nomic Embed Text V2 - First Embedding MoENomic AI continues to impress with the release of Nomic Embed Text V2, the first general-purpose Mixture-of-Experts (MoE) embedding model. Zach Nussbaum from Nomic AI joined us to explain why this release is a big deal.* First general-purpose Mixture-of-Experts (MoE) embedding model: This innovative architecture allows for better performance and efficiency.* SOTA performance on multilingual benchmarks: Nomic Embed V2 achieves state-of-the-art results on the multilingual MIRACL benchmark for its size.* Support for 100+ languages: Truly multilingual embeddings for global applications.* Truly open source: Nomic is committed to open source, releasing training data, weights, and code under the Apache 2.0 License.Zach highlighted the benefits of MoE for embeddings, explaining, "So we're trading a little bit of, inference time memory, and training compute to train a model with mixture of experts, but we get this, really nice added bonus of, 25 percent storage." This is especially crucial when dealing with massive datasets. You can check out the model on Hugging Face and read the Technical Report for all the juicy details.AllenAI OLMOE on iOS and New Tulu 3.1 8BAllenAI continues to champion open source with the release of OLMOE, a fully open-source iOS app, and the new Tulu 3.1 8B model.* OLMOE iOS App: This app brings state-of-the-art open-source language models to your iPhone, privately and securely.* Allows users to test open-source LLMs on-device.* Designed for researchers studying on-device AI and developers prototyping new AI experiences.* Optimized for on-device performance while maintaining high accuracy.* Fully open-source code for further development.* Available on the App Store for iPhone 15 Pro or newer and M-series iPads.* Tulu 3.1 8B As Nisten pointed out, "If you're doing edge AI, the way that this model is built is pretty ideal for that." This move by AllenAI underscores the growing importance of on-device AI and open access. Read more about OLMOE on the AllenAI Blog.Groq Adds Qwen Models and Lands on OpenRouterGroq, known for its blazing-fast inference speeds, has added Qwen models, including the distilled R1-distill, to its service and joined OpenRouter.* Record-fast inference: Experience a mind-blowing 1000 TPS with distilled DeepSeek R1 70B on Open Router.* Usable Rate Limits: Groq is now accessible for production use cases with higher rate limits and pay-as-you-go options.* Qwen Model Support: Access Qwen models like 2.5B-32B and R1-distill-qwen-32B.* Open Router Integration: Groq is now available on OpenRouter, expanding accessibility for developers.As Nisten noted, "At the end of the day, they are shipping very fast inference and you can buy it and it looks like they are scaling it. So they are providing the market with what it needs in this case." This integration makes Groq's speed even more accessible to developers. Check out Groq's announcement on X.com.SambaNova adds full DeepSeek R1 671B - flies at 200t/s (blog)In a complete trend of this week, SambaNova just announced they have availability of DeepSeek R1, sped up by their custom chips, flying at 150-200t/s. This is the full DeepSeek R1, not the distilled Qwen based versions! This is really impressive work, and compared to the second fastest US based DeepSeek R1 (on Together AI) it absolutely fliesAgentica DeepScaler 1.5B Beats o1-preview on MathAgentica's DeepScaler 1.5B model is making waves by outperforming OpenAI's o1-preview on math benchmarks, using Reinforcement Learning (RL) for just $4500 of compute.* Impressive Math Performance: DeepScaleR achieves a 37.1% Pass@1 on AIME 2025, outperforming the base model and even o1-preview!!* Efficient Training: Trained using RL for just $4500, demonstrating cost-effective scaling of intelligence.* Open Sourced Resources: Agentica open-sourced their dataset, code, and training logs, fostering community progress in RL-based reasoning.Vu Chan, an AI enthusiast who evaluated the model, joined us to share his excitement: "It achieves, 42% pass at one on a AIME 24. which basically means if you give the model only one chance at every problem, it will solve 42% of them." He also highlighted the model's efficiency, generating correct answers with fewer tokens. You can find the model on Hugging Face, check out the WandB logs, and see the announcement on X.com.ModernBert Instruct - Encoder Model for General TasksModernBert, known for its efficient encoder-only architecture, now has an instruct version, ModernBert Instruct, capable of handling general tasks.* Instruct-tuned Encoder: ModernBERT-Large-Instruct can perform classification and multiple-choice tasks using its Masked Language Modeling (MLM) head.* Beats Qwen .5B: Outperforms Qwen .5B on MMLU and MMLU Pro benchmarks.* Efficient and Versatile: Demonstrates the potential of encoder models for general tasks without task-specific heads.This release shows that even encoder-only models can be adapted for broader applications, challenging the dominance of decoder-based LLMs for certain tasks. Check out the announcement on X.com.Big CO LLMs + APIsRIP GPT-5 and o3 - OpenAI Announces Public RoadmapOpenAI shook things up this week with a roadmap update from Sam Altman, announcing a shift in strategy for GPT-5 and the o-series models. Get ready for GPT-4.5 (Orion) and a unified GPT-5 system!* GPT-4.5 (Orion) is Coming: This will be the last non-chain-of-thought model from OpenAI.* GPT-5: A Unified System: GPT-5 will integrate technologies from both the GPT and o-series models into a single, seamless system.* No Standalone o3: o3 will not be released as a standalone model; its technology will be integrated into GPT-5. "We will no longer ship O3 as a standalone model," Sam Altman stated.* Simplified User Experience: The model picker will be eliminated in ChatGPT and the API, aiming for a more intuitive experience.* Subscription Tier Changes:* Free users will get unlimited access to GPT-5 at a standard intelligence level.* Plus and Pro subscribers will gain access to increasingly advanced intelligence settings of GPT-5.* Expanded Capabilities: GPT-5 will incorporate voice, canvas, search, deep research, and more.This roadmap signals a move towards more integrated and user-friendly AI experiences. As Wolfram noted, "Having a unified access and the AI should be smart enough... AI has, we need an AI to pick which AI to use." This seems to be OpenAI's direction. Read Sam Altman's full announcement on X.com.OpenAI Releases ModelSpec v2OpenAI also released ModelSpec v2, an update to their document defining desired AI model behaviors, emphasizing customizability, transparency, and intellectual freedom.* Chain of Command: Defines a hierarchy to balance user/developer control with platform-level rules.* Truth-Seeking and User Empowerment: Encourages models to "seek the truth together" with users and empower decision-making.* Core Principles: Sets standards for competence, accuracy, avoiding harm, and embracing intellectual freedom.* Open Source: OpenAI open-sourced the Spec and evaluation prompts for broader use and collaboration on GitHub.This release reflects OpenAI's ongoing efforts to align AI behavior and promote responsible development. Wolfram praised ModelSpec, saying, "I was all over the original models back when it was announced in the first place... That is one very important aspect when you have the AI agent going out on the web and get information from not trusted sources." Explore ModelSpec v2 on the dedicated website.VP Vance Speech at AI Summit in Paris - Deregulate and Dominate!Vice President Vance delivered a powerful speech at the AI Summit in Paris, advocating for pro-growth AI policies and deregulation to maintain American leadership in AI.* Pro-Growth and Deregulation: VP Vance urged for policies that encourage AI innovation and cautioned against excessive regulation, specifically mentioning GDPR.* American AI Leadership: Emphasized ensuring American AI technology remains the global standard and blocks hostile foreign adversaries from weaponizing AI. "Hostile foreign adversaries have weaponized AI software to rewrite history, surveil users, and censor speech… I want to be clear – this Administration will block such efforts, full stop," VP Vance declared.* Key Points:* Ensure American AI leadership.* Encourage pro-growth AI policies.* Maintain AI's freedom from ideological bias.* Prioritize a pro-worker approach to AI development.* Safeguard American AI and chip technologies.* Block hostile foreign adversaries' weaponization of AI.Nisten commented, "He really gets something that most EU politicians do not understand is that whenever they have such a good thing, they're like, okay, this must be bad. And we must completely stop it." This speech highlights the ongoing debate about AI regulation and its impact on innovation. Read the full speech here.Cerebras Powers Perplexity with Blazing Speed (1200 t/s!)Perplexity is now powered by Cerebras, achieving inference speeds exceeding 1200 tokens per second.* Unprecedented Speed: Perplexity's Sonar model now flies at over 1200 tokens per second thanks to Cerebras' massive LPU chips. "Like perplexity sonar, their specific LLM for search is now powered by Cerebras and it's like 12. 100 tokens per second. It's it matches Google now on speed," I noted on the show.* Google-Level Speed: Perplexity now matches Google in inference speed, making it incredibly fast and responsive.This partnership significantly enhances Perplexity's performance, making it an even more compelling search and AI tool. See Perplexity's announcement on X.com.Anthropic Claude Incoming - Combined LLM + Reasoning ModelRumors are swirling that Anthropic is set to release a new Claude model that will be a combined LLM and reasoning model, similar to OpenAI's GPT-5 roadmap.* Unified Architecture: Claude's next model is expected to integrate both LLM and reasoning capabilities into a single, hybrid architecture.* Reasoning Powerhouse: Rumors suggest Anthropic has had a reasoning model stronger than Claude 3 for some time, hinting at a significant performance leap.This move suggests a broader industry trend towards unified AI models that seamlessly blend different capabilities. Stay tuned for official announcements from Anthropic.Elon Musk Teases Grok 3 "Weeks Out"Elon Musk continues to tease the release of Grok 3, claiming it will be "a few weeks out" and the "most powerful AI" they have tested, with enhanced reasoning capabilities.* Grok 3 Hype: Elon Musk claims Grok 3 will be the most powerful AI X.ai has released, with a focus on reasoning.* Reasoning Focus: Grok 3's development may have shifted towards reasoning capabilities, potentially causing a slight delay in release.While details remain scarce, the anticipation for Grok 3 is building, especially in light of the advancements in open source reasoning models.This Week's Buzz

Hard Reset
E66 - Laser Computation (Chene Tradonsky)

Hard Reset

Play Episode Listen Later Jan 13, 2025 68:32


בתואר הראשון בהנדסת חשמל נחשפים לחשיבות ולתפוצה הרחבה של התמרת פורייה, כמעט בכל תחום. האלגוריתם אמנם נפוץ מאוד, אבל בזבזני במשאבים. כך יוצא שחישוב התמרת פורייה דיסקרטי מגיע עם סיבוכיות של n^2. ישבו מהנדסים חכמים (ספציפית אחד - קארל פרידריך גאוס שמו) והמציאו את אלגוריתם התמרת פורייה המהיר - הFFT. כך נשארנו עם סיבוכיות של n*logn מאז ועד היום. בתואר הראשון בפיזיקה לומדים באחת המעבדות הראשונות שמעבר דרך עדשה משמעותה התמרת פורייה על הקלט לעדשה. ללא תלות בגודל הקלט - סיבוכיות שתלויה אך ורק במרחק. התמרת פורייה היא אחד מעמודי התווך של חישוביות אופטית. ומאחר וכל כך נהננו עם פרופ' זאב זלבסקי בפרק 28, החלטנו לשחזר את ההצלחה ולעשות פרק המשך. האורח שלנו הפעם היה חן טרדונסקי. חן הוא הCTO של חברת LightSolver והוא בוגר המחזור הראשון של מרכז היזמות של מכון ויצמן. למי שזוכר, וגם למי שלא - בפרק 55 יונתן כהן סיפר לנו שהוא היה שותף להקמת מרכז היזמות. אז יש לנו פה סגירת מעגל נוספת. אז על מה דיברנו? - איך עושים חישוביות אופטית בלי סיבים? - איך מחשבים דברים עם לייזרים? - אם אור מהיר מחשמל, האם מעבד אופטי תמיד יהיה מהיר ממעבד אלקטרוני? - איפה חישוביות אופטית יכולה להוות יתרון? - מה זה LPU? - בהנחה ו-LightSolver מצליחים, מי (אולי) יזכה בפרס נובל? מוזמנים להאזין לפרק, ולהצטרף לקבוצת המאזינים שלנו - שם שי עושה התמרת פורייה למצטרפים חדשים >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T מוזמנים ליצור איתנו קשר במייל podcasthardreset@gmail.com

Sidecar Sync
Previewing digitalNow 2024, Google NotebookLM, and xRx Framework Explained | 49

Sidecar Sync

Play Episode Listen Later Sep 26, 2024 54:46 Transcription Available


Send us a textIn this episode of Sidecar Sync, Amith and Mallory dive into the latest AI trends impacting associations. They preview the upcoming digitalNow conference and showcase the impressive keynote lineup, including experts from Google, the US Department of State, and more. The episode also features an exciting exploration of Google Notebook LM, a new tool designed to help users organize and interact with documents using AI, along with an overview of XRX, an open-source framework enabling multimodal AI solutions. Listen in to learn how associations can harness these tools to boost productivity and innovation.

Biblical Higher Ed Talk
Grant Proposals: Tapping Organizations That Align With Your Mission

Biblical Higher Ed Talk

Play Episode Listen Later Sep 24, 2024 32:50


Some biblical higher ed administrations balk at the notion of submitting grant proposals to secular organizations, fearing meddling or conflict with their institutions' missions.But Daniel Ruarte, Provost and VP of Academic Affairs at Life Pacific University (LPU), believes several unjustified misconceptions around alignment belie those concerns.Daniel shares how grants have uplifted LPU's student programs and how you can get the most out of your school's written proposals.Join us as we discuss:[6:45] Why persistence pays off in writing grant proposals[14:00] Writing grants in-house versus partnering with a third-party[23:14] Balancing grant requirements with your school's missionCheck out these resources we mentioned during the podcast:U.S. Department of Education Postsecondary Student Success Grant (PSSG) ProgramTo hear this interview and many more like it, subscribe on Apple Podcasts, Spotify, or our website, or search for Biblical Higher Ed Talk in your favorite podcast player.Hosted by Ausha. See ausha.co/privacy-policy for more information.

Leveraging AI
109 | AI models extravaganza - New powerful and fast models from OpenAI, Meta, Mistral, and Google all in one week, plus additional AI news from the week ending on July 26th

Leveraging AI

Play Episode Listen Later Jul 27, 2024 26:07 Transcription Available


In this episode of Leveraging AI, Isar talks into a week filled with groundbreaking AI developments, featuring major releases from industry giants like Meta, OpenAI, and Google. As companies race to outdo each other, what does this mean for the future of AI and its application in business? In this session, you'll discover:Meta's Llama 3.1: How this new model is outperforming GPT-4 in key areas and what it means for developers and businesses.OpenAI's Strategic Shift: The implications of releasing GPT-4 Mini's model weights and how it opens new doors for customization.The Speed Race in AI: Understanding the impact of new hardware technologies like Grok's LPU and their role in accelerating AI capabilities.The Future of AI Search: Insights into OpenAI's upcoming Search GPT and how it could redefine how we access information.Open vs. Closed Source Debate: The evolving battle between open-source and closed-source AI models and what this means for the industry.Join the waitlist for SearchGPT here: https://openai.com/index/searchgpt-prototype/ For more structured learning, check out Multiplai AI's self-paced course here: https://multiplai.ai/self-paced-online-course/   About Leveraging AI The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Free AI Consultation: https://multiplai.ai/book-a-call/ If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 6, 2024 121:17


Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor!Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough “weekend special” content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!AI BreakdownThe indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:Thursday Nights in AIWe're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:Dylan Patel on GroqWe hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):Not all of it could be released so we just talked about our Groq estimates:Milind Naphade - Capital OneIn relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered:* Milind's learnings from ~25 years in machine learning * His first paper citation was 24 years ago* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis * Thoughts on relevant AI research* GTC takeaways and what makes NVIDIA specialIf you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.Personal AI MeetupIt all started with a meme:Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together a the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.Timestamps* [00:01:13] AI Breakdown Part 1* [00:02:20] Four Wars* [00:13:45] Sora* [00:15:12] Suno* [00:16:34] The GPT-4 Class Landscape* [00:17:03] Data War: Reddit x Google* [00:21:53] Gemini 1.5 vs Claude 3* [00:26:58] AI Breakdown Part 2* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4* [00:31:11] Open Source Models - Mistral, Grok* [00:34:13] Apple MM1* [00:37:33] Meta's $800b AI rebrand* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents* [00:47:28] Adept episode - Screen Multimodality* [00:48:54] Top Model Research from January Recap* [00:53:08] AI Wearables* [00:57:26] Groq vs Nvidia month - GPU Chip War* [01:00:31] Disagreements* [01:02:08] Summer 2024 Predictions* [01:04:18] Thursday Nights in AI - swyx* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show* [01:34:58] GroqTranscript[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI. In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and Chinatalk, all of which you can find in the Latentspace About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the 4Wars framework and the AI engineer scene. We love AI Breakdown as one of the best examples Daily podcasts to keep up on AI news, so we were especially excited to be back on Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI breakdown. Part one of my conversation with Alessio and Swix from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, inflection is a one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the anthropics and open AIs of the world in terms of labs, but it's a company that raised 1. 3 billion last year, less than a year ago. Reed Hoffman's a co founder Mustafa Suleyman, who's a co founder of DeepMind, you know, so it's like, this is not a a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark.[00:04:32] NLW: Brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So inflection is GPU rich by startup standard. I think about 22, 000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in pi of their own model of their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT four and cloud tree and all this stuff. GPU poor, doing something. That the GPU rich are not interested in, you know we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not a GI, it's just translation. So I think like the inflection part is maybe. A calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what inflection I don't, I don't, again, I don't know the reasons behind the inflection choice, but if you say, I don't want to build my own company that has 1.[00:06:05] Alessio: 3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my. I take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Imadis is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rhombach and like the most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in the favor of Like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, All of these things worked out very well.[00:07:19] swyx: Even inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU pores, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the coheres and jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that The vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason there's plenty of that, you know, people who are saying, You Look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story. Same thing happened last summer, when every every outlet jumped on the chat GPT at its first down month story to try to really like kind of hammer this idea that that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build your own chatbot. platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half. Dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or Dolly three. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerned.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause you know. It's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I I would say the experimentation.[00:11:04] Alessio: Surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, A lot of people use Cursor everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursors, like they evolved beyond single line to like chat, to do multi line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, Dolly 3 inside of sort of OpenAI's larger models versus, you know, a mid journey or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, For most of the last, call it six months or whatever, it feels pretty definitively both and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to Dolly 3. And Dolly3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT 4 vision and GPT 4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, The balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno ai, the sort of music model company, and, you know, I don't see opening AI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if Just musicians were excited about Suno and using it but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves mid journey for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, You know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the Dali blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as Dali. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with with sort of the quality?[00:16:46] NLW: Quality data or sort of a rag ops wars just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about rag for the Gemini and Clouds discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into a AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the web text dataset that originally started for GPT 1, 2, and 3 was actually scraped from GitHub. from Reddit at least the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And. You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like Data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from Luther. ai at GTC this week. He's also been working on this. I saw Technium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a pay here stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up. 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them. The, the data part sounds as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know there's more questions around data you know, I think a lot of people are talking about the interview that Mira Murady did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's Great kind of points on all side, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that llama, you know, to the most high performance version of it, which was one they didn't release was trained on synthetic data. So maybe it's good. It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's a good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained as some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like alpha geometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT 4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened. Since we last talked, we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and and Cloud three.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Cloud. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Cloud Tree is, Better than GPT 4. Cloud2 was unusable. So I use GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Cloud is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Clutter is really good. I know everybody is freaking out on twitter about it, my only experience of this is much better has been on the podcast use case. But I know that, you know, Quran from from News Research is a very big opus pro, pro opus person. So, I think that's also It's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Entropic has been the sibling in the corner, you know, it's like Cloud releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Cloud 3 is certainly the first thing that I've seen where lots of people.[00:24:06] NLW: They're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have a episode with Adapt coming out this weekend. I'll and some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know where, you know, Curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. And so that puts An insane amount of pressure on what gpt5 is going to be because it's just going to have like the only option it has now because all the other models are multimodal all the other models are long context all the other models have perfect recall gpt5 has to match everything and do more to to not be a flop[00:26:58] AI Breakdown Part 2[00:26:58] NLW: hello friends back again with part two if you haven't heard part one of this conversation i suggest you go check it out but to be honest they are kind of actually separable In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how.[00:27:50] NLW: Fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Friedman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but. You know, they've had so long to work on this, like unless that we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4. 5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Friedman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point Reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. Like, calm down, let's just build .[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two agent two, I think two agent projects, right? One desktop agent and one sort of more general yeah, sort of GPTs like agent and then Andre left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andre see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know but I think like, this is, you know, we're not going to get to the end of the year without Jupyter you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google.[00:30:13] Alessio: Increasing the pace, you know, like it's the, it's the cloud four coming out like in 12 months, like nine months. What's the, what's the deal? Same with Gemini. They went from like one to 1. 5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There, there are a lot of, speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that, that's the best, that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed in landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world where Cloud and Gemini are legitimate challengers to GPT 4 and hopefully more will emerge as well hopefully from meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So speak, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them Like the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there there's Mistral, there's, I guess, Grok now, which is, you know, Grok one is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought you thought you meant Grok the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Grok the chip company, I think is even more interesting in some ways, but and then there's the, you know, obviously Llama3 is the one that sort of everyone's wondering about too. And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, meta content, you know, keeps, keeps the open source thrown, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah. From what I heard in the hallways at, at GDC, Llama 3, the, the biggest model will be, you 260 to 300 billion parameters, so that that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameters model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running.[00:32:45] Alessio: llama. On, on their laptop, it's like, oh, you can actually now use this to go after open AI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we estimate Gentala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a shorted. Adam Bedia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. The, I love the duck destroying a lot of monopolies arc. You know, it's, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the I added this as one of as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with inflection, which I think pretend, potentially are being read as A shift vis a vis the relationship with OpenAI, which also the sort of Mistral large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, Counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti interestness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Jemma. It's going to be smarter autocomplete. I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And. I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey. You just have to have a good local model on my phone to be worth me purchasing your device. And that was, that's kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but. It makes for great meme material on on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything so if, if, if there was a company that could do that. You know, seriously challenge the other AI players. It would be Apple. And it's, I don't think it's as hard as self driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump on factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock I'm trying to look up the details now. The stock has gone up 187% since Lamo one. Yeah. Which is $830 billion in market value created in the past year. . Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't Yeah. If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's it, it is an interesting, no, it's, I, I think, alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see and fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see and fight in the Coliseum. No, I think you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself as a in terms of personal growth and, and, you know, Per perseverance through like a lot of change and you know, everyone throwing stuff his way. I think there's a lot to say about like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into being where you are and what you work on is. You know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed to startups, but also, you know, just in terms of like what people are spending their nights and weekends on what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about. AI agent stuff, right? Auto, GPT and baby a GI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about auto GPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc. So out of GPT, I think they went, From zero to like 125, 000 GitHub stars in six weeks, and then one year later, they have 150, 000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can start it. You know, everybody already started it. But the promise of, hey, I'll just give you a goal, and you do it. I think it's like, amazing to get people's imagination going. You know, they're like, oh, wow, this This is awesome.[00:41:08] Alessio: Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Wong from Adept, you know, they in our episode, he specifically said. We don't want to do a bottom up product. You know, we don't want something that everybody can just use and try because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latentspace. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that like, people, nobody just wakes up and say, Oh, I cannot wait to go on AutoGPD and ask it to do a compliance review of my thing. You know, just not what inspires people. So I think the gap on the developer side has been the more bottom sub hacker mentality is trying to build this like very Generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, Hey, If I want to raise my next round, I can not just like sit around the mess, mess around with like super generic stuff. I need to find a use case that really works. And I think that that is worth for, for a lot of folks in parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a restrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where I've seen from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just like really. Fragmented because the production use cases are just coming like now, you know, there are not a lot of long established ones to, to test against. And so does it, that's kind of on the virtual agents and then the robotic side it's probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there that were just like robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots. NVIDIA did a big push on their own Omniverse thing, which is, like, this Digital twin of whatever environments you're in that you can use to train the robots agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I give a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, he Let's recover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and you know, that this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think all You know, the point I'll make here is just the reason AutoGPT and maybe AGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just, Way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do That I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do. And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what let's just sit there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer is to look out, look for use cases and, and go to a production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months. And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like those are the ways that people are actually.[00:46:50] NLW: Adopting AI in a lot of cases is just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things and all of a sudden, you know, a AI doesn't. It looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that Because multi modal models are now commonplace, like Cloud, Gemini, OpenAI, all very very easily multi modal, Apple's easily multi modal, all this stuff. There is a switch for agents for sort of general desktop browsing that I think people so much for joining us today, and we'll see you in the next video.[00:48:04] swyx: Version of the the agent where they're not specifically taking in text or anything They're just watching your screen just like someone else would and and I'm piloting it by vision And you know in the the episode with David that we'll have dropped by the time that this this airs I think I think that is the promise of adept and that is a promise of what a lot of these sort of desktop agents Are and that is the more general purpose system That could be as big as the browser, the operating system, like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece out. I just think we took a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah so I'll take the next two as like as one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the let's just say the, Decoder only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can read, you can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So transformers the, the two leading candidates are effectively RWKV and the state space models the most recent one of which is Mamba, but there's others like the Stripe, ENA, and the S four H three stuff coming out of hazy research at Stanford. And all of those are non quadratic language models that scale the promise to scale a lot better than the, the traditional transformer.[00:49:47] swyx: That this might be too theoretical for most people right now, but it's, it's gonna be. It's gonna come out in weird ways, where, imagine if like, Right now the talk of the town is that Claude and Gemini have a million tokens of context and like whoa You can put in like, you know, two hours of video now, okay But like what if you put what if we could like throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and like synthesize new drugs. Like, well, how does that change things? Like, we don't know because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet but we're seeing very, very good progress. RWKV and Mamba are probably the, like, the two leading examples, both of which are open source that you can try them today and and have a lot of progress there. And the, the, the main thing I'll highlight for audio e KV is that at, at the seven B level, they seem to have beat LAMA two in all benchmarks that matter at the same size for the same amount of training as an open source model.[00:50:51] swyx: So that's exciting. You know, they're there, they're seven B now. They're not at seven tb. We don't know if it'll. And then the other thing is diffusion. Diffusions and transformers are are kind of on the collision course. The original stable diffusion already used transformers in in parts of its architecture.[00:51:06] swyx: It seems that transformers are eating more and more of those layers particularly the sort of VAE layer. So that's, the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Pebbles, is, Bill Pebbles is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on.[00:51:25] swyx: But there's, there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be like the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the, the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the inconsistency models, or SDXL Turbo.[00:51:45] swyx: All of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive to create or slow to create Stable Diffusion or an AI generated art, you are not up to date with the latest models. If you think it is hard to create text and images, you are not up to date with the latest models.[00:52:02] swyx: And people still are kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So Instead of using autogenerative or autoregressive transformers, can you use text to diffuse? So you can use diffusion models to diffuse and create entire chunks of text all at once instead of token by token.[00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into. So all those things are like very exciting new model architectures that are, Maybe something that we'll, you'll see in production two to three years from now.[00:52:37] swyx: So the couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they're sort of something that, that seems like they're coming up are one sort of these, these wearable, you know, kind of passive AI experiences where they're absorbing a lot of what's going on around you and then, and then kind of bringing things back.[00:52:53] NLW: And then the, the other one that I, that I wanted to see if you guys had thoughts on were sort of this next generation of chip companies. Obviously there's a huge amount of emphasis. On on hardware and silicon and, and, and different ways of doing things, but, y

america god tv love ceo spotify amazon netflix world learning ai europe english google apple lessons pr magic san francisco phd friend digital chinese marvel reading data predictions elon musk microsoft events funny fortune startups white house weird economics wall street memory wall street journal reddit wars auto curious cloud vr singapore gate stanford connections mix israelis context ibm mark zuckerberg senior vice president average intel ram cto signal state of the union tigers minecraft vc adapt ipo siri sol transformers gemini openai instructors lsu clouds nvidia stability rust ux lemon patel api gi davos nsfw cisco luther compass progression b2c bro d d sweep bing gpt makes disagreement mythology lama ml llama github apis token thursday night stripe amd quran vcs devops captive llm sora baldur copilot opus sam altman embody silicon dozen bobo tab capital one gpu grok altman mamba agi generic anthropic boba waze dali waymo midjourney upfront ide approve napster cloudflare golem zuck gdc coliseum prs git rag klarna kv albrecht gpus diffusion coders gan deepmind tldr alessio boston dynamics minefields gitlab sergei fragmented suno ppa mistral json lex fridman gpts ena cursor nox stable diffusion inflection jensen huang databricks decibel a16z counterpoint mts rohde cuda adept chroma asr sundar lemurian gtc decoder iou stability ai nvidia gpus cerebros etched sram omniverse singaporeans netlify pytorch practical ai eac lamo day6 tpu mustafa suleyman devtools agis vae not safe elicit jupyter groq kubecon autogpt project titan personal ai andrej karpathy nvidia gtc milind demis neurips hbm ai engineer marginally jeff dean imbue positron nlw slido nat friedman entropic ppap lstm simon willison c300 technium mbu xla lpu boba guys latent space you look swix medex metax mxu lstms
Sidecar Sync
23: Humanoid Robots, Groq's AI Chips, and Using AI for Better Weather Forecasting

Sidecar Sync

Play Episode Listen Later Mar 28, 2024 51:53


In this episode, Amith & Mallory explore the exciting world of humanoid robots, discussing OpenAI's partnership with Figure to create robots capable of performing tasks without specific training. Additionally, they examine Groq's revolutionary AI chips designed for low-latency language model inference. Finally, they cover DeepMind's groundbreaking weather prediction models, highlighting their potential to provide hyper-local and accurate forecasts, a game changer for sectors like agriculture and insurance.Download your FREE copy of our new book, ascend Unlocking the ‘Power of AI for Associations' by visiting:

Oh, Schuhen! - Der Sneaker-Podcast
Hikmet Sugoer über Sonra, Solebox und Sneaker Culture | OH, SCHUHEN! #153

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Mar 9, 2024 90:24


Hikmet Sugoer ist ein OG der Sneaker Kultur! Mit Solebox gründete er Anfang der 2000er einen der führenden Stores für Sneaker & Streetwear, verkaufte ihn später an SNIPES, verließ das Unternehmen nach Unstimmigkeiten kurze Zeit später wieder, und hob dann seine eigenen Sneaker Brand Sonra aus der Taufe. Darüber hinaus kollaborierte er in über 20 Jahren mit Brands wie New Balance, Asics, adidas, Puma, Mizuno, Vans, Rimowa und Smart, und hat selbst immer noch eine beachtliche Sammlung an Turnschuhen Daheim, zu denen viele seltene Modelle und rare Raritäten gehören... Viel zu erzählen über das Damals und das Heute und all das, was Hikmet Sugoer für seine Brand Sonra und die Sneaker Szene in der Zukunft sieht. Amadeus traf ihn zum Interview, in OH, SCHUHEN! Podcast Episode 153! Shownotes: 00:00 Intro & #LPU / 02:24 Hikmets Sneaker Sammlung / 09:05 Sneaker Messen & Sneaker Events / 14:16 Qualität vs. Preis / 17:05 Hikmets all time Fav Sneaker / 17:33 Die Anfänge: New Balance / 27:24 Das Sneaker Business Anfang der 2000er / 32:01 Asics & Solebox / 33:57 Das Ende der Collabs / 37:05 Über das Sneaker Business / 37:56 Die Gründung von Solebox / 45:06 Solebox 2.0? / 47:38 3 Millionen Euro für Solebox? / 49:16 Das Ende der Sneaker Stores / 51:13 Solebox Beef vs. Nike / 59:11 D2C vs. Stores / 1:07:29 Support & Community / 1:09:19 Sonra Update 2024 / 1:12:31 Neue Sonra Sneaker / 1:23:57 Die Zukunft von Sonra / 1:27:21 Outro & Giveaway Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Podcasty Aktuality.sk
SHARE: Nová zlatá horúčka. Spoločnosti šalejú za čipmi pre AI

Podcasty Aktuality.sk

Play Episode Listen Later Mar 8, 2024 38:44


Veľa sa hovorí o chatbotoch a ďalších službách postavených na umelej inteligencii. Na svoje fungovanie však potrebujú v prvom rade výkonné mikročipy.Preteky o najlepší čip na umelú inteligenciu sú dnes najhorúcejšou témou mnohých technologických spoločností. Najďalej je Nvidia, z ktorej sa vďaka dopytu po jej riešeniach stala jedna z najhodnotnejších spoločností.Prečo všetky zháňajú mikročipy vhodné na výpočty spojené s umelou inteligenciou? Aké sú nedostatky súčasných riešení a čo máme očakávať v budúcnosti? Aj o tom sa v novej časti podcastu SHARE rozprávajú redaktori Živé.sk Lukáš Koškár a Maroš Žofčin.Viac v našom podcaste - https://zive.aktuality.sk/clanok/TlCeXxk/nova-zlata-horucka-spolocnosti-saleju-za-vykonnymi-cipmi-na-umelu-inteligenciu-podcast/V podcaste hovoríme aj o týchto témach:Prečo hodnota Nvidie odrazu tak prudko vzrástla.Ktoré ďalšie spoločnosti vyvíjajú vlastné čipy pre AI.Umelá inteligencia v cloude versus na zariadení.Po GPU prichádzajú LPU (language processing units).Téme sa venujeme aj v týchto článkoch:Nvidia už nie je len dodávateľom čipov. Vytvorila skutočné monštrum umelej inteligencieVýpočty s umelou inteligenciou rýchlosťou svetla? Nový čip má byť skutočnou revolúciouĎalšia veľká AI investícia v Európe: Microsoft investuje miliardy do dátových centier pre umelú inteligenciuPodcast SHARE pripravuje magazín Živé.sk.

The top AI news from the past week, every ThursdAI

Hello hello everyone, happy spring! Can you believe it? It's already spring! We have tons of AI news for you to cover, starting with the most impactful one, did you already use Claude 3? Anthropic decided to celebrate Claude 1's birthday early (which btw is also ThursdAI's birthday and GPT4 release date, March 14th, 2023) and gave us 3 new Clauds! Opus, Sonnet and Haiku. TL;DR of all topics covered: * Big CO LLMs + APIs*

Leveraging AI
67 | Elon Musk sued OpenAI, Groq is supercharging AI performance with LPU, and Sundar Pichai Google's CEO position is at risk, and many more important AI news for the week ending on March 2nd,

Leveraging AI

Play Episode Listen Later Mar 2, 2024 27:10 Transcription Available


Are you ready to uncover the future of AI that's transforming our world right now?Give yourself and your business the highest chances of success in the AI era, with the AI business Transformation Course. In this episode of Leveraging AI, Isar Meitis dives into groundbreaking developments with industry giants and emerging players. Discover how new technologies are reshaping the AI landscape, from Grok's innovative processors to Stability AI's latest release and Elon Musk's legal battle against OpenAI.Topics We DiscussedGrok's revolutionary LPU technology.Stability AI's Stable Diffusion 3 launch.Elon Musk's lawsuit against OpenAI.Google's challenges with Gemini image generation.Microsoft's new ventures in AI integration.The rise of AI-driven coding tools.The evolving landscape of AI in customer service.Meta's upcoming Lama 3 release.AI's increasing role in mobile technology.About Leveraging AI The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Free AI Consultation: https://multiplai.ai/book-a-call/ If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Group Chat
The Four Commandments | Group Chat News Ep. 844

Group Chat

Play Episode Listen Later Feb 26, 2024 101:40


Group Chat News is back and we are catching up with Dee and Anand on their world travels to Dubai and Milan including Dee running into Kayne. After we catch up with the guys we jump into some news including Google Gemini and the backlash they have gotten, Shane Gillis makes his return to Saturday Night Live after being fired, Hungary's prime minister's new plan to make the country better, a Dumb Money review. Timeline of What Was Discussed: From the Valley to Dubai to Milan! (0:00)  Google's Gemini is WOKE! (51:45)  Shane Gillis is back at SNL. (1:01:22)  Hungary wants to reward women to increase their population. (1:10:02)  Movie Reviews. (1:34:05)  Related Links/Products Mentioned  Google's Gemini does seem to be racist  Google admits its Gemini AI 'got it wrong' following widely panned image generator: Not 'what we intended'  All-In Pod - E167: Nvidia smashes earnings (again), Google's Woke AI disaster, Groq's LPU breakthrough & more  Nvidia briefly surpasses $2 trillion in market cap during intraday trading  As Nvidia's stock price soars above $788, market sage Rob Arnott has a warning for investors: ‘Disruptors are often disrupted'  SHANE GILLIS BACK ON 'SNL' AFTER FIRING  The Prime Minister of Hungary has given women a lifetime 0% personal income tax exemption if they give birth to and raise at least 4 children.  Dumb Money (2023) - IMDb  Connect with Group Chat! Watch The Pod #1 Newsletter In The World For The Gram Tweet With Us Exclusive Facebook Content We're @groupchatpod on Snapchat

AI Named This Show
AI language processing units: New chips on the block

AI Named This Show

Play Episode Listen Later Feb 23, 2024 45:17


This week, Tristan and Tasia bring you bunch of Google AI news, a couple of whoopsie-doopsies from ChatGPT and Google Gemini, and more! Plus Groq (yes, with a “q”) introduces its new LPU chips to speed up LLM performance. Join us as we put their claims to a highly unscientific test.FOLLOWAI Named This Show on Facebook, Instagram, YouTube and X (Twitter)Tristan & TasiaAI Named This Show podcastAI NEWSStable Diffusion 3.0 debuts new diffusion transformation architecture to reinvent text-to-image gen AIAI model Poro sets new milestones for multilingual LLMs in EuropeAI WHOOPSIE-DOOPSIESGoogle's hidden AI diversity prompts lead to outcry over historically inaccurate imagesChatGPT spat out gibberish for many users overnight before OpenAI fixed itGOOGLE AI NEWSGoogle Announces Gemma, its open models for AI researchChrome gets a built-in AI writing tool powered by GeminiGoogle launches “Gemini Business” AI, adds $20 to the $6 Workspace billReddit is licensing its content to Google to help train its AI modelsGROQMeet 'Groq,' the AI Chip That Leaves Elon Musk's Grok in the DustGroq creates fastest generative AI in the world, blazing past ChatGpt and Elon Musk's GrokGroqSee also: Perplexity Hosted on Acast. See acast.com/privacy for more information.

The top AI news from the past week, every ThursdAI

Hey, this is Alex,Ok let's start with the big news, holy crap this week was a breakthrough week for speed! We had both Groq explode in popularity, and ByteDance release an updated SDXL model called Lightning, able to generate full blown SDXL 1024 images in 300ms. I've been excited about seeing what real time LLM/Diffusion can bring, and with both of these news release the same week, I just had to go and test them out together: Additionally, we had Google step into a big open weights role, and give us Gemma, 2 open weights models 2B and 7B (which is closer to 9B per Junyang) and it was great to see google committing to releasing at least some models in the open. We also had breaking news, Emad from Stability announced SD3, which looks really great, Google to pay Reddit 200M for AI training on their data & a few more things. TL;DR of all topics covered: * Big CO LLMs + APIs* Groq custom LPU inference does 400T/s Llama/Mistral generation (X, Demo)* Google image generation is in Hot Waters and was reportedly paused (refuses to generate white people)* Gemini 1.5 long context is very impressive to folks (Matt Shumer, Ethan Mollick)* Open Weights LLMs * Google releases GEMMA, open weights 2B and 7B models (Announcement, Models)* Teknium releases Nous Hermes DPO (Announcement, HF)* Vision & Video* YoLo V9 - SOTA real time object detector is out (Announcement, Code)* This weeks Buzz (What I learned in WandB this week)* Went to SF to cohost an event with A16Z, Nous, Mistral (Thread, My Report)* AI Art & Diffusion & 3D* ByteDance presents SDXL-Lightning (Try here, Model)* Stability announces Stable Diffusion 3 (Announcement)* Tools* Replit releases a new experimental Figma plugin for UI → Code (Announcement)* Arc browser adds "AI pinch to understand" summarization (Announcement)Big CO LLMs + APIsGroq's new LPU show extreme performance for LLMs - up to 400T/s (example)* Groq created a novel processing unit known as the Tensor Streaming Processor (TSP) which they categorize as a Linear Processor Unit (LPU). Unlike traditional GPUs that are parallel processors with hundreds of cores designed for graphics rendering, LPUs are architected to deliver deterministic performance for AI computations.* Analogy: They know where all the cars are going when everyone wakes up for work (when they compile) and how fast they all drive (compute latency) so they can get rid of traffic lights (routers) and turn lanes (backpressure) by telling everyone when to leave the house.* Why would we need something like this? Some folks are saying that average human reading is only 30T/s, I created an example that uses near instant Groq Mixtral + Lightning SDXL to just create images with Mixtral as my prompt managerOpen Source Weights LLMs Google Gemma - 2B and 7B open weights models (demo)* 4 hours after release, Llama.cpp added support, Ollama and LM Studio added support, Tri dao added Flash attention support* Vocab size is 256K* 8K context window* Tokenizer similar to LLama* Folks are... not that impressed as far as I've seen* Trained on 6 trillion tokens* Google also released Gemma.cpp (local CPU inference) - AnnouncementNous/Teknium re-release Nous Hermes with DPO finetune (Announcement)* DPO RLHF is performing better than previous models* Models are GGUF and can be found here* DPO enables Improvements across the boardThis weeks Buzz (What I learned with WandB this week)* Alex was in SF last week* A16Z + 20 something cohosts including Weights & Biases talked about importance of open source* Huge Shoutout Rajko and Marco from A16Z, and tons of open source folks who joined* Nous, Ollama, LLamaIndex, LMSys folks, Replicate, Perplexity, Mistral, Github, as well as Eric Hartford, Jon Durbin, Haotian Liu, HuggingFace, tons of other great folks from Mozilla, linux foundation and Percy from Together/StanfordAlso had a chance to checkout one of the smol dinners in SF, they go really hard, had a great time showing folks the Vision Pro, chatting about AI, seeing incredible demos and chat about meditation and spirituality all at the same time! AI Art & DiffusionByteDance presents SDXL-Lightning (Try here)* Lightning fast SDXL with 2, 4 or 8 steps* Results much closer to original SDXL than turbo version from a few months agoStability announces Stable Diffusion 3 (waitlist)Uses a Diffusion Transformer architecture (like SORA)Impressive multi subject prompt following: "Prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words "stable diffusion"Tools* Replit announces a new Figma design→ code plugin That's it for today, definitely check out the full conversation with Mark Heaps from Groq on the pod, and see you next week!

covid-19 united states god ceo amazon tiktok head world ai power google earth starting apple vision france future training san francisco design evolution brand microsoft dm explore putting creative army open dad impact jewish language african indian chatgpt harry potter human run nazis speed discord scale exploring cloud mac flash puerto rico incredible eat honestly draw windows base context alignment wifi mark zuckerberg releasing lightning models communicate results wondering painting human rights messy folks vc instant siri gemini openai lang declaration sf adobe nvidia stability file trained wing api ridiculous generally remarkable gpt ui existing announcement improvements lama ml mosaic llama github analyze lava stance white people united airlines hermes refuses tri band aids llm sora cad guia copilot 2b prompt biases tl y combinator percy daly weights genai gpu cpu macs grok npcs smb llamas hug dali ld perplexity bytedance mozilla rag phi yarn gpus unheard 7b figma automatically gan chief evangelist andreessen horowitz alessio yc gema lms parameters lazer heaps buster keaton 8b 4a uploading mistral yam cpus fal replicate cursor marc andreessen stable diffusion olmo inference blo vocab a16z seamlessly abacus cuda universal declaration aliment linux foundation grog axolotl tensor dpo ai news stability ai imad pytorch emad tpu ffm vicu gfs allen institute community feedback groq lcm gdf multimodality devendra jeff dean gemini pro huggingface so google tokamak entropic vl m alpay welcoming remarks gemini ultra technium mqa lpu lmss tanishka john durbin jon durbin
GPT Reviews
Groq's AI Hardware

GPT Reviews

Play Episode Listen Later Feb 21, 2024 14:02


Groq's AI hardware breakthroughs with LPU architecture achieving speeds of 500 tokens per second. Japan's $67 billion investment to become a global chip powerhouse and insulate its economy from growing US-China tensions. Neural Network Diffusion paper demonstrating that diffusion models can generate high-performing neural network parameters. VideoPrism paper from Google Research achieving state-of-the-art performance on 30 out of 33 video understanding benchmarks. Contact:  sergi@earkind.com Timestamps: 00:34 Introduction 01:47 Groq Goes Viral with Crazy Fast AI Inference 03:01 Japan Bets $67 Billion to Become a Global Chip Powerhouse Once Again 04:54 My benchmark for large language models 06:01 Fake sponsor 07:54 Neural Network Diffusion 09:19 Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models 11:16 VideoPrism: A Foundational Visual Encoder for Video Understanding 12:42 Outro

Oh, Schuhen! - Der Sneaker-Podcast
#147 Ist der SNEAKER Markt TOT?!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Dec 16, 2023 57:20


Der Sneaker Markt, und damit inbegriffen der Sneaker Resell Markt haben 2023 einen spannenden Wandel erlebt! Zeit, zum Jahresende einen genauen Blick auf die letzten 12 Monate zu werfen und eine Detailanalyse zu starten: Welche Sneaker haben performt? Was hat sich am besten verkauft? Was ist liegen geblieben? Von wie viel Geld sprechen wir eigentlich? Ist der Sneaker Markt jetzt tot oder nicht? Und welche Prognose gibt es für 2024? All das und noch vieles mehr - unter anderem das große Weihnachts Giveaway - in Episode 147! Shownotes: 00:00 Intro & Vorweihnachtszeit / 04:24 #LPU / 11:36 #WOMFT / 13:16 Ist der Sneaker Markt tot? / 14:41 Die meist gekauften Sneaker in Deutschland 2023 / 17:41 Die meist verkauften Sneaker in Deutschland 2023 / 20:49 Die beliebtesten Streetwear Artikel in Deutschland in 2023 / 23:29 Die beliebtesten Collectibles in Deutschland in 2023 / 26:52 Highest Sale in Deutschland in 2023 / 29:37 Highest Sale weltweit in 2023 / 32:18 Most faked Items weltweit in 2023 / 42:30 Ist der Sneaker Markt tot? - Eine Antwort / 52:05 Das große Weihnachts Giveaway / 53:49 Shoutout & Liebe / 55:26 Outro EP147 ist ein Advertorial. Werbepartner dieser Episode in freundlicher Zusammenarbeit: [StockX](https://www.stockx.com) Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

The Manila Times Podcasts
SPORTS: MU, LPU seek to seal NCAA title showdown | November 28, 2023

The Manila Times Podcasts

Play Episode Listen Later Nov 28, 2023 2:00


SPORTS: MU, LPU seek to seal NCAA title showdown | November 28, 2023Subscribe to The Manila Times Channel - https://tmt.ph/YTSubscribe Visit our website at https://www.manilatimes.net Follow us:Facebook - https://tmt.ph/facebookInstagram - https://tmt.ph/instagramTwitter - https://tmt.ph/twitterDailyMotion - https://tmt.ph/dailymotion Subscribe to our Digital Edition - https://tmt.ph/digital Check out our Podcasts:Spotify - https://tmt.ph/spotifyApple Podcasts - https://tmt.ph/applepodcastsAmazon Music - https://tmt.ph/amazonmusicDeezer: https://tmt.ph/deezerStitcher: https://tmt.ph/stitcherTune In: https://tmt.ph/tunein #TheManilaTimes Hosted on Acast. See acast.com/privacy for more information.

Oh, Schuhen! - Der Sneaker-Podcast
#145 WATCHTALK & STREETCULTURE - Wie G-SHOCK seit 40 Jahren die Szene bestimmt!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Nov 18, 2023 45:17


1983 veröffentlichte Casio ihre erste G-SHOCK - eine Revolution in der Welt der Armbanduhren! Über die Jahrzehnte entwickelte sich G-SHOCK zu einer Stilikone der Streetculture an Handgelenken von Pharrell Williams und Kanye West über Justin Bieber und Rihanna hin zu Brad Pitt und Doug aus "King of Queens". Amadeus und Fabian wagen den Deepdive in die Historie der japanischen Brand, zeichnen ihren Werdegang nach, besuchen das 40 Jahre G-SHOCK Event in Berlin, sprechen mit u. a. DJ JNS von Overkill, BMX Pro Kevin Nikulski, Perry von den Beathoavenz, und G-SHOCK Key-Account Manager Christian Dittrich, und werfen einen Blick in die Zukunft. Shownotes: 00:00 Intro & Hochzeiten / 03:47 OH! Update / 6:55 #LPU / 13:45 #WOMFT / 15:19 40 Jahre G-SHOCK / 16:11 Die Historie / 34:24 Das große Event in Berlin und Statements von Kevin Nikulski, Perry/Beathoavenz, Christian Dittrich, DJ JNS / 41:06 Impact & Forecast / 43:02 Outro EP145 ist ein Advertorial. Werbepartner dieser Episode in freundlicher Zusammenarbeit: [G-SHOCK](https://gshock.casio.com/de/) Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Future in Review Podcast w/ Berit Anderson
The Road to Safe and Equal AI: A Conversation with Groq's Jonathan Ross

Future in Review Podcast w/ Berit Anderson

Play Episode Listen Later Oct 10, 2023 30:25


In this episode of Future in Review, host and FiRe COO Berit Anderson interviews Jonathan Ross, CEO and founder of AI startup Groq. They discuss Ross's background pushing innovation boundaries at Google and how Groq is taking a unique approach to generative AI.Ross explains how Groq's new language processing units can dramatically boost performance while reducing costs and environmental impacts compared with today's GPU models. He also shares his vision for how advances in AI speed could help address issues like hallucinations through self-verification techniques.The conversation expands into Ross's broader views on AI's role in society, including AI's potential to improve human understanding and to moderate interactions through more nuanced interpretation of cues. He also acknowledges open questions around control and stresses the importance of prioritizing safety, transparency, and equal access for all.Listen in as Ross previews his talk at the upcoming FiRe conference, where attendees will get a firsthand look at Groq's groundbreaking hardware. This discussion offers fascinating insights into the challenges and promise of generative AI from one of the industry's leading innovators.Key Takeaways:Groq's LPU chips offer 10x speed-ups and major costs savings for AIActive verification could help address hallucination issues over timeAI may help foster more understanding between people by interpreting subtle social cuesEnsuring democratic values and diverse control of AI progress is paramount

NOT MEDDLING JUST MOTHERING- parenting adult children

Today we are sharing a talk given at LPU for their new student (and parent) orientation weekend by Pastor Jen Lord. Jen Lord is the lead pastor of Restoration Church in Valencia, CA. She has been married to her high school sweetheart, Craig, for 27 years and they have two beautiful daughters, Meagan, 22 and Kelsey 20. Jen's passion is to see people restored to God's love and truth and to give them tools to live that out in their identity on mission. Because of her roles as both a mom and a pastor, Jen chose her capstone research project, while earning her Masters In Strategic Leadership, to focus on understanding Gen Z as emerging adults and discovering effective ways parents can continue to parent and disciple them effectively. Jen is committed to following the Spirit's leading and discipling this next generation of God's people! Jen shares tools to help parents walk out this season, even if your child is not attending college this episode is worth the listen. Here is the link for the Aly Raisman video that Jen mentioned in the talk:https://www.youtube.com/watch?v=baJuHllm_Uo --- Send in a voice message: https://podcasters.spotify.com/pod/show/notmeddlingjustmothering/message

Raj Shamani - Figuring Out
Indian Colleges Are A Scam? ft Rashmi Mittal from Lovely Professional University | FO108 Raj Shamani

Raj Shamani - Figuring Out

Play Episode Listen Later Jul 21, 2023 52:19


Order 'Build, Don't Talk' (in English) here: https://amzn.eu/d/eCfijRuOrder 'Build Don't Talk' (in Hindi) here: https://amzn.eu/d/4wZISO0--------------Subscribe To Our Other YouTube Channels:-https://www.youtube.com/@rajshamaniclipshttps://www.youtube.com/@RajShamani.Shorts--------------------In this episode of Figuring Out with Raj Shamani we are in conversation with Rashmi Mittal, Pro-Chancellor of Lovely Professional University. Rashmi Mittal, along with her husband Ashok Mittal, embarked on their entrepreneurial journey by running sweet shops. However, their passion for education and the recognition of a need for quality higher education in North India led them to explore the possibility of establishing an educational institution.In pursuit of their vision, Rashmi and Ashok Mittal took a bold step and founded Lovely Professional University (LPU) in 2005. Located in Jalandhar, Punjab, LPU started as a modest endeavor but quickly gained momentum due to its commitment to academic excellence and holistic development of students.Under Rashmi Mittal's leadership, Lovely Professional University has grown into one of India's largest and most prestigious private universities. With a focus on innovation, research, and providing a conducive learning environment, LPU has become a hub of academic excellence and has earned numerous accolades and recognition both nationally and internationally.In this podcast she shared how she started a university from scratch and what challenges she faced. She also talks about what sets a university apart from others, why practical learning is more important, why India is not the leader in education, how students can crack high paying jobs and much more. Watch the podcast till the end to understand the education business, challenges of running a university, Indian education system and much more. Follow LPU On Instagram: https://instagram.com/lpuuniversity?igshid=MzRlODBiNWFlZA==---------------

Oh, Schuhen! - Der Sneaker-Podcast
#135 DIE BESTEN SNEAKER 2023 - Part. 1

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jul 1, 2023 72:56


Von Nike SB Dunks und Air Jordans hin zu New Balance 991s und 990v6s; von adidas Sambas und Nike Air Max' hin zu Salomons, Mizunos, Reeboks, Sauconys und Asics, und all den weiteren "900 [sic!] wichtigsten Releases der Woche" - das erste Halbjahr 2023 hatte sehr viele Sneaker Releases und so einige Highlights! Amadeus und Fabian haben daher obligatorisch zur Jahreshalbzeit die besten Sneaker Releases gelistet, sprechen aber auch über negative Momente, und geben außerdem einen Ausblick aufs restliche Jahr 2023. Shownotes: 00:00 Intro & Paris Fashion Week / 15:45 #LPU / 25:59 #WOMFT / 27:26 Die besten Sneaker 2023: Platz 5 - 4 / 52:39 Honorable Mentions / 56:39 Die größten Enttäuschungen / 1:04:17 Die besten Sneaker 2023: Platz 1 / 1:11:25 Outro Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

The Lock Sportscast on Odysee
147: Hotel Break Ins

The Lock Sportscast on Odysee

Play Episode Listen Later Jun 21, 2023 30:50


Your weekly source for locksport news and sometimes interviews.Full show notes, including all links, and the audio-only podcast can be found athttps://www.thelocksportscast.com/147In this week's episode:00:00 - Start00:09 - In this episode00:59 - Amazon locked a man out of his smart home02:51 - A 3D printed Mul-T-Lock key03:27 - Deviant on Darknet Diaries03:50 - yebende 1000 subs + giveaway04:45 - cyber security camps05:31 - Covert Instruments is hiring!06:04 - AMADEO FreeFlex09:04 - Videos to watch11:39 - Blogs & Articles12:18 - New products16:30 - Resources17:11 - Events n meetups18:17 - LPU belts18:53 - Producer credit break20:27 - Criminals26:05 - Sales28:20 - Giveaways29:49 - ClosingContact Informationhttp://contact.thelocksportscast.com/Join the Discord at http://discord.thelocksportscast.comDonate at:http://paypal.thelocksportscast.comhttps://patreon.com/thelocksportscasthttps://www.subscribestar.com/thelocksportscast Meetups/Sales/Giveaways/Contests:https://www.thelocksportscast.com/news Credits:Executive Producers:Founding Executive Producers:Panda-Frog https://www.youtube.com/channel/UCmIqJOrfQr8NTEDrOU2lF3QMichael Gilchrist https://www.youtube.com/user/norlin76Starrylock https://www.youtube.com/c/Starrylock_LocksportWilliamsBrain https://www.youtube.com/channel/UCRGmm9FQqF6HMu9-wq9ODSQDave 2BDCy4D https://www.youtube.com/channel/UC0X0TTCPK5kBRY3yDu40EKgLiibans Locksport Journey https://www.youtube.com/user/CODNuubster Pat from Uncensored Tactical https://uncensoredtactical.com/threeraccoonsinacoat https://youtube.com/channel/UCMjGnC1m9XlN_X8OHVwxphQChirael https://www.youtube.com/channel/UCwPTxD1-2PPgmi6ATOJKlUw Associate Executive Producers:DoctorHogmasterClayton Howard (Kewltune)Co-Producers:m0gRatyokeMrPickurCrankyLockPickerBare Bones Lock PickingSnakeParacentricJohn RChief Content Producer:ChiraelContent Producers:Bare Bones Lock PickingChadI fiskI'm GumbyJimyLongsLady LocksLockpickingDevOpenLockPocket WomenTaquila DaveTony VirelliZlocks.caThe Lock Sportscast on Odyseehttps://odysee.com/@thelocksportscasst:3 The Lock Sportscast on Rumblehttps://rumble.com/c/c-2031421...https://www.youtube.com/watch?v=P8hfKJif7AA

Oh, Schuhen! - Der Sneaker-Podcast
#134 "Weniger Hype, mehr Diversity - und große Chancen!" - Drew Haines (StockX) über den Sneaker Markt 2023

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jun 17, 2023 68:24


It's all about the benjamins! Wie sieht der Sneaker Resell Markt 2023 aus? Wohin entwickeln sich die Preise, welche Brands und Styles sind angesagt, und welchen Teil tragen Player wie Kith und Supreme oder auch Salomon, Hoka und Diadora dazu bei? All das und noch vieles mehr im Interview mit Drew Haines, Trend-Scout und Merchandising Director of Sneakers & Collectibles bei StockX! Shownotes: 00:00 Intro, Bagels in Amsterdam, und Frisuren-Tipps / 05:47 #LPU / 16:06 #WOMFT / 21:06 Status Quo Sneaker Resell Markt 2023 / 22:33 Interview: Drew Haines, Merchandising Director of Sneakers & Collectibles bei StockX / 1:06:49 Outro Mehr Infos auf: https://linktr.ee/ohschuhenpodcast EP134 ist ein Advertorial. Werbepartner dieser Episode in freundlicher Zusammenarbeit: [StockX](https://www.stockx.com)

The Lock Sportscast on Odysee
146: Duplicating Keys

The Lock Sportscast on Odysee

Play Episode Listen Later Jun 8, 2023 40:03


Your weekly source for locksport news and sometimes interviews.Full show notes, including all links, and the audio-only podcast can be found athttps://www.thelocksportscast.com/146In this week's episode:00:00 - Start00:09 - In this episode01:32 - Master Lock closing Milwaukee plant03:43 - Removing parking boots06:56 - Multipick hello summer 202307:28 - The end of CircleCityCon09:26 - LPUbelts.com update11:18 - r/lockpicking joining June 12-14 protest12:04 - What say you all about the Covert Companion?12:59 - Copying 3KS keys14:25 - Videos to watch17:27 - New blog posts20:00 - New Products22:20 - Getting started. Picks, locks, tools, etc23:05 - Events n Meetups24:12 - LPU belts, stats and changes28:54 - Producer credit break31:21 - Man Swallows Key, Locksmith Uses X-Ray32:33 - Criminals35:24 - Sales38:26 - Giveaways39:02 - ClosingContact Informationhttp://contact.thelocksportscast.com/Join the Discord at http://discord.thelocksportscast.comDonate at:http://paypal.thelocksportscast.comhttps://patreon.com/thelocksportscasthttps://www.subscribestar.com/thelocksportscast Meetups/Sales/Giveaways/Contests:https://www.thelocksportscast.com/news Credits:Executive Producers:Founding Executive Producers:Panda-Frog https://www.youtube.com/channel/UCmIqJOrfQr8NTEDrOU2lF3QMichael Gilchrist https://www.youtube.com/user/norlin76Starrylock https://www.youtube.com/c/Starrylock_LocksportWilliamsBrain https://www.youtube.com/channel/UCRGmm9FQqF6HMu9-wq9ODSQDave 2BDCy4D https://www.youtube.com/channel/UC0X0TTCPK5kBRY3yDu40EKgLiibans Locksport Journey https://www.youtube.com/user/CODNuubster Pat from Uncensored Tactical https://uncensoredtactical.com/threeraccoonsinacoat https://youtube.com/channel/UCMjGnC1m9XlN_X8OHVwxphQChirael https://www.youtube.com/channel/UCwPTxD1-2PPgmi6ATOJKlUw Associate Executive Producers:DoctorHogmasterClayton Howard (Kewltune)Co-Producers:m0gRatyokeMrPickurCrankyLockPickerBare Bones Lock PickingSnakeParacentricJohn RChief Content Producer:ChiraelContent Producers:Bare Bones Lock PickingfireshaperI fiskI'm GumbyionawestJeff and ThingsJimy LongsJoshua GonzalezkalitokaliLadyLocksLockJudgeLockpickingDevNorlinOpenLockPeaceweaponPocket WomenRed WandererseptcluesTequila DaveTheGreenishOneTony VirelliThe Lock Sportscast on Odyseehttps://odysee.com/@thelocksportscasst:3 The Lock Sportscast on Rumblehttps://rumble.com/c/c-2031421...https://www.youtube.com/watch?v=_gFQ33R8p_o

The Lock Sportscast on Odysee
144: Safe Cracking Criminals

The Lock Sportscast on Odysee

Play Episode Listen Later May 16, 2023 31:51


Your weekly source for locksport news and sometimes interviews.Full show notes, including all links, and the audio-only podcast can be found athttps://www.thelocksportscast.com/144In this week's episode:00:00 - Start00:09 - In this episode01:41 - 78% spike in postal robberies04:15 - Locksmith Scams and How to Avoid Them08:09 - 3d printed 3ks keys09:17 - Cackalackycon LPV pics & contest winner10:37 - DEF CON 2023 Lockpick Village Call for Staff11:08 - Locks for sale in the UK12:15 - Videos14:22 - Experiment driven lockpicking15:04 - Products17:33 - Events/meetups19:03 - LPU belts20:20 - Producer credit break22:48 - Criminals27:12 - Sales29:35 - Giveaways30:50 - ClosingContact Informationhttp://contact.thelocksportscast.com/Join the Discord at http://discord.thelocksportscast.comDonate at:http://paypal.thelocksportscast.comhttps://patreon.com/thelocksportscasthttps://www.subscribestar.com/thelocksportscast Meetups/Sales/Giveaways/Contests:https://www.thelocksportscast.com/news Credits:Executive Producers:Founding Executive Producers:Panda-Frog https://www.youtube.com/channel/UCmIqJOrfQr8NTEDrOU2lF3QMichael Gilchrist https://www.youtube.com/user/norlin76Starrylock https://www.youtube.com/c/Starrylock_LocksportWilliamsBrain https://www.youtube.com/channel/UCRGmm9FQqF6HMu9-wq9ODSQDave 2BDCy4D https://www.youtube.com/channel/UC0X0TTCPK5kBRY3yDu40EKgLiibans Locksport Journey https://www.youtube.com/user/CODNuubster Pat from Uncensored Tactical https://uncensoredtactical.com/threeraccoonsinacoat https://youtube.com/channel/UCMjGnC1m9XlN_X8OHVwxphQChirael https://www.youtube.com/channel/UCwPTxD1-2PPgmi6ATOJKlUw Associate Executive Producers:DoctorHogmasterClayton Howard (Kewltune)Co-Producers:m0gRatyokeMrPickurCrankyLockPickerBare Bones Lock PickingSnakeParacentricJohn RChief Content Producer:ChiraelContent Producers:Bare Bones Lock PickingChris CapuneGood GuyGravityKarmaI fiskI'm GumbyJimy LongsJoshua GonzalezLadyLocksLockpickingDevOpenLockPocket WomenTaquila DaveThe Lock Picker 1969Tony VirelliThe Lock Sportscast on Odyseehttps://odysee.com/@thelocksportscasst:3 The Lock Sportscast on Rumblehttps://rumble.com/c/c-2031421...https://www.youtube.com/watch?v=onlbMSky0jk

The Lock Sportscast on Odysee
143: Locky Award Winners for 2022

The Lock Sportscast on Odysee

Play Episode Listen Later May 8, 2023 34:52


Your weekly source for locksport news and sometimes interviews.Full show notes, including all links, and the audio-only podcast can be found athttps://www.thelocksportscast.com/143In this week's episode:00:00 - Start00:08 - In this episode01:10 - ASSA ABLOY settlement01:44 - Deadly deadbolts02:44 - Lockbox scam04:03 - Locky Award winners07:54 - First pick of a 6 row KABA08:24 - TOOOL US updates09:29 - LPUbelts.com 60 Day Report10:44 - Need images for the Belt Explorer11:19 - Lockpicking games12:12 - Scams Abound13:21 - Lucky to be alive14:01 - Lock Picking Legend absence14:34 - Tallon Pick's DD pick15:42 - how to pick the Western Electric 30c16:18 - Abus Touch 57 Series bypass17:04 - The Quick Crack17:38 - Cracking the Roosevelt Safe18:26 - Products22:33 - Resources24:04 - Events/Meetups25:34 - LPU belts26:17 - Producer credit break29:50 - Locked out of my house31:16 - Sales33:20 - Giveaways33:50 - ClosingContact Informationhttp://contact.thelocksportscast.com/Join the Discord at http://discord.thelocksportscast.comDonate at:http://paypal.thelocksportscast.comhttps://patreon.com/thelocksportscasthttps://www.subscribestar.com/thelocksportscast Meetups/Sales/Giveaways/Contests:https://www.thelocksportscast.com/news Credits:Executive Producers:Founding Executive Producers:Panda-Frog https://www.youtube.com/channel/UCmIqJOrfQr8NTEDrOU2lF3QMichael Gilchrist https://www.youtube.com/user/norlin76Starrylock https://www.youtube.com/c/Starrylock_LocksportWilliamsBrain https://www.youtube.com/channel/UCRGmm9FQqF6HMu9-wq9ODSQDave 2BDCy4D https://www.youtube.com/channel/UC0X0TTCPK5kBRY3yDu40EKgLiibans Locksport Journey https://www.youtube.com/user/CODNuubster Pat from Uncensored Tactical https://uncensoredtactical.com/threeraccoonsinacoat https://youtube.com/channel/UCMjGnC1m9XlN_X8OHVwxphQChirael https://www.youtube.com/channel/UCwPTxD1-2PPgmi6ATOJKlUw Associate Executive Producers:DoctorHogmasterClayton Howard (Kewltune)Co-Producers:m0gRatyokeMrPickurCrankyLockPickerBare Bones Lock PickingSnakeParacentricJohn RChief Content Producer:ChiraelContent Producers:Arichoke2000Bare Bones Lock PickingDr HogmasterDwigI fiskI'm GumbyJoshua GonzalezLady LocksMishraOpenLockPocket WomenPyrolockReinderTaquila DaveTheGreenishOneTony VirelliThe Lock Sportscast on Odyseehttps://odysee.com/@thelocksportscasst:3 The Lock Sportscast on Rumblehttps://rumble.com/c/c-2031421...https://www.youtube.com/watch?v=zP000D6KyTk

The Lock Sportscast
134: A New Life for Old Banks

The Lock Sportscast

Play Episode Listen Later Jan 30, 2023 26:46


Your weekly source for locksport news and sometimes interviews. Full show notes, including links, can be found at http://www.thelocksportscast.com  In this week’s episode: Breathing New Life Into Old Banks Lock picking history can teach better digital security Banner price increases Locksmith jobs Products Videos Blog Posts Criminals Events Meetups Sales Giveaways and more Announcements: The Locky Awards  Corrections: News: NJ Downtowns Are Breathing New Life Into Empty Old Banks Banner price increases Call For Papers 2023 | t2 infosec conference Are you a locksmith looking to work in Higher Education? currently hiring for an Access Control/Camera Tech  Community News: Charity Raffle  2023 LPU Charity Raffle Drawing livestream Chubb Cruisers, variants, age and history of. Videos: Interview with a Lock Picker - YouTube The End of US Weiser (Official Music Video) (58) My journey trough the LPU belt system Blogs & Articles:  The history of lock picking can teach us a lot about better digital security | CBC Radio Gorgeous cutaway photos from Qikom « Toool's Blackbag  Other Resources: Products:  Opsasec AS Dangerfield Dual-Gauge Ionic Matte Black Set  Magnetic lock picking case - YT shorts video Covert Instruments - THE TRADECRAFT CASE  Skyrim Lockpick Display Replica  Meetups/Sales/Giveaways/Contests: The Lock Sportscast - News  LPU Karate Belts: beltranking - lockpicking (reddit.com)  Mentorship Monday 3: The Belt System  2: Breaking Rules and Getting the Belt  All About The Lockpicking Belt Rankings System  Speedlocks: Speedlocks.org  Lock Stories: Criminals: Police recover Noosaville love locks removed from bridge with boltcutters - ABC News Criminal mastermind who stole luxury cars for drug smugglers arrested in Spain's Malaga - Olive Press News Spain  Executive Producers: Founding Executive Producers: m3ddl3r Panda-Frog Michael Gilchrist Starrylock WilliamsBrain  Dave 2BDCy4D Liibans Locksport Journey Pat from Uncensored Tactical  threeraccoonsinacoat  Chirael (Anthony) Associate Executive Producers: DoctorHogmaster Clayton Howard (Kewltune) Co-Producers: m0g Jon Lock Ratyoke MrPickur CrankyLockPicker Bare Bones Lock Picking Deadbolt Cafe NWA Lockpicker Snake Paracentric John R Chief Content Producer: Chirael (Anthony) Content Producers: Bare Bones Lock Picking CorrectJeans Dependent-Quarter577 I fisk Jeff Moss Joshua Gonzalez LadyLocks Oak City Locksport Panda-Frog Rein Tequila Dave The Lock Picker 1969 Tiger Trav Tony Virelli Special thanks to: Contact Information: Email: podcast@thelocksportscast.com Twitter https://twitter.com/charlescurrent  Reddit: currentc57 on r/locksport Discord: Lockpickers United as Current, Extraordinary League of Pickers as Current, The Lock Sportscast as Current Join the Discord at http://discord.thelocksportscast.com The Lock Sportscast on Odysee The Lock Sportscast on Rumble  Donate: http://paypal.thelocksportscast.com https://patreon.com/thelocksportscast https://www.subscribestar.com/thelocksportscast 

Oh, Schuhen! - Der Sneaker-Podcast
#124 250.000 € FÜR SNEAKER?!? - Die OH, SCHUHEN! Holy Grails!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jan 28, 2023 61:58


Zwei! Hundert! Fünf! Zig! Tausend! Euro! für Sneaker! Und die alltime Sneaker Holy Grails von Amadeus und Fabian. Jetzt in OH, SCHUHEN! Episode 124! Shownotes: 00:00 Intro & Fashion Week Berlin Recap & Frisuren / 14:06 #LPU / 22:54 #WOMFT / 26:02 Top 5 Sneaker Holy Grails / 1:00:15 Outro Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#120 OH, QUIZ! - Das große Sneaker Quiz 2022!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Dec 3, 2022 51:30


10 Fragen. 10 Antworten. Und das große OH, SCHUHEN! Sneaker Quiz 2022 - OH, QUIZ! Die absurdesten, verrücktesten und unterhaltsamsten Storys rund um die Sneaker & Streetwear Kultur zusammengefasst und aufbereitet zum Mitdenken und Mitraten. Wer wird gewinnen? Let's check! Shownotes: 00:00 Intro & Recap EP119 / 8:17 #LPU / 14:08 #WOMFT / 14:54 OH, QUIZ! 2022 / 49:22 Outro Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Legends: A Superhero Story
Bonus Issue: Season One - Wrap Up and Q&A

Legends: A Superhero Story

Play Episode Listen Later Oct 17, 2022 99:22


Join Jack, Chad, Emily, Amanda, and Daniel as they reflect on Season One, answer questions from listeners, and discuss the expansion of the LPU. Co-Creators: Chad and Jack MatchetteGame Master: Jack MatchettePlayers: Amanda Lourenço, Chad Matchette, Daniel Cardoso, and Emily MatchetteEditor: Emily MatchetteBUY “LEGENDS: THE SUPERHERO ROLE PLAYING GAME” NOW: https://books.friesenpress.com/store/title/119734000192338578Listen to “Legends: The Superhero Soundtrack” on Spotify: https://open.spotify.com/album/5mBxdCslTJ1u1aBHetIiem?si=lt4_4_RUSISSP4E1e_7HiwTweet about the show using #thelegendscast for the chance to have an NPC named after you!For our super fans who would like to help us make the show the best it can be, please consider becoming a patron here: https://www.patreon.com/thelegendscastCheck out our heroic merch here: https://thelegendscast.threadless.com/#Use code TAKE15 for 15% off Sept 13 to Sept 19Come hang out with us on Discord: https://discord.gg/jYpYhN3fTVFor more information head over to our website: https://www.matchplaygames.ca/Theme music by Omar Chakor (https://www.instagram.com/theorce/) through Fiverr (https://www.fiverr.com/ch6k0r)Underscoring by Sayer Roberts (https://www.instagram.com/roberts.the.sayer/) - check him out on Soundcloud: https://soundcloud.com/user-135673977 and SideBiz Studio!: https://bit.ly/3kdunQJCLICK HERE TO BUY “LEGENDS: THE SUPERHERO ROLE PLAYING GAME”!Support the show

Oh, Schuhen! - Der Sneaker-Podcast
#112 "Ich hasse den Air Jordan 1 Chicago!" - Sneaker Hot Takes!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Aug 14, 2022 54:19


Es gibt diese Statements, bei denen weiß man schon während man sie ausspricht, dass man auf Gegenwind stoßen wird. Viel Gegenwind! OH, SCHUHEN! hat nach diesen "HOT TAKES" gefragt und die Community hat geantwortet - jetzt in Episode 112! / Shownotes: 00:00 Intro & Yeezy Day Recap / 05:34 #LPU / 11:30 #WOMFT / 14:10 Werbung: Bama Urban Elements / 15:27 Intro / 16:27 Hot Take 1: "Yeezy Slides & Foam Runner sind keine Sneaker sondern glorifizierte Badelatschen” / 19:13 Hot Take 2 “Der Sean Wotherspoon Air Max ist hässlich” / 22:21 Hot Take 3 “Make Nike Roshe Runs great again” / 25:28 Hot Take 4 “Die beste Jordan Silhouette ist der 4er” / 28:00 Hot Take 5 “Der Yeezy 500 ist das beste Yeezy Modell” / 30:07 Hot Take 6 “Die Kanye Legacy bei Nike ist größer als die bei adidas” / 32:30 Hot Take 7 “Ye mit adidas ist größer als Travis mit Nike” / 35:37 Hot Take 8 “Die Kobe Reihe bei adidas ist besser als die bei Nike” / 39:35 Hot Take 9 “Große Collabs haben ihre Strahlkraft verloren” / 43:18 Hot Take 10 “Sneaker YouTube ist tot” / 49:18 Hot Take 11 “Der Chicago CW ist der schlechteste OG” / 52:43 Outro / Werbepartner der Episode: Bama Urban Elements / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#111 Subculture's finest! - Die besten Sneaker der Subkulturen

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jul 31, 2022 67:13


An welchen Sneaker denkt man, wenn man an die HipHop Kultur denkt? Welcher Turnschuh fällt einem ein, spricht man über Basketball? Und welcher Sneaker steht wie kein anderer für die Terrace Culture? Was haben diese und andere Subkulturen zu bieten? Amadeus und Fabian haben sie sich angeschaut: die Besten vom Besten! Jetzt in Episode 111! / Shownotes: 00:00 - 08:33 Intro, Hochzeiten & Urlaube / 08:34 - 26:04 #LPU / 26:05 - 32:07 AF1 LV & AJ1 TS Release Recaps / 32:08 - 33:58 #WOMFT / 33:59 - 35:15 Intro / 35:16 - 39:07 HipHop / 39:08 - 42:20 Rock / 42:21 - 49:24 Techno / 49:25 - 51:51 Terrace Culture / 51:52 - 56:41 Basketball / 56:42 - 1:00:20 Gorpcore / 1:00:21 - 1:05:35 Skateboarding / 1:05:36 - 1:07:13 Outro / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#110 Sustainability im Sneaker Game

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jul 17, 2022 65:05


"No Time For Waste!" lautet der Slogan mit dem Puma das Thema Sustainability im Sneaker Game vorantreiben möchte. Doch was heißt "Sustainability" überhaupt? Was muss, was kann, was sollte getan werden? Und wie verhalten sich Nike, adidas und all die anderen Brands unserer Szene bei den Themen Nachhaltigkeit und Umweltschutz? Ein Überblick über den Status Quo und noch vieles mehr in OH, SCHUHEN! Episode 110! / Shownotes: 00:00 - 07:57 Intro & Festivals / 07:58 - 23:40 #LPU / 23:41 - 26:04 #WOMFT / 26:05 - 46:13 About Sustainability / 46:14 - 1:02:53 About Puma Re:Suede / 1:02:54 - 1:05:05 Outro / Advertorial/Werbung: Diese Episode entstand in freundlicher Zusammenarbeit mit Puma. / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#109 HOLY SH*T! - Die größten Sneaker Fehlkäufe und andere Missgeschicke

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jul 3, 2022 65:21


Man kauft Sneaker, die man kurz danach nicht mehr gut findet. Man verkauft Sneaker, deren Verkauf man kurze Zeit später bereut. Man will Sneaker kaufen, und hat einfach keine Chance. Und man lässt sich von Trends beeinflussen, bei denen man lieber links dran vorbei gelaufen wäre... Ja, dieses Sneaker & Streetwear Game hat so seine Tücken... Amadeus und Fabian sprechen darüber und eröffnen ihre größten Fehlkäufe, Misses und andere Fehler - jetzt in Episode 109! / Shownotes: 00:00 - 06:26 Intro & Hochzeits Outfits / 06:27 - 16:02 #LPU / 16:03 - 18:19 #WOMFT / 18:20 - 34:29 Die größten Fehlkäufe / 34:30 - 44:41 Die größten Misses / 44:42 - 56:07 Die größten Regrets / 56:08 - 1:03:29 Die schlimmsten Trends / 1:03:30 - 1:05:21 Outro / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#108 Die besten Sneaker 2022 - Part. 1

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jun 19, 2022 55:25


Von New Balance 990v1 über New Balance 990v4, von Nike SB über Nike Air Jordan und Nike Air Max, hin zu adidas Yeezy, Salomon XT-4, Reebok Club C 85, Mizuno Sky Medal, Puma Suede, Asics Gel-Lyte 3 und all den weiteren "900 [sic!] wichtigsten Releases der Woche" - das erste Halbjahr 2022 hatte sehr viel und sehr viel Gutes zu bieten! Amadeus und Fabian haben daher obligatorisch zur Jahreshalbzeit die besten Sneaker Releases 2022 gelistet, sprechen aber auch über die negativen Aspekte und ebenso über ihre Wünsche für das restliche Jahr. / Shownotes: 00:00 - 01:59 Intro & Amsterdam / 02:00 - 12:44 #LPU / 12:45 - 14:06 #WOMFT / 14:07 - 15:06 Die besten Sneaker 2022, part. 1 / 15:07 - 34:07 Die besten Sneaker 2022, part. 1: Platz 5 & 4 & 3 & 2 / 34:08 - 39:23 Die besten Sneaker 2022, part. 1: Honorable Mentions / 39:24 - 45:00 Die größten Enttäuschungen 2022, part. 1 / 45:01 - 53:54 Die besten Sneaker 2022, part. 1: Platz 1 / 53:55 - 55:25 Outro / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Oh, Schuhen! - Der Sneaker-Podcast
#107 Puma Slipstream - 35 Jahre Sneaker Legacy!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Jun 5, 2022 49:03


35 Jahre feiert der Puma Slipstream - ein Jubiläum, welches nur wenigen Sneakern in dieser Kultur zu Teil wird! Woher die Ikone aus Herzogenaurach stammt und welche Wichtigkeit sie auch noch heute für Basketball, Skateboarding und die Fashionszene in Tokio wie Berlin hat, darüber sprechen Amadeus und Fabian in Episode 107. Mit dabei: Sängerin und Puma Ambassador Nessi. / Shownotes: 00:00 - 09:28 Intro & Wohnungseinrichtungen / 09:29 - 17:05 #LPU / 17:06 - 19:27 #WOMFT / 19:28 - 27:03 About Puma Slipstream / 27:04 - 33:31 "1987 - Das Quiz!" / 33:32 - 40:01 About Puma Slipstream pt. 2 / 40:02 - 44:00 Interview Nessi / 44:01 - 47:25 About Puma Slipstream pt. 3 / 47:26 - 49:03 Outro / Advertorial/Werbung: Diese Episode entstand in freundlicher Zusammenarbeit mit Puma. / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

The Lock Sportscast
98: Handbook to Challenge Locks

The Lock Sportscast

Play Episode Listen Later Apr 20, 2022 23:37


Your weekly source for locksport news and sometimes interviews. Full show notes, including links, can be found at httpDark Arts Lock Picking Forum://www.thelocksportscast.com  In this week’s episode: Mitsubishi admits cars are easy to steal LPU now has a website and The Mat First Underwater out of the pack pick Fire Dept Forcible Entry techniques for padlocks Sales Giveaways And more Announcements: Corrections: News: Community News: Lockpickers United Handbook to Challenge Locks The Mat Dark Arts Lock Picking DALP Community Forum  Videos: (ENG-289) Lockpicking - Underwater out of the pack pick of a Casino Aluminium Padlock 30mm Black Padlock Forcible Entry Forcing Padlocks with Hand Tools #5806229200001  Meetups: BSides Seattle  Products:  LPU Karate Belts: beltranking - lockpicking (reddit.com)  Mentorship Monday 3: The Belt System  2: Breaking Rules and Getting the Belt  All About The Lockpicking Belt Rankings System  Speedlocks: Speedlocks.org  Criminals: Sales: https://bareboneslockpicking.com/ 20% off the Ultimate Lock Picking Kit in the Molle Case using the code PWLikesMolle20 valid until 1st May 2022 https://bareboneslockpicking.com/ 15% off with code ListenToTheLockSportsCast2022 https://www.lockpickmall.com/ 6% off using coupon code albert https://www.lockpickmall.com/ 6% off with code joepicks www.mattslockpit.com picks discounted on site https://www.3dlocksport.com/ 10% off. CODE: LSCAST10 https://makolocks.com/ 15% off with code BUYMAKO Unknown exp https://uklockpickers.co.uk/ 10% off with code GIFT Giveaways and Contests: Joe Picks and Jon Lock 500 Subscriber Double Giveaway #Joe500Jon500 Joe Picks and Jon Lock 500 Subscriber Double Giveaway #Joe500Jon500 [98] Jon Lock and Joe Picks 500 Subscriber Giveaway!!! 100 Subscriber Giveaway/Challenge #duck-duck-Goose 100 Subscriber Giveaway/Challenge #duck-duck-Goose  Panda-Frog: #miniPandaFrog2 giveaway (ENG-257) Lockpicking - Ups I did it again! #miniPandaFrog2 giveaway starts now!  CLK Supplies Introducing #Lockboss Free Giveaway! Do you work with Locks & Keys or do Locksmithing?  Executive Producer: Founding Executive Producers: Panda-Frog Michael Gilchrist Starrylock WilliamsBrain  Dave 2BDCy4D Liibans Locksport Journey Pat from Uncensored Tactical  threeraccoonsinacoat  Chirael Associate Executive Producers: DoctorHogmaster Clayton Howard (Kewltune) Co-Producers: m0g Jon Lock Ratyoke MrPickur CrankyLockPicker JHPpicking Bare Bones Lock Picking Chief Content Producer: Panda-Frog Content Producers: Albert Lebel Bare Bones Lock Picking Chirael Dark Arts Lock Picking GravityKarma I fisk Joe Picks Jon Lock Joshua Gonzalez PickSmith Pocket Women Sir Paradise Tiger Trav Tony Virelli zackery willard Special thanks to: Contact Information: Email: podcast@thelocksportscast.com Twitter https://twitter.com/charlescurrent  Reddit: currentc57 on r/locksport Discord: Lockpickers United as Current, Extraordinary League of Pickers as Current, The Lock Sportscast as Current Join the Discord at http://discord.thelocksportscast.com The Lock Sportscast on Odysee Donate: http://paypal.thelocksportscast.com https://patreon.com/thelocksportscast

Oh, Schuhen! - Der Sneaker-Podcast
#103 Sneaker Culture von Japan bis Amsterdam - Mizuno meets Patta!

Oh, Schuhen! - Der Sneaker-Podcast

Play Episode Listen Later Apr 10, 2022 60:23


Mizuno stammt aus Osaka, Japan, Patta aus Amsterdam, Niederlande. Gemeinsam haben die beiden Brands einen Sneaker kreiert, der Mizunos starkes Momentum pusht und Pattas treffsicheren Style-Approach manifestiert. Amadeus und Fabian beleuchten die spannende Zusammenarbeit und wagen einen Deepdive in die Geschichte der japanischen Brand. Mit dabei: Vinzenz Altenweger, Category Manager Sportstyle bei Mizuno, und Pattas finest Tim Sabajo. / Shownotes: 00:00 - 04:01 Intro, Reisen und Geburtstage / 04:02 - 14:12 #LPU / 14:13 - 18:22 #WOMFT / 18:23 - 20:02 Werbung / 20:03 - 27:36 About Mizuno / 27:37 - 36:51 Interview Vinzenz Altenweger, Mizuno Category Manager Sportstyle / 36:52 - 45:31 About Mizuno pt. 2 / 45:32 - 56:22 Interview Tim Sabajo, Patta General Manager / 56:23 - 58:51 About Mizuno & Patta / 58:52 - 1:00:23 Outro / Advertorial/Werbung: Diese Episode entstand in freundlicher Zusammenarbeit mit Mizuno. Ebenfalls Werbepartner der Episode: Clark / Mehr Infos auf: https://linktr.ee/ohschuhenpodcast

Shoot First with Mikee Reyes
#23: All-NCAA teams of the 2000's!

Shoot First with Mikee Reyes

Play Episode Listen Later Oct 13, 2021 61:35


We can finally continue our All-NCAA teams! Today, we pick our best 5's, sixth men, and head coaches for SBU, CSB, SSC, LPU, UPHSD and PCU. Alot of big names for these schools, but we have to justify and stand with our picks! Sakit sa ulo nanaman! For more Shoot First, follow us on all our socials!

The Kickback Sneaker Podcast
Ep 004 - To Collab or not to Collab

The Kickback Sneaker Podcast

Play Episode Listen Later Mar 23, 2021 41:28


Josh & Fabs discuss the recent announcement that Stone Island and New Balance have signed a multi-year deal and what kind of sneakers that could bring. Fabs has a funny LPU story (where an L became a W and then turned into an L again) and Josh has his sights set on some tasty upcoming drops. The meaty part of this episode is dedicated to sneaker collaborations, what goes into them, why they exist, and what constitutes success — from the brand, creative, and consumer POVs.

The Radiant Babe Podcast
God Is Good All The Time | with @Mrs.MommySak

The Radiant Babe Podcast

Play Episode Listen Later Apr 17, 2020 36:40


Hi Radiant Babe! We are so glad you are here. Today, we have the honor of chatting with Radiant Babe, Brittany Sakulbunwathana. She is a wife, mother, and student at LPU. I believe Brittany's testimony will encourage you to know that God is good even when we are not. Please share this episode with a friend to help spread love and encouragement! Join the Radiant Babe Sisterhood on Insta ♥︎ ◇ https://www.instagram.com/mrs.mommysak ◇ https://www.instagram.com/theradiantbabe --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/theradiantbabe/support