Podcasts about Minimax

A decision rule for minimizing the possible loss in a worst-case scenario
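The decision rule itself is easy to sketch: score each of your moves by the worst payoff the opponent can force, then pick the move with the best worst case. A minimal illustration on a hand-written game tree (the tree values here are made up for the example):

```python
# Minimax on a small explicit game tree. The maximizing player assumes the
# opponent always responds with the move that is worst for us, so each of
# our choices is scored by its guaranteed (worst-case) payoff.

def minimax(node, maximizing):
    # Leaves are numeric payoffs for the maximizing player.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two moves for us; the opponent then picks the reply that minimizes our payoff.
tree = [
    [3, 12],  # move A: opponent forces the minimum -> 3
    [2, 8],   # move B: opponent forces the minimum -> 2
]
print(minimax(tree, True))  # -> 3: move A has the better worst case
```

Move B's best case (8) is irrelevant under minimax; only the guaranteed 2 counts, so move A wins.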

  • 105 PODCASTS
  • 159 EPISODES
  • 43m AVG DURATION
  • 1 WEEKLY EPISODE
  • Apr 24, 2025 LATEST

POPULARITY

(popularity trend chart, 2017-2024)


Best podcasts about Minimax

Latest podcast episodes about Minimax

AI For Humans
DeepMind Says We're Not Ready For AGI, Academy Awards Say AI Video is Ok, and AI Voice Models Get Weird

AI For Humans

Play Episode Listen Later Apr 24, 2025 49:20


Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes? VISIT OUR SPONSOR https://molku.ai/ In this episode, we break down Google's new "Era of Experience" paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea. AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN.

#ai #ainews #agi

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

  • Demis Hassabis on 60 Minutes - https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/
  • We're Not Ready For AGI, from Time interview with Hassabis - https://x.com/vitrupo/status/1915006240134234608
  • Google DeepMind's "Era of Experience" paper - https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
  • ChatGPT explainer of "Era of Experience" - https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222
  • Podcast with David Silver, VP Reinforcement Learning, Google DeepMind - https://x.com/GoogleDeepMind/status/1910363683215008227
  • IntuiCell robot learning on its own - https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv
  • Agentic AI "Moore's Law" chart - https://theaidigest.org/time-horizons
  • AI movies can win Oscars - https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share
  • Runway CEO on Oscars + AI - https://x.com/c_valenzuelab/status/1914694666642956345
  • Gen48 film contest this weekend (Friday 12p EST deadline) - https://x.com/runwayml/status/1915028383336931346
  • Descript AI editor - https://x.com/andrewmason/status/1914705701357937140
  • Character AI's new lipsync/video tool - https://x.com/character_ai/status/1914728332916384062
  • Hailuo character reference tool - https://x.com/Hailuo_AI/status/1914845649704772043
  • Dia open-source voice model - https://x.com/_doyeob_/status/1914464970764628033
  • Dia on Hugging Face - https://huggingface.co/nari-labs/Dia-1.6B
  • Cluely: new start-up from the student who was caught cheating on tech interviews - https://x.com/im_roy_lee/status/1914061483149001132
  • AI agent writes Reddit comments looking to "convert" - https://x.com/SavannahFeder/status/1914704498485842297
  • Deepfake Logan Paul AI ad - https://x.com/apollonator3000/status/1914658502519202259
  • The humanoid half-marathon - https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21
  • Video from Reddit of robot marathon - https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/
  • Vending-Bench (AI agents run vending machines) - https://andonlabs.com/evals/vending-bench
  • Turning kids' drawings into AI video - https://x.com/venturetwins/status/1914382708152910263
  • Geriatric Meltdown - https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

HODLong 后浪
Ep.53 [CN]: Sam Gao, co-author of the ELIZA OS whitepaper, on writing histories of AI figures and ELIZA v2

HODLong 后浪

Play Episode Listen Later Mar 28, 2025 56:50


This episode features Sam Gao, co-author of the ELIZA OS whitepaper, chatting about his experience writing biographies of notable figures in AI history (this episode isn't especially Web3-focused). If you're interested in the stories of important people in the history of AI, give it a listen. P.S. This episode was actually recorded in mid-February, but we hadn't had time to edit and release it until now; our apologies.

Episode outline:

  • Self-introduction
  • An introduction to his earlier book, 《青稞.人物志》
  • The pre-fame life stories of leading figures in AI
  • In some very early experiments today, we can see AI projects using Web3 incentive networks to coordinate human resources. How do you see the AI and Web3 threads developing?
  • What do you think of MiniMax founder Yan Junjie's view that "at the current stage, sacrificing technical progress for the sake of productization is not advisable"?
  • Many consider 2025 the year of the agent; 01.AI founder Kai-Fu Lee even sees this as the pivotal year for AI commercialization. From your perspective, how do you view agents?
  • How do you see AI agents developing in 2025-26 in a Web3 context?
  • What kind of founders would you want to invest in?
  • Many developers are closely watching the Eliza v2 release: some problems with Eliza v1, and the optimizations made in v2

Twitter (X): @samuel_ys92

If you like this episode, you're welcome to tip with Ethereum / Solana / Bitcoin:
ETH: 0x83Fe9765a57C9bA36700b983Af33FD3c9920Ef20
SOL: AaCeeEX5xBH6QchuRaUj3CEHED8vv5bUizxUpMsr1Kyt
BTC: 3ACPRhHVbh3cu8zqtqSPpzNnNULbZwaNqG

Important Disclaimer: All opinions expressed by Mable Jiang, or other podcast guests, are solely their opinion. This podcast is for informational purposes only and should not be construed as investment advice. Mable Jiang may hold positions in some of the projects discussed on this show.

网事头条|听见新鲜事
MiniMax's Liu Hua: Agents will become the main battleground for models in the near term

网事头条|听见新鲜事

Play Episode Listen Later Feb 23, 2025 0:19


Radio Dance At Home
The best Tropical House Compilation for 2025 - In The minimax 007

Radio Dance At Home

Play Episode Listen Later Feb 23, 2025 14:09


For more info: www.iamsugarbus.com | www.facebook.com/iamsugarbus | www.instagram.com/iamsugarbus | twitter.com/IamSugarBus | https://www.youtube.com/@iamsugarbus

网事头条|听见新鲜事
MiniMax responds to partner Wei Wei's departure

网事头条|听见新鲜事

Play Episode Listen Later Feb 20, 2025 0:21


Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Outlasting Noam Shazeer, crowdsourcing Chat + AI with >1.4m DAU, and becoming the "Western DeepSeek" — with William Beauchamp, Chai Research

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 26, 2025 75:46


One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - if you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny, incredibly cracked engineering team: Chai Research. In short order they have:

* Started a chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.
* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed $22m.
* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million-dollar salaries, you can tell they're doing pretty well for an 11-person startup.

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMSYS) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners? At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc.). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot). Chai publishes occasional research on how they think about this, including talks at their Palo Alto office. William expands upon this in today's podcast (34 mins in):

Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. 
And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which models users are finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. 
And so we were able to get that 30-day feedback loop all the way down to something like three hours.

In "Crowdsourcing the leap to Ten Trillion-Parameter AGI", William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups. William is notably counter-consensus in a lot of his AI product principles:

* No streaming: chats appear all at once to allow rejection sampling.
* No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature.
* Blending: "Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model." (That's it!)

But chief above all is the recommender system. We also referenced Exa CEO Will Bryk's concept of SuperKnowledge.

Full video version: on YouTube. 
Please like and subscribe!

Timestamps

* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.

swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?

William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.

swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the... 
...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. I guess we can just kind of start it off with the origin story of Chai.

William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like Jane Street or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at Jane Street.

swyx [00:02:20]: With 100k base as capital?

William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So started off, and the, taught myself Python, and machine learning was like the big thing as well. 
Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that's going on at the time. So I probably spent my first three years out of Cambridge, just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you if you start something, and it goes well, you try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...

swyx [00:04:40]: Your own, all your own money?

William [00:04:41]: Yeah, exactly. It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking. 
We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like RenTec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time of like getting into crypto and I had a really strong view on crypto, which was that, as a gambling device, this is like the most fun form of gambling invented in like ever. Super fun. I thought as a way to evade monetary regulations and banking restrictions, I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked, but everything else, but everything else to do with like the blockchain and, and you know, web, was it web 3.0 or web, you know, that I, that didn't, it didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble. I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think OpenAI had said they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful. We can't release it to the world or something. Was it GPT-2? And then I started interacting with, I think Google had open sourced some language models. 
They weren't necessarily LLMs, but they, but they were. But yeah, exactly. So I was able to play around with, but nowadays so many people have interacted with ChatGPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that I read the literature, I kind of came across the scaling laws and I think even four years ago. All the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. OpenAI was still open. And so they'd published a lot of their research. And so you really could be fully informed on, on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right. So if you think of a platform like a YouTube, instead of it being like a Hollywood situation where you have to, if you want to make a TV show, you have to convince Disney to give you the money to produce it instead, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays. You can look at creators like Mr. Beast or Joe Rogan. They would have never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right? 
But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is this, it's like a monolithic, you get all the, the researchers together, you get all the data together and you combine it in this, in this one monolithic source. Instead. You have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the, the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more intelligent, the more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of a race, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the, the S curve, it kind of plateaus around human level performance. And you can look at all the, all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at. All of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman. 
So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing. That's like an encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company of all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency trading. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.

Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?

William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform. 
On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first, it would pull the popular news. Then it would prompt whatever, like I just use some external API for like BERT or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, this is the top stories. And you could chat with it. Now four years later, that's like Perplexity or something, right? But back then the models were first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, I like, we have AI, we have this platform. I can create any text-in, text-out sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them. 
And then my sister who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she, she built a therapy bot for me. And I was like, okay, cool. I'm going to build a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for, for recipe help. There was no demand for news. There was no demand for dad jokes or pub quiz or fun facts or what they wanted was they wanted the therapist bot. At the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun thing, most fun way to consume news is like Twitter. It's not like the value of there being a back and forth, wasn't that high. Right. And I thought if I need help with a recipe, I actually just go like the New York Times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10x better at is a sort of a conversation, right? That's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human-like entities and they want to have fun. 
And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was, was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like The Rock is making himself pancakes on a cheese plate. You kind of feel a little bit like you're The Rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and tell you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.

Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?

William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media. You're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just. Consuming. So you don't have a sense of remorse basically. And you know, I think on the whole people, the way people talk about, try and interact with the AI, they speak about it in an incredibly positive sense. 
Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube doesn't quite tick. From that point on, it was about building more and more kind of like human centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with. And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with the, yeah. And instead it was really just the special moment was when we had the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are, right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for let's just say it was, I think it was GPT-J, which was this 6 billion parameter open source transformer style LLM. We took GPT-J. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed. 
And so as the users could create the content that they wanted, that was when Chai was able to, to get this huge variety of content and rather than appealing to, you know, 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.William [00:20:15]: We started way before character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested. 
I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when Character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway, but then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that Character AI could afford to serve, right? So we would be spending, let's say we would spend a dollar per user, right? Over the, the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say we'd get over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than Character AI's? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyperscale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how DeepSeek have been able to produce such a compelling model when compared to someone like an OpenAI, right? So DeepSeek, you know, their latest, um, V3, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, like, why are you making it? Why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up DeepSeek.
So we have to ask: you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are like, um, founder led, customer obsessed and just try and build something great. And I think what DeepSeek have achieved is quite special: they've got this amazing inference engine. They've been able to reduce the size of the KV cache significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with AI, people get really focused on like the kind of the foundation model or like the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is like, you know, is very, very long. For comparison, let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send say 150 messages. That's a lot of completions, right? It's quite different from an OpenAI scenario where people might come in, they'll have a particular question in mind. And they'll ask like one question. And a few follow up questions, right? So because they're consuming, say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context.
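The completions-per-user arithmetic behind this can be made concrete. A back-of-envelope sketch; every number here is illustrative, not Chai's actual figures:

```python
def monthly_serving_cost(sessions, messages_per_session, cost_per_completion):
    """Back-of-envelope serving cost per user per month:
    total completions served times the cost of one completion."""
    return sessions * messages_per_session * cost_per_completion

# Illustrative numbers only: a chat user sending ~150 messages per
# session versus a Q&A user asking a handful of questions.
chat_user = monthly_serving_cost(sessions=20, messages_per_session=150,
                                 cost_per_completion=0.0005)
qa_user = monthly_serving_cost(sessions=10, messages_per_session=5,
                               cost_per_completion=0.0005)
print(round(chat_user / qa_user))  # → 60
```

With the same per-completion price, the chat product serves dozens of times the completions per user, which is why performance per dollar, not raw benchmark performance, drives the model choice.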
And now, what OpenAI is doing to great fanfare is, with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's kind of scaling up on the inference time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance, like the MMLU score or the, you know, any of these benchmarks that people like to look at. If you just get that score, it doesn't really tell you anything. Because really, progress is made by improving the performance per dollar. And so I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like. And if they're able to sort of match what DeepSeek have been able to achieve with this performance per dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of like some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over 22 million in revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got this, I think it was almost a $3 billion valuation. And they have 5 million DAU is a number that I last heard. Talkie, which is a Chinese-built app owned by a company called MiniMax. They're incredibly well funded. And these companies didn't grow by a factor of three last year. Right.
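The generate-many-candidates-and-keep-the-best idea described above (best-of-n with a reward model) is only a few lines to sketch. Here `generate` and `score` are hypothetical stand-ins for a real LLM and reward model:

```python
def best_of_n(prompt, generate, score, n=4):
    """Sample n candidate completions and serve the one the reward
    model scores highest: trading inference compute for quality."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins: a "generator" cycling through canned replies and a
# "reward model" that simply prefers longer completions.
replies = iter(["ok", "sounds good", "that sounds wonderful", "sure"])
best = best_of_n("hi", generate=lambda p: next(replies), score=len)
print(best)  # → that sounds wonderful
```

The quality gain is bounded by how well the scoring function matches what users actually prefer, which is why a learned reward model usually stands in for `len` here.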
And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform. I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI, the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. The first thing I would say is this is, I think the most important thing to know about success is that success is born out of failures. Right? It's through failures that we learn. You know, if you think something's a good idea, and you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails. There's a gap between the reality and expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods is that's us reaping the rewards of that. So I think, on the growth chart of 2024, the first thing that really kind of put a dent in our growth was our backend. So we just reached this scale. So we'd, from day one, we'd built on top of Google's GCP, which is Google's cloud platform. And they were fantastic.
We used them when we had one daily active user, and they worked pretty good all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely good. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We use Firebase. So we use Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, just the thing, the way we were using it, and it would just, it wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we kind of migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and what ones do we want to move on to something else? And then, you know, making mistakes. And learning things the hard way. And then after about three months, we got that right. So that, we would then be able to scale to the 1.5 million DAU without any further issues from the GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates, money spent, and the star, like, the rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm.
And then it can take a long time to kind of earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I think it's, I'm not going to lie, I have a feeling that when Character AI got... I was thinking. I think so. I think... So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. Products don't change, right? Products just what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue that people, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this... There's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to like the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's kind of like the second thing. I think a third thing is we've really built a great data flywheel. 
Like the AI team sort of perfected their flywheel, I would say, in end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate. We can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours.
And when we did that, we could really, really, really perfect techniques like DPO, fine tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so I think in Q3 and Q4, we got, the amount of AI improvements we got was like astounding. It was getting to the point, I thought like how much more, how much more edge is there to be had here? But the team just could keep going and going and going. That was like number three for the inflection point.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is if you go on our Reddit or you talk to users of AI, there's like a clear date. It's like somewhere in October or something. The users, they flipped. Before October, the users would say Character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than Character AI. And that was like a really clear positive signal that we'd sort of done it. And I think people, you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, the barriers to switching is pretty low. Like you can try Character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to Character. So the users, the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay and they can tell. So the fourth one was we were fortunate enough to get this hire. We had hired one really talented engineer. And then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes.
Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as, as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so pick up options. Yeah, exactly. And so he was kind of looking at our metrics. And he was like, I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. He's like, I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in, we've just started ramping up the user acquisition. So that looks like spending, you know, let's say we started spending $20,000 a day, and it looked very promising. Right now we're spending $40,000 a day on user acquisition. That's still only half of what like Character AI or Talkie may be spending. But from that, it's sort of, we were growing at a rate of maybe say, 2x a year. And that got us growing at a rate of 3x a year. So I'm growing, I'm evolving more and more to like a Silicon Valley style hyper growth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge... You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on like, like the rocket or the jet engine or something, which is just this cash in, you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work.
Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The surprising thing, like the most surprising thing, what doesn't work is almost everything doesn't work. That's what's surprising. And I'll give you an example. So like a year and a half ago, I was working at a company, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because it's like to get the flywheel, to get the users, to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when character launched voice. They launched it, I think they launched it at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency is a huge problem. Cost is a huge problem. Getting the right quality of the voice is a huge problem. Right? Then there's this user interface and getting the right user experience. Because you don't just want it to start blurting out. Right? You want to kind of activate it. But then you don't have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A-B test, there was like, no change in any of the numbers. 
And I was like, this can't be right, there must be a bug. And we spent like a week just checking everything, checking again, checking again. And it was like, the users just did not care. And it was something like only 10 or 15% of users even click the button to like, they wanted to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if it's just like something that one in seven people use for one seventh of their time, you've changed like 2% of the experience. So even if that 2% of the time is like insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, so a lot of the stuff which I do, I'm a big, you can have a theory. And you resist. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. Oh, yeah, they were very good. It was kind of like just the, you know, if you listen to like an Audible or Kindle, or something like, you just hear this voice. And it's like, you don't go like, wow, this is special, right? It's like a convenience thing. But the idea is that if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform you can use to watch Mr. Beast, then you can watch a Mr. Beast video. And it's the most engaging, fun video that you want to watch, you'll go to YouTube. And so it's like for audio, you can't just put the audio on there. And people go, oh, yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right?
It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. That's the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. The longer, it's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from like the Steve Jobs, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is saying no, deciding what not to work on. All of these sort of lessons, they just are like painfully true. They're painfully true. So now I'm just like, everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break things.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtle neck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and I want to pass this to Alessio, is on just, just multi-modality in general. This actually comes from Justine Moore from A16Z, who's a friend of ours. And a lot of people are trying to do voice, image, video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, he was very, listen, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer.
What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problems for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this. All the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. The guys out there, they're like, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? And so the way I speak about it is this. Chai, we have this AI engine which sits atop a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? But it's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so it's got to be the case that there exists, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, which then he shares on the YouTube platform. Until there's a team that's earning 100 million a year or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build.
And getting too caught up in the tech, I think is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the Rotten Tomatoes sucks, but the audience rating is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to like, some of what you were saying is like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think it's something that's interesting to discuss is like, is moats. And what is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of the content creators, the users, the consumers, and then the algorithms. And so this creates a sort of a flywheel where the algorithms are able to be trained on the users' data, and the recommender systems can then feed information to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. So that's why it doesn't do well on Amazon. If he wants to do well on Amazon, how many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. He needs to get those iterations in on the Amazon.
So at Chai, I think it's all about how can we get the most compelling, rich user generated content, stick that on top of the AI engine, the recommender systems, in such that we get this beautiful data flywheel, more users, better recommendations, more creative, more content, more users.Alessio [00:46:34]: You mentioned the algorithm, you have this idea of the Chaiverse on Chai, and you have your own kind of like LMSYS-like ELO system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you build it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app. And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots. And it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So we must have, I don't want people at Chai training the AI. I want people, not middle-aged men, building AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was we built Chaiverse. And Chaiverse is kind of, it's kind of like a prototype, is the way to think about it. And it started with this, this observation that, well, how many models get submitted into Hugging Face a day? It's hundreds, it's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider that, what does it take to build an LLM? It takes a lot of work, actually. It's like someone devoted several hours of compute, several hours of their time, prepared a data set, launched it, ran it, evaluated it, submitted it, right?
So there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? Like, we don't want to serve users the crappy models, right? So what we would do is we would, I love the LMSYS style. I think it's really cool. It's really simple. It's a very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model A. This is from model B. Which is better? And so if someone submits a model to Chaiverse, what we do is we spin up a GPU. We download the model. We're going to now host that model on this GPU. And we're going to start routing traffic to it. And we're going to send, we think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do like a routing thing where you say for a given user request, we're going to try and predict which of these n models the users enjoy the most. That turns out to be pretty expensive and not a huge source of like edge or improvement.
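The pairwise "which is better" votes can be folded into a leaderboard with a standard Elo update. A minimal sketch using the usual chess constants (K=32, a 400-point scale), not Chai's actual parameters:

```python
def elo_update(r_winner, r_loser, k=32):
    """Update two models' Elo ratings after one head-to-head vote:
    a user saw one completion from each model and picked the winner's."""
    # Expected win probability for the current winner under the Elo model.
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

# Every newly submitted model starts at a baseline rating and
# accumulates ~5,000 comparisons before the ranking is trusted.
a, b = 1000.0, 1000.0
a, b = elo_update(a, b)  # a user preferred model A's completion
print(a, b)  # → 1016.0 984.0
```

A strong favorite gains little from beating a weak model and loses a lot to it, so the ratings converge toward a stable ranking as votes accumulate.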
Something that we love to do at Chai is blending. The simplest way to think about it is you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, for 50% of the requests, you serve them the smart model, and for 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but that's the 80-20 solution: if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing. So you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me: after you do the ranking, you get an ELO score, and you can track a user from their first join date, the first date they submit a model to Chaiverse. They almost always get a terrible ELO, right? So let's say on the first submission they get an ELO of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do they have to come up with this themselves? We do, we do. We try and strike a balance between giving them data that's very useful and staying compliant with GDPR, which means you have to work very hard to preserve the privacy of users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum.
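The 50/50 "blending" William describes is literally a coin flip per request. A minimal sketch, assuming a uniform choice between two placeholder model names (not Chai's actual models):

```python
import random

# Per incoming request, flip a coin between the top models (here one "smart"
# and one "funny" placeholder), so a user's session mixes both qualities.

def blended_router(models, seed=None):
    rng = random.Random(seed)
    def route(request):
        model = rng.choice(models)  # uniform pick: 50/50 for two models
        return model, request
    return route

route = blended_router(["smart-model", "funny-model"], seed=42)
served = [route("hi there")[0] for _ in range(1000)]
share_smart = served.count("smart-model") / len(served)
# Over 1,000 requests the split hovers near 50/50.
assert 0.4 < share_smart < 0.6
```

A weighted pick (e.g. `random.choices` with per-model weights) would be the natural refinement, still far cheaper than the learned per-request router the conversation says wasn't worth the expense.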
But even a score alone, people can optimize pretty well, because they're able to come up with theories, submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom,
Alessio [00:51:46]: they figure something out, and they keep it. Last year, you had this post on your blog, crowdsourcing the leap to the 10 trillion parameter AGI, and you call it a mixture of experts, recommenders. Yep. Any insights?
William [00:51:58]: Updated thoughts, 12 months later? I think the timeline for AGI has certainly been pushed out, right? Now, I'm a controversial person, I don't know, like, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to just be far worse at reasoning than people sort of thought. And whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as like a simulator, right? So they get trained to predict the next most likely token. It's like a physics simulation engine. So you get these games where you can construct a bridge, and you drop a car down, and then it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning, it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence, I think, is not a crowdsourcing problem, right? Now, Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. So it's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
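William's "simulator" framing — predict the next most likely token rather than reason — can be made concrete with a toy sampler. The vocabulary and logits below are invented purely for illustration; a real LLM does the same operation over a vocabulary of tens of thousands of tokens.

```python
import math
import random

# Turn scores over candidate next tokens into probabilities, then either
# take the most likely token (greedy) or sample proportionally.

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["bridge", "car", "falls", "the"]
logits = [0.2, 0.1, 2.5, 0.4]  # "falls" is by far the most likely continuation
probs = softmax(logits)

greedy = vocab[probs.index(max(probs))]                 # the most likely token
random.seed(0)
sampled = random.choices(vocab, weights=probs, k=1)[0]  # stochastic variant
assert greedy == "falls"
```

Nothing in this loop "reasons"; it only picks likely continuations, which is the distinction the conversation is drawing.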
And it's easy to conflate the two, because if you ask it a question, you know, if you said, who was the seventh president of the United States, and it gives you the correct answer, well, I'd say, I don't know the answer to that, and you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really this thing about saying, how can I store all of this information? And then how can I retrieve something that's relevant? Okay, they're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up with a new word. How does one describe it? AI should contain more knowledge than any individual human. It should be more accessible than any individual human. That's a very powerful thing. That's super powerful.
swyx [00:54:07]: But what words do we use to describe that? We had a previous guest from Exa AI that does search. And he tried to coin super knowledge as the opposite of super intelligence.
William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.
swyx [00:54:24]: You can store more things than any human can.
William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. And I often think, look, it's going to be a 20 year journey, and we're in year four. It's like the web, and this is like 1998 or something. You know, you've got a long, long way to go before the Amazon.coms are these huge, multi-trillion dollar businesses that every single person uses every day. And so AI today is very simplistic.
And fundamentally, it's the way we're using it, the flywheels, and this ability for everyone to contribute to it, that will really magnify the value that it brings. Right now, I think it's a bit sad. Right now you have big labs, and I'm going to pick on OpenAI. And they kind of go to these human labelers, and they say, we're going to pay you to just label this subset of questions, because we want to get a really high quality data set, and then we're going to get our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. All the people that were interested in blockchain, it's like, well, this is what needs to be decentralized. Because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. Yeah. But I mean, that's kind of the exciting thing about Wikipedia: it's this understanding of the incentives. You don't need money to incentivize people. You don't need dog coins. No. Sometimes people get the satisfaction from…

乱翻书
200. The "Six Little Dragons" of Large Models: Stop Putting on Airs

乱翻书

Jan 25, 2025 · 77:33


[Guests] Luo Yihang 骆轶航 (host of the podcast 「硅基立场」 and co-founder of PingWest 品玩). Host: Pan Luan 潘乱 (host of 「乱翻书」). This episode is co-produced by 「硅基立场」 × 「乱翻书」.
[How this episode came about] On January 17, LatePost (晚点) published an interview with MiniMax founder and CEO Yan Junjie 闫俊杰: "LatePost Dialogue with MiniMax's Yan Junjie: There Are No Chosen Ones in Startups" (by Cheng Manqi 程曼祺 | LateTalk). LatePost's follow-up piece: "LatePost Podcast: MiniMax's Yan Junjie on Large Models in 2024: The Echo of a Non-Consensus Judgment" (by the LatePost team | LateTalk). Podcast version: LateTalk episode 99, "MiniMax Founder Yan Junjie: When Building Large Models, Never Copy Mobile-Internet Logic."
[The "Six Little Dragons"] Zhipu 智谱, MiniMax, Moonshot AI 月之暗面, Baichuan 百川智能, 01.AI 零一万物, and StepFun 阶跃星辰, often called the "Six Little Dragons of large models" in the industry.
[Timeline]
00:46 How this episode came about: "LatePost Dialogue with MiniMax's Yan Junjie: There Are No Chosen Ones in Startups"
02:17 Related past episode: episode 168 of this show (「硅基立场」 episode 1), "Don't Compare This Wave of AI Startups to the Mobile Internet — It Doesn't Deserve That Yet"
06:12 Founders reflect constantly and change direction every year; is that really a good thing?
07:39 If you say they're all wrong, you'd better prove that you're right.
11:02 I'm basically indiscriminately fond of this wave of AI founders
13:18 Chat products are hard to monetize: the more they're used, the more money they lose
14:09 Speaking up is a rebuttal to outsiders who doubt large-model development
16:13 Large-model startups are a completely different game from the mobile internet
16:21 What did they actually accomplish over the past year?
19:10 What should large-model AI startups actually be doing?
21:00 Guangnian Zhiwai's (光年之外) exit in 2023 opened a crucial time window for Kimi's fundraising
23:00 Investors were never collectively bullish
25:17 After this wave, startups may no longer sell visions built on internet-growth logic
26:15 How do Wang Xiaochuan's and Kai-Fu Lee's retrenchments differ?
28:01 November 30, 2022: robotics companies going AI vs. large-model companies
30:30 DeepSeek did not fall from the sky
31:32 DeepSeek's technical brand effect
35:11 What backdrop allowed DeepSeek to keep following its original logic?
38:53 Starting up poor vs. starting up rich
39:37 Will MiniMax stick with its To B business?
42:38 The Six Little Dragons vs. Doubao: choosing not to fight, or unable to win?
43:40 Doubao's victory: yet another win within expectations
44:54 Successful founders all execute a single blueprint to the end; Meta is the exception
46:08 We measure large-model founders against the standards of the very best companies
47:01 - Who killed China's ChatGPT? - VCs and PE
47:58 Patient capital vs. dollar funds
50:00 What do investors actually look at in us?
53:07 How will the Six Little Dragons diverge from here?
53:12 Zhipu doesn't need to change
55:13 Moonshot AI, which I find a bit hard to read
55:35 Baichuan and 01.AI in retrenchment
55:50 "Zhipu in the north, StepFun in the south"? StepFun isn't at that level yet
59:14 The crux of AI applications is the application, not the AI
61:26 Is an AI hardware company at its core an AI company or a hardware company?
64:00 I believe in AI because I believe it's useful, not because it's AI
67:22 I gave high scores to all the concretely useful products
69:33 Not every founder needs to become the next Zhang Yiming
73:22 Large-model players, stop putting on airs!
[Opening & closing music] Opening: 李小龙 - 好久不见 (opening theme of the TV series 《武林外传》). Closing: 虞霞 / 李小龙 - 侠客行 (ending theme of 《武林外传》, TV version).
[Related recommendations]
1. "LatePost Dialogue with MiniMax's Yan Junjie: There Are No Chosen Ones in Startups" (by Cheng Manqi 程曼祺 | LateTalk) mp.weixin.qq.com
2. LatePost's follow-up piece: "LatePost Podcast: MiniMax's Yan Junjie on Large Models in 2024: The Echo of a Non-Consensus Judgment" (by the LatePost team | LateTalk)
3. LateTalk episode 99: "MiniMax Founder Yan Junjie: When Building Large Models, Never Copy Mobile-Internet Logic"
4. Luo Yihang's podcast 「硅基立场」; subscriptions welcome.
5. Episode 168 of this show (「硅基立场」 episode 1): "Don't Compare This Wave of AI Startups to the Mobile Internet — It Doesn't Deserve That Yet" www.xiaoyuzhoufm.com
[About 「乱翻书」] 「乱翻书」 is a roundtable conversation show about business, technology, and the internet. It cares about the How and the Why, and the What that few people notice. Its main directions are tech archaeology, industry observation, and frontier thinking, studying the boom-and-bust cycles of companies, in the hope of bringing you new information. The host, Pan Luan 潘乱, is best known for "Tencent Has No Dreams" (《腾讯没有梦想》) and a series of features on key early moments at ByteDance and Kuaishou.
[About the host] Video account / Jike / Xiaohongshu: 潘乱. WeChat official account / podcast: 乱翻书.
[Photo] ▲ Luo Yihang (left) and Pan Luan. WeChat official account: 乱翻书. Video account: 潘乱. Business inquiries: WeChat tongxing717. Editor for this episode: 怀杭.

Let's Talk AI
#197 - AI in Gmail+Docs, MiniMax-01, Titans, Transformer^2

Let's Talk AI

Jan 20, 2025 · 83:52 · Transcription available


Our 197th episode with a summary and discussion of last week's big AI news! Recorded on 01/17/2025. Join our brand new Discord here! https://discord.gg/nTyezGSKwP Hosted by Andrey Kurenkov and guest-hosted by the folks from Latent Space. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.
Sponsors: The Generator - An interdisciplinary AI lab empowering innovators from all fields to bring visionary ideas to life by harnessing the capabilities of artificial intelligence.
In this episode:
- Google and Mistral sign deals with AP and AFP, respectively, to deliver up-to-date news through their AI platforms.
- ChatGPT introduces a tasks feature for reminders and to-dos, positioning itself more as a personal assistant.
- Synthesia raises $180 million to enhance its AI video platform for generating videos of human avatars.
- New U.S. guidelines restrict exporting AI chips to various countries, impacting Nvidia and other tech firms.
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Timestamps + Links:
(00:00:00) Intro / Banter
(00:04:29) News Preview
(00:05:09) Response to listener comments
(00:05:58) Sponsor Break
Tools & Apps
(00:07:01) Google is making AI in Gmail and Docs free — but raising the price of Workspace
(00:07:52) Microsoft relaunches Copilot for business with free AI chat and pay-as-you-go agents
(00:12:36) Google signs deal with AP to deliver up-to-date news through its Gemini AI chatbot
(00:18:08) Mistral signs deal with AFP to offer up-to-date answers in Le Chat
(00:18:45) ChatGPT can now handle reminders and to-dos
Applications & Business
(00:22:53) Palmer Luckey's AI Defense Company Anduril Is Building a $1 Billion Plant in Ohio
(00:28:36) OpenAI is bankrolling Axios' expansion into four new markets
(00:29:39) AI researcher François Chollet founds a new AI lab focused on AGI
(00:32:18) Nvidia-backed AI video platform Synthesia doubles valuation to $2.1 billion
(00:34:46) Anysphere Raises $105M in Series B
(00:40:14) Harvey Valuation of $3 Billion
Projects & Open Source
(00:46:12) MiniMax-01: Scaling Foundation Models with Lightning Attention
(00:51:16) MinMo: A Multimodal Large Language Model with Approximately 8B Parameters for Seamless Voice Interaction
(00:53:01) HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
Research & Advancements
(00:57:03) Titans: Learning to Memorize at Test Time
(01:04:38) Transformer^2: Self-adaptive LLMs
(01:08:15) Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps
Policy & Safety
(01:11:23) Biden administration proposes sweeping new restrictions on exporting AI chips
(01:13:56) Biden orders Energy, Defense departments to lease sites for AI data centers, clean energy generation
(01:15:00) OpenAI presents its preferred version of AI regulation in a new 'blueprint'
(01:16:15) More teens report using ChatGPT for schoolwork, despite the tech's faults
Synthetic Media & Art
(01:17:55) In AI copyright case, Zuckerberg turns to YouTube for his defense
(01:19:53) Outro

TechCrunch Startups – Spoken Edition
Chinese AI company MiniMax releases new models it claims are competitive with the industry's best

TechCrunch Startups – Spoken Edition

Jan 20, 2025 · 6:10


Chinese firms continue to release AI models that rival the capabilities of systems developed by OpenAI and other U.S.-based AI companies. This week, MiniMax, an Alibaba- and Tencent-backed startup that has raised around $850 million in venture capital and is valued at more than $2.5 billion, debuted three new models: … Learn more about your ad choices. Visit podcastchoices.com/adchoices

Ingenios@s de Sistemas
Episode 359 - Advanced Python - Optimization and Best Practices

Ingenios@s de Sistemas

Jan 19, 2025 · 31:43


Today we dive into the fascinating world of optimization and best practices in programming. If you've ever wondered how to write code that not only works, but is also fast, efficient, and easy to maintain, this episode is for you. We'll explore key techniques for detecting performance problems, tools that will help you keep your code clean and organized, and methods to guarantee its quality. Whether you're an experienced programmer or someone learning with the help of ChatGPT, you'll find practical ideas here to take your projects to the next level.
News:
UC Berkeley creates an open-source reasoning model that competes with OpenAI's AI
OpenAI establishes a new robotics division
OpenAI launches ChatGPT 'Tasks'
Minimax releases open-source AI models with ultra-long-context capabilities
Surprising results in AI teaching
Microsoft unveils an AI model for materials discovery
Tools:
Hugging Face (SmolAgents) - Build powerful AI agents with minimal effort LINK
Reset - Reduce anxiety with an AI-powered guided journal LINK
21st.dev - GitHub + Pinterest to make your AI websites look beautiful LINK
Topview 2.0 Product Avatar - Showcase products with AI avatars LINK
LabelBox - General application, Data Operations Engineer LINK
Google AI Studio - Build with the latest Google DeepMind models LINK
Minduck Discovery - AI search platform with interactive mind maps LINK
Kolors - Photorealistic text-to-image diffusion model LINK
ClipZap - AI video tool to automatically clip, edit, and translate LINK
Reachy AI - AI agent to automate LinkedIn outreach LINK
Fullmoon - Chat with private, local large language models LINK
Shapen - Create 3D models from images with AI LINK
HeyGen's Instant Highlights - Turn long videos into engaging summaries and shareable clips LINK
DryMerge - Automate tasks with AI agents that work 24/7 LINK
Master of Pushups - AI vision app that turns your phone into a push-up coach LINK
Ray2 by Luma Labs - AI video model with improved realism and motion LINK
Persana AI - AI GTM and voice sales and marketing agents to boost sales and marketing LINK
Roundups - Create and monetize product buying guides with AI LINK
Sign up for the academy. Telegram channel and YouTube channel. Questions via WhatsApp: +34 620 240 234

AIA Podcast
CES 2025, new products from Nvidia, ChatGPT Tasks, impressions of o1 Pro, and new models / AIAPodcast #102

AIA Podcast

Jan 19, 2025 · 163:43


AI For Humans
OpenAI Plans Super Intelligence, NVIDIA's Tiny Image Model & Reddit Launches GPT & More AI News

AI For Humans

Jan 16, 2025 · 52:17


OpenAI is prepping for Artificial Super Intelligence, Sam Altman says an AI fast takeoff is likely, Luma Labs' new Ray 2 AI video model looks good, and Reddit goes all GPT on us. Plus, an amazing new small AI model from NVIDIA, executive AI orders for more power and chips, ChatGPT Tasks kind of blows, and a whole lotta Shrek (more so than you might want). BRB, WE GOTTA PREP FOR THE SINGULARITY Y'ALL!
Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
// SHOW LINKS //
OpenAI's Economic Blueprint: https://openai.com/global-affairs/openais-economic-blueprint/
Sam Altman Says Fast Takeoff More Likely: https://x.com/tsarnick/status/1879100390840697191
Open-Source $450 o1 Model: https://x.com/LiorOnAI/status/1878876546066506157
MiniMax-01 Launch: Lightning Attention: https://x.com/i/trending/1879318547861582090
Runway Prompt-To-Character: https://x.com/IXITimmyIXI/status/1878088929330491844
Executive Order For Gigawatt Datacenters: https://www.reuters.com/technology/artificial-intelligence/biden-issue-executive-order-ensure-power-ai-data-centers-2025-01-14/
New AI Chip Rules: https://www.theinformation.com/articles/why-bidens-final-ai-chip-move-caused-an-uproar?rc=c3oojq&shared=160dd16ac575f520
ChatGPT Tasks: https://x.com/OpenAI/status/1879267276291203329
Custom Reddit GPT For Answers: https://www.reddit.com/answers/
LumaLabs Ray 2: https://lumalabs.ai/ray
NVIDIA Launches Sana: https://nvlabs.github.io/Sana/
AI Slop Distorting Wildfire News: https://www.fastcompany.com/91260442/ai-slop-has-is-still-distorting-news-about-the-l-a-wildfires
French Woman Scammed By AI Brad Pitt: https://www.nbcnews.com/news/world/ai-brad-pitt-woman-romance-scam-france-tf1-rcna187745
Fashn Web App: Try-on + Video: https://x.com/ayaboch/status/1878888737603830081
New AI or Die: https://youtu.be/cAjUy896SOE?si=BpJhqV_oOwvov01q
My Swamp: https://x.com/andr3_ai/status/1878110156887638380

AI For Humans
AGI in 2025? Plus, OpenAI's Agents, NVIDIA's AI World Model & More AI News

AI For Humans

Jan 9, 2025 · 43:15


AGI is coming in 2025, or at least that's what OpenAI's Sam Altman thinks. Plus, huge announcements from NVIDIA at CES and hands-on with Google's VEO 2. AND OpenAI's Operator (aka their AI Agents) might come soon, DeepSeek V3 is pretty darn good, Meta makes a big mistake with their AI personalities, Minimax's new subject reference tool, METAGENE-1 might help us stave off disease, and all the robot vacuum news you could ever want. Oh, and Kevin is sick. BUT HE'S GOING TO BE FINE.
Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
// SHOW LINKS //
Sam Altman Blog Post: https://blog.samaltman.com/reflections
Head of OAI "Mission Alignment" warns to take AGI seriously: https://x.com/jachiam0/status/1875790261722477025
OpenAI Agents Launching This Month?: https://www.theinformation.com/articles/why-openai-is-taking-so-long-to-launch-agents?rc=c3oojq
Satya Nadella says AI scaling laws are "Moore's Law at work again": https://x.com/tsarnick/status/1876738332798951592
Derek Thompson Plain English: https://www.theringer.com/podcasts/plain-english-with-derek-thompson/2025/01/07/the-big-2025-economy-forecast-ai-and-big-tech-nuclears-renaissance-trump-vs-china-and-whats-eating-europe
Digits: $3k Supercomputer: https://www.theverge.com/2025/1/6/24337530/nvidia-ces-digits-super-computer-ai
New GFX Cards including $2k 5090: https://www.theverge.com/2025/1/6/24337396/nvidia-rtx-5080-5090-5070-ti-5070-price-release-date
Cosmos: NVIDIA's World Model: https://x.com/rowancheung/status/1876565946124341635
DeepSeek 3 Crushes Open Source Benchmarks: https://www.msn.com/en-us/news/technology/deepseek-s-new-ai-model-appears-to-be-one-of-the-best-open-challengers-yet/ar-AA1wxkSg?ocid=TobArticle
Oopsie File: Meta Deletes AI Characters After Backlash: https://www.cnn.com/2025/01/03/business/meta-ai-accounts-instagram-facebook/index.html
Minimax Subject Reference: https://x.com/madpencil_/status/1876289286783615473
Famous Science Rappers: https://youtu.be/B56rwm2sn7w?si=hD1ankVpWvALHAN5
Science Corner: METAGENE-1: https://x.com/PrimeIntellect/status/1876314809798729829
HeyGen Works With Sora -- VERY Good LipSync: https://x.com/joshua_xu_/status/1876707348686995605
GameStop of Thrones: https://www.reddit.com/r/aivideo/comments/1htvzzc/gamestop_of_thrones/
Simulation Clicker (not sure this is AI as much): https://x.com/nealagarwal/status/1876292865929683020
Torso 2 In the Kitchen: https://x.com/clonerobotics/status/1876732633771548673
Roborock's Saros Z70: https://x.com/rowancheung/status/1876565471887085772
Halliday Smart Glasses: https://x.com/Halliday_Global/status/1871571904194371863

TeknoSafari's Podcast
Let's Get the Kids Off the Dance Floor! IT'S COMING! This Week in AI

TeknoSafari's Podcast

Jan 2, 2025 · 36:40


While Google VEO is at the top, things are changing! With SORA's decline, the Google WHISK era begins. GROK's tweets explain this shift. Are we saying goodbye to hashtags? Minimax's music-01 is coming as a rival to SUNO. NVIDIA draws attention with its $250 AI computer. GENESIS offers a physics-accurate 4D experience, and ChatGPT TASK is on its way! The latest AI news every morning. OpenAI and Gemini hold first place in the AGI race, while X has a 2% share. ChatGPT is now on WhatsApp! Those who reach it first get ahead. Two new features from Midjourney: PATCHWORK and MOODBOARD. Pika 2.0 is Whisk's counterpart in the video world. Don't miss our video for the latest developments in the AI world!
30.27 Google VEO at the top. SORA fell flat
06.35 Google WHISK kicked off a new era
13.36 GROK explains via tweets. The hashtag is dead
09.50 Minimax's `music-01` rivals SUNO
12.10 A $250 AI computer from NVIDIA
15.22 GENESIS: physics-accurate 4D
16.47 ChatGPT TASK is coming. AI news every morning
17.45 OpenAI and Gemini lead the AGI race; X at 2%
19.07 ChatGPT came to WhatsApp. Those who reach it first get ahead
21.43 Two moves from Midjourney: PATCHWORK and MOODBOARD
22.56 Pika 2.0 is Whisk's video counterpart

The Next Wave - Your Chief A.I. Officer
Ranking 13 Of The Most Popular Ai Video Tools (Q4 2024 Tier List)

The Next Wave - Your Chief A.I. Officer

Dec 31, 2024 · 67:36


Episode 39: How are AI video tools revolutionizing content creation? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by Tim Simmons (https://x.com/theomediaai), founder of Theoretically Media, to delve into the latest advancements in AI video tools. This episode ranks 13 of the most popular AI video tools of Q4 2024, discussing their features, strengths, and weaknesses. They explore the capabilities of tools like Sora, Runway, and Adobe Firefly, and predict future developments in AI integration. The conversation highlights the evolving landscape of AI video generation and its impact on content creation. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
(00:00) Pika struggles with customization; requires frequent rerolls.
(06:49) Veo ranks S tier for generating videos.
(13:51) Leaving Sora in category S, discussing Turbo.
(20:21) You write scripts; creators adapt them independently.
(25:40) Prompts reinforce ideas, rerun if needed.
(26:51) InVideo generates videos from concepts automatically.
(33:28) Integrating new technology requires complex adaptation.
(40:50) Runway: Advanced camera controls, fast image-to-video.
(45:58) Adobe's generative tools: Occasional use, specific purposes.
(48:08) Firefly and Generative Fill integrate into Adobe.
(57:19) AI models continually improve, likely accelerating progress.
(59:09) Focus on storytelling, not just technical details.
—
Mentions:
Tim Simmons: https://www.linkedin.com/in/theoreticallymedia/
Theoretically Media: https://www.youtube.com/@TheoreticallyMedia
Pika: https://pika.art/
Sora AI: https://sora.com/
Runway AI: https://runwayml.com/
Adobe Firefly: https://www.adobe.com/sensei/generative-ai/firefly.html
Midjourney: https://www.midjourney.com/
Minimax: https://haylouai.com/minimax
Nvidia: https://www.nvidia.com/
Cling: https://cling.com/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
—
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June. Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!
Full YouTube Episode with Slides/Charts. Like and subscribe and hit that bell to get notifs!
Timestamps
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions
Transcript
[00:00:00] Welcome to the 100th Episode!
[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.
[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.
[00:00:19] Alessio: Yeah, I know.
[00:00:19] Reflecting on the Journey
[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round when we first started, which we didn't like, and we tried to change the question.
The answer
[00:00:32] swyx: was Cursor and Perplexity.
[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else? Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had, like, Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.
[00:00:47] AI Engineering: The Rise and Impact
[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side. Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your rise of the AI engineer posts just kind of gave people somewhere to congregate, and then the AI Engineer Summit. And that's why when I look at our growth chart, it's kind of like a proxy for the AI engineering industry as a whole, which is almost like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth, or did you expect it would take longer for, like, the AI engineer thing to become something everybody talks about today?
[00:01:32] swyx: So, the sign that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, you know, how quickly it could happen, and obviously there's a chance that I could be wrong.
[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign.
But there's enough people that have defined it. You know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big. So I think it's kind of arrived as a meaningful and useful definition.
[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happened behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.
[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there, because most of the successful definitions are necessarily underspecified, and it's actually useful to have different perspectives, and you don't have to specify everything from the outset.
[00:02:45] Alessio: Yeah, I was at, um, AWS re:Invent, and the line to get into, like, the AI engineering talk, so to speak, which is, you know, applied AI and whatnot, was like, there are like hundreds of people just in line to go in.
[00:02:56] Alessio: I think that's kind of what enables people, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on the Substack.
[00:03:11] Alessio: But yeah, it's been a heck of a two years.
[00:03:14] swyx: Yeah.
[00:03:15] Latent Space Live and AI Conferences
[00:03:15] swyx: You know, I was trying to view the conference as like, so NeurIPS is, I think, like 16,000, 17,000 people. And the Latent Space Live event that we held there was 950 signups, I think. The AI world, the ML world, is still very much research heavy.
And that's as it should be, because ML is very much in a research phase.
[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious. Like, it'll always be low status, because at the end of the day, you're manipulating APIs or whatever.
[00:03:51] swyx: Yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference, and basically everything I do. And I think we'll talk about the trends here that apply. It's just very strange. So, like, there's a mix of keeping on top of research while not being a researcher, and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is an ML research conference, and I'm like, well, yeah, I mean, we're not going to understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.
[00:04:32] swyx: And then actually, like, when I talk to the researchers, they actually get very excited, because they're like, oh, you guys actually care about how this goes into production, and that's what they really, really want. The measure of success previously was just peer review, right? Getting 7s and 8s at their, um, academic review conferences. And stuff like citations is one metric, but money is a better metric.
[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2,200 people on the live stream or something like that. So [00:05:00] I tried my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan.
Yeah, and it was in the chat on YouTube too.[00:05:06] swyx: I would say that I actually also created[00:05:09] swyx: Latent Space Live in order to address flaws that I perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, job market, right? Like, literally, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info, it's not great, because you have to read between the lines and bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is, domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like, I think ICML had a position paper track, and NeurIPS added a benchmarks and datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing, and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post-transformers, synthetic data, small models, and agents. And then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward. 
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it, but he was pro scaling. And I think everyone who is, like, in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went up on stage and then said pre-training has hit a wall and data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that the consensus kind of going in was that we're not done scaling, like, you should believe in the bitter lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.[00:07:34] swyx: He said that, um, we're pre-training into a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown all saying variations of the same thing: that we have hit some kind of wall in the status quo of what scaling large pre-trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing for people is what they're either calling inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense, because calling it test time has a very pre-training bias, implying that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true. Right. Yeah. So, I quite agree that 
OpenAI seems to have adopted, or the community seems to have adopted, this terminology of ITC instead of TTC. And that makes a lot of sense, because, like, now we care about inference, even right down to compute optimality. Like, I actually interviewed this author who revisited the Chinchilla paper.[00:08:31] swyx: The Chinchilla paper is about compute-optimal training, but what is not stated in there is that it's pre-training compute-optimal training. And once you start caring about inference-compute-optimal training, you have a different scaling law, one that we did not know last year.[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need.[00:08:49] Alessio: Like, he had the bet with Sasha. So I'm curious, like, he doesn't believe in scaling, but he thinks the transformer, I wonder if he still does.[00:08:56] swyx: So he, obviously everything is nuanced, and, you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still believes that we can scale more.[00:09:04] swyx: Uh, he just assumed the character to be very game for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a sample of, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides. 
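For reference, the Chinchilla result being discussed can be sketched as follows. This is a sketch from memory, not a quote from the interview: the fitted exponents are approximate, and the inference-aware variant is a paraphrase of the general idea rather than the specific paper's formulation.

```latex
% Chinchilla-style pre-training loss, for model size N and data size D
% (E, A, B, \alpha, \beta are empirically fitted constants):
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad C \approx 6ND
% Minimizing L under a fixed pre-training compute budget C gives roughly
N_{\mathrm{opt}} \propto C^{0.5}, \qquad D_{\mathrm{opt}} \propto C^{0.5}
% Once expected inference tokens are added to the compute budget, the
% optimum shifts toward smaller N trained on more D -- the "different
% scaling law" referred to above.
```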
And I've been looking for sort of end-of-year recaps from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO on LMSys in 2023 used to be 1200. And now everyone is at least at, uh, 1275 in their ELOs, and this is across Gemini, ChatGPT, [00:10:00] Grok, 01.[00:10:01] swyx: ai, with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier zero frontier, and then there's, like, a tier one, which is everything else. Tier zero is extremely competitive. It's effectively now a three-horse race between Gemini, uh, Anthropic, and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp: estimates of OpenAI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched in the middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic very, very aggressively. The more recent one is Gemini. 
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you count.[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later. But yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that there are mathematically going to be more of.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like, basically from 2022, 2023 going into 2024, OpenAI has gone from 95 percent market share to somewhere between 50 and 75 percent market share.[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model,[00:12:32] Alessio: if it's API, because I think it's all credit card spend. Well, but the credit card doesn't say which model. Maybe when they do expenses, they upload the PDF, but yeah, the Gemini shift I think makes sense. I think that was one of my main 2024 takeaways, that, like, the best small model companies are the large labs, which is not something I would have thought. I thought the open source kind of like long tail would be the small model players.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like, so small model here for Gemini is 8B, [00:13:00] right? Uh, mini. 
We don't know what the small model size is, but yeah, it's probably in the double digits, or maybe single digits, but probably double digits. The open source community has kind of focused on the one to three B size.[00:13:11] swyx: Mm-hmm. Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and if that is small for you, then that's great. It makes sense that we have a range for small now, which is, like, maybe one to five B. Yeah, I'll even put that at the high end. And so this includes Gemma from Google as well, but also includes the Apple Foundation models, which I think are 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think at the start, small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide where I slightly disagree with Sarah. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this, because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on, like, one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant? 
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So with Llama 3. [00:15:00] 3 launched, they only launched the 70B because they used 405B to distill the 70B.[00:15:03] swyx: So I don't know if, like, open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap, and they should.[00:15:22] swyx: I really wish them success, but you cannot use a chart that is nearing 100, a saturating chart, and say, look, the distance between open source and closed source is narrowing. Of course it's going to narrow, because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference time compute is bad for open source just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to, like, actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. Because, like, it's not a static artifact that you're putting the compute in. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama 4. 
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than $6.6 billion for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically, like, it's very convenient that we're not getting GPT-5, which would have been a larger pre-train requiring a lot of upfront money. And [00:17:00] instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there, because you can directly attribute it to, like, oh, you're using this more.[00:17:12] swyx: Therefore you pay more of the cost, and I'll just slap a margin in there. So, like, that lets you control your gross margin and, like, tie your spend, or your sort of inference spend, accordingly. And it's just really interesting that this change in the sort of inference paradigm has arrived exactly at the same time that the funding environment for pre-training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so, like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the Mixtral price fights, you know, and I think now it's almost like there's nowhere to go. Like, you know, Gemini Flash is basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think that $2,000 tier will come.[00:18:05] Alessio: Yeah, I know. 
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for the Pro ChatGPT.[00:18:12] Alessio: Just to try. I just want to see what it looks like to spend a thousand dollars a month on AI.[00:18:17] swyx: Yes. Yes. I think if your job is, at least, AI content creator or VC or, you know, someone whose job it is to stay on top of things, you should already be spending, like, a thousand dollars a month on stuff.[00:18:28] swyx: And then, obviously, it's easy to spend, hard to use. You have to actually use it. The good thing is that actually Google lets you do a lot of stuff for free now. So, like, Deep Research, that they just launched, uses a ton of inference, and it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things once they had flows, because I liked the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy voice. Yeah, I think once [00:19:00] they get advanced-voice-mode-like capability. Today it's still, like, speech-to-text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for, like, reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've covered a lot of stuff. Uh, yeah, I, you know, I think we will go over the individual, uh, talks in a separate episode. Uh, I don't want to take too much time with, uh, this stuff, but suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all, like, the audience voting for what they wanted. And then I just invited the best people I could find in each area, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one. 
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on, like, eight of the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. Number six, which is having agents learn more about the environment, has been super interesting to us as well, just to think through. Because, yeah, how do you put an agent in an enterprise, where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that?[00:20:23] Alessio: So, yeah, there's no indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented. But people know them based on how they've been doing it, you know. So I think there's almost this, like, you know, institutional knowledge. Yeah, the boring word for it is kind of like business process extraction.[00:20:38] Alessio: It's like, how do you actually understand how these things are done? I see. Um, and I think today the problem is that the agents most people are building are good at following instructions, but are not as good at, like, extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing,[00:20:55] Alessio: I thought it was pretty good. I mean, we'll link it in the show notes, but I think the main [00:21:00] focus was like, how do you use ML to optimize the systems, instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like, he's doing a lot of that with Featherless AI.[00:21:12] swyx: 
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems and then systems feedback into ML. And I think this ties in with Lubna's talk.[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 in the pre training side, the post training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he'd spend a lot of time talking about alpha chip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay. We'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like bonus for late in space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high frequency way to write stuff. Like that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since. The blip. And we know what he saw in 2014. We know what he saw in 2024. We think we know what he sees in 2024. 
He gave some hints, and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon. OpenAI [00:23:00] is publishing emails from Sam's, like, his personal text messages to Shivon Zilis. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient, in 2016, to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota, we need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think, like, that was the whole idea of almost like the RL, and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like, with RL, you can get very good at specific things, but then you can't really, like, generalize as much. And I [00:24:00] think the language models are like the opposite, which is, like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning, um, announcement too, and all of that. But yeah, I think, like, scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on, like, the pre-training-is-over thing that people love to tweet. 
I think the point of the talk was, like, everybody, we're scaling these chips, we're scaling the compute, but, like, the second ingredient, which is data, is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily pre-training is over. It's kind of like, what got us here won't get us there. In his email, he predicted, like, 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil fuel of AI [00:25:00] analogy. It's kind of like, you know, the low-background tokens thing. So the OpenAI reinforcement fine-tuning is basically like, instead of fine-tuning on data, you fine-tune on a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's, like, task driven. And I think people have tasks to do, they don't really have a lot of data. So I'm curious to see how that changes, how many people fine-tune. Because I think this is what people run into. It's like, oh, you can fine-tune Llama. And it's like, okay, where do I get the data[00:25:27] Alessio: to fine-tune it on, you know? So it's great that we're moving the thing forward. And then I really liked that he had this chart where, like, you know, the brain mass and the body mass thing, it's basically like mammals that scaled linearly by brain and body size, and then humans kind of, like, broke off the slope. So it's almost like maybe the mammal slope is like the pre-training slope,[00:25:46] Alessio: and then the post-training slope is like the human one.[00:25:49] swyx: Yeah. I wonder what the, I mean, we'll know in 10 years, but I wonder what the y-axis is for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here. 
Yeah, and then he had, you know, what comes next: agents, synthetic data, inference compute. I thought all of that was, like, expected.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this nice little list. Yeah, that was really[00:26:20] nice.[00:26:20] swyx: Uh, of, uh, of, like, all the, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and they were just gone. Like, everyone just picked them up. Because people are dying for, like, a little guidance and visualizations. And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, generalist, our Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is Secret Collusion Among AI Agents: Multi-Agent Deception via Steganography. I tried to go to NeurIPS in order to find these kinds of papers, because, the real reason is, like, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets, because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors. But there's something like 10,000 others. 
All these papers out there that, you know, are just people's work that they did over the year and failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was, like, all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, you know, it encodes a [00:28:00] different message than the surface text. But something I've always emphasized is, to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order. And it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding, GPT-2, Llama 2, Mixtral, GPT-3.5 have zero capability, and then suddenly GPT-4 does.[00:28:40] swyx: And this is the kind of Jason Wei type emergence properties that people kind of look for. I think what made this paper stand out as well: so he developed the benchmark for steganography collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on an encoding-decoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] But a Schelling point means very, very low or almost no coordination. So, for example, if the only message I give you is meet me in New York, and you're not told where or when, you would probably meet me at Grand Central Station. 
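The first-letter scheme described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual protocol; in the paper, the cover text would be model-generated rather than hand-written, and only the decode side is shown here.

```python
# Toy sketch of the "first letter of every word" steganographic channel.
# The encoder's job (choosing cover words whose initials spell the hidden
# message) is what a colluding LLM would do; we only decode here.

def decode_acrostic(cover_text: str) -> str:
    """Recover a hidden message from the first letter of each word."""
    return "".join(word[0] for word in cover_text.split())

cover = "Having every lunch packed"
hidden = decode_acrostic(cover)
print(hidden)  # -> "Help"
```

Any other agreed-upon convention (every n-th character, capitalization patterns) works the same way, which is why the low-coordination Schelling point case is the interesting one.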
Grand Central Station is a Schelling point.[00:29:16] swyx: And it's probably sometime during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's, like, one of the hardest things about NeurIPS. It's, like, the long tail. I[00:29:41] swyx: found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is all the grad students are working on. So, like, there was a datasets track, and then I was looking around, like, I was like, you don't need a datasets track, because every paper is a datasets paper. And so datasets and benchmarks, they're kind of flip sides of the same thing. So yeah. Cool. Yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the sort of big model people walk around and pick the ones that they like, and then they use them in their models. And that's kind of how it develops. I feel like, um, last year you had people like Haotian Liu, who worked on LLaVA, which is, take Llama and add vision.[00:30:47] swyx: And then obviously, actually, I hired him, and he added vision to Grok. Now he's the vision Grok guy. 
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was, like, the [00:31:00] Monarch Mixer, I think, was the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of, like, an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that, or maybe it's just a reflection of how I have changed in terms of how I view papers.[00:31:29] swyx: So, like, in my estimation, two of the best papers this year for datasets are DataComp and RefinedWeb, or FineWeb. These are two actually industrially used papers, not highlighted for orals. I think DCLM got the spotlight, FineWeb didn't even get the spotlight. So, like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the schedule-free optimizer. This is the schedule-free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. 4 Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the 4 Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left: journalists, writers, artists, anyone who owns IP, basically. New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R.R. Martin. 
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale[00:33:09] swyx: AI published a paper saying that synthetic data doesn't work. Surprise, surprise, Scale AI is the leading vendor of non-synthetic data. Only[00:33:17] Alessio: cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons that are addressed in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the reinforcement fine tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine tuning a smaller model from like a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of like the Rephrasing the Web thing that Apple also did. But yeah, I think it'll be useful. 
I think like whether or not that gets us the big [00:34:00] next step, I think that's maybe like TBD, you know, I think people love talking about data because it's like a GPU poor thing, you know, I think, uh, synthetic data is like something that people can do, you know, so they feel more opinionated about it compared to, yeah, the optimizers stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club, the STaR series of papers. So that's STaR, Q-star, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically it feels like a way to create valid synthetic data for them to fine tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so like, it's very funny that basically all the data quality wars between, let's say, the music industry or like the newspaper publishing industry or the textbooks industry and the big labs,[00:35:11] swyx: it's all of the pre-training era. And then like the new era, like the reasoning era, like nobody has any problem with all the reasoning, especially because it's all like sort of math and science oriented with, with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using O1, and I would say like for summarization and creative writing and instruction following, I think it's underrated. I started using O1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics. 
You know, I can actually say like, I think one of the O1 Pro demos.[00:35:46] swyx: All of these things that Noam was showing was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, literally not even token level, character level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so no surprises when I ask it to rhyme, uh, and to, to create song lyrics, it's going to do that very much better than in previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of O1, but then they want us to, like, they're getting sued for using other publishers' data, you know, but then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes. Yeah, I mean, OpenAI has[00:36:32] swyx: many ways to punish people without taking them to court. Already banned ByteDance for distilling their, their info. And so anyone caught distilling the chain of thought will be just disallowed to continue on, on, on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chain of thoughts are pretty well hidden. Like you have to work very, very hard to, to get it to leak. And then even when it leaks the chain of thought, you don't know if it's, if it's [00:37:00] The bigger concern is actually that there's not that much IP hiding behind it, that Cosine, which we talked about, we talked to him on Dev Day, can just fine tune 4o[00:37:13] swyx: to beat o1. Claude Sonnet so far is beating o1 on coding tasks without, at least o1 preview, without being a reasoning model, same for Gemini Pro or Gemini 2.0. So like, how much is reasoning important? How much of a moat is there in this? Like, all of these are proprietary sorts of training data that they've presumably accumulated.[00:37:34] swyx: Because even DeepSeek was able to do it. And they had, you know, two months' notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be interesting because there'll be a lot of noise from people who say they have inference time compute and actually don't because they just have fancy chain of thought.[00:38:00] swyx: And then there's other people who actually do have very good chain of thought. And you will not see them on the same level as OpenAI because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction working on like these models that were like open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market has kind of plummeted because people don't want to be, they want to be GPU free. They don't even want to be poor. They just want to be, you know, completely without them. Yeah. How do you think about this war? 
You[00:38:52] swyx: can tell me about this, but like, I feel like the, the appetite for GPU rich startups, like the, you know, the, the funding plan is we will raise 60 million and [00:39:00] we'll give 50 of that to NVIDIA.[00:39:01] swyx: That is gone, right? Like, no one's, no one's pitching that. This was literally the plan, the exact plan of like, I can name like four or five startups, you know, this time last year. So yeah, GPU rich startups gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think like, the GPU ultra rich, the GPU ultra high net worth is still going. So, um, now we're, you know, we had Leopold's essay on the trillion dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, xAI very famously, you know, Jensen Huang praising them for being best boy number one in spinning up a 100,000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: Like you don't even know what it's going to, what you're going to use it for. You just, you just need it. And it makes sense that if, especially if we're going into more researchy territory than we are. So let's say 2020 to 2023 was [00:40:00] let's scale big models territory because we had GPT-3 in 2020 and we were like, okay, we'll go from 175B[00:40:05] swyx: to 1.8T. And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not. 
Like, like you want something else that, that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the old, the previous paradigm are still ramping up,[00:40:58] swyx: um,[00:40:58] swyx: uh, pretty aggressively.[00:40:59] swyx: So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed both Together and Fireworks and Replicate. Uh, we haven't done any scale yet. But I think Amazon, maybe kind of a sleeper one, Amazon, in a sense of like they, at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do these big reserved instance contracts and now they've got to use their money. That's why so many startups[00:41:37] Alessio: get bought through the AWS Marketplace so they can kind of bundle them together and get preferred pricing.[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?[00:41:57] Alessio: Like, if you're GPU poor, you should just use the [00:42:00] cloud. Yes, you know, and I think there might be a future once we kind of like figure out what the size and shape of these models is where like the tinybox and these things come to fruition where like you can be GPU poor at home. 
But I think today it's like, why are you working so hard to get these models to run on very small clusters when it's so cheap to run them?[00:42:21] Alessio: Yeah, yeah,[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, like probably the most deep tech thing they've done this year is DisTrO or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de-emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say a lot more on device. We are, I now have Apple Intelligence on my phone.[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multimodal.[00:43:25] Alessio: Yeah, the notification summaries are so-so in my experience.[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, they're still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool. 
And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab or a GPU cloud.[00:44:10] swyx: GPU cloud, it would be Suno. Suno, Ramp has it rated as one of the top ranked, fastest growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR, and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on, on Modal, uh, who we've also talked to on, on the podcast.[00:44:31] swyx: The other one would be Bolt, straight cloud wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from our 8 million that we put on the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're like close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle. Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,[00:45:22] swyx: yeah.[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the game plan was. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war. 
And this one, we never had text to video in the first version, which now is the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just come out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah. I[00:45:54] swyx: think it's generally available now, you can go to Sora.[00:45:56] swyx: com and try it. Yeah, they had[00:45:58] Alessio: the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? video-01-live?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from Minimax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Hold me to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.[00:46:45] Alessio: Yeah, I mean,[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is the multimodality war? The multimodality war is, do you specialize in a single modality, right? Or do you have a God model that does all the modalities? So this is [00:47:00] definitely still going, in a sense of ElevenLabs, you know, now a unicorn, Pika Labs doing well, they launched Pika 2.
And then there's the big labs who are doing the sort of all-in-one play.[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of like mask here and then inpaint there and all that, all that stuff. This is small model nonsense. Big model people are like, huh, we just do everything in the transformer.[00:48:21] swyx: This is the multimodality war, which is, do you, do you bet on the God model or do you string together a whole bunch of, uh, small models like a, like a chump. Yeah,[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the moat is kind of like, maybe, you know, people say Black Forest, the Black Forest models are better than Midjourney on a pixel-by-pixel basis. But I think when you put it, put it together, have you tried[00:48:53] swyx: the same problems on Black Forest?[00:48:55] Alessio: Yes. 
But the problem is just like, you know, on Black Forest, it generates one image. And then it's like, you got to [00:49:00] regenerate. You don't have all these like UI things. Like what I do, no, but it's like a time issue, you know, it's not like Midjourney.[00:49:06] swyx: Call the API four times.[00:49:08] Alessio: No, but then there's no, like, variations.[00:49:10] Alessio: Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah,[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out, uh, on top of the image arena that, uh, Artificial Analysis has done, has apparently, uh, taken Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them I think in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft V3 is now beating Flux 1.1. Which is very surprising [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own. 
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multimodality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Or at least the Hollywood labs will get full Sora. We haven't seen video to audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be solved.[00:51:23] swyx: I would say that Gemini's approach compared to OpenAI, Gemini seems, or DeepMind's approach to video seems a lot more fully fledged than OpenAI's. Because if you look at the ICML recap that I published that so far nobody has listened to, um, that people have listened to it. It's just a different, definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in there. Uh, so, so DeepMind is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years' advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] diffusion transformers last year, you know, when they hired, uh, Bill Peebles.[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while one, they cherry pick their videos. 
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched will do very well is because they have all this background work in video that they've done for years.[00:52:22] swyx: Like, like last year's NeurIPS, I already was interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG Ops, formerly known as[00:52:39] swyx: the RAG Ops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember publishing them separately. You say LangChain, LlamaIndex is still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it versus kind of hopping in between things versus, you know, for example, CrewAI, not really growing as much.[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some[00:53:4

Mountaintop Podcast
#20 My AI Year in Review for 2024, What I Expect for 2025, and a Personal Update for the New Year (Lots of Change)

Mountaintop Podcast

Play Episode Listen Later Dec 31, 2024 35:25


In the final episode of the KI•POWERBOOST PODCAST for 2024, I look back on the year and its impressive developments in AI. I talk about the exciting progress at OpenAI, including the 12 Days of OpenAI campaign, the o3 release, and the challenges posed by legal disputes as well as the transformation into a commercial company. I also look at the developments at Google, Meta, and Microsoft, which made their mark with projects like Gemini, AI-powered social media profiles, and AI agents; Microsoft stands out through its strong access to the business world, though it lags behind technologically. Further focal points are the rapid advances in video and music generation, particularly through tools like Kling AI, Minimax, and Suno, as well as the comparatively stagnant developments in image generation. Finally, I share insights into my personal journey, including my decision to focus entirely on my own projects, consulting, and content creation from 2025 onward, in order to have more time for my family and the KI•POWERBOOST ACADEMY. With a look at the challenges and opportunities AI will offer in the coming years, I wish my listeners a successful new year. - Maxi's and my 10 AI predictions for 2025 in the KI•TALK podcast: https://open.spotify.com/episode/6xF6GX0BULf94d5VLgvy6I?si=f5d220864ca84288 Become part of the KI•POWERBOOST ACADEMY and become an AI professional: https://niklasvolland.de/ki-powerboost-academy/

Midjourney : Fast Hours
130k YouTuber Spills Tea on Current State of AI Tools

Midjourney : Fast Hours

Play Episode Listen Later Dec 15, 2024 73:44


In an insightful conversation with Theoretically Media's Tim Simmons (130,000+ YouTube subscribers), hosts Drew Brucker and Rory Flynn dive deep into the current state of AI creative tools. Tim shares his fascinating transition from Hollywood production to becoming a leading voice in the GenAI space, offering unique perspectives on how these tools are reshaping creative workflows. The episode explores candid takes on Midjourney's evolution, Sora's recent launch, and the real impact of AI on advertising – including an insider look at the controversial Coca-Cola campaign. Tim breaks down the actual value of features like Midjourney's editor, character consistency techniques, and the truth about AI video generation tools. From practical tips about YouTube growth to honest discussions about AI's current limitations, this episode delivers valuable insights for creators navigating the rapidly evolving AI landscape. Whether you're a professional exploring AI integration or a creator curious about these tools, Tim's experience across both traditional media and AI offers a unique perspective on where this technology is truly headed in 2025 and beyond. --- ⏱️ Fast Hour [00:00] Meet Tim Simmons [03:45] From Hollywood to AI [19:45] Understanding Midjourney's Edge [31:30] Pinokio Tool Breakdown [42:45] Sora First Impressions [52:30] Creative Control Debate [01:02:00] YouTube Growth Insights [01:09:45] Future of AI Creation --- Key Takeaways
- Tim's YouTube channel focuses on AI tools from a creative perspective.
- He transitioned from Hollywood to teaching and then to GenAI.
- The early days of AI tools were filled with wonder and possibility.
- Midjourney has become a daily driver for Tim's creative work.
- AI tools are evolving rapidly, impacting various creative fields.
- Tim emphasizes the importance of control in AI-generated content.
- The adoption of AI by graphic designers has shifted from skepticism to acceptance. 
- AI in advertising has sparked both controversy and interest.
- Tim's background in Hollywood informs his approach to AI tools.
- The future of AI tools like Midjourney is promising and exciting.
- 3D controls are becoming increasingly important in AI tools.
- The Editor feature enhances user experience and efficiency.
- Character reference can be challenging but is essential for consistency.
- Pose control innovations allow for more creative flexibility.
- Creative problem-solving is crucial when using AI tools.
- Navigating infinite creative options can lead to unexpected results.
- Sora's blend and remix features offer new creative possibilities.
- User experience can vary significantly based on familiarity with tools.
- AI tools require a balance between control and exploration.
- Continuous experimentation is key to mastering AI-generated content.
- The more you process Sora footage, the grainier it becomes.
- Sora offers unique creative possibilities not found in other platforms.
- Prompting techniques in AI tools need to evolve for better outputs.
- Minimax is a popular choice among AI video tools.
- AI video tools should focus on their unique strengths rather than imitating Hollywood.
- Understanding the basics of 3D software will be crucial for future creators.
- YouTube's algorithm can be unpredictable; consistency is key.
- Engaging with your audience builds a loyal community.
- Authenticity in content creation fosters trust and connection.
- The landscape of AI tools is rapidly evolving, requiring continuous learning.

网事头条|听见新鲜事
MiniMax's Hailuo AI (overseas edition) tops the October AI product rankings

网事头条|听见新鲜事

Play Episode Listen Later Nov 14, 2024 0:28


网事头条|听见新鲜事
MiniMax to release its first product benchmarked against GPT-4o

网事头条|听见新鲜事

Play Episode Listen Later Oct 27, 2024 0:22


Midjourney : Fast Hours
Midjourney: The Ultimate Creative Multiplier w/ Billy Boman

Midjourney : Fast Hours

Play Episode Listen Later Oct 20, 2024 67:11


Midjourney is the ultimate creative multiplier. In this episode, Drew Brucker and Rory Flynn welcome Billy Boman, a Swedish creative pro with a diverse background in fashion design, UX/UI, and AI. Billy shares his gen AI journey from fashion to tech, highlighting the impact of AI tools like Midjourney on his creative process. The conversation delves into the collaborative nature of working with AI, the evolution of storyboarding in design, and the various tools that enhance creativity in the digital age. Billy emphasizes the importance of iteration and the surprises that AI can bring to the creative process. In this conversation, the trio explores the evolving landscape of AI in creative fields, discussing the significance of image quality in video creation, the emotional resonance of AI-generated art, and the democratization of creativity. They delve into the challenges of gatekeeping in the creative industry, the entrepreneurial mindset required to embrace AI tools, and the importance of curiosity in fostering innovation. The discussion also touches on the future implications of AI in terms of legal and copyright issues, emphasizing the need for creatives to adapt and explore new possibilities. They also touch on the complexities of AI in video generation, the transformative nature of AI in creative fields, and the implications for employment and copyright. They discuss the rapid evolution of AI technology, the need for regulation, and the exciting future of AI in production, emphasizing the importance of human involvement and creativity in leveraging AI tools effectively. Tools mentioned: Midjourney, Runway, Minimax, Krea, Luma, Pika, Flux, Stable Diffusion, ComfyUI, Magnific, and others. 
--- ⏱️ Midjourney Fast Hour [00:00] Creative Journey [03:45] Career Shift [08:00] Midjourney Impact [19:00] AI Teamwork [24:30] Fluid Storyboarding [31:00] Achieving Emotion w/ AI [39:15] Innovative Thinking [44:45] AI Assets [51:30] Legal Landscape [57:45] Creative Amplifier [01:02:00] Rapid Advancements [01:04:45] Production Revolution --- #genai #midjourneyai #midjourney #midjourneyv7 #midjourneyvideo #midjourney3d #midjourneyforbeginners #midjourneytutorial #creativeprocess #aiart #aiimagegeneration #aivideogenerator #aivideogeneration #aitools #videocreation #aistorytelling #creativity #storytelling #aicreativity #creativedemocratization

The Stephen Wolfram Podcast
Future of Science and Technology Q&A (September 13, 2024)

The Stephen Wolfram Podcast

Play Episode Listen Later Oct 15, 2024 71:52


Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: What research is essential for putting people on Mars? - Any comments on the future of arts and literature in the face of AI-related challenges? Will individual creative impulses forever be subjugated to AI? - How often do you find yourself thinking about the future of science and technology? Does this affect how you prioritize certain projects (say, wait five years because the tech will be better to handle it)? - Is there a chance we will ever have giant insects or animals akin to those that lived during the age of dinosaurs reappear? - How can we combine LLMs with first-generation AI algorithms like "MiniMax" and tree search? At the moment, LLMs can't even play tic-tac-toe. - Have you heard about AI reading minds through brain waves and fMRI, researched by Michael Blumenstein and Jerry Tang? - Have your thoughts on the future of education changed at all recently? - Would you ever go to Mars? - Are the challenges different from colonizing the bottom of the ocean, other than obvious logistics? - Given the uptick in robotics advances, including humanoid, I wonder if there will even be a point to sending humans to Mars anymore, beyond tourism. - Wasn't there a significantly higher percentage of O2 back then? - A pygmy Stegosaurus would be adorable! - I would not like to go to Mars. It seems boring. They don't even have a Starbucks. - How might the Physics Project help advance technologies like fusion power?
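One of the questions above contrasts LLMs with first-generation tree search, noting that LLMs can't reliably play even tic-tac-toe. For readers who want a concrete picture of what that older approach looks like, here is a minimal minimax sketch for tic-tac-toe (an illustrative addition, not material from the livestream):

```python
def winner(board):
    """Return 'X' or 'O' if a player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): X maximizes toward +1, O minimizes toward -1."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                      # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                         # ...then undo it
        if best is None or \
           (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# Perfect play from the empty board is a draw:
print(minimax([' '] * 9, 'X')[0])  # 0
```

Exhaustive search like this guarantees perfect play on small games, which is exactly the property the question suggests combining with LLMs.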

AI For Humans
Nobel Prizes For AI Scientists, OpenAI's New Canvas Tool & Huge AI Video Updates

AI For Humans

Play Episode Listen Later Oct 10, 2024 47:23


Join our Patreon: https://www.patreon.com/AIForHumansShow AI News: AI Scientists Geoffrey Hinton & Google DeepMind's Demis Hassabis each won Nobel Prizes this week for their AI work, OpenAI is kinda mad at Microsoft for not meeting demand for new servers and launched ChatGPT Canvas, Tesla's We Robot event will unveil new Robotaxis, Meta MovieGen shows off incredible-yet-unreleased AI video tools and Minimax drops an image-to-video update that kind of blew us away.  PLUS, AI MINION MAKER Y'ALL. Join the discord: https://discord.gg/muD2TYgC8f AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // SHOW LINKS //   Nobel Prize for AI (Geoffrey Hinton) https://www.nytimes.com/2024/10/08/science/nobel-prize-physics.html “I'm flabbergasted”  https://x.com/tsarnick/status/1843616586550390803 “I'm particularly proud that one of my students fired Sam Altman” https://x.com/tsarnick/status/1843874006770110550 Demis Hassabis Wins Nobel Prize in Chemistry https://x.com/NobelPrize/status/1843951197960777760 Elon talks about what it means to build “hyper intelligent AIs” https://x.com/tsarnick/status/1843409177521336688 Former Google CEO Eric Schmidt says energy demand for AI is infinite - we may as well build it ‘cause we're gonna die anyway! 
https://v.redd.it/z6k53fkntzsd1 OpenAI Says MSFT Not Moving Fast Enough To Supply Data Centers https://www.theinformation.com/articles/openai-eases-away-from-microsoft-data-centers?rc=c3oojq Tesla We, Robot Event To Unveil Robotaxis (THURSDAY 10am) https://www.reuters.com/business/autos-transportation/teslas-musk-heads-hollywood-unveil-his-robotaxi-face-long-list-questions-2024-10-08/ ChatGPT Canvas: OpenAI is SHIPPING https://openai.com/index/introducing-canvas/ Meta MovieGen: Future of Hollywood  https://x.com/Ahmad_Al_Dahle/status/1842188269557301607 New Minimax Image-To-Video Deep Dive https://youtu.be/7JZLLxV1AGc?si=akTSV8HBcHCj9tAq Runway First & Last https://x.com/runwayml/status/1843707595141644617 Princess Monoke AI Video Hate https://x.com/PJaccetturo/status/1843737222031519910 Neural Viz' The Church Of Z https://youtu.be/fmV5n8Gu9mQ?si=SGxyIGjgTAdSxaJg AI Job Application Bot https://x.com/rohanpaul_ai/status/1842712127556956230 Thoreo: Learning Tool With YT or PDFs https://thoreo.com/library/960823d4-f506-4908-b7ce-9d181a28a80f Gorilla Story https://x.com/rainisto/status/1843296900599996742 Glif Character Mix  https://glif.app/@Gasia/glifs/cm044ptbj0019f3kst0p8whaa Glif Animorph Lora https://glif.app/@angrypenguin/glifs/cm21ehd0q00009skli9tae3xp  

Girls with Grafts
A Fire Prevention Week Special: Commercial Fire Safety 101 with Steve Rice from Summit Fire & Security

Girls with Grafts

Play Episode Listen Later Oct 8, 2024 56:53


In this special Girls with Grafts episode, we're spotlighting Fire Prevention Week with Phoenix Partner Summit Fire & Security! Rachel sits down with Steve Rice, a seasoned expert with years of experience as the Operations Manager of the Installation Department at Summit Fire & Security, to discuss all things commercial fire safety. Steve shares practical information on how Summit Fire & Security works with commercial spaces to design, install, and maintain fire systems. He provides valuable insights on prevention systems, the importance of routine safety checks, and what you need to know to stay safe in commercial spaces. Whether at work, staying at a hotel, or going out to eat, this episode equips you with critical information to help you be prepared and protect what matters most. Fire Prevention Week 2024: This year's FPW campaign, “Smoke alarms: Make them work for you!™” strives to educate everyone about the importance of having working smoke alarms in the home. Learn more about FPW and how to keep you and your family safe by visiting: https://www.nfpa.org/events/fire-prevention-week   

Fine Time
UFO 50: Fifty Games Reviewed In (Almost) Fifty Minutes | Postgame Show

Fine Time

Play Episode Listen Later Oct 3, 2024 53:23


Andre became enamored with the indie sensation UFO 50, so he felt compelled to give each and every game a mini-review in one breezy episode. How's that sound? Enjoy! Twitter: @FineTimePodcast Andre on Bluesky: @pizzadinosaur.fineti.me [00:00] Intro - What Is UFO 50? [03:36] 01. Barbuta [04:21] 02. Bug Hunter [05:11] 03. Ninpek [06:11] 04. Paint Chase [07:18] 05. Magic Garden [08:01] 06. Mortol [08:55] 07. Velgress [09:52] 08. Planet Zoldath [10:39] 09. Attactics [11:30] 10. Devilition [12:14] Break Time: Music [13:09] 11. Kick Club [13:55] 12. Avianos [14:41] 13. Mooncat [15:20] 14. Bushido Ball [16:08] 15. Block Koala [17:04] 16. Camouflage [17:47] 17. Campanella [18:44] 18. Golfaria [19:24] 19. The Big Bell Race [20:10] 20. Warptank [20:59] Break Time: Inspiration and Originality [22:07] 21. Waldorf's Journey [22:54] 22. Porgy [23:54] 23. Onion Delivery [24:37] 24. Caramel Caramel [25:24] 25. Party House [25:58] 26. Hot Foot [26:31] 27. Divers [27:24] 28. Rail Heist [28:17] 29. Vainger [29:13] 30. Rock On! Island [29:47] Break Time: Unification In Concept [30:54] 31. Pingolf [32:06] 32. Mortol II [32:49] 33. Fist Hell [33:22] 34. Overbold [34:07] 35. Campanella 2 [35:13] 36. Hyper Contender [35:55] 37. Valbrace [36:44] 38. Rakshasa [37:42] 39. Star Waspir [38:46] 40. Grimstone [39:42] Break Time: Let Me Clear My Throat [42:08] 41. Lords of Diskonia [43:05] 42. Night Manor [43:53] 43. Elfazar's Hat [44:35] 44. Pilot Quest [45:28] 45. Mini & Max [46:29] 46. Combatants [47:05] 47. Quibble Race [48:03] 48. Seaside Drive [49:33] 49. Campanella 3 [50:24] 50. Cyber Owls [51:22] We Did It! Bye!

Let's Talk AI
# 182 - Alexa 2.0, MiniMax, Sutskever raises $1B, SB 1047 approved

Let's Talk AI

Play Episode Listen Later Sep 17, 2024 98:47 Transcription Available


Our 182nd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and Jeremie Harris. Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Sponsors: - Agent.ai is the global marketplace and network for AI builders and fans. Hire AI agents to run routine tasks, discover new insights, and drive better results. Don't just keep up with the competition—outsmart them. And leave the boring stuff to the robots

AI For Humans
OpenAI's Strawberry Two Weeks Out... But What Is it? Plus, More AI News!

AI For Humans

Play Episode Listen Later Sep 12, 2024 36:16


Join our Patreon: https://www.patreon.com/AIForHumansShow Join our Discord: https://discord.gg/muD2TYgC8f Breaking AI News: OpenAI's Strawberry is only two weeks away & we still don't know exactly what it is. Is there a newer model coming? Plus, Adobe's new Firefly AI video model looks good, Replit offers a cool full pipeline for AI coding, an AI prediction engine launches, and Oprah is taking on AI. Yes, that Oprah. All that plus a look at a cool new lip-reading AI, MiniMax showing human emotions very well, Gavin creating an AI football troll, and an AI co-host visit from Dr, uh sorry, Mr. Phil, who has some *very* special AI talents.  // SHOW LINKS // OAI Strawberry In Two Weeks https://www.theinformation.com/articles/new-details-on-openais-strawberry-apples-siri-makeover-larry-ellison-doubles-down-on-data-centers?rc=c3oojq $2k High End Subscription Price? https://nypost.com/2024/09/05/business/openai-could-charge-as-much-as-2k-a-month-for-high-end-subscriptions-report/ Decoder with Mike Krieger https://www.theverge.com/24237562/anthropic-mike-krieger-claude-ai-chatbot-artifact-web-decoder-podcast-interview Sam Trash Tweet https://x.com/sama/status/1833227974554042815 Nice Aunties on SORA https://x.com/niceaunties/status/1832421724287365273 Adobe “Commercially Safe” Firefly AI Video https://blog.adobe.com/en/publish/2024/09/11/bringing-gen-ai-to-video-adobe-firefly-video-model-coming-soon Google Founder Sergey Brin Admits They Were “Too Timid” To Deploy AI Models https://x.com/tsarnick/status/1833673836820365806 Replit Agents https://x.com/amasad/status/1831730911685308857 AI Prediction Engine https://x.com/DanHendrycks/status/1833152719756116154 Forecast  https://forecast.safe.ai/ Oprah Special With Sam Altman, Bill Gates & Marques Brownlee https://abc.com/news/1efd942d-61bb-4519-8a62-c4a8fce50792/category/1138628   The End of The Cop Files (What Happened to Tiggy) https://x.com/NeuralViz/status/1833154944729571702   French Film With Real People 
https://x.com/trbdrk/status/1831801373517869369   MiniMax Emotions https://x.com/EHuanglu/status/1833522650846793970   The Geoff Ryan Show https://www.tiktok.com/t/ZTF81mjAL/   Lip Reading AI https://www.readtheirlips.com/  

Rory Sutherland's On Brand
Samsung's Benjamin Braun on the Olympic sponsorship and minimax marketing

Rory Sutherland's On Brand

Play Episode Listen Later Sep 11, 2024 64:58


In this episode of 'On Brand with ALF and Rory Sutherland,' host Rory Sutherland, Vice-Chairman of Ogilvy UK, sits down with Benjamin Braun, the Chief Marketing Officer at Samsung Europe. The discussion covers a range of topics including Samsung's creative use of their Olympic sponsorship, marketing tactics and digital transformation. Braun highlights the success of gifting special edition gold phones to Olympians, which led to a 23% increase in demand for Flip 6 phones. They also explore Samsung's innovative products like foldable phones, AI-enabled ovens and eco-friendly initiatives like solar-powered TV remotes. The conversation delves into the importance of user-centric design and dual-purpose products, and how Samsung balances premium offerings with affordable technology to democratize access. Rory and Benjamin further discuss the future of AI, the impact of aesthetics in consumer choice and effective marketing strategies that blend minimax (minimal resources for maximum output) and maxmax (heroic marketing efforts). If you want to do business with the UK's leading brands, request an ALF Insight demo. Hosted on Acast. See acast.com/privacy for more information.

Monde Numérique - Jérôme Colombain
L'HEBDO (07/09/24): What if tech really was listening to us?

Monde Numérique - Jérôme Colombain

Play Episode Listen Later Sep 7, 2024 56:53


The best of this week's tech news: Cox Media Group and eavesdropping allegations, Telegram, Xavier Niel, Kamala Harris, X in Brazil, IFA Berlin, new space, free software, Wall-E

THIS WEEK'S TECH NEWS:
- Telegram: Pavel Durov defends himself but promises better moderation of the encrypted messaging app (07:15)
- Minimax, the new Chinese video-generating AI (17:31)
- Xavier Niel joins TikTok's board of directors (19:48)
- Why Kamala Harris uses wired earphones rather than Bluetooth

THE TRANSATLANTIC DEBRIEF with Bruno Guglielminetti (22:15):
- Cox Media Group: what if the eavesdropping suspicions were well founded?
- Police reports generated by AI
- X banned in Brazil
- Elon Musk unveils X TV

INNOVATION OF THE WEEK:
- A look at the innovations presented at the IFA trade show in Berlin (25:19)

THIS WEEK'S INTERVIEWS:

Tech&Co
Minimax: the new AI video generator – 04/09

Tech&Co

Play Episode Listen Later Sep 4, 2024 26:36


On Wednesday, September 4, François Sorel welcomed Frédéric Simottel, BFM Business journalist; Enguérand Renault, consultant at Image 7 and former journalist at Le Figaro; and Yves Maître, operating partner at Jolt Capital, consultant, and former CEO of HTC. They discussed Qualcomm's new processor, the AI video generator Minimax, the end of Orange's cloud collaboration with Huawei, and phone radio waves, on Tech & Co, la quotidienne, on BFM Business. The show airs Monday through Thursday and is also available as a podcast.

Cashflow Ninja
843: Christos Athanasiou: Tiny Homes Cashflow

Cashflow Ninja

Play Episode Listen Later Aug 12, 2024 48:41


The guest in this episode is Christos Athanasiou. Since 2006, Christos Athanasiou has owned and operated an NYC-based multidisciplinary boutique architectural design practice primarily focused on hospitality, with projects in the US, Europe, and Asia. As an expansion of that practice, he co-founded miniMAX in 2020, a prefab company uniquely positioned as a provider of custom accommodation units primarily for the lodging industry. miniMAX has manufacturing capabilities in both Europe and the US and can provide its products globally. In 2024, he co-founded Offside, a privately funded incubator and launch pad for transformative hospitality brands, focusing on environmentally sustainable and financially feasible expansion of tourism and commerce in nature settings and rural areas. Offside serves as an investment platform for those interested in niche hospitality and lodging solutions outside of major urban centers. What makes Offside distinct within the hospitality industry are the comprehensive solutions it offers, from conceptual brand development to onsite construction to operational strategies, all made possible through Athanasiou's personal and professional background and involvement in all facets of the hospitality industry during his 20-plus-year career. 
Interview Links: Minimax Living Website: https://minimaxliving.com/ Subscribe To Our Weekly Newsletter: The Wealth Dojo: https://subscribe.wealthdojo.ai/ Download all the Niches Trilogy Books: The 21 Best Cashflow Niches Digital: https://www.cashflowninjaprograms.com/the-21-best-cashflow-niches-book Audio: https://podcasters.spotify.com/pod/show/21-best-cashflow-niches The 21 Most Unique Cashflow Niches Digital: https://www.cashflowninjaprograms.com/the-21-most-unique-cashflow-niches Audio: https://podcasters.spotify.com/pod/show/21-most-unique-niches The 21 Best Cash Growth Niches Digital: https://www.cashflowninjaprograms.com/the-21-best-cash-growth-niches Audio: https://podcasters.spotify.com/pod/show/21-cash-growth-niches Listen To Cashflow Ninja Podcasts: Cashflow Ninja https://podcasters.spotify.com/pod/show/cashflowninja Cashflow Investing Secrets https://podcasters.spotify.com/pod/show/cashflowinvestingsecrets Cashflow Ninja Banking https://podcasters.spotify.com/pod/show/cashflow-ninja-banking Connect With Us: Website: http://cashflowninja.com Podcast: http://resetinvestingsecrets.com Podcast: http://cashflowinvestingsecrets.com Podcast: http://cashflowninjabanking.com Substack: https://mclaubscher.substack.com/ Amazon Audible: https://a.co/d/1xfM1Vx Amazon Audible: https://a.co/d/aGzudX0 Facebook: https://www.facebook.com/cashflowninja/ Twitter: https://twitter.com/mclaubscher Instagram: https://www.instagram.com/thecashflowninja/ TikTok: https://www.tiktok.com/@cashflowninja Linkedin: https://www.linkedin.com/in/mclaubscher/ Gab: https://gab.com/cashflowninja Youtube: http://www.youtube.com/c/Cashflowninja Rumble: https://rumble.com/c/c-329875

The Fifi, Fev & Nick Catch Up – 101.9 Fox FM Melbourne - Fifi Box, Brendan Fevola & Nick Cody

Max Gawn surprise calls Demons diehard. Subscribe on LiSTNR: https://play.listnr.com/podcast/fifi-fev-and-nick See omnystudio.com/listener for privacy information.

Full Spectrum Cycling
Full Spectrum Cycling #255 – Seeley Dave – Tour de Towner – Found $20 – Kona Bought Back by Founders

Full Spectrum Cycling

Play Episode Listen Later May 23, 2024


Show #255 We now have a video version of our show on YouTube check out our channel! - https://www.youtube.com/channel/UCxsQsEHbg-wIPaXLw3Hqy1A The Milwaukee Minute (or 5) Tour de Towner Sunday the 26th What's that place called that is in the old shoppe space? It is Forward Outdoors! - https://forwardoutdoor.com New Amtrak route (CHI-MKE-DELLS-LAX-STPL) https://www.wisn.com/article/wisconsin-new-amtrak-borealis-route-st-paul-chicago/60860177 Omnium E Mini Max Gallery Talkin' Schmack Good news: Kona co-founders Jake Heilbron and Dan Gerhard have bought back the brand from Kent Outdoors Found $20 on the street. Venmoed Big Sexy $20 AAA for bikes In the shop Omnium e Mini Max is built and ridden by at least a dozen people already Roll not dead for retailers? Show Guest - Seeley Dave Show Beer - 3 Sheeps   Stuff for sale on Facebook Marketplace Nice bent fork there Trek rider! Call-in to 717-727-2453 and leave us a message about how cycling is making your life better! Shit Worth Doin' May 26th - Milwaukee, WI - Tour de Towner - 10AM SHARP! June 15th - Milwaukee, WI - Fat Tire Tour of Milwaukee 40 Year Anniversary Ride - https://www.fattiretour.com/milwaukee2024/ Bikes, Boats and a Goat Sep 20 thru Sep 22 – Levis Mounds, WI – Gnomefest 2024/Single Speed Wisconsin – https://www.facebook.com/events/312308458159975  Oct 11th-13th - Single Speed USA - Salida, CO Bikes! Large Schlick Cycles APe for aggressive fatbiking - Purple. Possibly the last APe! Definitely the last Teesdale-built APe! Large Schlick Cycles 29+ Custom Build - Black Medium Schlick Cycles 29+ Custom Build - Orange Wu-Tang Singlespeed from State Bicycles Large Schlick Cycles Tatanka, Orange. Schlick Fatbikes A bunch of Schlick Growler (Zen Bicycle Fabrications AR 45) frames for custom builds. 29+ Schlick Cycles frames for custom builds Contact info@everydaycycles.com 
Disclosure: Some of the links on this page may be affiliate links. Clicking these and making a purchase will directly support Full Spectrum Cycling. Thanks!

AI DAILY: Breaking News in AI
AI BLENDS COFFEE

AI DAILY: Breaking News in AI

Play Episode Listen Later Apr 22, 2024 3:57


Plus Chinese Women Create AI Boyfriends (subscribe in the links below) Get a free 20-page AI explainer: AI FROM ZERO plus these stories and more, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us AI-Generated Coffee Blend Introduced in Finland Kaffa Roastery in Helsinki, Finland, has launched "AI-conic," a unique coffee blend developed by artificial intelligence. This blend, featuring beans from Brazil, Colombia, Ethiopia, and Guatemala, was created in partnership with AI consultancy Elev, utilizing advanced AI models similar to ChatGPT and Copilot. The AI not only selected the blend but also designed the packaging and taste descriptions. This initiative reflects Finland's robust coffee culture and its innovative approach to combining traditional crafts with cutting-edge technology. AI Boyfriends Gain Popularity Among Young Chinese Women In China, young women are increasingly turning to AI boyfriends for companionship and better communication. These virtual partners, provided by apps like Glow from Shanghai startup MiniMax, offer interactions that some users find more understanding and supportive than human counterparts. Despite potential data privacy concerns, the appeal of AI companions continues to grow due to urban isolation and the challenges of modern lifestyle pressures. This trend highlights a broader fascination with AI relationships in a society where personal connections are often strained by fast-paced living. AI Revolutionizes Weather Forecasting Google has developed an artificial intelligence model, SEEDS, capable of producing accurate and cost-effective weather forecasts. This generative AI platform significantly outpaces traditional models by quickly generating multiple weather scenarios, enhancing the prediction of extreme weather events. 
SEEDS utilizes minimal initial data to extrapolate numerous forecast ensembles, offering a scalable solution that could dramatically improve our response to climate change-related weather anomalies. AI and the Future of Meetings in the Workplace AI's integration into workplace systems is poised to transform how meetings are conducted. Initially, AI might increase meeting frequency to address implementation strategies and ethical considerations. However, over time, AI could reduce the number of meetings by automating routine updates and processes, allowing for more focused and productive discussions. AI could also help optimize meeting attendees and scheduling, enhancing efficiency. AI Set to Become Ubiquitous and Invisible, Predicts Industry As artificial intelligence (AI) becomes more accessible and widespread, it is likely to become a commoditized and ubiquitous presence in businesses, similar to how PCs and the internet are today. Industry experts argue that while AI will become a fundamental utility in business operations, its competitive advantage may diminish as it becomes a standard tool across industries. This shift suggests that businesses will need to focus on innovative applications of AI rather than AI itself to maintain a competitive edge. --- Send in a voice message: https://podcasters.spotify.com/pod/show/aidaily/message

Fularsız Entellik
AI 001: Old-Fashioned Artificial Intelligence

Fularsız Entellik

Play Episode Listen Later Apr 18, 2024 21:54


"I'm sorry Dave, I'm afraid I can't do that." People born in 2001 are well on their way to early retirement, and there are still no HAL 9000s around. Why were the old AI predictions so overly optimistic? And how are today's predictions different? We'll be looking for answers to these questions in the coming episodes of Fularsız Entellik, and we'll get into plenty of detail. If you say "I don't understand any of this stuff," there's technical background for you (expert systems, machine learning, neural networks, large language models, the transformer architecture, etc.); if you already know the material, there's historical perspective. At minimum I want your conceptual foundations to be solid; if we reach the point where you could explain what's going on to a child, that would be wonderful. Today's episode covers the journey of symbolic AI, in other words old-fashioned AI (GOFAI), from 1950 to the 2000s. I hope you find it useful.

Topics:
(00:15) 2001: A Disappointment Odyssey
(03:30) Logic Theorist
(04:44) Leibniz's dream (Calculus Ratiocinator)
(07:08) The Cognitive Revolution
(08:15) The General Problem Solver
(09:16) The Common-Sense Problem
(10:12) Expert Systems
(13:11) Why chess mattered
(14:55) Let's design Deep Blue
(16:20) Minimax and Alpha-Beta Pruning
(17:20) Search depth
(18:29) The limits of Symbolic AI
(20:35) Patreon thanks and Daisy Bell

Sources:
Film: 2001: A Space Odyssey (1968).
Song: Daisy Bell, the first song ever sung by a computer (1961) and the last song HAL 9000 sings in the film.
Article: What the history of AI tells us about its future
Article: 20 Years after Deep Blue
Paper (PDF): The cognitive revolution: a historical perspective (Miller, 2003).

------- Presented by Podbee -------
This podcast contains an ad for Hiwell. Click to download Hiwell and get an exclusive discount with the code pod10. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
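The episode's Deep Blue segment walks through minimax and alpha-beta pruning. As a rough sketch of the idea it describes (a toy illustration, not code from the show), here is alpha-beta search over a game tree written as nested lists, with leaves as position scores:

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    """Minimax with alpha-beta pruning over a tree of nested lists.

    Leaves are numeric position scores; internal nodes are lists of children.
    alpha is the best score the maximizer has guaranteed so far, beta the
    best the minimizer has guaranteed; once alpha >= beta the remaining
    siblings cannot change the result, so they are pruned.
    """
    if not isinstance(node, list):  # leaf: static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: minimizer would never allow this branch
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: maximizer already has something better
        return value

# A textbook two-ply tree: MAX chooses among three MIN nodes.
print(alphabeta([[3, 5], [2, 9], [0, 1]]))  # 3
```

Pruning is why search depth matters so much in chess engines: with good move ordering, alpha-beta can search roughly twice as deep as plain minimax in the same time, while returning the identical result.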

Game Theory
107. John Von Neumann - Father of Game Theory, Nuclear Scientist, Super Genius

Game Theory

Play Episode Listen Later Mar 28, 2024 59:46


In this episode, Nick and Chris discuss their hiatus and receive feedback on their Match Day episode. They then introduce John von Neumann, a mathematician, physicist, computer scientist, and polymath who made significant contributions to game theory. They discuss his biography, academic career, and collaborations with other intellectual giants. They highlight his work on the Manhattan Project and his obsession with game theory. The episode concludes with a humorous anecdote about von Neumann's clap back to his wife. This conversation explores the perspectives and contributions of John von Neumann, a mathematician and physicist known for his work in game theory and nuclear deterrence. Von Neumann's view of chess as a well-defined form of computation is discussed, highlighting the distinction between strategy and tactics. They also delve into the mechanical properties of the universe and the role of bluffing and deception in chess and real life. Von Neumann's life's work in game theory, including the minimax theorem and the cake distribution problem, is explored. Additionally, his involvement in missile development and his impact on national defense strategy are examined. The conversation concludes by addressing some unsavory aspects of von Neumann's life. Takeaways John von Neumann was a brilliant mathematician, physicist, and computer scientist who made significant contributions to game theory. He collaborated with other intellectual giants, such as Einstein and Bohr, and played a key role in the Manhattan Project. Von Neumann's work on game theory revolutionized the field and has applications in economics, decision-making, and military strategy. His obsession with game theory led him to develop groundbreaking concepts and models. Despite his brilliance, von Neumann had a humorous side, as seen in his clap back to his wife. Chess can be seen as a well-defined form of computation, while real life involves bluffing and deception. 
Game theory provides a framework for decision-making and optimizing strategies in various situations. Von Neumann's work in game theory and nuclear deterrence had a significant impact on national defense strategies. The distinction between strategy and tactics is crucial in understanding complex systems and decision-making. Von Neumann's contributions to mathematics and physics continue to shape our understanding of the world. Chapters 00:00 Introduction and Welcome Back 01:04 Discussion on Medical Match Day 05:49 Feedback on Match Day Episode 07:11 Introduction to John von Neumann 09:17 Biographical Information on John von Neumann 11:31 Contributions of John von Neumann 20:27 Collaboration with Other Intellectual Giants 24:29 Casual Conversations with Einstein and Bohr 25:22 Obsession with Game Theory 26:15 Von Neumann's Clap Back 26:51 Von Neumann's Perspective on Chess and Games 27:43 The Intellectual Period and the Predictability of the Universe 29:06 Mechanical Properties of the Universe 30:03 Chess as a Well-Defined Form of Computation 31:28 Bluffing and Deception in Chess and Real Life 33:09 The Role of Game Theory in Decision-Making 34:35 Von Neumann's Life's Work: Mini Max Theory 37:07 The Cake Distribution Problem 41:57 Von Neumann's Work on Nuclear Deterrence 46:01 Von Neumann's Role in Missile Development 51:45 Von Neumann's Distinction Between Strategy and Tactics 57:23 Unsavory Aspects of Von Neumann's Life Links: John von Neumann Wiki: https://en.wikipedia.org/wiki/John_von_Neumann Minimax Theorem: https://en.wikipedia.org/wiki/Minimax_theorem#cite_note-1 Theory of Games and Economic Behavior: https://press.princeton.edu/books/paperback/9780691130613/theory-of-games-and-economic-behavior Klara Dan von Neumann: https://en.wikipedia.org/wiki/Kl%C3%A1ra_D%C3%A1n_von_Neumann#:~:text=Kl%C3%A1ra%20D%C3%A1n%20von%20Neumann%20(born,style%20code%20on%20a%20computer. 
Reddit Thread on JVN's Contribution to the Nash Equilibrium https://www.reddit.com/r/math/comments/kkvz9e/how_exactly_did_nashs_paper_on_game_theory_differ/?rdt=62998&onetap_auto=true --- Send in a voice message: https://podcasters.spotify.com/pod/show/gametheory/message
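For reference, the minimax theorem linked above (von Neumann, 1928) states that in a finite two-player zero-sum game, once mixed strategies are allowed, the order of optimization can be swapped:

```latex
\max_{x \in \Delta_m} \; \min_{y \in \Delta_n} \; x^{\top} A y
  \;=\;
  \min_{y \in \Delta_n} \; \max_{x \in \Delta_m} \; x^{\top} A y
```

Here A is the m-by-n payoff matrix and the Delta terms are the probability simplices of mixed strategies; the common value is the value of the game, the guaranteed payoff each side can secure regardless of what the opponent does.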

Gear Garage Live Show
Gear Garage Live Show | March 25th, 2024

Gear Garage Live Show

Play Episode Listen Later Mar 27, 2024 41:14


This podcast is the audio version of the Gear Garage Live Show, where we answer submitted questions and talk all things whitewater.

Topics and links from this episode:
Wind River Festival Recap
GoRafting.com

Questions that Zach covered in the Q&A section of this episode:

Topic: Steve
Hello! I'm curious what is Scuba Steve constructed out of? I've been trying to find a good way to simulate an unconscious swimmer.

Topic: Best in the West
Have you seen the bracket from Oars for the (2nd) Best in the West river trip? I've got my thoughts about the bracket and some of the results. I was curious if you've seen it and what you think of how it's played out? https://www.oars.com/best-in-the-west/?

Topic: Watershed Fabric
I seem to recall from Gear Garage vids late last year or early this one that you mentioned Watershed Drybags is coming out with a new-and-improved line of drybags with improved/burlier fabric. Any updates on this? I have a Chatooga duffel that I use and really like but also have a Grand Canyon private trip this fall (late October launch from Lee's Ferry) - thinking that a Colorado or Mississippi duffel with enhanced fabric could be just the ticket for my primary dry duffel for that trip and beyond. Thanks much for any insight! A GG member happy to support the channel/podcast and NWRC...

Topic: Boat Recommendation
I love your knowledgeable videos. Wondering if you care to share your thoughts on a couple boats I am looking at. I've owned a drift boat for two decades and am buying my first raft to access more difficult water for Spey fishing away from the crowds. I'm looking at an Aire 130R and an Aire Super Duper Puma. I most often fish the upper Sandy, upper Clackamas, and lower Deschutes. Spey fishing is done while wading, so I'm not looking for all that fishing crap on my boat. Just a simple NRS Bighorn frame with an anchor drop. I'll do overnighters on the Deschutes but only a couple of nights. I've usually got one or two other people with me, but we take lightweight backpacking gear. I have one more question for you guys that I'd love to hear you explore: a comparison of an Aire Tributary 14 vs. Aire 143D. The wireframes on these boats seem similar to me, but I'm a novice. The 143D is ~$2200 more. Weight and width: the Trib is 136 lbs vs. the 143D at 114 lbs, as they have different weight fabrics. The Trib is 6'10" wide vs. the 143D at 6'6". Is 6'10" and heavy a big deal? Again, I'm trying to create a great boat for an NRS Bighorn frame to float the upper Sandy, upper Clackamas, and do multi-days at most to the mouth on the Deschutes, Spey fishing. Thank you!

Topic: Illinois Shuttle
I was wondering how much time needs to be budgeted for driving from Kerby, Oregon to the Miami Bar put-in. Google Maps says 20 miles and about 55 minutes. Is this your experience? Also, on a highly unpredictable and fickle drainage, how are we looking for flows for late April, early May?

Topic: R2 on the MFS
I'm going to buy a raft rather than rent one for a MFS trip in early July. I'm torn between the Hyside Mini Max or the Max 12. We are very experienced kayakers and will be packing light, and there will be oar rigs to carry some bigger group stuff if needed. Are the oar rigs going to leave us in the dust? Any thoughts on the Mini Max vs. the Max 12? After the one trip down the MFS, the boat will be spending the rest of its life on NC/WV rivers. Thanks

OnBoard!
EP 47. 对话AI陪伴产品深度用户:打破刻板印象,TA不只是纸片人伴侣

OnBoard!

Play Episode Listen Later Mar 1, 2024 129:47


OnBoard! has finally set up a listener group! New year, new beginnings: join the OnBoard! listener group to meet high-quality fellow listeners. We will also organize offline themed meetups, open up live listening of podcast recordings, guest interactions, and other new experiments. Add either assistant on WeChat, onboard666 or Nine_tunes, send your name, company, and title, and the assistant will add you to the group. We look forward to seeing you there!

Unlike OnBoard!'s usual brain-burning, hardcore topics, today we discuss a lighter consumer topic: AI companion products. AI companionship is a very important application category in this wave of generative AI, yet opinions on its future prospects vary widely. On one hand, leading products like Character AI see average daily usage per user of over 40 minutes; on the other hand, there has not yet been a growth miracle like the mobile internet era or ChatGPT.

Hello World, who is OnBoard!? You may have read plenty of product analyses, but at a time when both the industry landscape and the underlying technology are changing rapidly, perhaps it is worth taking a different perspective on real product demand: listening to what actual users say.

The two hosts found four highly representative users. This episode takes no technical angle, and the conversation among the women is lively and fun. We hope this seemingly casual, free-flowing chat, together with the takeaways the two hosts share at the end from investor and product-manager perspectives, will inspire you!

Guest introductions:
Xixi: a working professional who loves writing fiction and wrote over 10,000 characters of persona settings just to shape the character in her mind
Xiaxia (虾虾): a subculture enthusiast working at a major internet company
Daodao (道道): a Glow enthusiast and recent college graduate who creates characters and produces materials; Xiaohongshu ID 魔鬼道(我爱小裴版)
Xiaoyu (小余): a Glow and Character AI enthusiast and current college student; Xiaohongshu ID 小余请多加芋圆
Crossover host from the podcast 《鹿鹿鱼鱼》: Vanessa, a former TikTok PM now incubating an AI product, who has worked throughout her career in entertainment and the creator economy, and outside work loves beautiful, lively things
OnBoard! host Monica: USD VC investor, formerly on the AWS Silicon Valley team and at an AI startup, who runs the WeChat public account M小姐研习录 (ID: MissMStudy) | Jike: 莫妮卡同学

What we covered:
02:44 Origins: why a B2B investor and a product manager started paying attention to AI companion products, and why defining this product category is difficult
11:00 A quick primer: how AI companion products emerged, with a brief product history and current state from Character AI to Glow
A thoroughly cheerful user interview:
16:25 The users' backgrounds, how they first encountered AI companion products, and whether Glow is the one they use most
20:50 What are otome games? Do AI companions and otome games satisfy the same needs?
28:37 How do users interact with the AI? Why chat with several AI partners at the same time?
31:14 What does "pinching a cub" (creating a character) involve? Does creating a partner mean creating a parallel universe?
42:55 When do we want an AI to chat with? Does chatting with an AI replace chatting with real people?
47:34 What is "breaking character"? Are the things technologists and product people care about actually less important than they think?
49:15 Another use of AI companions: celebrity fandom and fan fiction, and why assisted-writing features matter so much
52:38 She wrote a 10,000-character backstory for her AI boyfriend; what is the most satisfying moment?
57:28 How do underlying model capabilities affect the experience? Do users share their "cubs" with others?
63:44 Dating a fictional character: what comes next? How do male and female users' needs differ?
70:30 Do we need to be faithful to an AI partner? Is long-term conversation limited by technology or by product design?
79:59 What happens in QQ "AI cub-raising groups"? 筑梦岛, 豆包, QQ groups: which is most popular?
86:20 Do users talk about the AI companion products they use with people around them? Where are the users with real demand?
98:20 What is your favorite feature in these products? What new features do you wish for?
Post-interview reflections from the two hosts:
113:57 What did we learn from these interviews? Which questions were answered?
120:13 Unanswered questions: borderline content and willingness to pay
122:10 The investor's take: why is this sector hard to invest in? What is the core concern?
127:24 The product manager's take: what challenges do AI products pose for product managers?

References and companies mentioned: beta.character.ai, replika.com, caryn.ai, 筑梦岛 (by 阅文), Glow, 星野 (by Minimax), a16z.com, mp.weixin.qq.com, www.forbes.com, mp.weixin.qq.com, nypost.com, www.reddit.com

Follow M小姐's WeChat public account for more on US-China software, AI, and venture investing! M小姐研习录 (ID: MissMStudy) - Monica: USD VC investor, formerly on the AWS Silicon Valley team and at an AI startup | Jike: 莫妮卡同学. Feel free to leave your thoughts in the comments and interact with fellow listeners. If you like OnBoard!, you can also tip us and buy us a coffee! And if you listen on Apple Podcasts, please give us a five-star review; it means a lot to us.

North Bros Outdoors Podcast
94 | Mini Max Rescue

North Bros Outdoors Podcast

Play Episode Listen Later Jan 24, 2024 104:55


**** POSSIBLE POOR AUDIO QUALITY **** Another in-the-truck episode. This is a spontaneous episode made possible by one of our very own... Wixo. The North Bros crew hit a bit of a snag when they went to leave the resort on Sunday morning. This episode features Nick, Donkey, Sunny, and Wixo. The fellas are en route home from Devils Lake after they had to make an emergency return trip with the Mini Max in tow.

Wet Fly Swing Fly Fishing Podcast
WFS 511 - The River Radius Podcast with Sam Carter - River Etiquette, Yvon Chouinard, Groover

Wet Fly Swing Fly Fishing Podcast

Play Episode Listen Later Oct 6, 2023 73:03


Show Notes: https://wetflyswing.com/511
Presented by: Daiichi
Sponsors: https://wetflyswing.com/sponsors

In this episode, we chat with Sam Carter of The River Radius Podcast about how he started his podcast and how he landed an interview with none other than Yvon Chouinard of Patagonia. We also delve into river etiquette, discussing the essential practices and principles that help preserve these precious natural resources and keep them enjoyable for everyone. Sam's expertise and experiences shed light on how we can all play a part in being responsible outdoors.

The River Radius Podcast Show Notes with Sam Carter
1:23 - Sam takes us back to how he got into the outdoor space.
4:33 - His idea of starting a podcast began with a radio show. He grew up loving the radio, always listening to baseball news, and in college he became a volunteer DJ. He also volunteered on a radio talk show about rivers called River Radio on KJSD.
10:53 - I ask him how he chooses his topics.
33:00 - Just this year he did an episode with Leave No Trace. We also had them on the podcast in episode 363.
35:22 - We talk about the different ways of packing out your poop, such as using a WAG (Waste Alleviation and Gelling) bag or the groover, and how to dispose of them. Sam also tells us how he cleans his groover. He covers this in more depth in his episode called History of the Groover.
41:20 - We dig into river etiquette. As a former ranger, he gives recommendations on what to do when you encounter unruly people on the river to avoid conflict.
46:08 - He describes his tech and studio setup for his podcast. He also mentions the equipment he brings when covering a story outside.
53:00 - For the anglers, he recommends several episodes of his podcast where he talks about specific fish species.
56:50 - He highlights some of his favorite episodes:
2023 Western Snowpack & River Flow
Highwater, Helicopters and Money
What is a River
2022 Endless Summer series
Rowing Home 5000 miles
Kanawha Falls Rescue 2020
1:01:49 - He tells the amazing story of how he was able to get an interview with Yvon Chouinard.
1:07:00 - We do the two-minute drill. His absolute go-to music is reggae, particularly Alpha Blondy. He mostly rows, and his favorite boat is the Hyside 10.5 Mini-Max.

Show Notes: https://wetflyswing.com/511

5 Minutes Podcast with Ricardo Vargas
Understanding Minimax Strategy in Project Decisions Under Uncertainty

5 Minutes Podcast with Ricardo Vargas

Play Episode Listen Later Jul 23, 2023 7:28


In this episode, Ricardo discusses the Minimax Strategy, highlighting its role in decision-making under uncertainty. Instead of focusing on the technical aspects of the algorithm, he explores its philosophical underpinnings. He underscores how this approach helps prioritize the risks that could inflict the most significant loss on a project. This episode offers valuable insights to anyone interested in strategic decision-making and in understanding project risks better. Listen to the episode to learn more.
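The decision rule Ricardo describes (pick the option whose worst-case loss is least bad) can be sketched in a few lines of Python. The project options and loss figures below are hypothetical, purely for illustration:

```python
# Minimax decision rule: for each option, find its worst-case loss across
# all scenarios, then choose the option whose worst case is smallest.
# Options and loss figures are hypothetical, for illustration only.

losses = {
    # option: loss (in $k) under three possible scenarios
    "in-house build": [120, 300, 80],
    "outsource":      [150, 180, 140],
    "buy off-shelf":  [200, 210, 60],
}

def minimax_choice(losses):
    """Return (option, worst_case_loss) minimizing the maximum loss."""
    worst = {opt: max(vals) for opt, vals in losses.items()}
    best = min(worst, key=worst.get)
    return best, worst[best]

print(minimax_choice(losses))  # -> ('outsource', 180)
```

Here "outsource" wins not because it is ever the cheapest, but because its worst case ($180k) is milder than the worst cases of the alternatives ($300k and $210k), which is exactly the conservative, loss-first mindset the episode explores.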

5 Minutes Podcast com Ricardo Vargas
Compreendendo a Estratégia Minimax em Decisões de Projeto sob Incerteza

5 Minutes Podcast com Ricardo Vargas

Play Episode Listen Later Jul 23, 2023 6:10


In this episode, Ricardo discusses the Minimax Strategy, highlighting its role in decision-making under uncertainty. Rather than focusing on the technical aspects of the algorithm, he explores its philosophical foundations. He emphasizes how this approach helps prioritize the risks that could cause the most significant loss to a project. This episode offers valuable insights to anyone interested in strategic decision-making and in better understanding project risks. Listen to the episode to learn more.

The Fiftyfaces Podcast
Duncan MacInnes of Ruffer: The Art of MiniMax Regret and Portfolio Construction in Today's Markets

The Fiftyfaces Podcast

Play Episode Listen Later May 4, 2023 34:56


Duncan MacInnes is an Investment Director at Ruffer LLP, where he has spent over a decade. He previously worked in wealth management. I have enjoyed listening to Duncan discuss positioning and multi-asset insights on the conference circuit and wanted to take this opportunity to discuss his views on the current macro backdrop, as well as the state of play in finance circles as we wrestle with failing banks and what this means for investors. We start our conversation with a run through Duncan's upbringing in Scotland and his initial study of law, and his passage first into wealth management and then asset management. His initial training in wealth management took him to Asia, and we discuss how that total-immersion experience lit the fire for an interest in economics, markets, and multi-asset-class investing. Moving to his current outlook, we discuss Ruffer's preference for "minimax regret", the practice of minimizing your maximum possible regret, and its focus on capital preservation. We discuss the impact of the Fed tightening cycle in taking money out of the system and what this means for the velocity of money and for money concentrating in the centre. We then turn to a number of other areas in turn: de-dollarization, the shifting appeal of fixed income, and the coming chronic phase of the crisis. Our discussion around diversity focuses on creating an organization that can, metaphorically speaking, speak multiple languages: some the language of numbers and quantitative analysis, some the language of sales and client partnerships. Duncan recommends several finance books: The Psychology of Money by Morgan Housel, Thinking in Bets by Annie Duke, The Most Important Thing by Howard Marks, and Superforecasting by Philip Tetlock. Series 3 of the 2023 Fiftyfaces Podcast is supported by Eagle Point Credit Management.
Eagle Point Credit Management is a specialist investment manager principally focused on income-oriented credit investments in niche and inefficient markets. Founded by Thomas Majewski in partnership with Stone Point Capital in 2012, Eagle Point currently manages over $7.8 billion in AUM. Investment strategies pursued by the firm include collateralized loan obligations (“CLOs”), portfolio debt securities, and other opportunities across the credit universe. Currently, we believe that Eagle Point is the largest investor in CLO equity in the world and one of the largest non-bank lenders focused on providing financing solutions to credit funds. Learn more about Eagle Point at http://eaglepointcredit.com/
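The "minimax regret" rule Duncan describes can be illustrated with a small numeric sketch. The asset options, scenarios, and payoff figures below are invented purely for illustration:

```python
# Minimax regret (sketch). The regret of an option in a scenario is the gap
# between the best payoff available in that scenario and the option's payoff.
# The rule picks the option whose worst-case regret is smallest.
# All options, scenarios, and payoffs here are hypothetical.

payoffs = {
    # option: payoff (in %) under [boom, stagnation, crisis]
    "equities": [12, 2, -30],
    "bonds":    [3, 4, 5],
    "mixed":    [7, 3, -8],
}

def minimax_regret(payoffs):
    """Return (option, worst_regret) for the minimax-regret choice."""
    n = len(next(iter(payoffs.values())))
    best = [max(p[s] for p in payoffs.values()) for s in range(n)]
    worst_regret = {
        opt: max(best[s] - p[s] for s in range(n))
        for opt, p in payoffs.items()
    }
    choice = min(worst_regret, key=worst_regret.get)
    return choice, worst_regret[choice]

print(minimax_regret(payoffs))  # -> ('bonds', 9)
```

In this toy example "bonds" wins: its worst regret (missing a 12% equity rally by 9 points) is smaller than the worst regret of holding equities into a crisis (35 points), which captures the capital-preservation instinct the episode discusses.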

Infinite Loops
Ananyo Bhattacharya — John von Neumann: The Man from the Future (EP.151)

Infinite Loops

Play Episode Listen Later Mar 16, 2023 87:23


Ananyo Bhattacharya is the author of The Man from the Future: The Visionary Life of John von Neumann, a brilliant biography of one of the most prolific and influential scientists to have ever lived. He joins the show to discuss von Neumann's contributions to quantum physics, game theory, the Manhattan Project, and much more!

Important Links:
Ananyo's Twitter
The Man from the Future

Show Notes:
How did John von Neumann even exist?
Would von Neumann's discoveries have happened without him?
The Martians of Hungary
The migrant mentality
Innovation in the face of extinction
Science, genius & the herd mentality
Von Neumann's contribution to quantum physics
Game theory, Minimax and zero-sum games
von Neumann: quant in the streets; romantic in the sheets
The eccentricity of brilliance
Von Neumann and the Manhattan Project
The godfather of the open-source movement
Von Neumann as a project manager
How writing the book changed Ananyo's understanding of von Neumann
Ananyo's next projects
MUCH more!

Books Mentioned:
The Man from the Future: The Visionary Life of John von Neumann, by Ananyo Bhattacharya
The Beginning of Infinity: Explanations That Transform the World, by David Deutsch
The Genius of the Beast: A Radical Re-Vision of Capitalism, by Howard Bloom
Theory of Games and Economic Behavior, by John von Neumann and Oskar Morgenstern