The World's MOST Interesting Man: Kevin Kelly

Kevin Kelly is the founding executive editor of Wired and one of the world's most original technology thinkers. He's known for his optimistic, mind-bending takes on the future, from AI to the “Technium” — a living system of technology. His books like The Inevitable and What Technology Wants have shaped how a generation thinks about innovation. Kelly also coined the iconic “1,000 True Fans” concept that predicted the rise of the creator economy.

Connect with Kevin! https://kk.org
Connect with Ryan! https://twitter.com/RyanJAyala
Connect with Us!
https://www.instagram.com/alchemists.library/
https://twitter.com/RyanJAyala
BestPodcastintheMetaverse.com
Canary Cry News Talk #826 - 03.24.2025 - Recorded Live to 1s and 0s
CHRIST TECHNIUM | Trump Commonwealth, Christ in Silicon Valley
Deconstructing World Events from a Biblical Worldview
Declaring Jesus as Lord amidst the Fifth Generation War!
CageRattlerCoffee.com SD/TC email Ike for discount
Join the Canary Cry Roundtable

This Episode was Produced By:
Executive Producers: Dame Miss G Beauty*** Sir Jamey Not the Lanister*** Sir LX Protocol Baron of the Berrean Protocol***
Producers of TREASURE (CanaryCry.Support): Dame Gail, Julie S, Cage Rattler Coffee, Christopher, Joseph H
Producers of TALENT: JonathanF, LittleOwen, Dame iAnon, BigSis, Sir Marty K, El Canario Valiente
Producers of TIME
Timestampers: Jade Bouncerson, Morgan E
Clippy Team: Courtney S, JOLMS, Kristen
Reminders: Clankoniphius
Links: JAM

SHOW NOTES/TIMESTAMPS
HELLO WORLD
EFNO
RUN DOWN
EXECS
TRUMP
Trump Hangs copy of the Declaration of Independence in Oval Office
Trump TRUTHED - I love kings Charles (Truth)
Trump invitation to commonwealth of nations (The Sun)
The commonwealth can solve all our problems (Daily Mail)
Alex Jones responds to Trump (X) Full Vid (X)
PRODUCERS
TALENT
WORLDCOIN
Sam Altman's World Network in talks with Visa for stablecoin wallet, CoinDesk reports (Reuters)
CHRISTIANITY
Christianity Was “Borderline Illegal” in Silicon Valley. Now It's the New Religion (Vanity Fair)
AOC
Clip: AOC and Bernie at “Fight the Oligarchy” (MSNBC)
PRODUCERS
TALENT/MEET UP
TIME/END
Thank you for 1m downloads of the podcast and 2m readers of the Substack!
Stop obstructing scientific progress! We already know how to dramatically accelerate science: by getting out of the way. https://betterwithout.ai/stop-obstructing-science

How to science better. What do exceptional scientists do differently from mediocre ones? Can we train currently-mediocre ones to do better? https://betterwithout.ai/better-science-without-AI

Scenius: upgrading science FTW. Empirically, breakthroughs that enable great progress depend on particular, uncommon social constellations and accompanying social practices. Let's encourage these! https://betterwithout.ai/human-scenius-vs-artificial-genius

Matt Clancy reviews the evidence for scientific progress slowing, with citations and graphs. https://twitter.com/mattsclancy/status/1612440718177603584

"Scenius, or Communal Genius", Kevin Kelly, The Technium. https://kk.org/thetechnium/scenius-or-comm/
Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor!

Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough “weekend special” content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!

AI Breakdown

The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:

and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:

Thursday Nights in AI

We're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:

Dylan Patel on Groq

We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):

Not all of it could be released, so we just talked about our Groq estimates:

Milind Naphade - Capital One

In relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered:

* Milind's learnings from ~25 years in machine learning
* His first paper citation was 24 years ago
* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis
* Thoughts on relevant AI research
* GTC takeaways and what makes NVIDIA special

If you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.

Personal AI Meetup

It all started with a meme: within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.

Timestamps

* [00:01:13] AI Breakdown Part 1
* [00:02:20] Four Wars
* [00:13:45] Sora
* [00:15:12] Suno
* [00:16:34] The GPT-4 Class Landscape
* [00:17:03] Data War: Reddit x Google
* [00:21:53] Gemini 1.5 vs Claude 3
* [00:26:58] AI Breakdown Part 2
* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4
* [00:31:11] Open Source Models - Mistral, Grok
* [00:34:13] Apple MM1
* [00:37:33] Meta's $800b AI rebrand
* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents
* [00:47:28] Adept episode - Screen Multimodality
* [00:48:54] Top Model Research from January Recap
* [00:53:08] AI Wearables
* [00:57:26] Groq vs Nvidia month - GPU Chip War
* [01:00:31] Disagreements
* [01:02:08] Summer 2024 Predictions
* [01:04:18] Thursday Nights in AI - swyx
* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show
* [01:34:58] Groq

Transcript

[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co-host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI. 
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and Chinatalk, all of which you can find in the Latentspace About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the 4Wars framework and the AI engineer scene. We love AI Breakdown as one of the best examples Daily podcasts to keep up on AI news, so we were especially excited to be back on Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI breakdown. Part one of my conversation with Alessio and Swix from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. 
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this Inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, Inflection is one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the Anthropics and OpenAIs of the world in terms of labs, but it's a company that raised $1.3 billion last year, less than a year ago. Reid Hoffman's a co-founder, Mustafa Suleyman, who's a co-founder of DeepMind, you know, so it's like, this is not a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark.[00:04:32] NLW: Brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So Inflection is GPU rich by startup standards. I think about 22,000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in Pi, their own model, their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT 4 and Claude 3 and all this stuff. GPU poor, doing something that the GPU rich are not interested in, you know we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not AGI, it's just translation. So I think like the Inflection part is maybe a calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what Inflection I don't, I don't, again, I don't know the reasons behind the Inflection choice, but if you say, I don't want to build my own company that has $1.3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my, my take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. 
It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and like most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in the favor of, like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, all of these things worked out very well.[00:07:19] swyx: Even Inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU poors, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the Coheres and Jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that the vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason there's plenty of that, you know, people who are saying, you know, Look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story. 
Same thing happened last summer, when every every outlet jumped on the chat GPT at its first down month story to try to really like kind of hammer this idea that that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build your own chatbot. platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half. Dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or Dolly three. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerned.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause you know. It's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I I would say the experimentation.[00:11:04] Alessio: Surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, A lot of people use Cursor everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursors, like they evolved beyond single line to like chat, to do multi line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? 
And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, DALL-E 3 inside of sort of OpenAI's larger models versus, you know, a Midjourney or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, For most of the last, call it six months or whatever, it feels pretty definitively both and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to DALL-E 3. And DALL-E 3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT 4 Vision and GPT 4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, The balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno AI, the sort of music model company, and, you know, I don't see OpenAI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. 
Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if Just musicians were excited about Suno and using it but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves Midjourney for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, You know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the DALL-E blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as DALL-E. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with with sort of the quality?[00:16:46] NLW: Quality data or sort of the RAG and Ops wars just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about RAG for the Gemini and Claude discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into an AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the WebText dataset that originally started for GPT 1, 2, and 3 was actually scraped from, from Reddit, at least the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And. 
You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like Data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a pay here Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them. The, the data part sounds as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know there's more questions around data you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's great kind of points on all sides, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that Llama, you know, 2, the most high performance version of it, which was one they didn't release, was trained on synthetic data. So maybe it's good. 
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's a good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT 4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened. Since we last talked, we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT 4. Claude 2 was unusable. So I use GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on Twitter about it, my only experience of this is much better has been on the podcast use case. But I know that, you know, Karan from, from Nous Research is a very big Opus pro, pro Opus person. So, I think that's also It's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. 
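For readers who want to try the kind of side-by-side test Alessio describes, here is a minimal sketch (not his actual tool): it sends the same transcript chunk to GPT-4 and Claude 3 Opus and returns both generations so they can be read next to each other. The prompt wording and model identifiers are illustrative, and it assumes the official `openai` and `anthropic` Python SDKs with keys set in OPENAI_API_KEY and ANTHROPIC_API_KEY.

```python
# Hypothetical sketch: compare GPT-4 and Claude 3 Opus on the same
# podcast-transcript task (chapters, named entities, summary).
from openai import OpenAI
import anthropic

PROMPT = (
    "Given this podcast transcript excerpt, propose chapter titles with "
    "timestamps, list the named entities, and write a two-sentence summary:\n\n"
    "{transcript}"
)

def compare(transcript_chunk: str) -> dict:
    openai_client = OpenAI()          # reads OPENAI_API_KEY
    claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    prompt = PROMPT.format(transcript=transcript_chunk)

    gpt = openai_client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    claude = claude_client.messages.create(
        model="claude-3-opus-20240229",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Return both generations for eyeballing, the "which one works for me" test.
    return {
        "gpt-4": gpt.choices[0].message.content,
        "claude-3-opus": claude.content[0].text,
    }
```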
It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Cloud 3 is certainly the first thing that I've seen where lots of people.[00:24:06] NLW: They're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have a episode with Adapt coming out this weekend. I'll and some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know where, you know, Curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. 
And so that puts An insane amount of pressure on what gpt5 is going to be because it's just going to have like the only option it has now because all the other models are multimodal all the other models are long context all the other models have perfect recall gpt5 has to match everything and do more to to not be a flop[00:26:58] AI Breakdown Part 2[00:26:58] NLW: hello friends back again with part two if you haven't heard part one of this conversation i suggest you go check it out but to be honest they are kind of actually separable In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how.[00:27:50] NLW: Fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Friedman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but. You know, they've had so long to work on this, like unless that we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4. 5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Friedman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point Reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. 
Like, calm down, let's just build.[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two agent two, I think two agent projects, right? One desktop agent and one sort of more general yeah, sort of GPTs like agent and then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know but I think like, this is, you know, we're not going to get to the end of the year without Jupyter you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google.[00:30:13] Alessio: Increasing the pace, you know, like it's the, it's the Claude 4 coming out like in 12 months, like nine months. What's the, what's the deal? Same with Gemini. They went from like 1 to 1.5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There, there are a lot of, speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that, that's the best, that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed in landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world where Claude and Gemini are legitimate challengers to GPT 4 and hopefully more will emerge as well hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So speak, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them Like the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there there's Mistral, there's, I guess, Grok now, which is, you know, Grok one is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought you thought you meant Groq the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq the chip company, I think is even more interesting in some ways, but and then there's the, you know, obviously Llama 3 is the one that sort of everyone's wondering about too. And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta, you know, keeps, keeps the open source throne, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah. 
From what I heard in the hallways at, at GTC, Llama 3, the, the biggest model will be, you know, 260 to 300 billion parameters, so that that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameters model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running[00:32:45] Alessio: Llama on, on their laptop, it's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we had Soumith Chintala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a short NVIDIA, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. The, I love the Zuck destroying a lot of monopolies arc. You know, it's, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the I added this as one of as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think pretend, potentially are being read as a shift vis a vis the relationship with OpenAI, which also the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti interestness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Gemma. It's going to be smarter autocomplete. 
I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And. I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey. You just have to have a good local model on my phone to be worth me purchasing your device. And that was, that's kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but. It makes for great meme material on on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything so if, if, if there was a company that could do that. You know, seriously challenge the other AI players. It would be Apple. And it's, I don't think it's as hard as self driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump on factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock I'm trying to look up the details now. The stock has gone up 187% since Lamo one. Yeah. Which is $830 billion in market value created in the past year. . Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't Yeah. 
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's it, it is an interesting, no, it's, I, I think, alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see and fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see and fight in the Coliseum. No, I think you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself as a in terms of personal growth and, and, you know, Per perseverance through like a lot of change and you know, everyone throwing stuff his way. I think there's a lot to say about like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into being where you are and what you work on is. You know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed to startups, but also, you know, just in terms of like what people are spending their nights and weekends on what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about. AI agent stuff, right? Auto, GPT and baby a GI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about auto GPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc. 
So AutoGPT, I think they went from zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it. I think it's like, amazing to get people's imagination going. You know, they're like, oh, wow, this This is awesome.[00:41:08] Alessio: Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, they in our episode, he specifically said. We don't want to do a bottom up product. You know, we don't want something that everybody can just use and try because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that like, people, nobody just wakes up and say, Oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, just not what inspires people. So I think the gap on the developer side has been the more bottom up hacker mentality is trying to build this like very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, Hey, If I want to raise my next round, I can not just like sit around and mess, mess around with like super generic stuff. I need to find a use case that really works. And I think that that is worth for, for a lot of folks in parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a restrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where I've seen from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just like really. Fragmented because the production use cases are just coming like now, you know, there are not a lot of long established ones to, to test against. And so does it, that's kind of on the vertical agents and then the robotic side it's probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there that were just like robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots. 
NVIDIA did a big push on their own Omniverse thing, which is, like, this Digital twin of whatever environments you're in that you can use to train the robots agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I give a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, he Let's recover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and you know, that this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think all You know, the point I'll make here is just the reason AutoGPT and maybe AGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just, Way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do That I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do. And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what let's just sit there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer is to look out, look for use cases and, and go to a production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months. 
And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like those are the ways that people are actually.[00:46:50] NLW: Adopting AI in a lot of cases is just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that because multi modal models are now commonplace, like Claude, Gemini, OpenAI, all very very easily multi modal, Apple's easily multi modal, all this stuff, there is a switch for agents for sort of general desktop browsing that I think people are excited about.[00:48:04] swyx: The version of the agent where they're not specifically taking in text or anything, they're just watching your screen just like someone else would, and piloting it by vision. And you know, in the episode with David that will have dropped by the time that this airs, I think that is the promise of Adept, and that is a promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system, like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think it'll take a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah so I'll take the next two as like as one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the, let's just say the, decoder only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can read, you can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So transformers, the, the two leading candidates are effectively RWKV and the state space models, the most recent one of which is Mamba, but there's others like the StripedHyena and the S4/H3 stuff coming out of Hazy Research at Stanford.
And all of those are non-quadratic language models that promise to scale a lot better than the, the traditional transformer.[00:49:47] swyx: This might be too theoretical for most people right now, but it's, it's gonna come out in weird ways, where, imagine if like, right now the talk of the town is that Claude and Gemini have a million tokens of context and like, whoa, you can put in like, you know, two hours of video now, okay. But like what if you put, what if we could like throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and like synthesize new drugs? Like, well, how does that change things? Like, we don't know because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet but we're seeing very, very good progress. RWKV and Mamba are probably the, like, the two leading examples, both of which are open source, so you can try them today, and, and there's a lot of progress there. And the, the, the main thing I'll highlight for RWKV is that at, at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size for the same amount of training as an open source model.[00:50:51] swyx: So that's exciting. You know, they're there, they're 7B now. They're not at 70B yet. We don't know if it will. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture.[00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So that's, the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is, Bill Peebles is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on.[00:51:25] swyx: But there's, there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be like the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the, the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo.[00:51:45] swyx: All of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive to create or slow to create Stable Diffusion or AI generated art, you are not up to date with the latest models. If you think it is hard to create text in images, you are not up to date with the latest models.[00:52:02] swyx: And people still are kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So instead of using autoregressive transformers, can you use diffusion for text? So you can use diffusion models to diffuse and create entire chunks of text all at once instead of token by token.[00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into it.
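A quick illustrative aside on why these alternatives get called "non-quadratic": a state-space or linear-RNN style model carries a fixed-size hidden state forward one token at a time, so compute grows linearly with sequence length instead of quadratically as in full attention. Below is a toy single-channel recurrence to make that concrete; it is a minimal sketch, not RWKV's or Mamba's actual implementation.

```python
# Minimal sketch of a discretized state-space / linear-RNN channel.
# Each new token costs O(d^2) work regardless of position, so a whole
# sequence is O(T), versus the O(T^2) pairwise cost of full attention.
import numpy as np

def ssm_scan(x, A, B, C):
    """x: (T,) input sequence; A: (d, d) state transition;
    B: (d,) input projection; C: (d,) output projection."""
    d = A.shape[0]
    h = np.zeros(d)              # hidden state carried across the sequence
    ys = []
    for x_t in x:                # one constant-cost update per token
        h = A @ h + B * x_t      # h_t = A h_{t-1} + B x_t
        ys.append(C @ h)         # y_t = C h_t
    return np.array(ys)

T, d = 8, 4
rng = np.random.default_rng(0)
A = np.eye(d) * 0.9              # toy stable transition matrix
B, C = rng.normal(size=d), rng.normal(size=d)
print(ssm_scan(rng.normal(size=T), A, B, C))
```

The point of the long-context excitement above is exactly this loop: because the per-token cost never grows with how much history has already been seen, throwing in "two hundred thousand hours of video" is a hardware problem rather than an algorithmic one.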
So all those things are like very exciting new model architectures that are maybe something that you'll see in production two to three years from now.[00:52:37] swyx: So a couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they're sort of something that, that seems like they're coming up, are one, sort of these, these wearable, you know, kind of passive AI experiences where they're absorbing a lot of what's going on around you and then, and then kind of bringing things back.[00:52:53] NLW: And then the, the other one that I, that I wanted to see if you guys had thoughts on were sort of this next generation of chip companies. Obviously there's a huge amount of emphasis on hardware and silicon and, and, and different ways of doing things, but
Hey everyone, this is Alex and can you believe that we're almost done with Q1 2024? March 2024 was kind of crazy of course, so I'm of course excited to see what April brings (besides Weights & Biases conference in SF called Fully Connected, which I encourage you to attend and say Hi to me and the team!) This week we have tons of exciting stuff on the leaderboards, say hello to the new best AI in the world Opus (+ some other surprises), in the open source we had new MoEs (one from Mosaic/Databricks folks, which tops the open source game, one from AI21 called Jamba that shows that a transformers alternative/hybrid can actually scale) and tiny MoE from Alibaba, as well as an incredible Emotion TTS from Hume. I also had the pleasure to finally sit down with friend of the pod Tanishq Abraham and Paul Scotti from MedArc and chatted about MindEye 2, how they teach AI to read minds using diffusion models
"...Happy birthday dear ThursdAIiiiiiiii, happy birthday to youuuuuu
Hello hello everyone, happy spring! Can you believe it? It's already spring! We have tons of AI news for you to cover, starting with the most impactful one: did you already use Claude 3? Anthropic decided to celebrate Claude 1's birthday early (which btw is also ThursdAI's birthday and GPT-4's release date, March 14th, 2023) and gave us 3 new Claudes! Opus, Sonnet and Haiku. TL;DR of all topics covered: * Big CO LLMs + APIs*
Happy leap year day everyone, very excited to bring you a special once-in-a-4 year edition of ThursdAI
Hey, this is Alex,Ok let's start with the big news, holy crap this week was a breakthrough week for speed! We had both Groq explode in popularity, and ByteDance release an updated SDXL model called Lightning, able to generate full blown SDXL 1024 images in 300ms. I've been excited about seeing what real time LLM/Diffusion can bring, and with both of these news release the same week, I just had to go and test them out together: Additionally, we had Google step into a big open weights role, and give us Gemma, 2 open weights models 2B and 7B (which is closer to 9B per Junyang) and it was great to see google committing to releasing at least some models in the open. We also had breaking news, Emad from Stability announced SD3, which looks really great, Google to pay Reddit 200M for AI training on their data & a few more things. TL;DR of all topics covered: * Big CO LLMs + APIs* Groq custom LPU inference does 400T/s Llama/Mistral generation (X, Demo)* Google image generation is in Hot Waters and was reportedly paused (refuses to generate white people)* Gemini 1.5 long context is very impressive to folks (Matt Shumer, Ethan Mollick)* Open Weights LLMs * Google releases GEMMA, open weights 2B and 7B models (Announcement, Models)* Teknium releases Nous Hermes DPO (Announcement, HF)* Vision & Video* YoLo V9 - SOTA real time object detector is out (Announcement, Code)* This weeks Buzz (What I learned in WandB this week)* Went to SF to cohost an event with A16Z, Nous, Mistral (Thread, My Report)* AI Art & Diffusion & 3D* ByteDance presents SDXL-Lightning (Try here, Model)* Stability announces Stable Diffusion 3 (Announcement)* Tools* Replit releases a new experimental Figma plugin for UI → Code (Announcement)* Arc browser adds "AI pinch to understand" summarization (Announcement)Big CO LLMs + APIsGroq's new LPU show extreme performance for LLMs - up to 400T/s (example)* Groq created a novel processing unit known as the Tensor Streaming Processor (TSP) which they categorize as a Linear Processor Unit (LPU). Unlike traditional GPUs that are parallel processors with hundreds of cores designed for graphics rendering, LPUs are architected to deliver deterministic performance for AI computations.* Analogy: They know where all the cars are going when everyone wakes up for work (when they compile) and how fast they all drive (compute latency) so they can get rid of traffic lights (routers) and turn lanes (backpressure) by telling everyone when to leave the house.* Why would we need something like this? Some folks are saying that average human reading is only 30T/s, I created an example that uses near instant Groq Mixtral + Lightning SDXL to just create images with Mixtral as my prompt managerOpen Source Weights LLMs Google Gemma - 2B and 7B open weights models (demo)* 4 hours after release, Llama.cpp added support, Ollama and LM Studio added support, Tri dao added Flash attention support* Vocab size is 256K* 8K context window* Tokenizer similar to LLama* Folks are... 
not that impressed as far as I've seen* Trained on 6 trillion tokens* Google also released Gemma.cpp (local CPU inference) - AnnouncementNous/Teknium re-release Nous Hermes with DPO finetune (Announcement)* DPO RLHF is performing better than previous models* Models are GGUF and can be found here* DPO enables Improvements across the boardThis weeks Buzz (What I learned with WandB this week)* Alex was in SF last week* A16Z + 20 something cohosts including Weights & Biases talked about importance of open source* Huge Shoutout Rajko and Marco from A16Z, and tons of open source folks who joined* Nous, Ollama, LLamaIndex, LMSys folks, Replicate, Perplexity, Mistral, Github, as well as Eric Hartford, Jon Durbin, Haotian Liu, HuggingFace, tons of other great folks from Mozilla, linux foundation and Percy from Together/StanfordAlso had a chance to checkout one of the smol dinners in SF, they go really hard, had a great time showing folks the Vision Pro, chatting about AI, seeing incredible demos and chat about meditation and spirituality all at the same time! AI Art & DiffusionByteDance presents SDXL-Lightning (Try here)* Lightning fast SDXL with 2, 4 or 8 steps* Results much closer to original SDXL than turbo version from a few months agoStability announces Stable Diffusion 3 (waitlist)Uses a Diffusion Transformer architecture (like SORA)Impressive multi subject prompt following: "Prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words "stable diffusion"Tools* Replit announces a new Figma design→ code plugin That's it for today, definitely check out the full conversation with Mark Heaps from Groq on the pod, and see you next week!
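To make the speed claims in this issue concrete, here is the back-of-the-envelope arithmetic behind the "faster than you can read" point. It only uses the two figures quoted above (roughly 400 tokens/sec for Groq's LPU and roughly 30 tokens/sec average reading speed); the answer length is an arbitrary assumption.

```python
# Rough arithmetic: generation vs. reading time for a single chat answer,
# using the ~400 T/s Groq figure and ~30 T/s reading speed quoted above.
answer_tokens = 500           # assumed length of one answer
groq_tokens_per_sec = 400     # quoted Groq LPU generation speed
reading_tokens_per_sec = 30   # quoted average human reading speed

print(f"time to generate: {answer_tokens / groq_tokens_per_sec:.2f} s")    # ~1.25 s
print(f"time to read:     {answer_tokens / reading_tokens_per_sec:.1f} s") # ~16.7 s
```

In other words, at these speeds the model finishes the whole answer more than ten times faster than you can read it, which is what makes the "LLM as a real-time prompt manager for image generation" experiment above feasible.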
Hihi, this is Alex, from Weights & Biases, coming to you live, from Yosemite! Well, actually I'm writing these words from a fake virtual Yosemite that appears above my kitchen counter, as I'm not a Vision Pro user and I will force myself to work inside this thing and tell you if it's worth it. I will also be on the lookout for anything AI related in this new spatial computing paradigm, like THIS for example! But back to reality for a second, we had quite the show today! We had the awesome time to have Junyang Justin Lin, a dev lead in Alibaba, join us and talk about Qwen 1.5 and QwenVL, and then we had a deep dive into quite a few acronyms I've been seeing on my timeline lately, namely DSPy, ColBERT and (the funniest one) RAGatouille, and we had a chat with Connor from Weaviate and Benjamin, the author of RAGatouille, about what it all means! Really really cool show today, hope you don't only read the newsletter but listen on Spotify, Apple or right here on Substack. TL;DR of all topics covered: * Open Source LLMs * Alibaba releases a BUNCH of new Qwen 1.5 models including a tiny .5B one (X announcement)* Abacus fine-tunes Smaug, top of HF leaderboard, based on Qwen 72B (X)* LMsys adds more open source models, sponsored by Together (X)* Jina Embeddings fine tune for code* Big CO LLMs + APIs* Google rebranding Bard to Gemini and launching Gemini Ultra (Gemini)* OpenAI adds image metadata (Announcement)* OpenAI keys are now restricted per key (Announcement)* Vision & Video* Bria - RMBG 1.4 - Open Source BG removal that runs in your browser (X, DEMO)* Voice & Audio* MetaVoice, a new Apache 2 licensed TTS - (Announcement)* AI Art & Diffusion & 3D* Microsoft added DALL-E editing with "designer" (X thread)* Stability AI releases update to SVD - video 1.1 launches with a webUI, much nicer videos* Deep Dive with Benjamin Clavie and Connor Shorten show notes:* Benjamin's announcement of RAGatouille (X)* Connor's chat with Omar Khattab (author of DSPy and ColBERT) - Weaviate Podcast* Very helpful intro to ColBERT + RAGatouille - Notion Open Source LLMs Alibaba releases Qwen 1.5 - ranges from .5 to 72B (DEMO) With 6 sizes, including 2 new novel ones, from as little as .5B parameter models to an interesting 4B, to all the way to a whopping 72B, Alibaba open sources additional Qwen checkpoints. We've had the honor to have friend of the pod Junyang Justin Lin again, and he talked to us about how these sizes were selected, that even though this model beats Mistral Medium on some benchmarks, it remains to be seen how well this performs on human evaluations, and shared a bunch of details about open sourcing this. The models were released with all the latest and greatest quantizations, significantly improved context length (32K) and support for both Ollama and LM Studio (which I helped make happen and am very happy for the way the ThursdAI community is growing and connecting!) We also had a chat about QwenVL Plus and QwenVL Max, their API-only examples for the best open source vision enabled models, and had the awesome Piotr Skalski from Roboflow on stage to chat with Junyang about those models! To me a success of ThursdAI is when the authors of things we talk about are coming to the show, and this is Junyang's second appearance, which he joined at midnight at the start of the Chinese New Year, so greatly appreciated, and definitely give him a listen!
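If you want to poke at the new Qwen 1.5 checkpoints locally, a minimal transformers sketch looks roughly like the following. The hub id and prompt are my own illustrative assumptions (swap in whichever of the six sizes fits your hardware), not something specified in the episode.

```python
# Rough sketch: running a small Qwen 1.5 chat checkpoint with transformers.
# Assumes a recent transformers version; the model id below is an assumption,
# pick the size (0.5B ... 72B) that fits your machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-0.5B-Chat"  # assumed hub id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize this week's open source AI news."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```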
Abacus Smaug climbs to the top of the Hugging Face leaderboard. Junyang also mentioned that Smaug is now at the top of the leaderboards. Coming from Abacus, this is a finetune of the previous Qwen-72B, not even this new one. First model to achieve an average score of 80, this is an impressive appearance from Abacus; though they haven't released any new data, they said they are planning to! They also said that they are planning to finetune Miqu, which we covered last time, the leak from Mistral that was acknowledged by Arthur Mensch, the CEO of Mistral. The techniques that Abacus used to finetune Smaug will be released in an upcoming paper! Big CO LLMs + APIs Welcome Gemini Ultra (bye bye Bard) Bard is no longer, get ready to meet Gemini. It's really funny because we keep getting confusing naming from huge companies like Google and Microsoft. Just a week ago, Bard with Gemini Pro shot up the LMSYS charts, after the regular Gemini Pro API was not as close. And now we are supposed to forget that Bard even existed?
What A SHOW folks, I almost don't want to write anything in the newsletter to MAKE you listen, haha, but I will, I know many of you don't like listening to me babble. But if you choose one episode to listen to instead of just skimming the show-notes, make it this one. We've had 2 deep dives, one into the exciting world of multi-modality, where we chatted with the creator of Moondream1, Vik, and the co-founders of Prophetic, Wes and Eric, about their EEG/fMRI multimodal transformer (that's right!), and then we had a DEEP dive into the new Hourglass Diffusion Transformers with Tanishq from MedArc/Stability. More than 1300 tuned in to the live show
The Latent Space crew will be at NeurIPS on Tuesday! Reach out with any parties and papers of interest. We have also been incubating a smol daily AI Newsletter and Latent Space University is making progress.Good open models like Llama 2 and Mistral 7B (which has just released an 8x7B MoE model) have enabled their own sub-industry of finetuned variants for a myriad of reasons:* Ownership & Control - you take responsibility for serving the models* Privacy - not having to send data to a third party vendor* Customization - Improving some attribute (censorship, multiturn chat and chain of thought, roleplaying) or benchmark performance (without cheating)Related to improving benchmark performance is the ability to use smaller (7B, 13B) models, by matching the performance of larger models, which have both cost and inference latency benefits.Core to all this work is finetuning, and the emergent finetuning library of choice has been Wing Lian's Axolotl.AxolotlAxolotl is an LLM fine-tuner supporting SotA techniques and optimizations for a variety of common model architectures:It is used by many of the leading open source models:* Teknium: OpenHermes, Trismigestus, CollectiveCognition* OpenOrca: Mistral-OpenOrca, Mistral-SlimOrca* Nous Research: Puffin, Capybara, NousHermes* Pygmalion: Mythalion, Pygmalion* Eric Hartford: Dolphin, Samantha* DiscoResearch: DiscoLM 120B & 70B* OpenAccess AI Collective: Manticore, Minotaur, Jackalope, HippogriffAs finetuning is very formatting dependent, it also provides prompt interfaces and formatters between a range of popular model formats from Stanford's Alpaca and Steven Tey's ShareGPT (which led to Vicuna) to the more NSFW Pygmalion community.Nous Research MeetupWe last talked about Nous at the DevDay Recap at the e/acc “banger rave”. We met Wing at the Nous Research meetup at the a16z offices in San Francisco, where they officially announced their company and future plans:Including Nous Forge:Show NotesWe've already covered the nuances of Dataset Contamination and the problems with “Open Source” in AI, so we won't rehash those topics here but do read/listen to those if you missed it.* Axolotl GitHub and Discord* The Flan paper and dataset* StackLlama model and blogpost* Multipack paper* Our episode with Tri Dao* Mamba state space models - Tri Dao and Albert GuTimestamps* [00:00:00] Introducing Wing* [00:02:34] SF Open Source AI Meetup* [00:04:09] What is Axolotl?* [00:08:01] What is finetuning?* [00:08:52] Open Source Model Zoo* [00:10:53] Benchmarks and Contamination* [00:14:29] The Case for Open Source AI* [00:17:34] Orca and OpenOrca* [00:23:36] DiscoLM and Model Stacking* [00:25:07] Datasets and Evals over Models* [00:29:15] Distilling from GPT4* [00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ* [00:41:55] Axolotl vs HF Transformers* [00:48:00] 20x efficiency with StackLlama and Multipack* [00:54:47] Tri Dao and Mamba* [00:59:08] Roadmap for Axolotl* [01:01:20] The Open Source AI CommunityTranscript[00:00:00] Introducing Wing Lian[00:00:00] [00:00:00] swyx: Welcome to Latent Space, a special edition with Wing Lien, but also with our new guest host, Alex. Hello, hello. Welcome, welcome. Again, needs no introduction. I think it's like your sixth time on Latent Space already. I think so, yeah. And welcome, Wing. We just met, but you've been very prolific online. Thanks for having me.[00:00:30] Yeah. So you are in town. You're not local. You're in town. You're from Minneapolis?[00:00:35] Wing Lian: Annapolis. Annapolis. 
It's funny because a lot of people think it's Indianapolis. It's I've got Minneapolis, but I used to live out at least in the San Francisco Bay Area years ago from like 2008 to 2014. So it's fairly familiar here.[00:00:50] swyx: Yep. You're the maintainer of Axolotl now, which we'll get into. You're very, very prolific in the open source AI community, and you're also the founder of the Open Access AI Collective. Yeah. Cool. Awesome. Maybe we can go over a little bit of your backgrounds into tech and then coming into AI, and then we'll cover what[00:01:06] Wing Lian: happens and why you're here.[00:01:08] Yeah. So. Back on tech, so I started years ago, I started way back when I was scraping, Apartment websites for listings and then, and then building like SEO optimized pages and then just throwing Google AdSense on it.[00:01:24] And that got me through like college basically. Is[00:01:27] swyx: that decent money? And what year[00:01:28] Wing Lian: was this? Like 2004, 2005. Yeah, that's decent money. It's like thousand bucks a month. But as a college student, that's like. Gravy. Really good money, right? So, and then there's just too much competition It's just sort of like died off. I was writing stuff in like Perl back then using like like who nobody hosted anything on Perl anymore, right? Still did a little bit more like computer tech support and then software, and web more professionally.[00:01:54] So I spent some time working on applications in the blood industry. I came out to San Francisco for, I was at SGN, so Social Gaming Network, as a startup. They started doing, with Facebook apps, and then they pivoted into doing mobile apps. And then, from there, I spent time.[00:02:14] I've quite a few more startups since then and in the last few years I've been in the music space So like I was at United Masters for a while and then past year I've been at SoundCloud, but not doing that anymore and now that I have a lot more time It's just like all right.[00:02:30] We're going full bore on axolotl and we're gonna we're gonna crush AI So yeah,[00:02:34] SF Open Source AI Meetup[00:02:34] swyx: totally you so you're here in town for the open source. Yeah, I meet up that we had yesterday Yep, yeah, that was amazing. Yeah, it was a big collection. Olama, Noose Research, Alignment Lab, Anyone else that I missed? I mean, Jeremy Howard is his own thing.[00:02:47] Yeah.[00:02:49] And Alex, you're also there. You love to bring SF to the world. Your takes?[00:02:55] Alex Volkov: It's incredible that we recorded a Thursday Eye episode after that one. And LDJ, who's usually co hosts Thursday Eye, just like briefly mentioned, Oh yeah, I talked about it.[00:03:04] Like, I saw Karpathy, and then I talked to Jeremy Howard, and the guy from Mistral came in, and it's like, He's talking about all these, titans of industry, basically, that outside of SF, You just don't meet casually hanging out in the same space. You can't, pull somebody. He ran into the Laylow from Mistral, he ran into him while, drinking water.[00:03:20] He didn't even know he was there. It's just, that type of stuff is really hard to find outside of SF. So, absolutely, absolutely great. And also, presentations from Alignment Labs, presentations from News Research, news issues, talked about. Forge, and some of[00:03:33] swyx: the other stuff they announced. We can say now they're officially a company.[00:03:36] I met Technium.[00:03:37] He[00:03:37] Alex Volkov: came over here. He didn't want to get recorded. 
But maybe.[00:03:41] Wing Lian: We'll wear him down at some point. Yeah, I'm excited for Forge. They've positioned it as this agentic sort of framework where it's just drag and drop things and fill in text with where you want to inject different variables, and it opens up all of these potentials for data pipelines now, right?[00:03:56] And using your own local LLMs and not relying on GPT 4 or anything like that. Yeah, yeah,[00:04:02] swyx: good stuff. Okay, so let's maybe go into the Axolotl origin story and then we have, we have some intro or background.[00:04:09] What is Axolotl?[00:04:09] swyx: To do on like the open source model universe and also on fine tuning, but maybe just, since you're talking about your personal journey, what was your personal journey into[00:04:18] Wing Lian: Axolotl?[00:04:19] Yeah, so my personal journey started like back in mid March, completely unrelated to AI and Axolotl. And it really started, I fell while skiing, I torqued my knee. Grade 3 MCL sprain, and being sort of like an active person that can no longer be active, because I couldn't play soccer, because that requires having knees, until it's healed.[00:04:42] So I decided I needed to find something to do to take up my free time. And that became, well, let's learn how to train these language models. It was everywhere. So I was like, all right, I'm just going to sit down and learn. I think I was using like Alpaca-LoRA.[00:05:00] Cause I think the Alpaca paper had just come out then. So I was using the Alpaca-LoRA repo and sort of like learning how to use it. None of us were like GPU rich back then, and none of us, most of us are still all GPU poor, but I was doing, what was it, like 4-bit Alpaca-LoRA, there was like a 4-bit version where we were doing quant, or 8, no, 8-bit quantizations, and then I think they had released QLoRA a little bit later, and I think right when, before QLoRA came out, I was already starting to do fine tunes, but having this need to sort of like mix data sets together, and if you've ever looked at all the various different datasets available on HuggingFace, they all have various different prompt formats, and, it's sort of a nightmare, and then I think the other piece is if you've ever tried to fine tune, at least back then, probably the ecosystem's a little better now.[00:05:54] Everybody required that you say, alright, you put your hyperparameters as command line arguments. And so it's always like, well, I now have to go copy and paste my previous thing and change things out. And I really wanted it to be in a YAML file because it was more portable and reproducible.[00:06:09] So I was doing that and then the QLoRA paper came out. Tim Dettmers announced that, and then somebody looked it up for me yesterday and it's like, from that announcement it took us seven days to get that integrated into Axolotl, right? Which is like, it's not, I wouldn't say it's really fast, but in a manner that is in a, a reusable framework, I think it was quite the accomplishment then.[00:06:33] And so we started picking up traction with people there. And then it's just been building models, and then just iterating on what my needs are. So, yeah. Excellent. Yeah. I[00:06:44] Alex Volkov: want to ask, for folks who are listening who never heard of Axolotl, how do you describe it? Can you, how do you summarize this for folks who maybe haven't fine tuned anything,
They know open source LLMs exist, they maybe know, like, LLaMA. What's Axolotl for somebody who doesn't know, who's never heard of dataset curation or creation before?[00:07:01] Wing Lian: We sort of have to take a step back and understand that, when you've got these language models, you have what I think most people refer to as like base models, also known as like foundational models, right?[00:07:15] Where some benefactor, whether it's Meta or Mistral or whoever, has gone and spent all this money to train these models on huge corpuses of text, right? And these, these corpuses, they're generally good across lots of different things, but they're really good at just saying, talking on and on and on, but they're not good at, following instructions or having chats or anything like that.[00:07:40] So, when you think about fine tuning, it's like saying, all right, we have this really sort of good generalized text completion thing, and I want to turn it into something that I can talk to or have follow instructions. So, I think fine tuning is probably best defined like that.[00:07:58] swyx: Okay, got it.[00:07:59] And we actually[00:08:01] What is finetuning?[00:08:01] swyx: Do want to make sure that we have like an overall introduction to fine tuning for people, because again, like, trying to make sure that we bring everyone along in this, in this journey. We already went into LoRAs and QLoRAs without explaining what[00:08:12] Wing Lian: they are. Oh yes, yes, sorry.[00:08:14] swyx: And so I will put things in my words and you can correct me, as, as, as my, I'll be the village idiot here.[00:08:21] So, so fine tuning is basically sort of grabbing an open source model off the shelf, and then basically doing further training on it with a custom dataset of your own. Primarily, people use it, think about it as fine tuning for JSON output, or fine tuning for a style of response. Let's say you wanted it to tell jokes, or be funny, or be short, or whatever.[00:08:43] Just the open source AI community has really fine tuned in all sorts of different manners. I think we'll go over those, those things now. Let's go over those things now, and then we'll talk about fine tuning methods.[00:08:52] Open Source Model Zoo[00:08:52] swyx: So there's a universe of people who fine tune stuff. Yesterday in your slides, you had, I'll just list some of these and then we'll maybe go through some of them, right?[00:08:59] So Teknium is personally leading OpenHermes, which is I think the sort of premier model out of the Nous community. There's OpenOrca, which you had a hand in. Nous, the Nous Research itself, also has Capybara and Puffin and all the others. There's Pygmalion, which I've never messed with.[00:09:14] Eric Hartford, I am aware of his Uncensored models and his Samantha models. DiscoResearch with DiscoLM. And then you personally have done Manticore, Minotaur, Jackalope, and Hippogriff. What should people know about all these names? Being part of AI Twitter is seeing all these things and going, dude, I'm being DDoS'ed by all these things and I don't know how different they are.[00:09:32] What should people know? Yeah, so[00:09:34] Wing Lian: I think on a lot of these models, generally, we like to think of those as sort of general models, so if you think about it, what is GPT 4, what is ChatGPT?
It's a good general model, and then one of the services I think that OpenAI offers is like these fine tunings, where you're a business and you have very specific business use cases and you might fine tune for that use case.[00:10:00] All of these models are really just general use case that you can then go and maybe fine tune another LoRA over it for your use cases, but they tend to be good. With good being relative, it's open source. Open source AI is still sort of in its infancy. So, good is, it's pretty reasonable.[00:10:18] It's probably still better than most, high schoolers at answering questions and being able to like figure things out and, and reasoning skills and math and those sorts of things, right?[00:10:27] swyx: And also as measured on the Hugging[00:10:29] Wing Lian: Face leaderboard. Yes, well, that's like a whole other discussion, right, there's a whole other, group of people who, and I, I mostly agree with them that, benchmarks can be, are pretty bogus these days. LMSys, I think they published something recently where, even if you think the dataset's not contaminated, you can go and, find contamination. And maybe we should step back and say what contamination is, right?[00:10:53] Benchmarks and Contamination[00:10:53] Wing Lian: So we have all of this data, when you go and do these benchmarks, there's a specific data set where there are these questions and usually it's multiple choice. And what can happen is, well, sometimes someone puts the question, maybe maliciously, maybe accidentally, into the training dataset, and now the, the, your model knows how to answer the test questions really well, but it doesn't, it hasn't generalized the ability to actually do that[00:11:20] Alex Volkov: right.[00:11:21] We've seen some folks competitively announce models that are like the best at that leaderboard, but then it's, it's quite obvious that, In open source? Yeah, and in that leaderboard, for Hugging Face specifically, I don't know if LMSys, if that had suffered, but we, there's been some models that seem to have been competitively trained and some leakage happened into their,[00:11:41] swyx: like, supposal.[00:11:43] I understand, once there's been a credible assertion, Hugging Face actually does take them down, right? Yeah, yeah,[00:11:48] Alex Volkov: which is really hard to know, right?[00:11:50] swyx: It's really hard to know, sometimes it's like a pure accident,[00:11:52] Alex Volkov: it's oh, oops. You're going through a mixer. I think a responsible acknowledgement that this kind of thing happened to you is also important.[00:11:58] I saw LDJ from Nous Research acknowledge that. Because many of these datasets are collections of other datasets. There's a bunch of people baking, basically. It's alchemy. Right. And so sometimes you don't know. Sometimes you pull an open source dataset and they announce, oh, you know what, actually, the MMLU benchmark, which we used to specifically identify models, did go into this data set, that then went into that data set.[00:12:22] So sometimes it's actually an accident and folks take it down. But I've seen some competitive folks who want to put their name out there because people are starting to notice which is the top[00:12:30] swyx: model. For those who want a fun take on this, so the Phi-1 dataset, the Phi-1 model from Microsoft, was accused of being contaminated.[00:12:37] And I saw this joke paper that was fantastic. It was called, training on the test set is all you need.
It's a super small model that just memorizes everything. It was fantastic. So yeah, contamination, I think we've actually covered it in a previous episode before. So we're good. But again, I want to give people a map into the open source AI model, the universe.[00:12:57] And Alex, you can also jump in here because you guys have spent a lot more time with them than I have. So, what should people know about Teknium? What should people know about Nous? And then we can go down the list. Yeah,[00:13:05] Wing Lian: I think so. I think if we start with, Teknium. When you talk to him, he's gonna say, I think, I think his response is that he wants to build GPT-4 on his laptop, right?[00:13:14] So, very, very good at building general models. I think with Nous, Nous Research, they're looking at more, sort of, more, more research focused things, like their YaRN models. I don't, I don't, they didn't actually train their, they have their own trainer for their YaRN models, but So they did not use Axolotl for that one?[00:13:30] They didn't use that, but like Is that, you don't have support for it? I think we do support YaRN, I think, I'd have to double check that answer. Yeah, I'm just kind of curious what you can and cannot support, and Yeah, I mean, YaRN is supportable, it's basically, I think it's just replacing, I think, the RoPE part of that, so Yeah, not, not a big deal.[00:13:48] Yeah, it's not a big deal, it's just I haven't gotten to it, not enough people have asked, I think a lot of people have asked for other things, so it's just, squeaky wheel, right? I think at the end of the day, people are like building these data sets and I think if you sort of map things chronologically, these make more sense because it's like, how do we incrementally improve all of these models?[00:14:07] So a lot of these models are just incremental improvements over the last thing, right? Whether it is sort of through methods of how do we, how did we curate the data set? How did we improve the quality of the data set? So, maybe LDJ talked about it, right, on I think for, for Capybara and Puffin, like how those, those were very specific dataset curation techniques that he works on.[00:14:29] The Case for Open Source AI[00:14:29] Alex Volkov: So there's, folks are doing this for dataset curation. Folks are doing this for skillset building as well. Definitely people understand that open source is like very important, especially after the, the march, the debacle, the OpenAI weekend that we all had. And people started noticing that even after developer day in OpenAI, the APIs went out.[00:14:48] And then after that, the whole leadership of the company swiftly changed and people, there were worries about, you know, how can people continue building AI products based on these like shaky grounds. That turned attention definitely to Teknium, at least in OpenHermes. I started seeing this more and more on Twitter, but also other models, and many companies. They're gonna start with OpenAI just to get there quick, and then they, they think about, okay, maybe I don't want to share my knowledge.[00:15:13] Maybe I don't want to sign up for Microsoft. Maybe they will change their terms and conditions, so what else is out there? They turned to other companies. Up until yesterday, Google was nowhere to be found. We've talked about Gemini a little bit before in a previous And you can tune in[00:15:26] swyx: to[00:15:26] Alex Volkov: ThursdAI.[00:15:26] Yeah, you can tune in to ThursdAI.
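Picking up the contamination thread from a moment ago: the simplest version of the check people run is to look for benchmark questions that show up (near-)verbatim in a training set via n-gram overlap. The sketch below is deliberately naive and is my illustration of the idea, not any leaderboard's actual decontamination pipeline; the toy rows are made up.

```python
# Naive contamination check: flag training rows that share a long n-gram
# with any benchmark question. Real decontamination is fuzzier than this.
import re

def ngrams(text, n):
    toks = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated_rows(train_rows, test_questions, n=13):
    test_grams = set().union(*(ngrams(q, n) for q in test_questions))
    return [row for row in train_rows if ngrams(row, n) & test_grams]

# toy, made-up rows just to show the mechanics (n lowered for the short strings)
train = ["Q: The capital of France is Paris. A: Paris."]
test = ["What is the capital of France?"]
print(contaminated_rows(train, test, n=4))
```

Because so many open datasets are remixes of other datasets, a row can pass this kind of check in one place and still sneak in via an upstream source, which is exactly the "alchemy" problem described above.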
We covered the Gemini release a little bit. And, but many are turning to the open source community and seeing that Meta released and continues to release and commit to open source AI. Mistral came out and the model is way smaller than LLaMA and performs significantly better.[00:15:43] People play with OpenHermes, which is currently Teknium's, Nous Research sourced, Axolotl trained OpenHermes, I assume, right? And then they play with this and they see that, okay, this is like GPT 3.5 quality. We had GPT 3.5's birthday just a week ago. A week ago, a year ago, we never interacted with these models of this caliber.[00:16:04] And now there's one open source, one that's on my laptop, completely offline, that I can continue improving for my use cases. So enterprises, companies are also noticing this. And the open source community folks are building the skill set, not only the data sets. They're building the actual kind of, here's how we're going to do this, with Axolotl, with these data sets.[00:16:21] The curation pieces. Now, interesting, there's like recipes of curation. The actual model training is kind of a competitive thing where people go and compete on these leaderboards that we talked about, the LMSys arena, and that recently added OpenHermes and recently added OpenChat and a bunch of other stuff that are super cool.[00:16:37] The Hugging Face open source leaderboard. And so there's a competitive aspect to this. There's the open source aspect to this, like Teknium says, I want GPT-4 on my laptop. There's the, let me build a skill set that potentially turns into a company, like we saw with Nous. Nous just started organizing a bunch of people on Discord, and suddenly they're announcing their company.[00:16:54] It's happening across all these modalities, and suddenly all these people who saw these green pastures and a fairly quick way to, hey, here's a cool online community I can start doing cool stuff with. You mentioned the same in the beginning, right? Like, after your accident, what's cool, let me try this out.[00:17:08] Suddenly I start noticing that there's a significant movement of interest from enterprising companies into these areas. And this skill set, these data sets, and this community is now very, very important, important enough to create an event which pulls in Andrej Karpathy from OpenAI to come and see what's new, Jeremy Howard, like the event that we just talked about, people are flying over and this is just a meetup.[00:17:28] So, definitely, the community is buzzing right now and I think Axolotl is a big piece as well.[00:17:34] Orca and OpenOrca[00:17:34] Wing Lian: Cool. Maybe we can talk about like Orca real quick, Orca, OpenOrca rather, I think there was a lot of buzz when, the first Orca paper came out. And just briefly, what is Orca? Yeah, Orca was basically having traces of like chain of thought reasoning, right?[00:17:48] So they go and they, they distill sort of GPT 4. They take, they take a sampling of data from the Flan dataset. Maybe we can like add some show notes on the Flan dataset. Yeah, but we've covered it. Okay, cool. Use GPT 4 to say, all right, explain this in a step by step reasoning, right?[00:18:06] And then you take that and you, they train the model and it showed, very good improvements across a lot of benchmarks. So OpenOrca was sort of the open reproduction of that, since Microsoft Research never released that particular data set.
And going back to sort of the Hugging Face leaderboard thing, those models did really well.[00:18:23] And then I think, so sort of the follow up to that was SlimOrca, right? I think Going into and building the OpenOrca dataset, we never really went in and, validated the actual answers that GPT 4 gave us, so what we did was one from OpenChat actually cross referenced the original Flan, the original Flan response, the human responses, the correct answers with the dataset, and then I went and took it and sent all of, both of them to GPT 4 and said, is this answer mostly correct, right?[00:18:54] Yeah. And then we were able to filter the dataset from, At least of the GPT 4 only answers from like 800, 000 to like 500, 000 answers or rows and then, and then retrain the model and it had the same performance as the original model to within I think, 0. 1 percent here about, and 30 percent less data.[00:19:13] So, yeah. Okay.[00:19:15] swyx: Interesting. So, I mean, there's, there's so much there that I want to highlight, but yeah. Orca is interesting. I do want people to know about it. Putting chain of thought into the data set like it's just makes a ton of sense one thing I think it would be helpful for people to scope thing these things out is how much data are we talking about when when you When people are fine tuning and then how much time or resources or money does it take to train to fine[00:19:36] Wing Lian: tune?[00:19:37] Yeah, so I think there's a little bit of overlap there with sort of like fine tuning techniques, but let's say Orca and I think even Hermes, they're both relatively large data sets like 10 billion tokens. Yeah. So large data sets being or the original Orca was, or the original open Orca was 800,000 rows.[00:19:55] I believe it was somewhere in the ballpark of like a gigabyte of data, of gigabyte, of text data. And I, I don't. I believe, Hermes was, is like a quarter million rows of data, I don't know the actual byte size on that particular one. So, going and training a, let's, let's say everybody's training 7 billion Mistral right now, right?[00:20:15] So, to tri I, I believe to fine tune 7 billion Mistral on, let's say, 8 A6000s, which have 48 gigabytes of VRAM, I believe, It takes about 40 hours, so 40, and then that's, depending on where you get your compute, 40 times 6, so it's like 500 to fine tune that model, so, and, and that's assuming you get it right the first time, right?[00:20:44] So, you know.[00:20:45] swyx: Is, is that something that X. Lotto handles, like, getting it right the first[00:20:48] Wing Lian: time? If you talk to anybody, it's like you've probably tried at least three or four runs or experiments to like find the right hyperparameters. And after a while you sort of have a feel for like which, where you need your hyperparameters to be.[00:21:04] Usually you might do like a partial training run, do some benchmark. So I guess for Al Farouk, whether you're going by his. This is Jeremy, he's, his actual name, or his twitter handle. He released the Dharma dataset, which is basically a subset of all the benchmarks. 
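The SlimOrca-style filtering Wing describes above, where GPT-4 is asked whether each distilled answer is "mostly correct" against the original Flan human answer and the failures are dropped, is easy to sketch. The snippet below is an illustration of the idea, not the actual OpenOrca tooling; the prompt wording and field names are assumptions.

```python
# Sketch of GPT-4-as-judge filtering, in the spirit of the SlimOrca step
# described above. Not the real OpenOrca pipeline; prompt wording is assumed.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def mostly_correct(question: str, reference: str, candidate: str) -> bool:
    judge_prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Is the candidate answer mostly correct? Reply YES or NO."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# keep only rows whose distilled answer passes the judge, e.g.:
# filtered = [r for r in rows if mostly_correct(r["question"], r["flan_answer"], r["gpt4_answer"])]
```

The interesting result quoted above is that after this pass the dataset shrank by roughly 30 percent while the retrained model matched the original to within about 0.1 percent, which is a nice argument for filtering over simply collecting more distilled data.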
And Axolotl actually supports, you know taking that subset and then just running many benchmarks across your model every time you're doing an evaluation so you can sort of like see sort of relative it's not going to be the actual benchmark score, but you can get ideas alright, is this benchmark improving, is this benchmark decreasing, based on, you know Wait,[00:21:39] swyx: why don't you run the full benchmark?[00:21:41] What, what, what The[00:21:42] Wing Lian: full benchmarks take Take a long time. Significant, yeah, significant amount of time. Yeah. And Okay, so that's like[00:21:48] swyx: mini MMLU. Yeah. Like,[00:21:49] Wing Lian: mini BigBench or whatever. Yep, exactly.[00:21:51] Alex Volkov: It's really cool. We, when I joined Web2Masters just recently, and one of the things that I try to do is hey I'm not, I'm a software engineer by trade, I don't have an MLE background, But I joined a company that does primarily MLE, and I wanted to learn from the community, Because a lot of the open source community, they use weights and biases, And the benchmark that you said that Pharrell did, remind me of the name, sorry.[00:22:13] Dharma? Dharma, yeah, yeah. So Luigi showed me how Dharma shows inside the dashboard. In Wi and Biases dashboard and so you can actually kinda see the trending run and then you can see per each kind of iteration or, or epoch or you can see the model improving trending so you can on top of everything else.[00:22:29] The wi and biases gives like hyper parameter tracking, which like you, you started with common line and that's really hard to like remember. Also the Dharma data set, like the quick, the mini orca mini, you mini many different things. It's pretty cool to like visualize them as well. And I, I heard that he's working on a new version of, of Dharma, so Dharma 2, et cetera.[00:22:47] So hopefully, hopefully we'll see that soon, but definitely it's hard, right? You start this training around, it said like 40, 50 hours. Sometimes, sometimes it's like your SSHing into this machine. You, you start a process, you send it with God and you just go about your day, collecting data sets, and then you have to return.[00:23:04] And the whole process of instrumentation of this is still a little bit like squeaky but definitely. Tuning performance, or like grabbing performance in the middle of this, like with Dharma and some other tools, is very helpful to know that you're not wasting precious resources going somewhere you shouldn't go.[00:23:21] Yeah.[00:23:22] swyx: Yeah. Very cool. Maybe I'll, I'll, before we go into like sort of more details on fine tuning stuff, I just wanted to round out the rest of the Excel autoverse. There's, there's still Eric Hartford stuff. I don't know if you want to talk about Pygmalion, Disco, anything that you know about[00:23:35] Wing Lian: those, those things.[00:23:36] DiscoLM and Model Stacking[00:23:36] Wing Lian: Yeah, I think like one of the, definitely one of the more interesting ones was like the Disco 120b, right? Yeah, I know nothing about it. Yeah. So, so. Alpen from Pygmalion AI, right, so they, so Pygmalion is a sort of a, it's, it's, they have their own community, a lot of it is based around, roleplay models, those sorts of things, and Alpen, like, put together, merged together Llama270B, so, and Alpen, like, put together, merged together Llama270B, so, I don't remember how he stacked them together, whether he merged the layers in between. 
There's a whole, there's a whole toolkit for that by Charles Goddard, where you can, like, take a single model and stack it together, or merge multiple models.[00:24:18] That's like a whole other talk and a whole other tool set, but he was able to create this 120 billion parameter model out of a Llama 2 70B. And then I believe the, yeah, Disco is a fine tune of, of the, the, the sort of the base 120B, which is, I believe, Goliath 120B. So, and, and what are the[00:24:37] swyx: headline results that people should know about[00:24:39] Wing Lian: Disco?[00:24:39] I think for the headline results, I, I've, I haven't played with it personally because it's, it's a very large model and there's a lot of GPU, right? But, like, from what I've heard anecdotally, it performs really well. The responses are very good. Even with, like, just, even the base model is a lot better than Llama 70B.[00:24:57] So, and we, I think generally everybody's like, we would all love to fine tune Llama 70B, but it's just, it's so much, it's so much memory, so much compute, right?[00:25:07] Datasets and Evals over Models[00:25:07] Wing Lian: I[00:25:07] Alex Volkov: want to touch on this point because the interesting thing that comes up out of being in this ecosphere and being friends with open source folks, tracking week to week state of the art performance on different models.[00:25:19] First of all, a lot of the stuff that folks did a couple of weeks ago, and then something like Mistral comes out, and a lot of the stuff back then doesn't technically make sense anymore. Like the artifacts of that work, the actual artifacts, they no longer make sense. They're like lower on the, on, on the Hugging Face leaderboard or lower on the LMSys leaderboard.[00:25:36] But some of the techniques that people use, definitely the datasets. The datasets keep traveling, right? So OpenHermes, for example, is the dataset Teknium cleaned up for only open sourceable data, that previously was just Hermes. And that, it was previously used to train Llama. And then once Mistral came out, it was used to train Mistral.[00:25:54] And then it became significantly better on the 7B base Mistral. So the data sets keep traveling, keep getting better a little bit here and there. And so the techniques improve as well. It looks like both things are simultaneously true. The artifacts of a month and a half ago, the, the actual models themselves, it's great that Hugging Face has them, because not every company can keep up with the next week's, oh, I, I'll install this model instead, this model instead.[00:26:19] But the, the techniques and the, the dataset keep improving as we go further, and I think that's really cool. However, the outcome of this is that for a long time, for many, many people, including us, that we do this every week, we literally talk with people who release these models every week, it's really hard to know.[00:26:36] So, there's a few aspects of this. One, I think, like you said, the bigger model, the 70B models, you actually have to have somebody like Perplexity, for example, giving you access to the 70B really fast. Or you have to, like, actually find some compute, and it's expensive, especially for the bigger models. For example, Falcon 180B came out, like the hugest open source model.[00:26:56] How do you evaluate this if you can't run it? Nobody liked it.
It's really, so first of all, nobody liked it, but secondly, only the people who were able to find enough compute to run inference on this, they only had like, I can't run this on my laptop, and so that's why it's much easier, something like OpenHermes 7B, it's much easier, because you can run this on your MacBook.[00:27:14] It's much easier to evaluate. It's much easier to figure out the vibes, right? Everybody talks about the vibes as an evaluation check. If you're plugged in enough, if you follow the right people, if they say pretty much the same things all independently, then you run into a problem of whether they're repeating, and their stochastic parrots are repeating the same thing, or they actually evaluated it themselves.[00:27:31] Yeah, you never know. But, you never know, but like, I think on a large enough scale on Twitter, you start getting the feel. And we all know that like, OpenHermes is one of the top performing models, benchmarks, but also vibes. And I just wanted to highlight this vibes checks thing because you can have the benchmarks, you can have the evaluations, they potentially have contamination in them, potentially they don't necessarily tell you the whole story because some models are good on benchmarks, but then you talk to them, they're not super helpful.[00:28:00] And I think it's a combination of the benchmarks, the leaderboards, the chatbot, because LMSys, remember, their ranking is not only based on benchmarks, it's also people playing with their arena stuff. People actually like humans, like, get two answers. I think they completely ignore benchmarks. Yeah, and then they only do Elo.[00:28:18] Oh, they do Elo completely, right? So that, for example, is just like people playing with both models and saying, hey, I prefer this one, I prefer that one. But also there's like some selection bias. The type of people who will go to LMSys to play with the models, they're a little bit specific in terms of like who they are.[00:28:33] It's very interesting. There's so many models. People are doing this in this way, that way. Some people are doing this for academic rigor only, to test out new ideas. Some people are actually doing this like the Intel fine tunes of Mistral. Intel wanted to come out and show that their hardware approach is possible, with Mistral, etc.[00:28:51] And it's really hard to know, like, what to pick, what to use. And especially on the bigger models, like you said, like the Llama 70B, the Falcon 180B. It's really because, like, who has the compute to validate those? So I would mention that, like, use with caution.
[00:29:15] Distilling from GPT4[00:29:15] swyx: Since you said use with caution, I'll bring in one issue that has always been in the back of my mind whenever I look at the entire universe of open source AI models, which is that 95 percent of the data is derived from GPT-4, correct?[00:29:30] Which technically you can't use for commercial purposes,[00:29:34] Wing Lian: right?[00:29:35] swyx: What is the community's stance on this kind of stuff?[00:29:40] Wing Lian: I think from the community's stance, a lot of us are just experimenting. For us, it's like, we're not going and building a product that we're trying to sell, right?[00:29:49] We're just building something because we think it's interesting and we want to use it in our day to day lives, whether or not we try and integrate it. Personal use, yeah. Yeah, personal use, so as long as we're not selling it, it's fine. But[00:30:01] swyx: like, I as a company cannot just take OpenHermes and start serving[00:30:05] Alex Volkov: it and make money on it.[00:30:06] OpenHermes you can, because OpenHermes, I think, is a clean-up that was done after the regular Hermes. Please folks, check your licenses before you listen to podcasts. I will tell you though, you could say the same thing about OpenAI. The same argument kind of makes sense: OpenAI or Stability AI trains a diffusion model on a bunch of pictures from the internet, and then the court kind of doesn't strike it down when Sarah Silverman, I think, or somebody else, comes and says, hey, this has my work in it, because of the way the model processes it, it eventually builds that knowledge into its weights, and it doesn't actually reproduce one to one what was in the dataset.[00:30:45] You could claim the same thing for open source. We, and by we I mean the open source community that I happily report on, use GPT-4 to rank, for example, which is the better answer. That's how you build one type of dataset, right? For DPO or something like this, you basically generate a dataset of a question and, say, four answers, and then you go to GPT-4 and say, hey, smartest model in the world right now, up to Gemini Ultra, which we should mention as well:[00:31:11] which one of those choices is better? But the choices themselves are not necessarily written by GPT-4. Some of them may be, so there are fully synthetic datasets, but there are also datasets that are just ranked with GPT-4 and actually generated with a sillier, less important model.[00:31:25] The lines are very blurry as to what type of thing is possible or not possible. And again, when you use a model that's up on Hugging Face, the license says you can use it. OpenAI is not going to come after you, the user. If anything, OpenAI will try to say, hey, let's prevent this type of thing from happening, but I honestly don't think they could even know. Not that that makes it okay, it's just that they also kind of do this with the Internet Archive, and I think that some of it is fair use.[00:31:55] You use models to help you augment tasks, which is what GPT-4 lets you do.[00:32:00] swyx: Yeah, the worst thing that OpenAI can do is just kick you off OpenAI, because it's only enforced in the terms of service.
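The "generate answers with a smaller model, rank them with GPT-4" workflow described here can be sketched roughly as below. The prompt, model names, and output parsing are assumptions for illustration, not any project's actual pipeline; the only real API used is OpenAI's standard chat-completions client, and it assumes the judge model follows the formatting instruction.

```python
# Rough sketch of building a DPO-style preference pair: candidates come from a
# smaller model, and a stronger model is only used as the judge. Prompts,
# model names, and parsing are assumptions.
from openai import OpenAI

client = OpenAI()

def judge_best(question, candidates):
    """Ask a strong model which candidate answer is best; returns an index."""
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Question: {question}\n\nAnswers:\n{numbered}\n\n"
                       "Reply with only the number of the best answer.",
        }],
    )
    # Assumes the judge replies with a bare integer, per the instruction above.
    return int(resp.choices[0].message.content.strip())

def preference_pair(question, candidates):
    best = judge_best(question, candidates)
    rejected = next(i for i in range(len(candidates)) if i != best)
    return {"prompt": question,
            "chosen": candidates[best],
            "rejected": candidates[rejected]}
```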
[00:32:05] Alex Volkov: Sure, but just to clarify who they would kick out: they could kick out Nous, for example, if Nous were abusing their service. A user of the open source, fully Apache 2.0 model won't get kicked out just because they use both.[00:32:22] I don't believe so. I don't think OpenAI has a claim for that.[00:32:25] swyx: Well, we're not lawyers, but I just want to mention it so people know it's an issue.[00:32:30] Wing Lian: And one of the things, I talked to someone recently who is also interested in this, but also to the point of: if I use a model trained on GPT-4 data, but I use that model to then generate new data,[00:32:46] is that data okay? So you start going down this whole rabbit hole. So yeah. All right.[00:32:53] swyx: Fantastic. Cool. Well, I think that roughly highlights most of the open source universe. You also have your own models. Do you want to shout out any one of them? Yeah.[00:33:01] Wing Lian: I think early on, Manticore got a lot of love.[00:33:04] I think it was mostly popular in the roleplay communities. It tended to be pretty truthful, and it tended to have relatively good answers, depending on who you ask, right? But for me, releasing models was a way to try and continue to build out the product, figure out what I needed to put into it, how to make it faster, and if you've got to go and debug your product, you may as well have it do something useful.[00:33:29] Awesome. So, yeah.[00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ[00:33:31] swyx: Okay, and then maybe we'll talk about fine tuning techniques. This is going to be a little bit more technical than just talking about model names and datasets. So we started off talking about LoRA and QLoRA. I just learned from your readme there's ReLoRA, which I've never heard about.[00:33:45] Could you maybe talk about parameter efficient fine tuning, that whole[00:33:50] Wing Lian: journey, what people should know? Yeah, so with parameter efficient fine tuning, I think the popular ones, let's start with LoRA. Usually what you do is you freeze all the layers of the base model, and then at the same time
you introduce another set of layers over it, and then you train those. It's done in a way that's mathematically tractable, particularly with LoRAs: when you train the model, you run your inputs through the base model, whose weights are frozen, but you also run them through the additional weights, and at the end you combine the two to get your outputs. When you're done training, you're left with this other set of weights that's completely independent,[00:35:03] and from that, someone smarter than I figured out that you can merge those weights back into the original model without changing the architecture of the model, right?[00:35:03] So that tends to be the go-to. And you're training much fewer parameters, so while you still need to hold all of the original weights, you have a smaller gradient, a smaller optimizer state, and you're just training fewer weights, so you can train those models on much smaller GPUs.[00:35:27] swyx: Yeah. And roughly, from what I've seen out there, it's about 1 percent of the number of parameters that you're training. Yeah, that sounds about right. Which is that much cheaper. So Axolotl supports full fine tune, LoRA, QLoRA,[00:35:40] Wing Lian: Q. Yes. So QLoRA is very similar to LoRA. If I remember the paper correctly: traditionally, most people who did LoRAs were putting the model weights in 8 bit and then doing parameter efficient fine tuning over the LoRA weights, and with QLoRA they quantized the weights down to 4 bit, and I believe they were also training on all of the linear layers in the model.[00:36:15] And then ReLoRA, that was an interesting paper, and it got implemented. Some people in the community tried it out, and it showed that it didn't really have the impact the paper indicated it would. From what I was told recently, they re-released something for ReLoRA a few weeks ago, and it's possibly better.[00:36:44] I personally haven't had the time. What was the[00:36:46] swyx: main difference,[00:36:47] Wing Lian: apart from quantization? I don't know. Okay. What was the main difference, sorry?[00:36:49] swyx: Apart from quantization, right? Like,[00:36:50] Wing Lian: QLoRA's thing was, we'll just drop off some bits. With ReLoRA, what they did was: you define some number of steps that you train your LoRA, or your QLoRA, for.[00:37:01] You train your LoRA for some number of steps, then you merge those weights into your base model, and then you start over. By starting over, the optimizer has to re-optimize and find the best direction to move in, then you do it all again and merge it in, and again, and theoretically, according to the paper, doing ReLoRA you can do parameter efficient fine tuning but still get the sort of performance gains you'd see from a full fine tune.
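The freeze-the-base, train-low-rank-adapters idea walked through above fits in a few lines of PyTorch. This is a toy illustration of the math (a frozen weight plus a scaled B·A update that can later be folded back in), not PEFT's or Axolotl's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Toy LoRA layer: frozen base weights plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # freeze W (and bias)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus low-rank path; only lora_a / lora_b receive gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Fold B @ A back into the frozen weight: same architecture, no adapter left.
        self.base.weight += (self.lora_b @ self.lora_a) * self.scale
        return self.base

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")    # ~0.4% here, the "1 percent" ballpark
```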
[00:37:38] swyx: Yeah, and[00:37:39] Wing Lian: GPTQ? And GPTQ, I think, is more similar to QLoRA in that it's mostly a quantization of the weights down to 4 bit, where GPTQ is a specific methodology, or implementation, of quantization. Got it.[00:37:57] Alex Volkov: Wing, for folks who use Axolotl, your users, some people who maybe want to try it out:[00:38:03] do they need to know the differences? Do they need to know the implementation details of QLoRA versus ReLoRA? Or is it okay for them to just know that Axolotl is the place that has already integrated them? And if that's all they need to know, how do they choose which method to use? Yeah,[00:38:22] Wing Lian: so I think most people aren't going to be using ReLoRA.[00:38:25] I think most people are going to be using either LoRA or QLoRA, and they should have an understanding of why they might want to use one over the other. Most people will say that with QLoRA, the quality of the final model is not quite as good as if you were to do a LoRA or a full fine tune, right?[00:38:44] Just because you've quantized these down, your accuracy is probably a little off, so by the time you've done the QLoRA, you're not moving the weights the way you would on a full fine tune with the full parameter weights.[00:38:56] Interesting.[00:38:57] swyx: Okay, cool. For people who are more interested, obviously, read the papers. I just wanted to give people a high level overview of what these things are, and you've done people a service by making it easy to try them out. I'm going to ask a question which I know to be wrong, but I'm curious because I get asked this all the time.[00:39:15] What is the difference between all these kinds of fine tunes[00:39:17] Wing Lian: and RLHF? Okay, between all of these sorts of fine tunes and RLHF. So all of these sorts of fine tunes are, ideally, taking knowledge that the base model already knows and presenting it in a way that has the model use what it already knows to answer in a particular way, whether you're extracting general knowledge or a particular task, right?[00:39:44] Instruct tuning, chat tuning, those sorts of things. And then with RLHF, well, what is it? Reinforcement Learning from Human Feedback. If we start with the human feedback part: you generally have a given prompt, and then maybe you have one response, maybe two, I think with Starling you have up to seven different possible responses, and you're ranking those responses on some sort of metric, whether the metric is how much I might like that answer, or, I think with Starling, how helpful the answer was, how accurate it was, how toxic it was, those sorts of things, on some sort of scale. And then you use that to go back and take a model and nudge it in the direction of that feedback, so it answers questions based on those preferences.
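Preference data like the rankings described above is exactly what preference-optimization objectives consume. Below is a minimal rendering of the published DPO loss (the "DPO" that comes up a moment later), with sequence log-probabilities passed in as plain tensors; real trainers such as TRL's DPOTrainer add the batching, masking, and reference-model bookkeeping.

```python
# Sketch of the DPO (Direct Preference Optimization) loss: given summed
# log-probs of the "chosen" and "rejected" answers under the policy being
# trained and under a frozen reference model, push the policy toward the
# chosen answer. This mirrors the published objective, not any specific trainer.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    chosen_rewards = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_rewards = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected (an implicit reward model).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())
```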
[00:40:42] swyx: Yeah, so you can apply, and is it commutative? Can you apply fine tuning after and onto an RLHF model? Or should the RLHF come in afterwards,[00:40:54] Wing Lian: after the fine tune? Um, yeah, I don't know that there's been enough research one way or another; I don't know.[00:41:02] That's a question that's been asked on Discord. I definitely would say I don't know the answer. Go and try it and report back to me and let me know, so I can answer for the next guy.[00:41:10] swyx: It's shocking how much is still unknown about all these things. Well, I mean, that's what research is for, right?[00:41:16] Wing Lian: So actually, I think I saw on the top of a leaderboard, there was a Mistral base model, and they didn't actually fine tune it. They just did an RLHF fine tune on it using, I don't recall which dataset, and it benchmarked really well.[00:41:37] But yeah, you'd have to go and look at it. So it is interesting, going back to that: traditionally, most people will fine tune the model and then do a DPO, PPO, some sort of reinforcement learning over that, but that particular model seemed like it skipped the supervised fine tuning step.[00:41:55] Axolotl vs HF Transformers[00:41:55] swyx: Cool. One thing I did also want to comment on is the overall competitive landscape, I don't know. Hugging Face Transformers, I think, has a PEFT module.[00:42:05] Wing Lian: Yeah, yeah, PEFT, the Parameter Efficient Fine Tuning library, yep. Is that a competitor to you? No, no, we actually use it. We're just a wrapper over the Hugging Face stuff.[00:42:15] So that is their own module where they have taken on the responsibility for these parameter efficient fine tuning methods, and it lives in that particular package, where Transformers is mostly responsible for the modeling code and the trainer, right?[00:42:35] And there's an integration between the two, and there's a variety of other fine tuning packages, I think like TRL, TRLX, that's the Stability AI one. Yeah, I think TRLX is the Stability one, yeah, CarperAI, and TRL is a Hugging Face trainer. Even that one is just another wrapper over the Transformers library and the PEFT library, right?[00:43:00] But what we do is, yes, we also use that, but we also have more validation, right? There are some of us who have done enough fine tunes to know, oh, this and this just don't go together, right? But most people don't know that. Example?[00:43:19] Like, people want to One and one doesn't go together. I don't have an example offhand, but if you turn this knob and this knob, right? You would think, all right, maybe this will work, but you don't know until you try. And then by the time you find out it doesn't work, maybe five minutes later, it's failed.[00:43:34] It's failed in the middle of training, or it's failed during the evaluation step, and you're like, ah. So we've added a lot more validation, so that when you've created your configuration, you run it through, and the validation code says, this is probably not right, or probably not what you want.[00:43:52] So are you like a, you[00:43:53] swyx: do some linting of your YAML file?[00:43:56] Wing Lian: I guess you could call it linting. It's sort of like Is there a set of rules out[00:44:00] swyx: there somewhere? Yeah, there's a set of rules in there.
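The validation described here amounts to linting a parsed config against known-bad knob combinations before a run can fail twenty minutes in. The rules and key names below are invented examples to show the shape of such a check; they are not a copy of Axolotl's actual validation code.

```python
def validate_config(cfg: dict) -> list[str]:
    """Return human-readable problems found in a fine-tuning config (illustrative only)."""
    problems = []
    # Hypothetical rule: a 4-bit adapter setting without 4-bit loading makes no sense.
    if cfg.get("adapter") == "qlora" and not cfg.get("load_in_4bit", False):
        problems.append("adapter: qlora normally expects load_in_4bit: true")
    # Hypothetical rule: packing without an attention kernel that supports it.
    if cfg.get("sample_packing") and not cfg.get("flash_attention"):
        problems.append("sample_packing without flash_attention will mis-handle "
                        "attention across packed examples")
    return problems

cfg = {"adapter": "qlora", "load_in_4bit": False, "sample_packing": True}
for problem in validate_config(cfg):
    print("config warning:", problem)
```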
That's amazing. You should write documentation like, this rule is here because this user, at this time, ran into this bug, and that's what we invested in.[00:44:10] It's like a good collection[00:44:11] Wing Lian: of knowledge. Yeah, it is. And I guess if you really wanted to figure it out, you could git blame everything. But yeah, I think that's always a useful thing, because people want to experiment, and people get frustrated when they're experimenting and it breaks and they don't know why, or they know why and they've just gone down the rabbit hole, right?[00:44:37] So I think that's one of the big features that I find important, because it prevents you from doing things you probably shouldn't, and sometimes we will still let you do those things, but we'll try and warn you that you've done that.[00:44:50] I[00:44:51] Alex Volkov: have a follow-up question on this, actually, because yesterday we hung out at this open source event, and I stood near you a couple of times when people told you, oh, Axolotl, I use Axolotl, it's super cool, and the first thing you asked, immediately, was: what can we improve?[00:45:04] And yes, from multiple folks. And I think we talked about this a little bit: it's a developer tool, a machine learning slash developer tool. Your purpose in this is to help people and keep them, as much as possible, on, hey, here's the best set of things you can use right now, compared to the bare libraries, or the bare trainer, for example.[00:45:28] And also, maybe we should talk about how fast you're implementing these things. You mentioned the first implementation took a week or so. Now there's a core maintainer group, right? Features are landing, like QLoRA, for example. NEFTune, I don't know if that's one example of something people said was going to be cool, and then eventually it's one of those things that didn't really shake out; people quickly tested it.[00:45:48] So there's a ton of Wait, NEFTune is cancelled? I don't know if it's fully cancelled, but based on vibes, I heard that it's not that great. But the whole point I'm trying to make with NEFTune is that existing in the community around Axolotl, or even following the GitHub issues or following the Discord, is a fairly good way to learn these kinds of gut feelings that you just mentioned, right?[00:46:14] Like where maybe this knob and that knob don't work together. Some of these are not written down. Some of these are tribal knowledge that passes from place to place, and Axolotl is a great collection of many of them. And so, do you get that back also from the community of folks who just use it? How do you know who uses this?[00:46:30] I think that's still an issue, like knowing whether they trained with Axolotl, or whether they should add this to things.
Talk about how you get feedback, and how else you should get feedback.[00:46:38] Wing Lian: Yeah, most of the feedback comes from the Discord. People come in and they can't get a training run going, they run into obscure errors, or errors that, as a product, we could maybe catch, but there are a lot of things that at some point we need to go and do, and they're just on the list somewhere.[00:46:58] That's why, when people come up, I ask: what were your pain points? Because as a developer tool, if you're not happy with it, or you come in and after the first 30 minutes you're still not happy, you leave the tool and you might move on to a better tool, maybe one with less frustration, even if it may not be as good, right?[00:47:17] So I'm trying to figure out, all right, how can I reduce all this frustration? Because I use it every day for the most part, right? So I am blind to that. I just know I go do this, this, and this, and it pretty much mostly works, right? So I don't have that learning curve that other people are seeing, and I don't understand their pain points.[00:47:40] Yeah,[00:47:40] Alex Volkov: you don't have the ability to onboard yourself as a new user, completely new to the whole paradigm, to get in the door and go, oh no, I don't even know how to ask about this problem or error.[00:47:53] swyx: Cool. The last few things I wanted to cover were the more advanced stuff that you covered yesterday.[00:48:00] 20x efficiency with StackLlama and Multipack[00:48:00] swyx: So I'll just caution this as, yeah, this is more advanced. But you mentioned StackLlama and Multipack. What are they,[00:48:06] Wing Lian: and what should people know? Yeah, so StackLlama, that paper came out, and StackLlama was, I think, two separate concepts that they announced, so the first one was They being Hugging Face.[00:48:20] Yeah, sorry, yes, they being Hugging Face. So the first one is this idea of packing sequences together. If we think about training data: to keep the math easy, and we'll use the term words, let's say your training data is 500 words long, and let's say your context length, how much data your model can accept, or that you want to feed into your model, is 4,000 tokens, right? So if you're training at 4K context and you're only using 500 of it, you're sitting there with the other[00:49:05] 3,500 words that you're not using, right? And typically that's filled with these PAD tokens. I think I made the analogy last night that it's like having a glass: you fill it up with a shot of liquor, and that's your training data, and then you fill the rest up with water, and those are your PAD tokens. It doesn't do much, right?[00:49:27] It's still the same thing, but you still have to go through all of that to get through all your training data.
And so what StackLlama showed was that you could just take your training data and append the next row of training data until you've filled that entire 4K context. So in this example, with 500 words against 4K, that's 8 rows of training data per pack.[00:49:48] But the problem with that is that a lot of these transformer models rely heavily on attention, right? So if you now have this sequence of words, the model has seen all of these other words before, and then it sees another set of words, and another set, but it's learning everything in the context of all the words it has seen before, and we haven't corrected the attention for that. And just real quickly, since I said that paper was two concepts: the other one was, I believe, a reinforcement learning piece, but that's outside the scope of this. So going from that, I implemented packing early on because I thought, oh wow, this is really great,[00:50:29] because it saves you a bunch of time, but the trade-off is a little bit of accuracy, ultimately, and it still did pretty well. I think when I did Manticore, it used that concept from StackLlama of just appending these sequences together. And then the next evolution of that is Multipack, right?[00:50:51] There was a separate paper on that; I believe it got referenced in the Orca paper, where you could properly mask those sequences out using, I think, a lower block triangular attention mask. I did try implementing that, manually recreating that mask, but then the OpenChat guy, who was helping with OpenOrca as well, had done an implementation of Multipack where he used FlashAttention.[00:51:35] FlashAttention was released by Tri Dao, and it was this huge performance gain. Everybody uses it now; even the Transformers library has taken all of these models and made them compatible with FlashAttention. And in FlashAttention there is one particular implementation that lets you say: I'm sending you all of these sequences packed together like you would in StackLlama, but let me also send you another set of information about where each set of sequences starts and ends.[00:52:06] So if they were 500 words long and you stacked them all together, you would just send it a row of information like 0, 500, 1000, 1500, and so on, out to 4000, and it would know, all right, I need to break this up here, and then run the forward pass with that. And it was much, much more performant.[00:52:29] And I think you end up seeing 10x, 20x improvements over I mean, I think FlashAttention alone was about a 2x improvement, and then adding Multipack on top, depending on how much data you have, you start to see up to a 20x improvement sometimes. 20x. 20x. Wow. Yeah.
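The bookkeeping behind this is easy to state in code: concatenate short examples into one context-length row and record cumulative offsets (the "0, 500, 1000, 1500 ... 4000" list above) so a variable-length attention kernel, such as FlashAttention's, can keep each packed example from attending to its neighbours. The sketch below only builds those inputs under that assumption; it does not call any real kernel, and a real packer would continue greedily across many rows.

```python
# Minimal sketch of the packing bookkeeping: concatenate several short training
# examples into one context-length row and record cumulative offsets. No real
# attention kernel is called here; this only prepares the inputs such a kernel
# would need to avoid cross-example attention.
def pack_examples(examples, context_len, pad_id=0):
    packed, cu_seqlens = [], [0]
    for ex in examples:
        if len(packed) + len(ex) > context_len:
            break                                  # a real packer would start a new row
        packed.extend(ex)
        cu_seqlens.append(len(packed))             # running offsets: 0, 500, 1000, ...
    packed.extend([pad_id] * (context_len - len(packed)))
    return packed, cu_seqlens

# e.g. 500-token examples packed into a 4,000-token context -> 8 examples per row
examples = [[1] * 500 for _ in range(10)]
row, offsets = pack_examples(examples, context_len=4000)
print(len(row), offsets)   # 4000, [0, 500, 1000, ..., 4000]
```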
[00:52:48] And I only know the 20x number because, before last night, I re-ran Alpaca. I looked up the Alpaca paper because I needed a frame of reference where somebody had done it, and I think they used eight A100s for three hours, and they said it cost them about 100 dollars. I don't know how much eight A100s cost right now,[00:53:14] but I ended up rerunning it. Usually a dollar an hour, right? Yeah, so eight. The cheapest is like a[00:53:18] Alex Volkov: dollar, a dollar an hour for one.[00:53:20] Wing Lian: Yeah, so that's still like 24, 25 dollars. Maybe if you're going on Azure it's 100; it used to be more expensive, like a year ago.[00:53:31] And then I re-ran it with all of the optimizations turned on, just to see what it would be, and usually Multipack is the biggest optimization, so Multipack with FlashAttention. I think I spun it up on 8 L40s, and it ran, and I didn't let it run all the way through, I just grabbed the estimated completion time, and it was about 30 minutes. So it would have cost like 4 or 5 dollars to reproduce the Alpaca paper, right?[00:54:00] Which is crazy. It's crazy. 20x,[00:54:02] Alex Volkov: yeah. I want to ask about, you said you turned on all the optimizations. Is that the YAML file with Axolotl, where you just go and check off, I want this, I want that?[00:54:10] Wing Lian: Yeah, yeah, so there's one particular YAML file in there, under examples, llama2, fft, optimize.[00:54:20] I think someone had created one where they just put in all of the optimizations and turned them on. And it actually does run, which is sort of surprising sometimes, because sometimes you optimize this and optimize that and they just don't work together, but, yeah.[00:54:36] Just turn the knobs on, and fine tuning should really just be that easy, right? I just want to flip the knob and move on with my life and not figure out how to implement it.[00:54:47] Tri Dao and Mamba[00:54:47] Alex Volkov: Specifically, the guy behind FlashAttention came up with something new. You want to talk about this a little bit? You want to briefly cover Mamba?[00:54:53] Yeah, let's talk about Mamba. Let's talk about Mamba. So, what is Mamba?[00:54:57] Wing Lian: Oh, gosh. I
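For the back-of-the-envelope above: at a placeholder rate of about a dollar per GPU-hour, the original-style run and the packed-plus-FlashAttention rerun work out roughly as follows. The hourly rates are placeholders, not quotes from any provider.

```python
# Back-of-the-envelope for the Alpaca reproduction numbers discussed above.
# Hourly prices are rough placeholders; plug in your own provider's rates.
def run_cost(num_gpus, hours, usd_per_gpu_hour):
    return num_gpus * hours * usd_per_gpu_hour

baseline = run_cost(num_gpus=8, hours=3.0, usd_per_gpu_hour=1.0)    # original-style run
optimized = run_cost(num_gpus=8, hours=0.5, usd_per_gpu_hour=1.0)   # packed + FlashAttention
print(f"baseline ~${baseline:.0f}, optimized ~${optimized:.0f}")    # ~$24 vs ~$4
```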
ThursdAI October 26th. Timestamps and full transcript for your convenience.## [00:00:00] Intro and brief updates## [00:02:00] Interview with Bo Weng, author of Jina Embeddings V2## [00:33:40] Hugging Face open sourcing a fast Text Embeddings## [00:36:52] Data Provenance Initiative at dataprovenance.org## [00:39:27] LocalLLama effort to compare 39 open source LLMs +## [00:53:13] Gradio Interview with Abubakar, Xenova, Yuichiro## [00:56:13] Gradio effects on the open source LLM ecosystem## [01:02:23] Gradio local URL via Gradio Proxy## [01:07:10] Local inference on device with Gradio - Lite## [01:14:02] Transformers.js integration with Gradio-lite## [01:28:00] Recap and bye bye. Hey everyone, welcome to ThursdAI, this is Alex Volkov, I'm very happy to bring you another weekly installment of
Hey there, welcome to this special edition of ThursdAI. This episode features an interview with Nous Research, a group of folks who fine-tune open source large language models to make them better. If you are interested in hearing how finetuning an open source model works, dataset preparation, context scaling and more, tune in! You will hear from Karan, Teknium, and LBJ from Nous Research, and Enrico, who worked alongside them. To clarify, Enrico goes in depth into the method called RoPE scaling, a clever hack that extends the context length of LLaMa models significantly, and his project LLongMa, an extended version of LLaMa with an 8,000 token context window. The first voice you will hear is Alex Volkov, the host of ThursdAI, who doesn't usually have a lisp, but for some reason, during the recording, Twitter Spaces decided to mute all the S sounds. Links and acknowledgments: * Nous Research - https://nousresearch.com/ (@nousresearch)* Redmond Puffin 13b - First LLaMa Finetune* LLongMa - LLaMa finetune with 8K context (by Enrico, emozilla and KaioKenDev)* Nous-Hermes-Llama2-13b-GPTQ - Hermes Finetune was released after the recording
My guest today is Kevin Kelly. Kevin co-founded Wired magazine and has published a number of seminal books and essays on technology over the past three decades. I have devoured everything Kevin has put out into the world and many of his ideas shape the way I live today. Our conversation explores media, family, money, his concept of the Technium, AI, and more but the central theme of this episode is that we should be as generous and unique as possible. You will hear us refer to his latest book, Excellent Advice for Living, throughout and I highly recommend reading it if you haven't already. Please enjoy this great conversation with Kevin Kelly. Founders Podcast Founders Episode 136 - Estee Lauder Founders Episode 288 - Ralph Lauren For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors, and provider of Canalyst. Tired of calculating fully-diluted shares outstanding? Access every publicly-reported datapoint and industry-specific KPI through their database of over 4,000 driveable global models handbuilt by a team of sector-focused analysts, 35+ industry comp sheets, and Excel add-ins that let you use their industry-leading data in your own spreadsheets. Tegus' models automatically update each quarter, including hard to calculate KPIs like stock-based compensation and organic growth rates, empowering investors to bypass the friction of sourcing, building and updating models. Make efficiency your competitive advantage and take back your time today. As a listener, you can trial Canalyst by Tegus for free by visitingtegus.co/patrick. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. 
Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes (00:03:05) - (First question) - Excellent Advice for Living - a journey towards authenticity (00:05:05) - Uncovering the essence of oneself is a lifelong journey of self-reflection (00:06:47) - What he would have done differently at 30 had he internalized this concept earlier (00:08:51) - The highest form of self-expression is being authentically unique and redefining success (00:11:05) - Conforming to others' definition of success and societal biases hinders progress (00:13:07) - Surrender and collaboration are both essential in becoming your authentic self (00:14:38) - Prototype your life to embrace imperfections and make ideas tangible (00:17:34) - Mastering cultural photography in Asia and developing a keen ability to spot trends (00:19:59) - Energy signatures reveal depth, breadth, discovery, and momentum in events (00:22:02) - The reward for good work is more work (00:23:42) - Money is a tool for doing things, but beware its imprisoning burden (00:28:35) - Imagination can be cultivated and improved, often by challenging expectations (00:31:38) - Imaginative individuals include lateral thinkers who challenge norms (00:34:41) - Rites of passage and rituals provide stability and identity for children (00:38:15) - Mealtime without screens, family traditions, and cultivating a family identity (00:41:44) - An overview of “The three gates” (00:43:02) - Humans are naturally kind (00:47:23) - The Technium: an evolving ecosystem of interdependent tech and their tendencies (00:52:01) - Thoughts on AI (00:55:55) - Overestimating the existential threat of AI (00:57:38) - Idiosyncratic expression of creators (00:59:48) - Lessons learned about media (01:01:34) - Be the only, not the best. (01:05:09) - The kindest thing anyone has ever done for him
IN THIS EPISODE, YOU'LL LEARN:What is the Technium and how did Kevin come to understand the term?Why are ecosystems NOT in equilibrium?How does technology have its on evolution?What are Kevin's thoughts on AI helping biology and longevity research?Does Biology seek efficiency and is that what we are trying to accomplish with technology?Thoughts about Kevin's new book, "Excellent Advice for Living".Kevin's experience with the 10,000 year clock.Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.BOOKS AND RESOURCESKevin's New Book, Excellent Advice for Living.Kevin's book, What Technology Wants.Kevin's Twitter.NEW TO THE SHOW?Check out our We Study Billionaires Starter Packs.Browse through all our episodes (complete with transcripts) here.Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.Enjoy exclusive perks from our favorite Apps and Services.Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets.P.S The Investor's Podcast Network is excited to launch a subreddit devoted to our fans in discussing financial markets, stock picks, questions for our hosts, and much more! Join our subreddit r/TheInvestorsPodcast today!SPONSORSHave peace of mind knowing River holds Bitcoin in multi-sig cold storage with 100% full reserves.Join over 5k investors in the data security revolution with Atakama.Make connections, gain knowledge, and uplift your governance CV by becoming a member of the AICD today.What does happen when money and big feelings mix? Tune in to find out on the new podcast, Open Money, presented by Servus Credit Union.If you're into marketing podcasts that walk through how successful entrepreneurs, politicians, and influencers have convinced and persuaded people, then you should give Nudge, hosted by Phil Agnew, a listen!Apply for the Employee Retention Credit easily, no matter how busy you are, with Innovation Refunds.Discover the leading community for engaged investors, and the best source of investment analysis and opinion with Seeking Alpha. Listeners get a special deal: Alpha Picks for $99 + 1 month of FREE Seeking Alpha Premium!Invest your retirement savings in what YOU know and are passionate about with a Self-Directed IRA with New Direction Trust Company.Unravel the fascinating story of Elon Musk's unexpected bid to buy Twitter, and all of the drama that has happened since then with Flipping The Bird: Elon vs. Twitter.Support our free podcast by supporting our sponsors.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Array Cast - March 3, 2023 Show NotesThanks to Bob Therriault and Adám Brudzewsky for gathering these links:[01] 00:01:26 Technium podcast https://www.youtube.com/watch?v=SUzMb5PdGBM[02] 00:03:42 Canadian Aboriginal Syllabics https://en.wikipedia.org/wiki/Canadian_Aboriginal_syllabics[03] 00:04:20 Newest APL Show episode: https://apl.show/2023/02/24/Primitives-Extravaganza.html[04] 00:04:40 Under: https://mlochbaum.github.io/BQN/doc/under.html Henry Rich Episode 6 https://www.arraycast.com/episodes/episode-06-henry-richs-deep-dive-into-j Henry Rich Episode 18 https://www.arraycast.com/episodes/episode18-henry-rich-presents-j903[05] 00:12:30 Amend J adverb https://code.jsoftware.com/wiki/Vocabulary/curlyrt#dyadic At Operator Dyalog https://apl.wiki/At[06] 00:16:20 History of J releases https://code.jsoftware.com/wiki/System/ReleaseNotes Winget https://learn.microsoft.com/en-us/windows/package-manager/winget/ J9.4.1 https://code.jsoftware.com/wiki/System/Installation#J9.4_release[07] 00:18:39 u t. https://code.jsoftware.com/wiki/Vocabulary/tdot[08] 00:22:00 Matrix Multiplication https://en.wikipedia.org/wiki/Matrix_multiplication Matrix Division https://en.wikipedia.org/wiki/Invertible_matrix LAPACK https://en.wikipedia.org/wiki/LAPACK[09] 00:29:40 Pyx https://code.jsoftware.com/wiki/Vocabulary/Glossary#Pyx[10] 00:33:00 CPU Cache https://en.wikipedia.org/wiki/CPU_cache[11] 00:36:22 Immutable objects https://en.wikipedia.org/wiki/Immutable_object[12] 00:43:15 Dyalog Primitive for running OS threads: https://docs.dyalog.com/latest/Parallel%20Language%20Features.pdf#page=11 Dyalog Primitive for running green threads: https://apl.wiki/Spawn[13] 00:46:20 Futex https://en.wikipedia.org/wiki/Futex Mutex https://en.wikipedia.org/wiki/Mutex[14] 00:53:18 GMP library https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library Fourier Transform Multiplication https://en.wikipedia.org/wiki/Convolution_theorem[15] 00:55:05 New Error Messages https://github.com/jsoftware/dev_eformat Hook in J https://code.jsoftware.com/wiki/Vocabulary/hook Fork In J https://code.jsoftware.com/wiki/Vocabulary/fork[16] 01:05:25 John Daintree Token Debugging https://dyalog.tv/Dyalog22/?v=b2at0Sa8v3E Bob's Visual Debugger https://www.youtube.com/watch?v=iSpJxv1BUhI[17] 01:07:00 J's new u/..: https://code.jsoftware.com/wiki/Vocabulary/slashdot#dyadic Dyalog's Key: https://apl.wiki/Key[18] 01:11:40 J9.4.1 Release Notes https://code.jsoftware.com/wiki/System/ReleaseNotes/J9.4 J9.4.1 link https://code.jsoftware.com/wiki/System/Installation/J9.4[19] 01:16:29 JQt style editing https://code.jsoftware.com/wiki/Guides/Qt_IDE/Configure/Styles[20] 01:17:08 BQNPad https://bqnpad.mechanize.systems/[21] 01:18:10 Romilly Cocking "Solution by successive iteration" https://www.arraycast.com/episodes/episode34-romilly-cocking[22] 01:21:38 J Playground https://jsoftware.github.io/j-playground/bin/html2/[23] 01:23:12 Contact AT ArrayCast DOT Com[24] 01:23:35 J Fold Conjunction https://code.jsoftware.com/wiki/Vocabulary/fcap
Canary Cry News Talk #497 - 06.15.2022 FROSTY TECHNIUM Ukraine Escalates, Obama 24, Monkeypox Rebrand LINKTREE: CanaryCry.Party SHOW NOTES: CanaryCryNewsTalk.com CLIP CHANNEL: CanaryCry.Tube SUPPLY DROP: CanaryCrySupplyDrop.com SUPPORT: CanaryCryRadio.com/Support MEET UPS: CanaryCryMeetUps.com Basil's other podcast: ravel Gonz' New Youtube: Facelikethesun Resurrection Gonz' Video Archive: Facelikethesun.Live App Made by Canary Cry Producer: Truther Dating App LEAD 5:29 V / 2:50 P UKRAINE/RUSSIA/SPACE POPE REPTILIAN Breaking: Biden sends another $1 billion to Ukraine (CNBC) Zel strikes back, hits military base in Russian territory (The Sun, DailyMail) Pope Francis suggests Ukraine conflict might have been ‘provoked' (DailyMail) → ‘Shilling For The Devil'–Media Explodes As Pope Admits NATO ‘Provoked' Russia (Sputnik) INTRO (M-W-F) 39:51 V / 37:12 P B&G Update, V4V/Exec./Asso./Support FLIPPY 46:33 V / 43:54 P Poland largest retail chain introduces hot-dog making robot arm (First News) [Party, Ravel, Clips, Text, Clue] 55:59 V / 53:20 P BILL GATES 59:19 V / 56:40 P Gates predicts macroeconomics “Tech winter” (AXIOS) PSYOP/POLYTICKS 1:05:28 V / 1:02:49 P → Obama requests 2,500 gallon commercial propane tank on MV property (MV Times, WND) → AOC with Dana Bash, doesn't support Biden 2024 (CNN) → Brittany Griner Russia extended Update: Local Coeur d'Alene “White Lives Matter” Group Affiliated With Patriot Front — Leaked Chat Reveals Celebration After LGBT Event (Idaho Tribune) [TREASURE/SPEAKPIPE/BYE YOUTUBE] 1:32:00 V / 1:29:21 P COVID 1:57:10 V / 1:54:31 P No, Sudden Adult Death Syndrome is not linked to vaccines (NBC12) → Researchers use AI to identify long Covid (MarketPlace) MONKEYPOX 2:05:15 V / 2:02:36 P Monkeypox in semen, but NOT STD (Fortune) (Reuters) → Monkeypox to get a new name, says WHO (BBC) [TALENT] 2:33:21 V / 2:30:42 P ANTARCTICA 2:46:00 V / 2:43:21 P Multi Million Dollar Base to remain the color Green, vote ends (Stuff NZ) AI 2:52:16 V / 2:49:37 P AI-driven robot boat Mayflower crosses Atlantic Ocean (BBC) [TIME/OUTRO] 2:56:08 V / 2:53:29 P EPISODE 497 WAS PRODUCED BY… Executive Producers Sir Redbeard Pirate King of the Demoochers** Felicia D** Producers Melinda NOT GATES, MORV, Sir James Knight and Servant of the Lion of Judah, Brother G, LX Protocol V2, Sir JC Knight of the Technosquath, James M, Gail M, Darris S, Veronica D, Sir Scott Knight of Truth, Sir Casey the Shield Knight AUDIO PRODUCTION (Jingles, Iso, Music): Jonathan F ART PRODUCTION (Drawing, Painting, Graphics): Dame Allie of the Skillet Nation, Sir Dove Knight of Rusbeltia CONTENT PRODUCTION (Microfiction etc.): Runksmash: Basil and Gonz sit in the Canarium Studio, behind them as they live stream a glowing green spider opens the door as Basil reaches the climax of his I-Told-You-So rant about the zombi robot girl scouts. Unnoticed is Rooster plugging in his beak. CLIP PRODUCER Emsworth, FaeLivrin, Epsilon Timestamps: Mondays: Jackie U Wednesdays: Jade Bouncerson Fridays: Christine C ADDITIONAL STORIES: Dementia in a patient with autoimmune disease and hypercoagulable state worsened by COVID-19 vaccination: A case report (ScienceDirect) Children are being infected with up to THREE viruses at a time because COVID measures have worn down their immune systems (DailyMail) Detective tried to uncover vaccine status of dead children's parents, sources say (CBC) New tick diseases emerging across US and world, fueled by climate change (Insider) → Conn. man Dies from Rare Tick-Borne Virus, 2nd U.S. 
Fatality This Year (People) …sources… → IBM-powered Mayflower autonomous ship sets sail across the Atlantic (CNET) → After IBM failed to sail an autonomous boat across the Atlantic, it's trying again (WaPo) → AI, Captain: A Robot-Driven Boat Will Recreate the Mayflower's Historic Voyage—Without a Single Crew Member (Robb Report) → IBM AI US Mayflower voyage finally lands…in Canada (The Register) …more Ukraine… Putin mad at Xi for lack of support (News AU) Russian Oligarchs' Jets Flying Over Europe Despite Sanctions: Report (Newsweek) Ukraine says Elon Musk's Starlink has been 'very effective' in countering Russia, and China is paying close attention (Insider) US Quietly Urges Russia Fertilizer Deals to Unlock Grain Trade (Insider)
Duration: 00:02:48 - The news as seen by France Bleu Maine - ► with Julien Bonnet, head of the company Technium and of the brand "ma belle verrière" in Thorignée-sur-Dué. YouTuber and influencer in the building sector.
The semantic web was an effort to mark up webpages with facts about their content. The dream was to enable computers, to understand the world by reading these webpages. We discuss why the effort failed, how it relates to symbolic AI/knowledge graphs, and how the vision could still be realized today.Links/Resources:Introducing the Knowledge Graphhttps://twobithistory.org/2018/05/27/semantic-web.htmlMetacrap: Putting the torch to seven straw-men of the meta-utopiaWhatever happened to the Semantic Web? https://news.ycombinator.com/item?id=16806657http://ai.stanford.edu/blog/introduction-to-knowledge-graphs/Aggregation Theory: https://stratechery.com/concept/aggregation-theory/Chapters:0:00 Intros2:12 What is the Semantic Web?3:59 The Early 2000's8:32 The Idealism of the Semantic Web11:30 How it worked18:19 Roots in Symbolic AI22:53 The OG Web 3.025:28 Why Did the Semantic Web Fail?35:46 A Misalignment with the Medium41:10 Crypto and Semantic Web Collide43:25 Deep Learning and the Semantic Web Collide47:09 Vestiges of the Semantic Web49:44 Path to Intelligent Agents55:03 Incentive structures1:02:49 Token Curated Registries and the Semantic Web===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/ SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7 APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
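For a concrete feel of the "facts marked up for machines" idea the episode describes, here is a tiny RDF-style triple sketch using the rdflib Python library; the URLs, namespace, and predicates are made-up examples, not anything from the episode.

```python
# Tiny illustration of the semantic web's core idea: pages annotated with
# machine-readable subject / predicate / object triples. The vocabulary and
# URLs below are invented for the example.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/vocab/")
g = Graph()
g.bind("ex", EX)

page = URIRef("http://example.org/articles/semantic-web")
g.add((page, EX.title, Literal("Whatever happened to the Semantic Web?")))
g.add((page, EX.topic, Literal("knowledge graphs")))
g.add((page, EX.publishedYear, Literal(2018)))

# A crawler-style consumer can now read facts instead of scraping prose.
print(g.serialize(format="turtle"))
```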
TLA+ is a language for formally modeling and verifying the correctness of programs, especially concurrent and distributed ones. We discuss how it could be used to improve the practice of software engineering and what it would take for formal methods to gain more popularity.Links/Resources:TLA+ Homepage https://lamport.azurewebsites.net/tla/tla.htmlhttps://learntla.com/introduction/https://lamport.azurewebsites.net/video/videos.htmlhttps://lamport.azurewebsites.net/tla/tutorial/contents.htmlhttps://lamport.azurewebsites.net/tla/industrial-use.htmlhttps://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltextChapters:0:00 Intros2:14 What is TLA+?5:10 Bugs in Concurrent and Distributed Programming12:58 What's it like to use?19:09 Global State Machine20:34 Testimonials of Industrial Use Cases24:56 Why is it not more popular?27:32 How do we currently design our programs?37:16 Everyone is a distributed systems programmer now41:29 Thick client app management49:47 Can haz use for Security?53:08 TLA+ is for the design, not implementation59:35 TLA+ all the things1:02:55 Formal methods in Crypto?1:08:19 Intrigued with Formal Methods===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/ SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7 APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
Transformers are a building block of Machine Learning systems that have seen great success recently at subsuming all other techniques.We discuss at a high level, its attention mechanism and its multimodal properties, and the types of applications this can be put to use now and in the future.Links/Resources:Introductions to transformershttps://www.machinecurve.com/index.php/2020/12/28/introduction-to-transformers-in-machine-learning/https://deeplobe.ai/machine-learning-for-transformers-explained-with-language-translation/https://www.youtube.com/watch?v=yGTUuEx3GkAAttention is All you Need https://arxiv.org/pdf/1706.03762.pdfAttention: https://distill.pub/2016/augmented-rnns/Transformers replacing CNNs https://becominghuman.ai/transformers-in-vision-e2e87b739febAI models consolidating https://twitter.com/karpathy/status/1468370605229547522?ref_src=twsrc^tfwGTP implementation https://github.com/karpathy/minGPT/blob/master/mingpt/model.pyGoogle introduces new arch to reduce cost of transformers https://analyticsindiamag.com/google-introduces-new-architecture-to-reduce-cost-of-transformers/LaMDA by Google https://gpt3demo.com/apps/lamda-googleChapters:0:00 Intros1:57 What are Transformers?4:59 How does it work at a high level?9:27 Self Attention Mechanism14:03 Input structure agnostic16:59 Stack it high, pump it with data25:28 More MultiModal Learning27:22 The Narrow Waist34:34 Transformers for Compilation40:24 Specialized Hardware43:52 Multimodal Applications47:19 Generating Media as a Self-sustaining Entity52:42 The Jobs this Destroys58:02 Two machines need to talk to each other1:04:42 A Young Lady's Primer1:11:19 Try them out!===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
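As a pointer for the attention mechanism discussed in the episode, here is single-head scaled dot-product self-attention in a few lines of NumPy. It follows the standard "Attention Is All You Need" formulation rather than anything specific to this episode; multi-head attention repeats this with several learned projections.

```python
# Scaled dot-product self-attention, the core operation behind transformers,
# for a single head. Multi-head attention runs several of these in parallel.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])        # how much each token attends to each other token
    return softmax(scores) @ v                     # weighted mix of value vectors

rng = np.random.default_rng(0)
tokens, d = 5, 16                                  # 5 tokens, 16-dim embeddings
x = rng.normal(size=(tokens, d))
out = self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                                   # (5, 16)
```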
CORBA is an acronym that stands for common object request, broker architecture. It provides interoperability among distributed objects regardless of whether they're remote or local, regardless of whether they're written in different languages or in different locations of the network. We discuss its influence on distributed systems, and where we are today with these large scale systemsLinks/Resources:Chapters:0:37 Intros2:20 What is CORBA?4:08 90's Throwback7:25 The Idea of Distributed Objects9:55 How does it work?13:16 The Promise of Interoperability15:35 Influence on Computing Today18:52 Why did CORBA fail?24:39 Lots of People Missed the Web30:09 Distributed Computing36:06 The REST model of Distributed40:35 RPC lives on today47:13 Hard to get this right52:08 Negotiating Interfaces from Scratch===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
Unison is a programming language for distributed systems by leveraging immutable and content addressable code. We discuss the many implications of a language whose code is content addressable, from fast rename, easy caching, to feature flagging.Links/Resources:- Unison Lang https://www.unisonweb.org/- Unison at StrangeLoop https://www.youtube.com/watch?v=gCWtkvDQ2ZI- Unison and its Scala-based Runtime https://www.youtube.com/watch?v=knqlWboqf_U- What color is your function? https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/- Python Environment XKCD https://xkcd.com/1987/ Chapters:0:48 Intros2:32 What is Unison?6:46 Advantages of Content Addressable Code15:39 Realizing the Scale before it's Too Late22:03 Controlled Refactoring Experience25:54 Abilities or Algebraic Effects31:08 Programming the Whole Network34:57 Facilitating the Culture around Coding50:37 A Library Ecosystem1:00:50 Reproducibility and Old Computers1:07:55 Hot Patch Your Server1:10:37 The Road to Popularity===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
Smalltalk was one of the first object oriented programming languages and a software authoring environment that was highly influential in the history of computing.We discuss its designer Alan Kay, his vision of personal computing, and the lasting influence of Smalltalk despite its limited commercial adoption.Links/Resources:Actor Model - https://www.brianstorti.com/the-actor-model/https://dspace.mit.edu/handle/1721.1/6272Smalltalk Demo - https://www.youtube.com/watch?v=uknEhXyZgsgSmalltalk Build your own Debugger - https://youtu.be/NqKyHEJe9_w?t=355Early History of Smalltalk - http://worrydream.com/EarlyHistoryOfSmalltalk/“A remarkable number of artists, scientists, philosophers are quite dull outside of their specialty (and one suspects within it as well). The first siren's song we need to be wary of is the one that promises a connection between an interesting pursuit and interesting thoughts. The music is not in the piano, and it is possible to graduate Juilliard without finding or feeling it.”Why was smalltalk not a commercial success? http://www.wirfs-brock.com/allen/posts/914https://gbracha.blogspot.com/2020/05/bits-of-history-words-of-advice.htmlLively and Caffine https://lively-kernel.org/Chapters:0:00 Intros1:57 The Retro Future2:34 Origin and Motivation of Smalltalk7:20 Object Orientation is about message passing11:51 Meta-circular inspiration14:31 Smalltalk is an entire environment18:59 Designed with the GUI in mind24:35 The Music isn't in the Piano27:21 Taking a bet on the future31:42 Surprising things about Smalltalk34:27 Learning to Program By Reading "Literature”39:56 Did Smalltalk Achieve It's Goals?44:43 The Failure of Enterprise Smalltalk52:30 Even Good Ideas Have to Survive the Marketplace55:23 Big Ideas Meet Reality===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
"Faster, higher" has nothing to do with the Winter Olympics, but rather with humanity's striving for more and better technology, or perhaps technology's striving to be created. After a couple of gloomy episodes to start the season, we want to make it clear that we are fundamentally optimists: technology optimists. Technology cannot be opted out of. We as humans grew together with it long ago and are part of what Kevin Kelly calls the Technium. We talk about why it is not a real option to stop creating technology, but also about how we can overcome most of the challenges we face with technology. Not that technology is only good, but on net, the sum of technology over the course of history has made our lives better, ever since we learned to use fire and threw the first stone to hit prey at a distance. It can continue to make our lives better and be part of the solution to our planetary challenges, if we commit to developing and using what does not occur naturally.
Dynamicland is a communal computer where the computer is the room. It's a new dynamic, medium to think about systems with other people. We discuss taking computation out of the realm of abstraction and putting it in the realm of the physical world, it, and how it breaks down barriers to entry.Chapters:0:00 Intros2:10 What is Dynamicland?11:17 The Communal Aspect of Computing18:30 Computing other than Symbol Manipulation31:29 The Concrete System38:43 Realtalk is Declarative43:39 Leverage the Real World for Compute46:21 Squint beyond the Art Project49:07 Having your thoughts out in the open53:55 The Libraries of the Future1:01:25 Interactive decision making 1:06:59 Throwaway Computing1:19:46 Dynamicland for Games1:23:08 A wish for the futureLinks/Resources:- Dynamicland website: https://dynamicland.org/- Research Agenda: http://worrydream.com/cdg/ResearchAgenda-v0.19-poster.pdf- Humane representation of thought: https://vimeo.com/115154289- Seeing Spaces: http://worrydream.com/SeeingSpaces/- Embodiment and Visualization: https://vimeo.com/114252897- Notes about the experience - https://dynamicland.org/research-notes/social-dynamics-of-programming-together/ - https://omar.website/posts/notes-from-dynamicland-geokit/ - http://vitor.io/on-dynamicland**](http://vitor.io/on-dynamicland) - https://tashian.com/articles/dynamicland/ - https://www.phenomenalworld.org/analysis/the-next-big-thing-is-a-room/ - https://futureofcoding.org/notes/bret-victor/dynamicland.html===== About “The Technium” =====The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world. Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7
Non-Fungible Tokens (NFTs) are unique digital collectibles that are represented on a blockchain in the form of a token. NFTs make it possible to own digital media assets in the same way that you can own a digital currency asset, like Bitcoin. We discuss various applications of NFTs outside of art, and their potential as a building block for future applications. (A toy ownership-ledger sketch follows these notes.)

Links/Resources:
Girl with House on Fire Meme. https://knowyourmeme.com/memes/disaster-girl
Ethereum Name Service https://ens.domains/
Party Bid to Fractionalize NFTs https://www.partybid.app/
Flash Loans https://coincodecap.com/flash-loan
Aavegotchi https://aavegotchi.com/
Uniswap https://uniswap.org/
3LAU and NFTs https://decrypt.co/59788/musician-3lau-sells-album-in-nft-form-for-11-million
Demand Curves https://corporatefinanceinstitute.com/resources/knowledge/economics/demand-curve/
David Bowie on the relationship of media and the internet https://www.youtube.com/watch?v=FiK7s_0tGsg&t=636s
Skeuomorph https://en.wikipedia.org/wiki/Skeuomorph
ERC-998 Composable standard https://github.com/ethereum/EIPs/blob/master/EIPS/eip-998.md
Neolastics https://neolastics.com/
Latent Space https://stats.stackexchange.com/questions/442352/what-is-a-latent-space
Jack Dorsey Tweet for $2.9 million https://v.cent.co/tweet/20
Schelling Point https://medium.com/@willemvandenbergh_85885/on-schelling-points-network-effects-and-lindy-inherent-properties-of-communication-c4eb69b55c60
Invoice Factoring https://en.wikipedia.org/wiki/Factoring_(finance)
Accruing Value in a Value Chain https://stratechery.com/2015/netflix-and-the-conservation-of-attractive-profits/
The Technium NFT! https://opensea.io/assets/matic/0x2953399124f0cbb46d2cbacd8a89cf0599974963/73071695664951725756103887812533794393262849524101052862455651661423314993153

Chapters:
0:14 Intros
2:00 What are Non-fungible Tokens?
7:21 Ethereum Name Service domains are NFTs
20:05 "Non-fungible" only notable in the digital world
22:54 NFTs are Digital Native Objects
26:21 Fractionalizing NFTs
31:46 NFTs as Collateral
36:02 NFT as Social Proof
42:15 NFT as meme-ified finance
44:36 Permissionless Composability of NFTs
51:15 NFTs for Community Building
55:41 The Curator Economy
1:00:29 Creator Feedback Loop with Fans
1:07:15 Permissionless Remixing
1:10:28 We're still making Skeuomorphs
1:13:43 Content Agnostic and Responsive
1:20:37 Owning the Latent Space
1:24:09 Financialization of Memes
1:30:50 NFTs as boring assets
1:42:27 NFTs Scaled Up and Down
1:49:50 Yet Another Building Block

===== About "The Technium" =====
The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world. Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!

===== Socials =====
WEBSITE: https://technium.transistor.fm/
SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7
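As a rough companion to the "what are NFTs" discussion, here is a toy, in-memory ownership ledger in Python. It only illustrates the core property that each token id has exactly one owner and can be transferred by that owner; it is not the ERC-721 standard, and the token ids and addresses are hypothetical.

```python
# A toy non-fungible token ledger: each token id maps to exactly one owner,
# which is what makes the tokens "non-fungible" (they are not interchangeable).
# This mirrors the shape of ERC-721 ownership but is not the real standard.

class ToyNFT:
    def __init__(self):
        self.owner_of = {}            # token_id -> owner address

    def mint(self, token_id: str, owner: str):
        if token_id in self.owner_of:
            raise ValueError("token already exists")
        self.owner_of[token_id] = owner

    def transfer(self, token_id: str, sender: str, recipient: str):
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[token_id] = recipient


ledger = ToyNFT()
ledger.mint("disaster-girl-meme", "0xALICE")
ledger.transfer("disaster-girl-meme", "0xALICE", "0xBOB")
print(ledger.owner_of["disaster-girl-meme"])   # 0xBOB
```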
IPFS is a distributed storage network. Content is accessible through peers located anywhere in the world that might relay information, store it, or both, and IPFS finds data by its content address rather than its location. We discuss the main principles behind IPFS, the current use cases, how it changes the basic unit economics of some businesses, and its interplanetary future. (A minimal content-addressing sketch follows these notes.)

Chapters:
00:00 Intros
02:25 What is IPFS?
13:58 Three Principles of IPFS
15:01 Content-addressable URIs
18:44 Content Linking in a DAG
21:27 Distributed Hash Table for Discovery
22:48 Pinning Content
30:58 Censorship-resistance
36:38 Used for NFTs
39:04 Used for Video and Music Streaming
42:56 Use for Package Manager
49:14 Use for Machine Learning
54:38 Interplanetary Linked Data
56:37 A Key Building Block
59:22 As a Public Good
01:06:36 Developers Tools on top of IPFS
01:10:07 Shifting Operational Burden
01:18:15 The Interplanetary Future

Links/Resources:
Content Addressing https://simpleaswater.com/ipfs-cids/
Linked Data https://ontola.io/what-is-linked-data/
Distributed Hash Tables https://www.cs.cmu.edu/~dga/15-744/S07/lectures/16-dht.pdf
Napster, Kazaa, Gnutella https://www.slideshare.net/uschmidt/peertopeer-systems/20-Comparison_Napster_Gnutella_KaZaAType_of
Juan Benet of IPFS https://research.protocol.ai/authors/juan-benet/
ProtoSchool https://proto.school/
Marc Andreessen's Blog Archive https://pmarchive.com/
Left Pad Debacle https://www.davidhaney.io/npm-left-pad-have-we-forgotten-how-to-program/
NPM as a private company https://www.youtube.com/watch?v=MO8hZlgK5zc&t=46s
Transfer Learning https://builtin.com/data-science/transfer-learning
Deno Programming language https://deno.land/
IPLD https://ipld.io/
Jack Dorsey Regrets Shutting down API https://www.revyuh.com/news/software/developers/twitters-founder-admits-that-shutting-down-the-api-was-worst-thing-we-did-it-affected-users-and-developers/
Datomic https://www.datomic.com/

===== About "The Technium" =====
The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world. Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!

===== Socials =====
WEBSITE: https://technium.transistor.fm/
SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7
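The content-addressing idea the episode opens with can be shown in a few lines. The sketch below uses a single in-memory "store" standing in for a network of peers; real IPFS adds multihash/CID encoding, chunking, DAG linking, and a DHT for discovery, none of which are modeled here.

```python
import hashlib

# Minimal sketch of content addressing, the core idea behind IPFS:
# data is looked up by the hash of its bytes, not by where it lives.

store = {}   # stands in for many peers holding blocks

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()   # the address IS the content hash
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # Retrieval is self-verifying: re-hash the bytes and compare to the address.
    assert hashlib.sha256(data).hexdigest() == cid
    return data

cid = put(b"hello interplanetary world")
print(cid)          # same bytes always yield the same address, from any peer
print(get(cid))
```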
End-user programming is a vision of computing that allows people who are not professional software developers to program their own computers. We discuss some of our favorite end-user programming environments, the qualities that make an end-user programming environment successful, and what the world would look like if everyone could program.

Chapters:
0:00: Intros
3:15: What is end-user programming?
6:28: Hypercard: An Early Pioneer
15:22: Quality #1: Embodiment
17:52: Quality #2: Liveness
19:51: Quality #3: In-Place Toolchain
23:10: Elitism: Programmers vs. Script Kiddies
26:37: Yahoo Pipes: A Web 2.0 Programming Environment
32:25: What makes an environment successful?
40:28: Twitter, Hashtags, and "Situated Software"
47:36: Is programming just inherently hard?
1:04:26: Modern EUP Environments
1:10:39: The World If Everyone Could Program
1:15:01: Programming in the Metaverse

Links/Resources:
[0] https://www.inkandswitch.com/end-user-programming/
[1] https://www.robinsloan.com/notes/home-cooked-app/
[2] https://www.johndcook.com/blog/2011/07/21/software-exoskeletons/
[3] https://www.gwern.net/docs/technology/2004-03-30-shirky-situatedsoftware.html

===== About "The Technium" =====
The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world. Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!

===== Socials =====
[WEBSITE](https://technium.transistor.fm/)
[SPOTIFY](https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7)
This week on The Technium, we talk about Zero Knowledge Proofs, a relatively new cryptographic primitive that lets people prove they know a secret without divulging it.
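For a concrete, if toy-sized, feel for "prove you know a secret without divulging it," here is a Schnorr-style interactive proof of knowledge of a discrete logarithm in Python. The parameters are deliberately tiny for readability; a real deployment would use a group of roughly 256-bit order or an elliptic curve, and often a hash-based (non-interactive) challenge.

```python
import secrets

# Toy Schnorr-style proof of knowledge of a discrete log.
# Tiny, illustrative parameters only -- not secure at this size.
p = 2039          # prime, p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup of Z_p*

def keygen():
    x = secrets.randbelow(q)           # the prover's secret
    y = pow(g, x, p)                   # the public value y = g^x mod p
    return x, y

def prove_commit():
    r = secrets.randbelow(q)
    t = pow(g, r, p)                   # commitment sent to the verifier
    return r, t

def prove_respond(x, r, c):
    return (r + c * x) % q             # response; reveals nothing about x on its own

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), i.e. g^(r + cx) == g^r * (g^x)^c.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
r, t = prove_commit()                  # prover -> verifier: t
c = secrets.randbelow(q)               # verifier -> prover: random challenge
s = prove_respond(x, r, c)             # prover -> verifier: s
print("verifier accepts:", verify(y, t, c, s))   # True, yet x was never sent
```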
This week on The Technium, we talk about functional programming invading mainstream programming languages.
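A small sketch of what "functional programming invading mainstream languages" tends to look like in practice: pure functions, higher-order functions, and a fold, written here in ordinary Python. The order data is made up for illustration.

```python
from functools import reduce

# Functional idioms inside a mainstream language: pure functions,
# higher-order functions (map/filter), and a fold (reduce),
# with no mutation of shared state.

orders = [("widget", 3, 4.0), ("gadget", 1, 9.5), ("widget", 2, 4.0)]

def line_total(order):
    # Pure function: the result depends only on the input.
    _, qty, unit_price = order
    return qty * unit_price

widget_lines = filter(lambda o: o[0] == "widget", orders)
total = reduce(lambda acc, x: acc + x, map(line_total, widget_lines), 0.0)
print(total)   # 20.0
```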
Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics
Kevin Kelly has a variety of interesting pursuits. He is the Senior Maverick for Wired, which he helped co-found 28 years ago. He is the author of several books on technological evolution like What Technology Wants. He is also an excellent photographer and is kickstarting a massive photo book, Vanishing Asia. https://www.kickstarter.com/projects/kk-org/vanishing-asia https://www.recomendo.com/ https://www.goodreads.com/book/show/7954936-what-technology-wants https://patreon.com/rhyslindmark https://www.roote.co/
Biel Casals is a Data Science student at Pompeu Fabra University in Barcelona. His interests range from everything technology-related to economics, biohacking, and philosophy. He is also a fitness freak and DJ in his spare time.

Timestamps/Chapters
0:00 - Introduction & the utility of podcasts
06:04 - Biel's story
12:25 - Identity, self-improvement and the pursuit of an ideal
19:05 - Systems thinking & marginal gains
26:02 - The interconnection between future technology & self-improvement
31:36 - Economic efficiency & the struggle to find balance
37:12 - How can we reach the 'Truth'? Thoughts from a data-science student
56:34 - How concepts and language affect our reality
1:05:37 - Bio-hacking, nootropics and the limitations of the human condition
1:19:06 - The future of the human experience & consciousness
1:29:18 - Thoughts on Utopia & the future
1:49:47 - Final thoughts

Books/People Mentioned
1. 'Utopia' by Thomas More
2. Lex Fridman (https://www.youtube.com/user/lexfridman) - an AI researcher working on autonomous vehicles, human-robot interaction, and machine learning at MIT and beyond. Teaching: deeplearning.mit.edu.
3. Richard Dawkins - "Memes"
4. 'Atomic Habits' by James Clear
5. Slavoj Žižek is a Slovenian-born political philosopher and cultural critic. He was described by British literary theorist Terry Eagleton as the "most formidably brilliant" recent theorist to have emerged from Continental Europe.
6. Slavoj Žižek - "What is nature?" (https://www.youtube.com/watch?v=DIGeDAZ6-q4)
7. Terence McKenna was an American ethnobotanist and mystic who advocated for the responsible use of naturally occurring psychedelic plants.
8. Kevin Kelly is the founding executive editor of Wired Magazine and a former editor/publisher of the Whole Earth Review. He has also been a writer, photographer, conservationist, and student of Asian and digital culture.
9. Kevin Kelly's "Technium" (https://palladiummag.com/2020/12/10/kevin-kelly-on-why-technology-has-a-will/)
10. Andy Clark and David Chalmers - "The Extended Mind" (https://www.nyu.edu/gsas/dept/philo/courses/concepts/clark.html)
11. Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.
12. Joscha Bach is a cognitive scientist focusing on cognitive architectures, models of mental representation, emotion, motivation, and sociality.

--- Send in a voice message: https://anchor.fm/utopia-is-now/message
Oliver James Damian is based in Sydney, Australia. A flâneur playing with adjacent possibles in the Technium. Find more of Oliver: Instagram, Twitter, YouTube: @oliverdamian Facebook: oliver.damian A Statement from Oliver: "This was written on 9 September 1999 in Adelaide South Australia at a time when I was still finding my way as a newly arrived immigrant, saying goodbye to former lovers and life long friends in my country of birth, and looking for new connections. The piece seeks to capture the feelings of walking a tightrope between the joys of aloneness and the sorrows of loneliness in an interregnum. Perhaps imagination fueled by desire offers the way through. Desire is my compass, the body my GPS, warm data my gyroscope. I acknowledge the Kaurna meyunna traditional custodians of the lands now referred to as Adelaide that inspired and where this piece was written. I pay respects to their elders, past, present and emerging." -------------------------------------------------------------------------------------------------------------- Created during a time of quarantine in the global Coronavirus pandemic, A Moment Of Your Time's mission is to provide a space for expression, collaboration, community and solidarity. In this time of isolation, we may have to be apart but let's create together. Follow Us: Instagram | Twitter Created by CurtCo Media Concept by Jenny Curtis Theme music by Chris Porter A CurtCo Media Production See omnystudio.com/listener for privacy information.
On his recent 68th birthday, acclaimed futurist, best-selling author, and global tech authority Kevin Kelly took a moment to reflect back on his nearly seven decades of life. The result: sixty-eight bits of unsolicited advice. In this episode, Nelda sits down to discuss these "bytes" of wisdom. Acclaimed futurist Kevin Kelly has always had his finger on the pulse of what he calls the Technium, or ecosystem of technologies. He authored the best-selling New Rules for the New Economy and the classic book on decentralized emergent systems, Out of Control. As co-founder of Wired, the preeminent source for how technology affects culture, economy, and politics, and former editor of Whole Earth Review, which covers unorthodox technical news, Kelly is an advocate for taking the long view, the really long view. From his celebrated books and publications to his online footprint, Kelly is investing time and energy into processes and research that will benefit humankind for tens of thousands of years. Kelly's efforts include The Long Now Foundation, a cultural institution whose mission is to promote long-term thinking and reframe the notion of time. Projects include a self-sustaining clock embedded in a west Texas mountain built to run for ten thousand years. As a symbol of time, the clock is an iconic reminder to invest in things for the distant future. Says Kelly, "The point is to explore whatever may be helpful for thinking, understanding, and acting responsibly for tens of thousands of years." Kelly's philosophy is that just knowing that things like the clock are possible can suggest even greater ideas. Ideas are Kelly's stock-in-trade. With his daily blog Cool Tools and weekly newsletter Recomendo, he shares thoughts and recommendations on anything that could be considered a "tool," defined broadly as anything useful. Recently, Kelly even offered up 68 bits of unsolicited advice to mark his 68th birthday. With another season passing, Kelly reflects, "the only real gift that we have on our short life here is our time and how we use that time is really far more important than what we say or what we intend or what we actually preach." There is no question, this future thinker is using his time wisely.
Senior Maverick at Wired Magazine
Interview starts at 5:05 and ends at 45:39

Links
Kevin Kelly's back articles at Wired Magazine
Kevin Kelly's Technium blog
Weekly newsletter Recomendo
Vanishing Asia book on Instagram
Weekly podcast about the four favorite tools of remarkable people
Daily Cool Tools website
TRY video channel
Kevin Kelly on Twitter
Kevin Kelly's YouTube Channel
"AR Will Spark the Next Big Tech Platform--Call It Mirrorworld" by Kevin Kelly at Wired - February 12, 2019
The Long Now Foundation
"Confessions of a Phone Bank Captain" by Len Edgerly at Medium - November 5, 2020
The Freewrite Traveler: The Ultimate Distraction-Free Writing Tool

On the Road with Alexa
If you'd like daily updates during Darlene's and my Tesla drive from Cambridge, Mass., to Sanibel Island in Florida, you can follow along with my Morning Journal flash briefing. From your Echo device, just say, "Alexa, enable Morning Journal." Then each morning say, "Alexa, what's up?" or "Alexa, what's my flash briefing?" I post a five-minute audio journal each day except Sunday, usually by 7:30 am Eastern Time. Accompanied by the puppy Sophie, we leave tomorrow morning (Saturday, November 14, 2020.)

Right-click here and then click "Save Link As..." to download the audio to your computer, phone, or MP3 player.
In this episode Justin discusses the true potential of artificial intelligence and what a future of self-aware machines will mean for humanity.

Credits:
https://plato.stanford.edu/index.html
https://www.edge.org/conversation/kevin_kelly-the-technium-and-the-7th-kingdom-of-life
https://www.forbes.com/sites/bernardmarr/2019/10/04/the-magic-of-smart-mirrors-artificial-intelligence-augmented-reality-and-the-internet-of-things/#350adf79615b
https://www.researchgate.net/publication/310610871_Using_Inspiration_from_Synaptic_Plasticity_Rules_to_Optimize_Traffic_Flow_in_Distributed_Engineered_Network
It's my birthday. I'm 68. I feel like pulling up a rocking chair and dispensing advice to the young 'uns. Here are 68 pithy bits of unsolicited advice... https://kk.org/thetechnium/68-bits-of-unsolicited-advice/
What is our relationship with AI technology as it grows? What will happen to us? We will continue to make these machines and as we make them, we will keep improving as humans. - Kevin Kelly

What can technology teach us about self-love, society, and the divine?

Get 15% off your CURED Nutrition order with the code WELLNESSFORCE ---> Get The Morning 21 System: A simple and powerful 21 minute system designed to give you more energy to let go of old weight and live life well.

JOIN THE FACEBOOK GROUP | *REVIEW THE PODCAST*

Wellness Force Radio Episode 312

Co-Founder and founding Executive Editor of WIRED magazine, Co-Host of the Cool Tools podcast, and Author of his latest book, The Inevitable, Kevin Kelly, shares what technology can teach us about ourselves, explores how technology expresses its conscious self through The Technium, and discusses the differences between healing and destructive technology. Discover why we shouldn't automatically dismiss technology but continue to seek ways to improve it.

Organifi
Get 20% off your purchase today with code: WELLNESSFORCE
Are you getting the quality nutrition your body needs? It's what provides us with energy every day. And when we don't give our body the nutrition it needs, we feel it. It's in the way we lack motivation. It's in the way we get stuck in ruts, dragging ourselves to work, to lunch, back to work, and back home... You don't need to hire a nutritionist, though. You don't need to live at the gym, either. In fact, all you really need is about 3 minutes a day… and Organifi will do the rest. Morning, Noon, And Night… Organifi Superfoods Have Your Nutritional Needs Covered.

ORGANIFI GREEN JUICE
Start the day with a refreshing glass of 11 perfect detoxing superfoods. Moringa, chlorella, ashwagandha, wheatgrass, coconut water, and a whole lot more to get energy levels up, cortisol levels down, and your day started off right.

ORGANIFI RED JUICE
Boost metabolism and energy with a delicious ruby-red fruit punch. All the best berries and super-fruits for a youthful glow, and real hardcore adaptogenic herbs, roots, and mushrooms for that extra kick of power. Great for those afternoon slumps!

ORGANIFI GOLD
What better way to end the day, than with a delicious mug of something sweet, nostalgic, and relaxing? Our Gold tea is made with turmeric, ginger, lemon balm, reishi, and more. Calming… soothing… for a great night's sleep. Wake up feeling refreshed and ready to go!

The Inevitable by Kevin Kelly
Click here to get your copy of The Inevitable by Kevin Kelly
A New York Times Bestseller from one of our leading technology thinkers and writers, a guide through the twelve technological imperatives that will shape the next thirty years and transform our lives. Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. In this fascinating, provocative new book, Kevin Kelly provides an optimistic road map for the future, showing how the coming changes in our lives—from virtual reality in the home to an on-demand economy to artificial intelligence embedded in everything we manufacture—can be understood as the result of a few long-term, accelerating forces. Kelly both describes these deep trends—interacting, cognifying, flowing, screening, accessing, sharing, filtering, remixing, tracking, and questioning—and demonstrates how they overlap and are codependent on one another.

The 12 Technological Forces
These larger forces will completely revolutionize the way we buy, work, learn, and communicate with each other.
By understanding and embracing them, says Kelly, it will be easier for us to remain on top of the coming wave of changes and to arrange our day-to-day relationships with technology in ways that bring forth maximum benefits. Kelly's bright, hopeful book will be indispensable to anyone who seeks guidance on where their business, industry, or life is heading—what to invent, where to work, in what to invest, how to better reach customers, and what to begin to put into place—as this new world emerges.

Listen To Episode 312 As Kevin Kelly Uncovers:

1:30 The Future Of Humanity And AI
WIRED magazine Cool Tools podcast The Inevitable by Kevin Kelly Cool Tools website Kevin Kelly The Technium M21 Organifi
Kevin's background before he launched WIRED and what inspires his work today. (6:30) How technology can be so liberating and is leading the way to improve our world. (10:40) Our role in this world and why we will become better humans as AI technology grows.

12:00 Ethics Lessons From Technology
How artificial intelligence is teaching us to have better values and make greater ethical decisions. Why our own ethics are very inconsistent and we excuse each other's mistakes compared to not giving a pass to technology. How to go deeper and understand why we act and treat ourselves and others the way we do. (15:00) 275 Paul Chek

16:00 What Nature Can Teach Us About AI
His book, The Inevitable, and the 12 larger, technological forces that will completely revolutionize the way we buy, work, learn, and communicate with each other. Why inventors like Thomas Edison didn't even know what use their inventions would be for us and their impact on society. The fact that it can take as long as a generation for us to fully decide what use a piece of technology is to the world. Why the only way to understand the impact of technology and whether it will be good or bad is through the embrace and use of it. How to properly decide if the cons of a piece of technology outweigh the pros. (21:00) Why he goes to nature and human biology to look for answers on how we can better construct our artificial systems. (24:00) The impact of traveling to give us a new perspective on the world we've created today.

28:15 The Impact Of The Technium On Our World
What The Technium is and how it allows technology to express its consciousness. How The Technium helps create order in our world as an extension of both life and accelerated evolution. (31:30) Why inventors have a moral obligation to keep inventing in order to let others such as Beethoven share their incredible gifts with the world. (33:00) How we are both the Creators and The Created in our relationship with technology.

35:00 Human Connection Through Technology
How to challenge our relationship with technology as a society. Why technology is such a blessing for us vs. the fear people might have about it. The fact that some people are better at connecting in person rather than through communication tools. Alan Watts. The Elder Concept in Native American Culture: our ability to gather, apply, and embody lessons and knowledge. Kevin's All Species Foundation non-profit organization to categorize all living things on Earth and why he launched it.

40:00 Technology As A Reflection Of The Divine
His personal beliefs on creation and how technology comes into place. Why technology is actually a reflection of the divine. The old technology we still use as a vital part of our society, from concrete and paving to plumbing.
His out-of-body experiences during a dentist visit and later with taking LSD as a sacrament on his 50th birthday. (44:20) What he learned from taking LSD, including that we are all connected and supported.

46:00 The Yin-Yang of Technology
The fact that most of our future problems with technology are going to come from the technology of today. Why we should aim to improve and revamp technology as soon as we find a problem instead of completely dismissing it and seeing it as something bad. His belief that we cannot make a world in the future until we visualize it and all its technological improvements first now in the present. The power of solving old problems with new technology in order to move forward. (48:00)

Power Quotes From The Show

Our Small Contribution To The Universe
"The difference between living systems and artificial systems are very small. When we're inventing new technology, even if they're consumables that will be thrown away, we are participating in a very long arc through the universe of these increasing choices and possibilities; the same thing that evolution and life are doing. So, we're part of something bigger when we make and invent things. Technology is a big thing; it has a spiritual dimension." - Kevin Kelly

When To Embrace Or Let Go Of Technology
"The only way to understand how technology is used for the greater good and find out what its bad parts are is through the use of it. That's why I preach this embrace of technology but we should also be quick to let it go or change it as we see fit. This stance of initial embracing technology and then reworking, adjusting, and revamping it is the pattern we're going to have forever. Instead of immediately rejecting technology, let's bring it on and see what it's actually good for in the world." - Kevin Kelly

We Are The Creators & The Created
"There is a fundamental contradiction and tension between the fact that we individual humans are both The Masters of what we create and The Created at the same time. So, as we make the tools, the tools make us. We are both the parent and the child of technology. Even in a thousand years, we will still be struggling with the fact that we are two-faced; we are both the master and slave to technology." - Kevin Kelly

Links From Today's Show
The Technium M21 Organifi 275 Paul Chek Wired Interviews Bill Gates 1996 The Origins of Cool Tools: Kevin Kelly in Conversation with Stewart Brand The Technium And The 7th Kingdom Of Life 12 Technological Forces That Will Shape The Future Cool Tools podcast WIRED magazine Cool Tools website The Inevitable by Kevin Kelly Kevin Kelly Facebook Twitter YouTube

About Kevin Kelly
Kevin Kelly is Senior Maverick at Wired magazine. He co-founded Wired in 1993, and served as its Executive Editor for its first seven years. His new book for Viking/Penguin is called The Inevitable, which is a New York Times and Wall Street Journal bestseller. He is also founding editor and co-publisher of the popular Cool Tools website, which has been reviewing tools daily since 2003. From 1984-1990 Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He co-founded the ongoing Hackers' Conference and was involved with the launch of the WELL, a pioneering online service started in 1985.
Other Works From Kevin Other books by Kelly include 1) best-selling New Rules for the New Economy, an early book outlining the digital economy, 2) Out of Control, the classic book on decentralized emergent systems, 3) The Silver Cord, a graphic novel about robots and angels, 4) Cool Tools, an oversize catalog of the best tools in the universe, and 5) What Technology Wants, a robust theory of technology. Join The #WellnessWarrior VIP Club **Click on the photo above to get exclusive discounts on new wellness tools, be first in line for new podcasts, get access to invite-only events, and so much more.** More Top Episodes 226 Paul Chek: The Revolution Is Coming (3 Part Series) 131 Drew Manning: Emotional Fitness 129 Gretchen Rubin: The Four Tendencies 183 Dr. Kyra Bobinet: Brain Science 196 Aubrey Marcus: Own The Day 103 Robb Wolf: Wired To Eat Best of The Best: The Top 10 Guests From over 200 Shows Get More Wellness In Your Life Join the #WellnessWarrior Community on Facebook Tweet us on Twitter: Send us a tweet Comment on the Facebook page
In episode 351 you'll see a test of a Whyte T 130 trail full-suspension bike. We also tried out the Technium glasses from Shimano and strapped on Amplifi's MKX Knee pads. A giveaway awaits as well.
All images and links for this episode can be found on CISO Series (https://cisoseries.com/who-are-the-perfect-targets-for-ransomware/) If you've got lots of critical data, a massive insurance policy, and poor security infrastructure, you might be a perfect candidate to be hit with ransomware. This week and this week only, it's an extortion-free episode of CISO/Security Vendor Relationship Podcast. This episode is hosted by me, David Spark (@dspark), producer of CISO Series and founder of Spark Media Solutions and Mike Johnson. Our guest this week is Sean Walls (@sean_walls2000), vp, cybersecurity, Eurofins. Thanks to this week's podcast sponsor Core Security Assigning and managing entitlements rapidly to get employees the access they need is critical, but it can come at the cost of accuracy and security. Core Security’s identity governance and administration (IGA) solutions provide the intelligent, visual context needed to efficiently manage identity related security risks across any enterprise. On this week's episode How CISOs are digesting the latest security news An article in the NYTimes points to a new trend in ransomware that is specifically attacking small governments with weak computer protections and strong insurance policies. Payments from $400-$600K. Lake City, Florida, population 12K paid $460K to extortionists. They got some of their information back but they have been set back years of what will require rescanning of paper documents. Mike, I know your standard philosophy is to not pay the ransom, but after a ransomware attack against the city of Atlanta, the mayor refused to pay $51,000 in extortion demands, and so far it's cost the city $7.2 million. Probably more. These payments by the small cities must be incentivizing more attacks. Does this information change the way you're willing to approach ransomware. What can a small city with zero cybersecurity staff do to create a program to reduce their risk to such a ransomware attack? Ask a CISO Bindu Sundaresan, AT&T Consulting Solutions, asks a very simple question, "How is each security initiative supporting the right business outcome?" Do you find yourself selling security into the business this way? If not, would you be more successful selling security to the business if you did do this? What's Worse?! We've got a split decision on what information we prefer after a breach. Listen up, it’s security awareness training time Jon Sanders, Elevate Security, said, "Security awareness involves A LOT of selling… there’s no cookie cutter approach in security awareness or sales!" Is the reason security training is so tough because so many security people are not born salespeople? I've interviewed many and there's a lot of "just listen to me attitude," which really doesn't work in sales. Cloud Security Tip, sponsored by OpenVPN We talk a lot about penetration testing here, given that it remains a staple of proactive IT security. But not everyone feels it’s all it’s cracked up to be. Or should that be, all it’s hacked up to be?” More than one cybersecurity organization points out there are a few flaws in the pen testing concept that make it worth a second look. Pen testing often consists of a small collection of attacks performed within a set time period against a small sample of situations. Some experts doubt the efficacy of testing against a limited field of known vulnerabilities, without knowing what other weaknesses exist in plain sight, or merely invisible to jaded eyes. More on CISO Series... What do you think of this pitch? 
We have a pitch from Technium in which our CISOs question what exactly they are selling.
Sabine Winters is a philosopher and the founder of Denkkaders. She makes podcasts and organizes events through Future Based on topics including the ethics of technology. We talk about the impact and morality of technology and innovation. What is technology ethics? Why is the philosophy of technology important? With philosopher Sabine Winters I discuss algorithms, privacy, and the importance of ethics. The show notes can be found at https://biohackingimpact.nl
Wired magazine's own "Senior Maverick" talks with Ken Wilber about some of the ideas behind Kevin's blog The Technium, which explores the various ways humanity defines and redefines itself through the interface of science, technology, culture, and consciousness. Kevin also shares some of his own thoughts about the role of spirituality in the 21st century, going into considerable depth around his own spiritual awakening several decades ago. The universe, we are told, is winding down. Nothing escapes the remorseless grasp of the 2nd Law of Thermodynamics—and with each passing moment, our world, our solar system, indeed our entire galaxy slowly approaches its inevitable heat-death. But this is not the full story, for while the universe is winding down, it is also winding up, bringing forth new forms from old, adding new layers of complexity where there was once only an empty vacuum. It is what Alfred Whitehead called the "creative advance into novelty," referring to a distinct "tilt" of the universe toward more complexity, more significance, and more wholeness. From atoms, to molecules, to single-cell and multi-cellular organisms, to the reptilian brain, mammalian brain, and the human neocortex—the universe abounds with inexhaustible creativity, pushing deeper and wider towards its own limitless potential. Entropy and evolution: these two "arrows of time" exert their pull upon everything that ever is, was, and will be—one pulling us up toward the eternal light, the other pulling us down toward the infinite black. But it is not just physical matter that is evolving! Alongside the increasing complexification of the material world, evolution brings forth novelty in at least three other dimensions, particularly evident within human evolution:
- We see the evolution of systems, such as the movement from foraging to horticulture, to agriculture, to industrial, to informational modes of techno-economic production.
- We see the evolution of cultural worldviews, such as the developmental model offered by Jean Gebser, in which cultures develop through archaic/instinctual, magic/animistic, mythic/traditional, rational/scientific, pluralistic/postmodern, and integral worldviews, each offering radically different ways of interpreting our world and our roles within it.
- And, perhaps most profoundly, we see the evolution of consciousness, with cognitive faculties developing from Piaget's pre-operational, to concrete operational, to formal-operational, to Wilber's suggested "vision-logic" stage—and with values developing from pre-modern, to modern, to post-modern (or pre-rational, rational, and trans-rational) stages, and beyond.
Taken together, we notice a rich mosaic of evolutionary emergence, in at least four important dimensions: subjective and objective development in both individuals and collectives. This gives rise to Wilber's famous "Four Quadrant" map, one of a handful of basic components that comprise the Integral model. The Integral approach helps to reveal some of the deepest patterns that run through all human knowledge, showing the relationships that exist between physical evolution, systemic evolution, cultural evolution, and conscious evolution.
You're listening to the 35th episode of the Humans 2.0 podcast, solo-series, FLOW! Today's episode is about an experience that every human shares...an upsetting and unfortunate day. At first when I came across days like this...I would give in and succumb to the seeming BS of my existence and call a time out for myself. After many failures, I've learned to use these events as a way to PUSH harder! A signal for my brain to triple down and pursue the greatest version of myself. Please let me know how you like these daily FLOW episodes.Please do NOT hesitate to reach out to me on Instagram, LinkedIn or via email mark@vudream.comHumans 2.0 Twitter - https://twitter.com/Humans2PodcastTwitter - https://twitter.com/markymetryMedium - https://medium.com/@markymetryFacebook - https://www.facebook.com/mark.metry.9Instagram - https://www.instagram.com/markmetry/LinkedIn - https://www.linkedin.com/in/mark-metry/Mark Metry - https://www.markmetry.com/
“Thank you Mr. Machine...or Ms. Machine?” —John Zerzan About: What does one make of our future—like the one 50 years from now? The answer, or rather a possible answer, has a lot to do with choice and technology. Whether we survive or go extinct is, however, another question. Show Notes: [01:20] A link to download the Park Mobile app [01:25] “Inamorata (Marimba Solo)” by Blue Dot Sessions [04:40] “Inside the Tower” by Visager [06:15] More on Clooneys Pub (SFGate) [06:40] Light reading on the James Comey hearings (CNN) [07:50] More on Peter Eckersley (Electronic Frontier Foundation) [08:20] Light reading on the latest allegations of Russian interference in the 2016 US Election (The New York Times) [09:40] “When in the West” by Blue Dot Sessions [09:45] Light reading on: The abacus (Ryerson University) The astrolabe (Smithsonian Magazine) The sundial (How Stuff Works) The analytical engine (ThoughtCo) And a worthwhile video on Charles Babbage And this Crash Course series on the history of computing is worth the watch [10:35] Related: How much Internet v.1 cost (Larry Press) [10:20] Newsreel from 1964 track meet between the US and USSR [10:35] Some worthwhile videos on how the Internet started: In animation form In SciShow series form with Part I, II and III And in a super-short condensed form [10:45] “Timesharing: A Solution to Computer Bottlenecks” (Computer History Museum) [11:25] Light reading on ARPANET (Live Science) Related: Original sketch of ARPANET (SRI International) Related: Room where the first message was sent (Gizmodo) Related: The growth of the Internet over the years (Vox) Related: An overview of the Internet (WebpageFX) Related: A list of Internet firsts Related: What happens every second on the Internet [11:55] “Gregorian Chant” by Kevin MacLeod [12:25] “Inside the Tower” by Visager [12:55] Some background on: TCP/IP Protocols and the individual behind it (WIRED) Advanced packet switching (Editor’s note: you should watch this) Email (Nethistory.com) Related: How the Internet was viewed in 1969 and in 1981 [13:10] Light reading on the World Wide Web And on Tim Berners-Lee Related: Why it’s called surfing the web [13:30] What it was like being online in the 80s (Gizmodo) [13:50] Light reading on AOL (Fast Company) ...and Compuserve (Ars Technica) Animation illustrating “the series of tubes” Sound of a 56K dial up modem (10Stripe.com) [14:00] Light reading on the Eternal September (Wikipedia) Related: A guide on the Internet before it took off (TIME) [14:10] The man behind the “You Got Mail” audio (Great Big Story) [14:15] Light reading on the early browser days of Erwise, Mosaic and Netscape (Ars Technica) [14:25] Commercialization begins (New York Times) Sort of related: Vice President Al Gore logs onto the Internet (CSPAN) Also sort of related: Al Gore’s relationship with the Internet (Gizmodo) [14:25] “Astrisx” by Blue Dot Sessions [14:35] A montage of Pets.com commercials—and a brilliant overview on the dotcom bubble (WIRED) [14:45] Is there a 2010s Tech Bubble? 
(in no particular order) Bloomberg Vanity Fair The Telegraph Business Insider Inc Forbes [15:15] More on the more modern history of the Internet and related services [15:30] More on Aral Balkan (@aral) And on his mission for an independent Internet (Paste) [16:30] See the Ethical Design Manifesto here [18:00] Some short videos on how online advertising works: The evolution of online display advertising (IABUK) The life of a programmatic ad (Media Crossing) Background on real-time bidding (Acuity Ads) ...and one more pivot to video—the history of ads (Mashable) [20:10] More on Kevin Kelly (@Kevin2Kelly) Bio Author of What Technology Wants The Inevitable Read his writing here (WIRED) Hear about his trip around the world (This American Life — First Act of the show) Read his “Network Nation” report here Light reading on the Whole Earth Catalog (Rolling Stone) And the WELL And his book Out of Control, which was a required reading for actors on The Matrix [22:50] “Cyclotrak” by Blue Dot Sessions [23:20] More on the Technium [25:45] “Drone Birch” by Blue Dot Sessions [27:30] See related study on early human brain vs. later human brain (Smithsonian Magazine) [27:40] Light reading on John Zerzan (The Guardian) [27:50] Light reading on the Unabomber (The Atlantic) [28:05] Kaczynski's Manifesto and its reviews [28:45] A 60 Minutes report on the Battle for Seattle (CBS News) [29:00] “Tralaga” by Blue Dot Sessions [29:40] Zerzan’s book—Elements of Refusal [29:45] More on Anarchy Radio [29:55] And a link to the Black and Green Review [32:50] Maker Faire 2017 (San Jose Mercury News) [33:40] See the tesla coil here [34:20] See Randy Gallegos’ paintings here [35:35] “Epilogue” by Visager [36:00] More on Jason Malcolm Stewart (@sabbathsoldier) [38:55] More on Ronald Moore (@rondmoore) See related Portlandia skit [40:00] Related: See Ray Kurzweil on the three technologies that will shape our future [40:30] More on Alex Schultink Related: Dr. Jennifer Doudna (New York Times) Related: Radiolab episode on CRISPR [43:40] More on Dr. Heather Berlin (@heather_berlin) An interview where Berlin discusses what happens to your brain on creativity (Studio 360) [47:40] Meet Pepper Inside Pepper (Nikkei Asian Review) [48:55] Light reading on Softbank (Wikipedia) [50:30] “How Realistic Is Westworld?” (Vulture) [50:55] Light reading on technological singularity (The Guardian) The concerns over AI (The Hive) Open letter on artificial intelligence [51:05] Stephen Hawking on AI (BBC) [52:30] Kevin Kelly’s full takedown of the singularity (WIRED) [53:05] Kelly’s $1,000 bet with Kirk Sale (Wired) [57:05] “A Simple Blur” by Blue Dot Sessions [58:00] Light reading on the state of mental illness in the States (Salon) Related: Study finds more Americans suffering from stress, anxiety and depress (CBS News) [01:02:50] Recommended Podcast to Listen to Until the Next Show: Terrible, Thanks For Asking More at thisissomenoise.com/ep-19
This episode is a dream come true. I have long been a fan of Kevin Kelly - THE Senior Maverick at Wired magazine and the sherpa of the technium. He is here to talk about his new book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
- How does technology shape what it means to be human?
- How does the nature of authority change as we move from texts to screens?
- What class would Kevin Kelly add to a minister's theological education?
- What can we learn from the Amish about adopting technology?
- Is Big Brother a necessary evil?
- "The price of absolute personalization is absolute transparency."
- What would Kevin Kelly do if he was made Czar of Technology?
- What would a democratic world-wide government look like?
Follow the podcast, drop a review, send feedback/questions or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices
Kevin Kelly is a blogger, author, editor, student of technology and future systems, and self-proclaimed "packager of ideas". He is the founding executive editor of Wired magazine and has written for the New York Times, Economist, Science, Time, the Wall Street Journal, and other national and international publications. Kevin's current focus is the future of technology and its cultural and psychological effects. He predicts that the world as we know it, including human thought and behavior, will be transformed over the next few decades just as dramatically as it has been due to the rise of internet culture. Kevin has outlined this transformation in his new book, The Inevitable, in which he lists twelve technological forces that he believes will shape our lives over the next thirty years. If you're working to build a successful and fulfilling future -- and who isn't?! -- I highly recommend that you read The Inevitable immediately. Kevin also writes several blogs on a variety of topics. His technology blog, The Technium, contains one of my personal favorite posts, entitled "1000 True Fans". This post describes a big part of Kevin's theory on success and networking in the digital age. He states that almost any content creator in any genre can make a solid living by using some basic strategies to build a base of 1000 "true fans". He defines a "true fan" as someone who will purchase anything and everything you produce. They will drive 200 miles to see you sing. They will buy the super deluxe re-issued hi-res box set of your stuff even though they have the low-res version. They have a Google Alert set for your name. They bookmark the eBay page where your out-of-print editions show up. They come to your openings. They have you sign their copies. They buy the t-shirt, and the mug, and the hat. They can’t wait till you issue your next work. They are true fans. The theory is that if each of these 1000 true fans each spends even $100 per year on content and merchandise, the creator will have an annual income of roughly $100,000. In today's podcast, Kevin acknowledges a few necessary tweaks to this plan but explains how, in general, it can work for anyone. More broadly speaking, on this episode, Kevin Kelly discusses his thoughts about the future of technology and how it will affect our lives. He goes into particular depth about the ideas of accessing and sharing, two of the twelve principles covered in The Inevitable. Kevin believes that accessing (renting or borrowing physical tools, substituting digital tools for physical ones, using free or open-source software, contracting for services, etc.) will be far more efficient and useful going forward than possessing (owning physical tools or software, hiring long-term employees, etc.). Similarly, he believes strongly in the importance of sharing, in terms of both collaborative content creation and freely-shared products. He points out that the world that we take for granted, from Wikipedia to Uber and far beyond, would not be possible without these two principles. One of Kevin's main talking points is the importance of experimentation in our lives. Because he believes that the most important technology of the next 30 years hasn't been invented or even imagined yet, he knows that it's not too late for anyone. Any one of us could be the one to have the next world-changing idea. However, that idea will only come from curiosity, exploration, experimentation, and an open mind. 
You can begin today to build that mindset -- or strengthen and inform it if you've already begun -- by listening to Kevin Kelly's episode of The Influencer Economy now! Written by contributor Edi Gonzalez.
Quotes from the episode:
"There's one thing that I'm pretty sure about, and that is that the most important product in 30 years, the thing that is kind of running people's lives, does not exist right now.... And the larger point is that that's great news because it means that you could be the creator, the innovator, who comes up with that technology, and that none of us are late. Great opportunity is all before us."
"I would encourage anyone... to do something silly. Do something that is sort of basic science and research... that has no obvious practical value, because we know that eventually this kind of work will become the most essential and practical thing possible. What I recommend people... to do is something that is really, really interesting to them. I'm a big fan of being unproductive in a long-term productive way."
"If you're too many steps ahead, that's just as terrible as being twenty steps behind. But what I want to emphasize is that you want to take that appropriate minimal step in the right direction. We are gonna do more sharing, more collaboration, more accessing, so all things equal, if you're heading in those directions, that's a viable direction."
"The only thing that's not increasing in this world of abundance... is our own attention, which is fixed. And not only is it fixed, but we also have to spend it every day. We can't bank it. We can't store it up. We have to expend our 24 hours every 24 hours and we have no more than that. And so we need technological tools to help us navigate and manage this exponential abundance of stuff."
Links: Buy Kevin Kelly's books on Amazon by clicking here. (http://www.amazon.com/Kevin-Kelly/e/B001HCY1LE/ref=sr_tc_2_0?qid=1466622579&sr=8-2-ent) Kevin Kelly's blogs, including The Technium, Cool Tools, and more: http://kk.org/
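For readers who want to play with the 1,000 True Fans arithmetic described in the entry above, here is a minimal sketch of the back-of-the-envelope model; the default fan count and spend level mirror Kelly's example, and any other numbers you plug in are purely illustrative.

```python
def creator_income(true_fans: int = 1000, spend_per_fan: float = 100.0) -> float:
    """Kelly's back-of-the-envelope model: annual gross income is simply fans x yearly spend."""
    return true_fans * spend_per_fan

# 1,000 fans spending $100/year is roughly $100,000 before costs and platform fees;
# halving either input halves the result, which is the point of the model's simplicity.
print(creator_income())             # 100000.0
print(creator_income(500, 100.0))   # 50000.0
```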
Due to recent security breaches, IT professionals are turning to a network design with no “trusted” or “secure” areas, built on the assumption that every element of a network could be a source of risk. We sit down with Technium’s Michael Joseph to discuss the technical details and benefits of a network with zero trusted devices or areas.
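As a rough, hypothetical illustration of the zero-trust idea discussed in that episode -- the names, checks, and policy table below are invented for this sketch and are not taken from Technium's actual deployment -- the core move is to evaluate every request on its own merits rather than trusting anything by virtue of its network location.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str          # who is asking
    device_trusted: bool  # device posture check passed (e.g., patched, disk-encrypted)
    mfa_verified: bool    # strong authentication completed for this session
    resource: str         # what they want to reach

# Hypothetical policy table: which users may reach which resources.
POLICY = {
    "payroll-db": {"alice", "bob"},
    "build-server": {"carol"},
}

def authorize(req: AccessRequest) -> bool:
    """Every request is checked; being 'inside' the network never grants access by itself."""
    if not (req.device_trusted and req.mfa_verified):
        return False
    return req.user_id in POLICY.get(req.resource, set())

# An on-premises request is denied exactly like a remote one if any check fails.
print(authorize(AccessRequest("alice", True, True, "payroll-db")))    # True
print(authorize(AccessRequest("alice", False, True, "payroll-db")))   # False
```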
The whole idea of content marketing is predicated on one core belief. We're all screwed if it's a bogus belief.
> “The model proposed in blogging seems to be that you show what you know, people sign up, and then you sell them what you know.”
This is Steve Fossey writing in the Fizzle forums recently.
> **“I almost NEVER pay for anything on the web when I have been attracted by free content. Why would my audience?”**
Steve's question is a good one, a question most of us ask at some point in the journey of building a blog or podcast with the hopes of eventually earning revenue from a product of some kind. So on the show today we dissect this issue. You may have an idea of what the results are, but how we get there will surprise you. Corbett's ideas and experience here are especially insightful. Subscribe if you haven't — and please, enjoy the show.
Show Notes:
- How to Decide Which Content to Sell and What to Give Away for Free - Copyblogger (http://www.copyblogger.com/sell-or-give/)
- The Technium: 1,000 True Fans (http://kk.org/thetechnium/1000-true-fans/)
- The Passionate Podcaster's High Quality Microphone Shootout (https://fizzle.co/sparkline/podcast-high-quality-microphone-shootout)
- How Stress Makes You Sick - The Atlantic (http://www.theatlantic.com/health/archive/2015/10/how-stress-makes-you-sick/412699/)
- Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future (http://www.amazon.com/Elon-Musk-SpaceX-Fantastic-Future/dp/0062301233)
- Elon Musk: The World's Raddest Man - Wait But Why (http://waitbutwhy.com/2015/05/elon-musk-the-worlds-raddest-man.html)
- PrepDish.com — Shop once, prep once, enjoy healthy, stress-free meals all week! (http://www.prepdish.com/)
Many Buddhist figures have things to say about wise engagement with technology, but many of these same figures are not technologists themselves, and so have a limited view, or understanding, of the full range of what technology is, and perhaps of what it means. This week, to explore these very questions, we speak again with leading technologist Kevin Kelly. He shares his unique view on how technology should be selectively minimized on the individual level, while simultaneously maximizing the pool of technologies in the world at large. We also explore the parallels between the philosophy of Buddhism—especially its emphases on interdependence and impermanence—and the cybernetic process philosophy that Kelly is familiar with. This techno-geek philosophy shares many overlapping views with Buddhism on the nature of reality, but is strikingly different in many ways. This is part 2 of a two-part series. Listen to part 1, The Technium. Episode Links: www.KK.org What Technology Wants ( http://amzn.to/9l5NqS ) “When the Mind Wanders, Happiness Also Strays” ( http://nyti.ms/eRisjo ) Cool Tools ( http://www.kk.org/cooltools/ ) What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry ( http://amzn.to/hNmiCo )
We’re joined this week by celebrated technologist and co-founder of Wired Magazine, Kevin Kelly. Kevin shares how he went from a back-to-the-lander hippie in his early youth to becoming one of the most important technological thinkers alive today. We then explore one of the central ideas of Kelly’s technological philosophy, what he calls the technium. He shares how the technium can be dated all the way back to the beginning of the universe, and explains how the technium—a type of super-organism of interdependent technologies—can actually increase degrees of freedom and choice in the universe. Closing out the conversation, we discuss whether the technium is a neutral force, or if it has some inherent goodness. This is part 1 of a two-part series. Listen to part 2, The Wise Use of Technology. Episode Links: www.KK.org What Technology Wants ( http://amzn.to/9l5NqS ) Out of Control: The New Biology of Machines, Social Systems, & the Economic World ( http://amzn.to/csaSS0 ) Wired Magazine ( http://www.wired.com ) Ted Nelson ( http://en.wikipedia.org/wiki/Ted_Nelson ) The Technium ( http://www.kk.org/thetechnium/ )
Kevin Kelly is co-founder, senior maverick and former executive editor at Wired magazine. He is the editor and publisher at the Cool Tools website as well as other popular websites like True Films, Quantified Self, and The Technium. He has written several books including “What Technology Wants”, “New Rules for The New Economy”, and “Out of Control”, a classic book on decentralized emergent systems which inspired the Matrix trilogy. Why you should listen – Kevin Kelly comes on Bulletproof Radio today to discuss the quantified-self movement, the power of brain plasticity, his take on transhumanism, and emerging decentralized systems. Enjoy the show!
For this 92nd episode of GenreTainment we are speaking with writer/producer/director Gabe Michael, the co-founder of Forge Apollo. We talk to him about Forge Apollo’s new science fiction thriller series Ambient, award-winning science series Technium, and football talk show Blitzed. Plus some upcoming projects, like Cubicle Zombies. Michael also gives great tips for web series creators. Oh, and we also talk about goats and judgmental smart cars… GenreTainment is where we talk about what is happening in the world of film, TV and web series. We give you interviews with writers, directors, producers and actors in both independent and not-so-independent creations. Links: Forge Apollo | Forge Apollo – YouTube | International Academy of Web Television (IAWTV) Award Winners | Television on the Wild Wild Web: How To Blaze Your Own Trail | @MrMarx
What comes after the Internet? What is bigger than the web? What will produce more wealth than all the startups to date? The answer is a planetary super-organism comprised of 4 billion mobile phones, 80 quintillion transistor chips, a million miles of fiber optic cables, and 6 billion human minds all wired together. The whole thing acts like a single organism, with its own behavior and character -- but at a scale we have little experience with. This is more than just a metaphor. Kelly takes the idea of a global super-organism seriously by describing what we know about it so far, how it is growing, where its boundaries are, and what it will mean for us as individuals and collectively. Both the smallest one-person enterprises today, and the largest mega-corporations on Earth, will have to learn how this Technium operates, and how to exploit it.
You probably have Google Analytics sitting on your website, but are you utilising even a small fraction of all the wonderful stuff it can do for you? Find out here with Certified Google Analytics Partner, Simon Pilkington from Technium. Episode #18 – Host: Tim Martin. Shownotes: Technium website, Simon …
Zoltan Istvan - author, sailor, inventor of volcano surfing - joins us to talk about the world's most provocative coming-of-age story: The Transhumanist Wager.
Micah interviews world-traveling science fiction writer Colin Wright. A kid hearing super-smart voices, a singularity cult leader, a futuristic hacktivism group — all this and more are explored in this episode.
KEVIN KELLY (https://www.edge.org/memberbio/kevin_kelly) is Senior Maverick at Wired magazine. He helped launch Wired in 1993 and served as its Executive Editor until January 1999. He is currently editor and publisher of the popular Cool Tools, True Films, and Street Use websites. His most recent books are Cool Tools and What Technology Wants. The Conversation: https://www.edge.org/conversation/kevin_kelly-the-technium
See how prescient we were about the future, discover whether the world ever came to an end, find out about the Singularity & fashion, why Google makes cars, and what the difference between birds and planes tells us about AI.
In the second part of our interview, Derek Webb talks about how to approach the oncoming future with values and practicality, as well as going into the problems with laws, the Amish, hippies, Kevin Kelly, and the iPhone 13.
In the first part of our interview, Derek Webb introduces us to an album about the dawn of the first AI, and poignantly shows us just how disconnected we're all becoming. Stay tuned for the rest of our conversation.
Ryan catches up with Jason Silva at the Seattle Interactive Conference to talk about the adjacent possible, genomes as artwork, embracing ecstasy, and dealing with existential despair.
The Spirituality and the Singularity series continues! James Carroll, a researcher at the Los Alamos National Laboratory, shares his experiences in the diverse worlds of A.I. and Mormon Transhumanism.
How does the Singularity impact Spirituality? The series kicks off with Mike Morrell addressing the intersection of Christianity and accelerating change, and gives us an expansive look at the history of humanity from the dawn of time to the far future.
The idea that technology could want something seems kind of outlandish, almost like science fiction. But journalist Kevin Kelly is proposing a kind of technological self-determination in his new book What Technology Wants. It’s not exactly robots with souls that Kelly is suggesting. It’s more of an evolutionary theory of technological development, the idea that one technology naturally evolves from another. That the mobile phone, for instance, was an inevitable evolution from the telephone, or that the internet we know today was an inevitable next step from the spread of the networked PC. Well, not exactly inevitable. Best listen to David Weinberger’s interview with Kevin Kelly to hear him explain it himself.
John, Ryan, and Micah discuss how to look at the future and the Singularity. Are you an optimist, a pessimist, or a catastrophist? What nightmare scenarios are possible? What is the best-case scenario, and how can we steer in that direction?
Micah and Ryan are driving around Sikeston, Missouri, the last place the singularity will arrive. It's a very special Christmas episode, with a returning reluctant guest.
Micah and Ryan are driving around, talking smack and singularity.
Micah, Ryan, and special guest and co-navigator Emily, discuss technology and the future...and how to do a podcast. No one was harmed in the incident at the end.
Connie and Michael discuss a variety of ways in which new technologies and new information systems are boosting and challenging their own work: both in staying informed and getting their ideas out into the world. Kevin Kelly calls this suprabiological phase of evolution "The Technium." New modes for the free exchange and collaboration of ideas and meaning include not only podcasts, such as these, but video uploads to YouTube, video "mashing" (by which third parties freely offer their time to make video snippets more accessible and entertaining), downloadable and thus ever-updated curricula, webinars and teleseminars, and synergistic mixes of all these modes.