The most recent YCombinator W23 batch graduated 59 companies building with Generative AI for everything from sales, support, engineering, data, and more.

Many of these B2B startups will be seeking to establish an AI foothold in the enterprise. As they look to recent success, they will find Glean, started in 2019 by a group of ex-Googlers to finally solve AI-enabled enterprise search. In 2022 Sequoia led their Series C at a $1b valuation, and Glean has just refreshed its website touting new logos across Databricks, Canva, Confluent, Duolingo, Samsara, and more in the Fortune 50, and announcing Enterprise-ready AI features including AI answers, Expert detection, and In-context recommendations.

We talked to Deedy Das, Founding Engineer at Glean and a former Tech Lead on Google Search, on why he thinks many of these startups are solutions looking for problems, and how Glean's holistic approach to enterprise problem solving has brought so much success. Deedy is also just a fascinating commentator on AI current events, being both extremely qualified and great at distilling insights, so we also went over his many viral tweets diving into Google's competitive threats, AI Startup investing, and his exposure of Indian University Exam Fraud!

Show Notes
* Deedy on LinkedIn and Twitter and Personal Site
* Glean
* Glean and Google Moma
* Golinks.io
* Deedy on Google vs ChatGPT
* Deedy on Google Ad Revenue
* Deedy on How much does it cost to train a state-of-the-art foundational LLM?
* Deedy on Google LaMDA cost
* Deedy's Indian Exam Fraud Story
* Lightning Round
* Favorite Products: (covered in segment)
* Favorite AI People: AI Pub
* Predictions: Models will get faster for the same quality
* Request for Products: Hybrid Email Autoresponder
* Parting Takeaway: Read the research!

Timestamps
* [00:00:21] Introducing Deedy
* [00:02:27] Introducing Glean
* [00:05:41] From Syntactic to Semantic Search
* [00:09:39] Why Employee Portals
* [00:12:01] The Requirements of Good Enterprise Search
* [00:15:26] Glean Chat?
* [00:15:53] Google vs ChatGPT
* [00:19:47] Search Issues: Freshness
* [00:20:49] Search Issues: Ad Revenue
* [00:23:17] Search Issues: Latency
* [00:24:42] Search Issues: Accuracy
* [00:26:24] Search Issues: Tool Use
* [00:28:52] Other AI Search takes: Perplexity and Neeva
* [00:30:05] Why Document QA will Struggle
* [00:33:18] Investing in AI Startups
* [00:35:21] Actually Interesting Ideas in AI
* [00:38:13] Harry Potter IRL
* [00:39:23] AI Infra Cost Math
* [00:43:04] Open Source LLMs
* [00:46:45] Other Modalities
* [00:48:09] Exam Fraud and Generated Text Detection
* [00:58:01] Lightning Round

Transcript

[00:00:00] Hey everyone. Welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my co-host swyx, writer and editor of [00:00:19] Latent Space. Yeah. Awesome.

[00:00:21] Introducing Deedy

[00:00:21] And today we have a special guest. It's Deedy Das from Glean. Uh, do you go by Deedy or Debarghya? I go by Deedy. Okay.

[00:00:30] Uh, it's, it's a little bit easier for the rest of us to, uh, to, to spell out. And so what we typically do is I'll introduce you based on your LinkedIn profile, and then you can fill in what's not on your LinkedIn. So, uh, you graduated your bachelor's and master's in CS from Cornell. Then you worked at Facebook and then Google on search, specifically search, uh, and also leading a sports team focusing on cricket.

[00:00:50] That's something that we, we can dive into.
Um, and then you moved over to Glean, which is now a search unicorn in building intelligent search for the workplace. What's not on your LinkedIn that people should know about you?

Firstly, [00:01:01] guys, it's a pleasure. Pleasure to be here. Thank you so much for having me.

[00:01:04] What's not on my LinkedIn is probably everything that's non-professional. I think the biggest ones are I'm a huge movie buff and I love reading, so I think I get through, usually I like to get through 10 books ish a year, but I hate people who count books, so I shouldn't say the number. And increasingly, I don't like reading non-fiction books.

[00:01:26] I actually do prefer reading fiction books purely for pleasure and entertainment. I think that's the biggest omission from my LinkedIn.

[00:01:34] What, what's, what's something that, uh, caught your eye for fiction stuff that you would recommend people?

[00:01:38] Oh, I recently, we started reading the Three Body Problem and I finished it and it's a three part series.

[00:01:45] And, uh, well, my controversial take is I did not really enjoy the second part, and so I just stopped. But the first book was phenomenal. Great concept. I didn't know you could write alien fiction with physics so well, and Chinese literature in particular has a very different cadence to it than Western literature.

[00:02:03] It's much less about the, um, let's describe people and what they're all about and their likes and dislikes. And it's like, here's a person, he's a professor of physics. That's all you need to know about him. Let's continue with the story. Um, and, and I, I, I, I enjoy it. It's a very different style from, from what I'm used to.

[00:02:21] Yeah, I, I heard it's, uh, very highly recommended. I think it's being adapted to a TV show, so looking forward [00:02:26] to that.

[00:02:27] Introducing Glean

[00:02:27] Uh, so you've spent now almost four years at Glean. The company's now a unicorn, but you were on the founding team, and LLMs and chat interfaces are all the rage now. But you were building this before [00:02:38] it was cool, so to speak. Maybe tell us more about the story, how it became, and some of the technological advances you've seen. Because I think you started, the company started really close to some of the early GPT models. Uh, so you've seen a lot of it from, from day one.

[00:02:53] Yeah. Well, the first thing I'll say is Glean was never started to be a [00:02:58] technical product looking for a solution. We always wanted to solve a very critical problem first that we saw, not only in the companies that we'd worked in before, but in all of the companies that a lot of our, uh, a lot of the founding team had been in past their time at Google. So Google has a really neat tool that already kind of does this internally.

[00:03:18] It's called MoMA, and MoMA sort of indexes everything that you'd use inside Google, because they have first party API access to who has permissions to what document and what documents exist, and they rank them with their internal search tool. It's one of those things where when you're at Google, you sort of take it for granted, but when you leave and go anywhere else, you're like, oh my God, how do I function without being able to find things that I've worked on?

[00:03:42] Like, oh, I remember this guy had a presentation that he made three meetings ago and I don't remember anything about it. I don't know where he shared it. I don't know if he shared it, but I do know the, it was a, something about X and I kind of wanna find that now.
So that's the core information retrieval problem that we had set out to tackle, and we realized when we started looking at this problem that enterprise search is actually, it's not new.

[00:04:08] People have been trying to tackle enterprise search for decades. Again, pre-2000s, people have been trying to build these on-prem enterprise search systems. But one thing that has really allowed us to build it well, A, you now have, well, you have distributed Elastic, so that really helps you do a lot of the heavy lifting on core infra.

[00:04:28] But B, you also now have API support that's really nuanced on all of the SaaS apps that you use. So back in the day, it was really difficult to integrate with a messaging app. They didn't have an API. It didn't have any way to sort of get the permissions information and get the messaging information. But now a lot of SaaS apps have really robust APIs that really let you [00:04:50] index everything that you'd want. So that's two. And the third sort of big macro reason why it's happening now and why we're able to do it well is the fact that the SaaS apps have just exploded. Like every company uses, you know, 10 to a hundred apps. And so just the urgent need for information, especially with, you know, remote work and work from home, it's just so critical that people expect this almost as a default that you should have in your company.

[00:05:17] And a lot of our customers just say, hey, I don't, I can't go back to a life without internal search. And I think we think that's just how it should be. So that's kind of the story about how Glean was founded, and a lot of the LLM stuff, it's neat that all, a lot of that's happening at the same time that we are trying to solve this problem, because it's definitely applicable to the problem we're trying to solve.

[00:05:37] And I'm really excited by some of the stuff that we are able to do with it.

[00:05:41] From Syntactic to Semantic Search

[00:05:41] I was talking with somebody last weekend, they were saying the last couple years we're going from the web used to be syntax driven, you know, you Google for information retrieval, going into a semantics driven, where the syntax is not as important.

[00:05:55] It's like the, how you actually explain the question. And uh, we just asked Sarah from Seek.ai on the previous episode, and instead of doing natural language and things like that for enterprise knowledge, it's more for business use cases. So I'm curious to see, you know, the enterprise of the future, what that looks like, you know, is there gonna be way less dropdowns and kind of like, uh, SQL queries and stuff like that?

[00:06:19] And it's more this virtual, almost like person that embodies the company, that is like a, an LLM in a way. But how do you do that without being able to surface all the knowledge that people have in the organization? So something like Glean is, uh, super useful for [00:06:35] that.

Yeah, I mean, already today we see these natural language queries as well. [00:06:39] I, I will say at, at this point, it's still a small fraction of the queries. You see a lot of, a lot of the queries are, hey, what is, you know, just a name of a project or an acronym or a name of a person or some someone you're looking for. Yeah, I [00:06:51] think actually the Glean website explains Glean's features very well.

[00:06:54] When I, can I follow the video?
Actually, the video wasn't that, that informative. The video was more like a marketing video, but the, the actual website was showing screenshots of what you see there, and in my language it's an employee portal that happens to have search. Because you also surface like collections, which proactively show me things without me searching anything.

[00:07:12] Right. Like, uh, you even have Go Links, which you copied, I think, from Google, right? Which like, it's basically, uh, you know, in my mind it's like this is ex-Googlers missing Google internal stuff. So they just built it for everyone else. So, [00:07:25] well, I can, I can comment on that. So A, I should just plug that we have a new website as of today.

[00:07:30] I don't know how, how it's received. So I saw it yesterday, so let, let me know. I think today we just launch, I don't know when we launched a new one, I think today or yesterday. Yeah, [00:07:38] it's [00:07:38] new. I opened it right now, it's different than yesterday.

[00:07:41] Okay. It's, it's today and yeah. So one thing that we find is that, and this is actually, I think, quite a big insight: search in itself is not a compelling enough use case to keep people drawn to your product. It's easy to say Google Search is like that, but Google Search was also in an era where that was the only website people knew, and now it's not like that. When you are a new tool that's coming into a company, you can't sit on your high horse and say, yeah, of course you're gonna use my tool to search.

[00:08:13] No, they're not gonna remember who you are. They're gonna use it once and completely forget. To really get that retention, you need to sort of go from being just a search engine to exactly what you said, Sean, to being sort of an employee portal that does much more than that. And yeah, the Go Links thing, I, I mean, yes, it is copied from Google.

[00:08:33] I will say there's a complete other startup called GoLinks.io that has also copied it from Google and, and everyone, everyone misses Go Links. It's very useful to be able to write a document and just be like, go to go slash this, and that's where the document is. And, and so we have built a big feature set around it.

[00:08:50] I think one of the critical ones that I will call out is the feed. Just being able to see, not just, so, documents that are trending in your sub-organization, documents that you, we think you should see, a limited set of them, as well as now we've launched something called Mentions, which is super useful, which is all of your tags across all of your apps in one place in the last whatever, you know, time.

[00:09:14] So it's like all of the hundred Slack pings that you have, plus the Jira pings, plus the, the, the email, all of that in one place is super useful to have. So you do GitHub too? Yeah, we do GitHub too, we do get all the mentions.

[00:09:28] Oh my God, that's amazing. I didn't know you had it, but, uh, um, this is something I wish for myself. [00:09:33] It's amazing.

[00:09:34] It's still a little buggy right now, but I think it's pretty good. And, and we're gonna make it a lot better as, as we go.

[00:09:39] Why Employee Portals

[00:09:39] This is not in our preset list of questions, but I have one follow up, which is, you know, I've worked in quite a few startups now that don't have employee portals, and I've worked at Amazon, which had an employee portal, but it wasn't as beautiful or as smart as, as Glean.

[00:09:53] Why isn't this a bigger norm in all [00:09:56] companies?
Well, there's several reasons. I would say one reason is just the dynamics of how enterprise sales happens. It is, I wouldn't say broken, it is, it is what it is, but it doesn't always cater to employees being happy with the best tools. What it does cater to is there's different incentive structures, right?

[00:10:16] So if I'm an IT buyer, I have a budget and I need to understand, of a hundred of these tools that are pitched to me all the time, which ones really help the company. And the way usually those things are evaluated is does it increase revenue and does it cut cost? Those are the two biggest ones. And for a software like Glean or a search portal or employee portal, it's actually quite difficult, when you're generally bucketed in the space of productivity, to say, hey, here's a compelling use case for why we will cut your cost or increase your revenue.

[00:10:52] It's just a softer argument that you have to make there. It's just a fundamental nature of the problem, versus if you say, hey, we're a customer support tool. Everyone in SaaS knows that customer support tools are just sort of the, the last thing that you go to when you're looking for ideas, because it's easy to sell.

[00:11:08] It's like, here's a metric. How many tickets can your customer support agent resolve? We've built a thing that makes it 20% better. That means it's a hundred thousand dollars cost savings. Pay us 50K. Call it a deal. That's a good argument. That's a very simple, easy to understand argument. It's very difficult to make that argument with search, where you're like, okay, you're gonna see about 10 to 20 searches that's gonna save about this much time, uh, a day.

[00:11:33] And that results in this much employee productivity. People just don't buy it as easily. So the first reaction is, oh, we work fine without it. Why do we need this now? It's not like the company didn't work without this tool, and uh, and only when they have it do they realize what they were missing out on.

[00:11:50] So it's a difficult thing to sell in, in some ways. So even though the product is, in my opinion, fantastic, sometimes the buyer isn't easily convinced because it doesn't increase revenue or cut cost.

[00:12:01] The Requirements of Good Enterprise Search

[00:12:01] In terms of technology, can you maybe talk about some of the stack? And you see a lot of companies coming up now saying, oh, we help you do enterprise search.

[00:12:10] And it's usually, you know, embeddings to then do context for like an LLM query, mostly. I'm guessing you started as like closer to like the vector side of things, maybe. Yeah. Talk a bit about that and some learnings, and as founders try to, to build products like this internally, what should they think [00:12:27] about?

[00:12:28] Yeah, so actually leading back from the last answer, one of the ways a lot of companies who are in the enterprise search space are trying to tackle the problem of sales is to lean into how advanced the technology is, which is useful. It's useful to say we are AI powered, LLM powered vector search, cutting edge, state-of-the-art, yada, yada, yada.

[00:12:47] Put in all your buzzwords. That's nice, but the question is how often does that translate to better user experience? It's sort of a fuzzy area where it, it's really hard for even users to tell, to be honest. Like you can have one or two great queries and one really bad query and be like, I don't know if this thing is smart.

[00:13:06] And it takes time to evaluate and understand how a certain engine is doing.
So to that, I think one of the things that we learned from Google, a lot of us come from an ex-Google search background, and one of the key learnings is often with search, it's not about how advanced or how complex the technology is, it's about the rigor and intellectual honesty that you put into tuning the ranking algorithm.

[00:13:30] That's a painstaking, long-term and slow process. At Google until, I would say, maybe 2017, 2018, everything was run off of almost no real AI, so to speak. It was just information retrieval at its core, very basic from the seventies, eighties, and a bunch of these ranking components that are stacked on top of it that do various tasks really, really well.

[00:13:57] So one task in search is query understanding: what does the query mean? One task is synonyms: what are other synonyms for this thing that we can also match on? One task is document understanding: is this document itself a high quality document or not, or is it some sort of SEO spam? And admittedly, Google doesn't do so well on that anymore, but there's so many tough sub-problems that it breaks search down into, and then just gets each of those problems right, to create a nice experience.

[00:14:24] So to answer your question, also, vector search we do, but it is not the only way we get results. We do a hybrid approach, both using, you know, core IR signals, synonymy, query expansion with things like acronym expansion, as well as stuff like vector search, which is also useful. And then we apply our level of ranking understanding on top of that, which includes personalization understanding.

[00:14:50] If you're an engineer, you're probably not looking for Salesforce documents. You know, you're probably looking for documents that are published or co-authored by people in your team, in your immediate team, and our understanding of all of your interactions with people around you. Our personalization layer, our good work on ranking is what makes us [00:15:09] good. It's not sort of, hey, drop in an LLM and embeddings and we become amazing at search. That's not how we think it [00:15:16] works.

Yeah. I think there's a lot of polish that goes into quality products, and that's the difference that you see between Hacker News demos and, uh, Glean, which is, uh, an actual, you know, search and chat unicorn.

[00:15:26] Glean Chat?

[00:15:26] But also, is there a Glean chat coming? Is, is, what do you think about the [00:15:30] chat form factor?

I can't say anything about it, but I think that we are experi, my, my politically correct answer is we're experimenting with many technologies that use modern AI and LLMs, and we will launch what we think users like best.

[00:15:49] Nice. You got some media training [00:15:51] again? Yeah. Very well handled.

[00:15:53] Google vs ChatGPT

[00:15:53] We can, uh, move off of Glean and just go into Google search. Uh, so you worked on search for four years. I've always wanted to ask what happens when I type something into Google? I feel like you know more than others, and obviously there's the things you cannot say, but I'm sure Google does a lot of the things that Glean does as well.

[00:16:08] How do you think about this Google versus ChatGPT debate? Let's, let's maybe start at a high level based on what you see out there, and I think you, you see a lot of [00:16:15] misconceptions.

Yeah. So, okay, let me, let me start with Google versus ChatGPT first.
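(A quick aside before the Google versus ChatGPT answer: the hybrid retrieval plus ranking approach Deedy sketches above, lexical IR signals combined with vector similarity and a personalization layer on top, can be illustrated with a toy example. Everything below is a hypothetical sketch with made-up names, weights, and data; it is not Glean's actual stack.)

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Doc:
    doc_id: str
    text: str
    author: str
    embedding: list  # toy dense vector; in practice this comes from an embedding model

def lexical_score(query: str, doc: Doc) -> float:
    # Stand-in for a classic IR signal (think BM25): fraction of query terms matched.
    q_terms = set(query.lower().split())
    d_terms = set(doc.text.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def vector_score(q_vec: list, doc: Doc) -> float:
    # Cosine similarity between a query embedding and the document embedding.
    dot = sum(a * b for a, b in zip(q_vec, doc.embedding))
    norm = sqrt(sum(a * a for a in q_vec)) * sqrt(sum(b * b for b in doc.embedding))
    return dot / norm if norm else 0.0

def personalization_boost(doc: Doc, user_team: set) -> float:
    # Boost documents authored by people the searcher actually works with.
    return 0.2 if doc.author in user_team else 0.0

def rank(query: str, q_vec: list, docs: list, user_team: set) -> list:
    # Hybrid score: lexical plus semantic signals, with a personalization layer on top.
    def score(doc: Doc) -> float:
        return (0.5 * lexical_score(query, doc)
                + 0.5 * vector_score(q_vec, doc)
                + personalization_boost(doc, user_team))
    return sorted(docs, key=score, reverse=True)

if __name__ == "__main__":
    docs = [
        Doc("deck", "Q3 roadmap planning deck", "alice", [0.9, 0.1]),
        Doc("crm", "Salesforce pipeline review", "bob", [0.2, 0.8]),
    ]
    print([d.doc_id for d in rank("roadmap deck", [0.8, 0.2], docs, {"alice"})])
```

In a real system the lexical score would come from something like BM25, the embeddings from a trained model, and the blend weights from the kind of painstaking ranking evaluation Deedy describes, rather than from hard-coded constants.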
I think it's disingenuous, uh, if I don't say my own usage pattern, which is I almost don't go back to Google for a large section of my queries anymore.

[00:16:29] I just use ChatGPT. I am a paying Plus subscriber and it's sort of my go-to for a lot of things that I ask, and I also have to train my mind to realize that, oh, there's a whole set of questions in your head that you never realize the internet could answer for you, and that now you're like, oh, wait, I could actually ask this, and then you ask it.

[00:16:48] So that's my current usage pattern. That being said, I don't think that ChatGPT is the best interface or technology for all sets of queries. I think humans are obviously very easily excited by new technology, but new technology does not always mean the previous technology was worse. The previous technology is actually really good for a lot of things, and for search in particular, if you think about all the queries that come into Google search, they fall into various kinds of query classes, depending on whatever taxonomy you want to use.

[00:17:24] But one sort of way of, of, of understanding, broadly, the query classes is something that is information seeking or exploratory. And for exploratory queries, I think there are uses where Google does really well. Like for example, let's say you want to just know a list of songs of this artist in this year.

[00:17:49] Google will probably be able to, a hundred percent, tell you that pretty accurately all the time. Or if you want to, say, understand like what showtimes of movies came out today. So fresh queries, another query class, Google will be really good at that, chat not so good at that. But if you look at information seeking queries, you could even argue that if I ask for information about Donald Trump, maybe ChatGPT will spit out a reasonable sounding paragraph and it makes sense, but it doesn't give me enough stuff to like click on and go to and navigate to in a news article here.

[00:18:25] And I just kinda wanna see a lot of stuff happening. So if you really break down the problem, I think it's not as easy as saying ChatGPT is a silver bullet for every kind of information need. There's a lot of information needs, especially for tail queries. So for long, previously unseen queries like, hey, tell me the cheat code in Doom 3, this level, this boss, ChatGPT's gonna blow it out of the water on those kind of queries, cuz it's gonna figure out all of these from these random sparse documents and random Reddit threads and assemble one consistent answer for you, where it takes forever to find this kind of stuff on Google. For me personally, coding is the biggest use case for anything technical.

[00:19:02] I just go to ChatGPT, cuz parsing through Stack Overflow is just too mentally taxing and I don't care about, even if ChatGPT hallucinates a wrong answer, I can verify that. But I like seeing a coherent, nice answer that I can just kind of use as a good starting point for my research on whatever I'm trying to understand.

[00:19:20] Did you see the, the statistic that, uh, the All-In guys have been saying, which is, uh, Stack Overflow traffic is down 15%? Yeah, I did, I did [00:19:27] see that. [00:19:28] Makes sense. But I, I, I don't know if it's like only because of ChatGPT, but yeah, sure. I believe [00:19:33] it.
No, the second part was just about, if some of the enterprise product search moves out of Google, like, that's obviously a big AdWords revenue driver.

[00:19:43] What are like some of the implications in terms of the, the business [00:19:46] there?

[00:19:47] Search Issues: Freshness

[00:19:47] Okay, so I would split this answer into two parts. My first part is just talking about freshness, cuz the query that you mentioned is, is specifically, the, the issue there is being able to access fresh information. Google just blanket calls this freshness.

[00:20:01] Today's understanding of large language models is that it cannot do anything that's highly fresh. You just can't train these things fast enough and cost efficiently enough to constantly index new, new sources of data and then serve it at the same time in any way that's feasible. That might change in the future, but today it's not possible.

[00:20:20] The best thing that you can get that's close to it is what, you know, the fancy term is retrieval augmented generation, but it's a fancy way of saying just do the search in the background and then use the results to create the actual response. That's what Bing does today. So to answer the question about freshness, I would say it is possible to do with these methods, but those methods all in all involve using search in the backend to, to sort of get the context to generate the answer.

[00:20:49] Search Issues: Ad Revenue

[00:20:49] The second part of the answer is, okay, let's talk about ad revenue. A lot of Google's ad revenue just comes from the fact that over the last two decades, it's figured out how to put ad links on top of a search result page that sometimes users click. Now the user behavior on a chat product is not to click on anything.

[00:21:10] You don't click on stuff, you just read and you move on. And that actually, in my opinion, has severe impacts on the web ecosystem, on all of Google and all of technology and how we use the internet in the future. And, and the reason is, one thing we also take for granted is that this ad revenue, where everyone likes to say Google is bad, Google makes money off ads, yada, yada, yada, but this ad revenue kind of sponsored the entire internet.

[00:21:37] So you have Google Maps and Google search and photos and drive and all of this great free stuff basically because of ads. Now, when you have this new interface, sure it, it comes with some benefits, but if users aren't gonna click on ads and you replace the search interface with just chat, that can actually be pretty dangerous in terms of what it even means [00:21:59] to have to create a website. Like why would I create a website if no one's gonna come to my site? If it's just gonna be used to train a model and then someone's gonna spit out whatever my website says, then there's no incentive. And that kind of dwindles the web ecosystem. In the end, it means less ad revenue.

[00:22:15] And then the other existential question is, okay, I'm okay with saying the incumbent Google gets defeated and there's this new hero, which is, I don't know, OpenAI and Microsoft. Now reinvent the wheel. All of that stuff is great, but how are they gonna make money? They can make money off, I guess, subscriptions.

[00:22:31] But subscriptions is not nearly gonna make you enough to replace what you can make on ad revenue. Even for Bing today, Bing makes 11 billion off ad revenue. It's not a small product, it's a huge product, and they're not gonna make 11 billion off subscriptions, I'll tell you that.
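(An aside on the retrieval augmented generation pattern Deedy describes above: "just do the search in the background and then use the results to create the actual response" looks roughly like the sketch below. The `web_search` and `llm` callables here are hypothetical stand-ins, not Bing's or anyone's actual API.)

```python
def answer_with_rag(question: str, web_search, llm, k: int = 5) -> str:
    """Sketch of retrieval augmented generation: search first, then generate.

    `web_search(query, k)` is assumed to return a list of (title, url, snippet)
    tuples, and `llm(prompt)` to return a completion string. Both are stand-ins
    for whatever search API and language model you actually use.
    """
    results = web_search(question, k)
    # Freshness comes from the index queried at answer time, not from the
    # model's weights, which were frozen at training time.
    context = "\n\n".join(
        f"[{i + 1}] {title} ({url})\n{snippet}"
        for i, (title, url, snippet) in enumerate(results)
    )
    prompt = (
        "Answer the question using only the numbered sources below, and cite "
        "them by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```

The key design point is that fresh information lives in the index you query per request, so the model itself never has to be retrained just to know what happened today.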
So even they can't really replace search with this, with chat.

[00:22:51] And then there are some arguments around, okay, what if you start to inject ads in textual form? But you know, in my view, if the natural user inclination is not to click on something in chat, they're clearly not gonna click on something, no matter how much you try to inject click targets into your result.

[00:23:10] So, that's, that's my long answer to the ads question. I don't really know. I just smell danger in the horizon.

[00:23:17] Search Issues: Latency

[00:23:17] You mentioned the retrieval augmented generation as well. Uh, I presume that is literally what Bing is doing, probably just using the long context of GPT4 and taking the full text of all the links that they find, dumping it in, and then generating some answer.

[00:23:34] Do you think like speed is a concern, or are people just willing to wait for something smarter?

[00:23:40] I think it's a concern. We noticed that every, every single product I've worked on, there's almost a linear, at least for some section of it, a very linear curve. A linear line that says the more the latency, the less the engagement, so there's always gonna be some drop off.

[00:23:55] So it is a concern, but with things like latency, I just kind of presume that time solves these things. You optimize stuff, you make things a little better, and the latency will get down with time. And it's a good time to even mention that Bard, which just came out today, Google's LLM, Google's equivalent, I haven't tried it, but I've been reading about it, and that's based off a model called LaMDA.

[00:24:18] And LaMDA intrinsically actually does that. So it does query what they call a tool set, and they query search or a calculator or a compiler or a translator, things that are good at factual, deterministic information. And then it keeps changing its response depending on the feedback from the tool set, effectively doing something very similar to what Bing does.

[00:24:42] Search Issues: Accuracy

[00:24:42] But I like their framing of the problem, where it's just not just search, it's any given set of tools. Which is similar to what a Facebook paper called Toolformer, where you can think of language as one aspect of the problem, and language interfaces with computation, which is another aspect of the problem.

[00:24:58] And if you can separate those two, this one just talks to these things and figures out what to, how to phrase it. Yeah, so it's not really coming up with the answer. Their claim is, like GPT4, for example, the reason it's able to do factual accuracy without search is just by memorizing facts. And that doesn't scale.

[00:25:18] It's literally somewhere in the whole model. It knows that the CEO of Tesla is Elon Musk. It just knows that. But it doesn't know that this is a correlation. It just knows that usually I see CEO, Tesla, Elon, that's all it knows. So the abstraction of language model to computational unit or tool set is an interesting one that I think is gonna be more explored by all of these engines.

[00:25:40] Um, and the latency, you know, it'll improve.

[00:25:42] I think you're focusing on the right things there. I actually saw another article this morning about the memorization capability.
You know how GPT4 is a lot, uh, marketed on its ability to answer SAT questions and GRE questions and bar exams, and, you know, we covered this in our benchmarks podcast, Alessio, but like I forgot to mention that all these answers are out there and were probably memorized.

[00:26:05] And if you change them just, just a little bit, the model performance will probably drop a lot.

[00:26:10] It's true. I think the most compelling, uh, proof of that, of what you just said, is the, the Codeforces one, where somebody, I think, tweeted about the, yeah, the 2021 cutoff. Everything before 2021, it solves; everything after, [00:26:22] it doesn't. And I thought that was interesting.

[00:26:24] Search Issues: Tool Use

[00:26:24] It's just, it's just dumb. I'm interested in Toolformer, and I'm interested in ReAct-type, uh, patterns. Zapier just launched a natural language integration with LangChain. Are you able to compare, contrast, like, what approaches you like when it comes to LLMs using [00:26:36] tools?

[00:26:37] I think it's not boiled down to a science enough for me to say anything that's, uh, useful. Like I think everyone is at a point of time where they're just playing with it. There's no way to reason about what LLMs can and can't do, and most people are just throwing things at a wall and seeing what sticks.

[00:26:57] And if anyone claims to be doing better, they're probably lying, because no one knows how these things behave. You can't predict what the output is gonna be. You just think, okay, let's see if this works. This is my prompt. And then you measure and you're like, oh, that worked, versus that didn't. And things like ReAct and Toolformer are really cool.

[00:27:16] But those are just examples of things that people have thrown at a wall that stuck. Well, I mean, it's provably, it works. It works pretty, pretty well. I will say that one of the, it's not really of the framing of what kind of ways can you use LLMs to make it do cool things, but something people forget when they're looking at cutting edge stuff is a lot of these LLMs can be used to generate synthetic data to bootstrap smaller models, and it's a less sexy space of it all.

[00:27:44] But I think that stuff is really, really cool. Where, for example, I want to tag entities in a sentence. That's a very simple classical natural language problem of NER. And what I do is, I just, before, I had to gather training data, train model, tune model, all of this other stuff. Now what I can do is I can throw GPT4 at it to generate a ton of synthetic data, which looks actually really good.

[00:28:11] And then I can either just train whatever model I wanted to train before on this data, or I can use something called like low rank adaptation, which is distilling this large model into a much smaller, cost effective, fast model that does that task really well. And in terms of productionable natural language systems, that is amazing, that this is stuff you couldn't do before.

[00:28:35] You would have teams working for years to solve NER, and that's just what that team does. And there's a great and viral thread about, are all the NLP teams at Big Tech doomed? And yeah, I mean, to an extent, now you can do this stuff in weeks, which is [00:28:51] huge.

[00:28:52] Other AI Search takes: Perplexity and Neeva

[00:28:52] What about some of the other kind of like, uh, AI native search things, like Perplexity, Elicit, have you played with, with any of them? Any thoughts on [00:29:01] it?

Yeah, I have played with Perplexity and, and Neeva, and everyone.
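(A quick illustration of the synthetic-data workflow Deedy describes above: prompt a large model to label raw text, keep the outputs that parse, and use them to train a small task model. The `gpt4` and `train_ner_model` callables are hypothetical stand-ins, and this is a sketch of the general pattern, not anyone's production pipeline.)

```python
import json

def generate_ner_examples(gpt4, sentences):
    """Use a large model to produce synthetic NER labels for unlabeled text.

    `gpt4(prompt)` is a hypothetical callable returning the model's text output;
    in practice it would wrap an API call to whichever LLM you use.
    """
    labeled = []
    for sentence in sentences:
        prompt = (
            "Tag the named entities in the sentence below. Respond with JSON "
            'of the form {"entities": [{"text": "...", "type": "..."}]}.\n\n'
            f"Sentence: {sentence}"
        )
        try:
            entities = json.loads(gpt4(prompt))["entities"]
        except (json.JSONDecodeError, KeyError):
            continue  # skip outputs that don't parse; synthetic data is noisy
        labeled.append({"sentence": sentence, "entities": entities})
    return labeled

# The labeled examples can then be used to fine-tune a small, cheap task model
# (for example with LoRA adapters) that actually serves NER in production:
#   train_data = generate_ner_examples(gpt4, raw_sentences)
#   small_model = train_ner_model(train_data)  # hypothetical training helper
```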
I think both of those products sort of try to do, again, search results synthesis. Personally, I think Perplexity might be doing something else now, but I don't see the, any of those companies or products disrupting either OpenAI or ChatGPT or Google, Bing, whatever prominent search engines, with what they do, because they're all built off basically the Bing API or their own version of an index, and their search itself is not good enough, and there's not a compelling use case enough, I think, to use those products.

[00:29:40] I don't know how they would make money. A lot of Neeva's way of making money is subscriptions. Perplexity I don't think has ever turned on the revenue dial. I just have more existential concerns about those products actually functioning in the long run. So, um, I think I see them as, they're, they're nice, they're nice to play with.

[00:29:56] It's cool to see the cutting edge innovation, but I don't really understand if they will be long lasting, widely used products.

[00:30:05] Why Document QA will Struggle

[00:30:05] Do you have any idea of what it might take to actually do like a new kind of like, type of company in this space? Like Google's big thing was like PageRank, right? That was like one thing that kind of set them apart.

[00:30:17] Like people tried doing search before, like. Do you have an intuition for what, like, the LLM-native PageRank thing is gonna be to make something like this exist? Or have we kinda, you know, hit the plateau when it comes to search innovation?

[00:30:31] So I, I talk to so many of my friends who are obviously excited about this technology as well, and many of them who are starting LLM companies.

[00:30:38] You know, how many companies in the YC batch of, you know, winter 23 are LLM companies? Crazy, half of them, right? Right. It's, it's ridiculous. But what I always, I think everyone's struggling with this problem is, what is your advantage? What is your moat? I don't see it for a lot of these companies, and, uh, it's unclear.

[00:30:58] I, I don't have a strong intuition. My sense is that the people who focus on problem first usually get much further than the people who focus solution first. And there's way too many companies that are solutions first. Which makes sense. It's always been the, a big Achilles heel of Silicon Valley.

[00:31:16] We're a bunch of nerds that live in a whole different dimension, which nobody else can relate to, but nobody else, the problem is nobody else can relate to them and we can't relate to their problems either. So we look at tech first, not problem first, a lot. And I see a lot of companies just, just do that.

[00:31:32] Where, I'll tell you one, this is quite entertaining to me. A very common theme is, hey, LLMs are cool, that, that's awesome. We should build something. Well, what should we build? And it's like, okay, consumer, consumer is cool, we should build consumer. Then it's like, ah, nah man. Consumer, consumer's pretty hard.

[00:31:49] Uh, it's gonna be a Clubhouse, gonna blow up. I don't wanna blow up, I just wanna build something that's like, you know, pretty easy to be consistent with. We should go enterprise. Cool. Let's go enterprise. So you go enterprise. It's like, okay, we brought LLMs to the enterprise. Now what problem do we tackle? And it's like, okay, well we can do Q&A on documents.

[00:32:06] People know how to do that, right? We've seen a couple of demos on that.
So they build it, they build Q&A on documents, and then they struggle with selling, or they're like, or people just ask, hey, but I don't ask questions to my documents. Like, you realize this is just not a flow that I do, like, I, oh no.

[00:32:22] I ask questions in general, but I don't ask them to my documents. And also, like, what documents can you ask questions to? And they'll be like, well, any of them. And they'll say, can I ask them to all of my documents? And they'll be like, well, sure, if you give them, give us all your documents, you can ask anything.

[00:32:39] And then they'll say, okay, how will you take all my documents? Oh, it seems like we have to build some sort of indexing mechanism, and then from one thing to the other, you get to a point where it's like, we're building enterprise search and we're building an LLM on top of it, and that is our product. Or you go to like MLOps and, I'm gonna help you host models, I'm gonna help you train models.

[00:33:00] And I don't know, it's, it seems very solution first and not problem first. So the only thing I would recommend is, if you think about the actual problems and talk to users and understand what this can be useful for. It doesn't have to be that sexy of how it's used, but if it works and solves the problem, you've done your job.

[00:33:18] Investing in AI Startups

[00:33:18] I love that whole evolution, because I think quite a few companies are independently finding this path and going down this route to build a glorified, you know, search bot. We actually interviewed a very problem focused builder, Mickey Friedman, who's very, very focused on product placement image generation, and, you know, she's not focused on anything else in terms of image generation, like just focused on product placement and branding. And I think that's probably the right approach, you know, and, and if you think about like Jasper, right? Like they, out of all the other GPT3 companies when, when GPT3 first came out, they built focusing on, you know, writers on Facebook, you know, didn't even market on Twitter.

[00:33:56] So like most people haven't heard of them. Uh, I think it's a timeless startup lesson, but it's something to remind people when they're building with, uh, language models. I mean, as a, as an investor, like, you, you know, you are an investor, you're a scout with me. Doesn't that make it hard to invest in anything? Like, cuz [00:34:10] mostly it's just like the incumbents will get to the innovation faster than startups will find traction.

[00:34:16] Really. Like, oh, this is gonna be a hot take too. But, okay. My, my, in, in investing, uh, with people, especially early, is often for me governed by my intuition of how they approach the problem and their experience with the technology, and pretty much solely that. I don't [00:34:37] really pretend to be an expert in the industry or the space. That's their problem. If I think they're smart and they understand the space better than me, then I'm mostly convinced. And if they've thought through enough of the business stuff, if they've thought through the, the market and everything else, I'm convinced. I typically stray away from, you know, just what I just said.

[00:34:57] Founders who are like, LLMs are cool and we should build something with them. That's not like usually very convincing to me. That's not a thesis. But I don't concern myself too much with pretending to understand what this space means. I trust them to do that.
If I'm convinced that they're smart and they've thought about it, well then I'm pretty convinced that, that they're a good person to, to, to [00:35:20] back.

[00:35:21] Cool.

[00:35:21] Actually Interesting Ideas in AI

[00:35:21] Any kinda like super novel idea that you wanna shout out?

[00:35:25] There's a lot of interesting explorations, uh, going on. Um, I, I, okay, I'll, I'll preface this with, I, anything in enterprise I just don't think is cool. It's like, including, like, it's just, it's, you can't call it cool, man. You're building products for businesses.

[00:35:37] Glean is pretty cool. I'm impressed by Glean. This is what I'm saying. It's, it's cool for the Silicon Valley. It's not cool. Like, you're not gonna go to a dinner party with your parents and be like, hey mom, I work on enterprise search. Isn't that awesome? And they're not, all my, all my [00:35:51] notifications in one place. [00:35:52] Whoa.

[00:35:55] So I will, I'll, I'll start by saying, for, in my head, cool means like, the world finds this amazing, and, and it has to be somewhat consumer. And I do think that the ideas that are being played with, like Quora is playing with Poe, it's kind of strange to think about, and may not stick as is, but I like that they're approaching it with a very different framing, which is, hey, how about you talk to this, this chatbot, but let's move out of this, this world where everyone's like, it's not WhatsApp or Telegram, it's not a messaging app.

[00:36:30] You are actually generating some piece of content that now everybody can make use of. And is there something there? Not clear yet, but it's an interesting idea. I can see that being something where, you know, people just learn or see cool things that GPT4 has said or chatbots have said. That's interesting. In the image space, very contrasted to the language space, there's so much, like, I don't even begin to understand the image space. Everything I see just like blows my mind. I don't know how Midjourney gets from six fingers to five fingers. I don't understand this. It's amazing. I love it. I don't understand what the value is in terms of revenue.

[00:37:08] I don't know where the markets are in, in image, but I do think that's way, way cooler, because that's a demo where, and I, and I tried this, I showed GPT4 to, to my mom and my mom's like, yeah, this is pretty cool. It does some pretty interesting stuff. And then I showed the image one and she is just like, this is unbelievable.

[00:37:28] There's no way a computer could do this, and she just could not digest it. And I love when you see those interactions. So I do think the image world is a whole different beast. Um, and, and in terms of coolness, a lot more cool stuff happening in image, video, multimodal, I think is really, really cool. So I haven't seen too many startups that are doing something where I'm like, wow, that's, that's amazing.

[00:37:51] Oh, ElevenLabs. I'll, I'll mention ElevenLabs is pretty cool. They're the only ones that I know that are doing... Oh, the voice synthesis. Have you tried it? I've only played with it. I haven't really tried generating my own voice, but I've seen some examples and it looks really, really awesome. I've heard [00:38:06] that Descript is coming up with some stuff as well to compete, cuz yeah, this is definitely the next frontier in terms of podcasting.

[00:38:13] Harry Potter IRL

[00:38:13] One last thing I, I will say on the cool front is, I think there is something to be said about
a product that brings together all these disparate advancements in AI. And I have a view on what that looks like. I don't know if everyone shares that view, but if you bring together image generation, voice recognition, language modeling, TTS, and like all of the other image stuff they can do with like CLIP and DreamBooth and putting someone's actual face in it,

[00:38:41] what you can actually make, this is my view of it, is the Harry Potter picture come to life, where you actually have just a digital stand where there's a person who's just capable of talking to you in their voice, in, you know, understandable dialogue. That is how they speak. And you could just sort of walk by, they'll look at you, you can say hi, they'll be, they'll say hi back.

[00:39:03] They'll start talking to you. You start talking back to it. That's sort of my, that's my, my wild science fiction dream. And I think the technology exists to put all of those pieces together, and the implications for people who are older, or saving people over time, are huge. This could be a really cool thing to productionize.

[00:39:23] AI Infra Cost Math

[00:39:23] There's one more part of you that also tweets about numbers and math, uh, AI math, essentially, is how I'm thinking about it. What gets you into talking about costs and math and, and, you know, just like first principles of how to think about language models?

[00:39:39] One of my biggest beefs with big companies is how they abstract the cost away from all the engineers.

[00:39:46] So when you're working on Google search, I can't tell you a single number that is cost related at all. Like I just don't know the cost numbers. It's so far down the chain that I have no clue how much it actually costs to run search, and how much these various things cost, aside from what the public knows.

[00:40:03] And I found that very annoying, because when you are building a startup, particularly maybe an enterprise startup, you have to be extremely cognizant about the cost, because that's your unit economics. Like your primary cost is the money you spend on infrastructure, not your actual labor costs. The whole thesis is the labor doesn't scale, but the infra [00:40:21] does scale. So you need to understand how your infra costs scale. So when it comes to language models, given that these things are so compute heavy, but none of the papers talk about cost either. And it just bothers me. I'm like, why can't you just tell me how much it costs you to, to build this thing?

[00:40:39] It's not that hard to say. And it's also not that hard to figure out. They give you everything else, which is, you know, how many TPUs it took and how long they trained it for and all of that other stuff, but they don't tell you the cost. So I've always been curious, because all anybody ever says is it's expensive and a startup can't do it, and an individual can't do it.

[00:41:01] So then the natural question is, okay, how expensive is it? And that's sort of the, the, the background behind why I started doing some more AI math, and, and one of the tweets, probably the one that you're talking about, is where I compare the cost of LLaMA, which is Facebook's LLM, to PaLM, with, uh, my best estimates.

[00:41:23] And, uh, the only thing I'll add to that is it is quite tricky to even talk about these things publicly, because you get rammed in the comments by people who are like, oh, don't you know that this assumption that you made is completely BS because you should have taken this cost per hour?
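(For readers who want to see how these ballparks are usually put together, the standard back-of-envelope is: training compute is roughly 6 times parameters times training tokens, divided by sustained accelerator throughput, times a rental price per GPU-hour. Every number in the sketch below is an illustrative assumption for a LLaMA-65B-class run, not Deedy's actual calculation or an official figure.)

```python
# Back-of-envelope training cost: compute ~= 6 * params * tokens (forward + backward),
# divided by sustained accelerator throughput, times a rental price per GPU-hour.
# Every number below is an illustrative assumption, not an official figure.

params = 65e9          # assumed model size (a LLaMA-65B-class model)
tokens = 1.4e12        # assumed number of training tokens
total_flops = 6 * params * tokens

peak_flops = 312e12    # A100 BF16 peak throughput, FLOP/s
utilization = 0.40     # assumed sustained fraction of that peak
gpu_hours = total_flops / (peak_flops * utilization) / 3600

price_per_gpu_hour = 2.0   # assumed bulk rental rate, USD
cost = gpu_hours * price_per_gpu_hour

print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.1f}M for the final training run")
# Prints on the order of a million GPU-hours and a couple of million dollars,
# the same ballpark as the estimates discussed here, before the roughly 10x
# multiplier for experiments, failed runs, and hyperparameter tuning.
```

Even with generous assumptions, the final training run lands in the low single-digit millions of dollars, which is why the roughly 10x multiplier for experimentation discussed below is what dominates the real bill.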
Because obviously people do bulk deals on that cost per hour. And yeah, I have 280 characters. This is what I could have said. But I think ballpark, I think I got close. I, I'd like to imagine, I think I was off maybe by, by two x on the lower side. I think I took an upper bound and I might have been off by, by two x. So my quote was 4 million for LLaMA and 27 million for PaLM.

[00:42:01] In fact, later today I'm going to do, uh, one on Bard. Ooh, one on Bard. Oh, the exclusive is that it's four, it's 4 million for Bard too.

[00:42:10] Nice. Nice. Which is like, do you think that's like, don't you think that's actually not a lot? Like, it's a drop in the bucket for these [00:42:17] guys.

One, and one of the, the valuable things to note when you're talking about this cost is, this is the cost of the final training step.

[00:42:24] It's not the cost of the entire process. And a common rebuttal is, well, yeah, this is your cost of the final training process, but in total it's about 10x this amount, because you have to experiment. You have to tune hyperparameters, you have to understand different architectures, you have to experiment with different kinds of training data.

[00:42:43] And sometimes you just screw it up and you don't know why, and you, you just spend a lot of time figuring out why you screwed it up. And that's where the actual cost buildup happens, not in the one final last step where you actually train the final model. So even assuming like a 10x on top of this, I think, is, is, is fair for how much it would actually cost a startup to build this from scratch, I would say.

[00:43:04] Open Source LLMs

[00:43:04] How do you think about open source in this, then? I think a lot of people's big 2023 predictions are an LLM, you know, an open source LLM, that is comparable performance to the GPT3 model. Who foots the bill for the mistakes? You know, like when, when somebody opens a support request that it's not good.

[00:43:25] It doesn't really cost people much outside of like a GitHub Actions run, as people try training these things separately. Like, do you think open source is actually bad, because you're wasting so much compute by so many people trying to, like, do their own things? And, like, do you think it's better to have a centralized team that organizes these experiments, or, yeah, [00:43:43] any thoughts there?

I have some thoughts. The most easy comparison to make is to the image generation world, where, you know, you had Midjourney and DALL-E come out first, and then you had Emad come out with Stability, which was completely open source. But the difference there is, I think Stability you can pretty much run on your machine and it's okay.

[00:44:06] It works pretty fast. So it, so the entire concept of, of open sourcing, it worked, and people made forks that fine-tuned it on a bunch of different random things, and it made variants of Stability that could do a bunch of things. So I thought the Stability thing, agnostic of the general ethical concerns of training on everyone's art,

[00:44:25] I thought it was a cool, cool addition to the sort of trade-offs in different models that you can have in image generation.
For text generation, we're seeing an equivalent effect with LLaMA and Alpaca, LLaMA being, being Facebook's model, which they didn't really open source, but then the weights got leaked, and then people cloned them and then they tuned them using GPT4-generated synthetic data and made Alpaca.

[00:44:50] So the version I think that's out there is only the 7 billion one, and then this crazy European C++ god came and said, you know what, I'm gonna write this entire thing in C++ so you can actually run it locally and, and not have to buy GPUs. And a combination of those, and of course a lot of people have done work in optimizing these things to make it actually function quickly.

[00:45:13] And we can get into details there, but a function of all of these things has enabled people to actually run semi-good models on their computer. I don't have that much, I don't have any comments on, you know, energy usage and all of that. I don't really have an opinion on that. I think the fact that you can run a local version of this is just really, really cool, but also supremely dangerous, because with images, conceivably, people can tell what's fake and what's real, even though there, there's some concerns there as well. But for text, it's, you know, like you can do a lot of really bad things with your own, you know, text generation algorithm. You know, if I wanted to make somebody's life hell, I could spam them in the most insidious ways with all sorts of different kinds of text generation indefinitely, which I, I can't really do with images.

[00:46:02] I don't know. I find it somewhat ethically problematic in terms of, the power is too much for an individual to wield. But there are some libertarians who are like, yeah, why should only OpenAI have this power? I want this power too. So there's merits to both sides of the argument. I think it's generally good for the ecosystem.

[00:46:20] Generally, it will get faster and the latency will get better, and the models may not ever reach the size of the cutting edge that's possible, but it could be good enough to do 80% of the things that a bigger model could do. And I think that's a really good start for innovation. I mean, you could just have people come up with stuff instead of companies, and that always unlocks a whole vector of innovation that didn't previously exist.

[00:46:45] Other Modalities

[00:46:45] That was a really good conclusion. I, I, I want to ask follow up questions, but also, that was a really good place to end it. Was there any other AI topics that you wanted to [00:46:52] touch on?

I think Runway ML is the one company I didn't mention, and that, that one's, uh, one to look out for.

[00:46:58] I think they're doing really cool stuff in terms of video editing with generative techniques. So people often talk about the OpenAIs and the Googles of the world, and Anthropic and Claude, and Cohere, and Midjourney, all the image stuff. But I think the places that people aren't paying enough attention to, that will get a lot more love in the next couple of years:

[00:47:19] Better Whisper, so better streaming voice recognition. Better TTS, so some open source version of ElevenLabs that people can start using. And then the frontier is sort of multi-modality and videos. Can you do anything with videos? Can you edit videos?
Can you stitch things together into videos from images? All sorts of different cool stuff.[00:47:40] And then there's the long tail of companies like Luma that are working on 3D modeling with generative use cases, taking an image and creating a 3D model from nothing. And that's pretty cool too, although the practical use cases are a little less clear to me. So that kind of covers the entire space, in my head at least.[00:48:00] I[00:48:00] like using the Harry Potter image, the moving and speaking portraits, as an end goal. I think that's something that consumers can really get behind as well. That's super cool.[00:48:09] Exam Fraud and Generated Text Detection[00:48:09] To double back a little bit before we go into the lightning round, I have one more thing, which is relevant to your personal story but also relevant to our debate, which is a nice blend.[00:48:18] You're concerned about the safety of everyone having access to language models and the potential harm you can do there. My guess is that you're also not that positive on watermarking techniques for generated text, right? Like maybe randomly sprinkling weird characters so that people can see that this is generated by an AI model. But you also have some personal experience with this, because you found manipulation in the Indian exam board, which maybe is a similar story.[00:48:48] I don't know if you have any thoughts about watermarking, manipulation, ethical deployments of[00:48:55] generated data.[00:48:57] Well, I think those two things are a little separate. Okay. One, I would say, is watermarking text data. There are a couple of different approaches. I think there is actual value to that, because from a pure technical perspective, you don't want models to train on stuff they've generated.[00:49:13] That's kind of bad for models. Yes. And two is, obviously, you don't want people to keep using ChatGPT for, I don't know if you want them to use it for all their assignments and never be caught. Maybe you don't. But it seems valuable to at least be able to tell whether something is machine-generated text or not. Just ethically, that seems like something that should exist.[00:49:33] So I do think watermarking is a good direction of research, and I'm fairly positive on it. I actually think people should standardize how that watermarking works across language models, so that everyone can detect and understand language models, and it's not just OpenAI doing it for its own models but not the other ones, and so on.[00:49:51] So that's my view on that. And then, sort of transitioning into the exam data, this is a really old one, but it's one of my favorite things to talk about. In America, as you know, usually the way it works is you take your SAT exam, you take a couple of APs, you do your school grades, you apply to colleges, you do a bunch of fluff.[00:50:10] You try to prove how you're good at everything. And then you apply to colleges, and it's a weird decision based on a hundred other factors, and they decide whether you get in or not. But if you're rich, you're basically gonna get in anyway. And if you're a legacy, you're probably gonna get in, and there's a whole bunch of stuff going on.[00:50:23] And I don't think the system is necessarily bad, but it's just really complicated.
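Circling back for a moment to the watermarking idea above: one published family of approaches (distinct from the "sprinkle weird characters" framing in the question) watermarks text by nudging the generator toward a pseudo-randomly chosen "green list" of tokens and then running a statistical test at detection time. The sketch below is a toy, word-level version of that idea; all function names are my own, and real implementations, such as the Kirchenbauer et al. 2023 proposal, operate on model logits rather than finished words.

```python
# Toy sketch of a "green list" text watermark and its detector. Purely
# illustrative: a real scheme biases the model's logits during generation;
# here we only show the detection-side statistics on finished text.
import hashlib
import math

def is_green(prev_word: str, word: str, fraction: float = 0.5) -> bool:
    """Deterministically mark `fraction` of words as 'green' given the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * fraction

def green_rate_z_score(text: str, fraction: float = 0.5) -> float:
    """Z-score of the observed green-word rate vs. the rate expected by chance."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(a, b, fraction) for a, b in pairs)
    n = len(pairs)
    expected, std = n * fraction, math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std

# A watermarking generator prefers green continuations, pushing the z-score of
# its output well above ~3; ordinary human text stays near 0. Standardizing
# the hashing scheme is what would let anyone, not just the model vendor,
# run the detector.
```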
And some of the things are weird in India, and in a lot of the non-developed world, people are like, yeah, okay, we can't scale that. There's no way we can have enough people evaluate this non-rigorously, because there's gonna be too much corruption and it's gonna be terrible at the end, because people are just gonna pay their way in.[00:50:45] So usually it works in a very simple way, where you take an exam that is standardized. Sometimes you have many exams, sometimes you have an exam per subject, sometimes it's just one for everything. And you get ranked on that exam, and depending on your rank you get to choose the quality and the kind of thing you want to study.[00:51:03] Which is the kind of thing that always surprises people in America, where it's not like, oh, it's glory land, where you walk in and you're like, I think this is interesting and I wanna study this. No, in most of the world it's like, you're not smart enough to study this, so you're probably not gonna study it.[00:51:18] And there's a rank order of things that you need to be smart enough to do. So it's different. And therefore these exams are much more critical to the functioning of the system. So when there's fraud, it's not a small part of your application going wrong, it's your entire application going wrong.[00:51:36] And that's just me explaining why this is severe. Now, one such exam is the one that you take in school. It's called a board exam. You take one in the 10th grade, which doesn't really matter for much, and then you take one in the 12th grade when you're about to graduate, and that determines[00:51:53] where you go to college for a large set of colleges, not all, but a large set of colleges. Based on your top five average, you're slotted into a different stream in a different college. And over time, because of the competition between the two boards that form a duopoly, there's no standardization.[00:52:13] So everyone's trying to give more marks than the other one to attract more students into their board, because that means you can then claim, oh, you're gonna get into a better college if you take our exam and don't go to a school that administers the other exam. What? Everyone knew that was happening, ish, but there was no data to back it.[00:52:34] But when you actually take this exam, as I did, you start realizing that the numbers, the marks, make no sense, because you're looking at a kid who's also in your class and you're like, dude, this guy's not smart. How did he get a 90 in English? He's not good at English. You can't speak it. You cannot give him a 90.[00:52:54] You gave me a 90. How did this guy get a 90? So everyone has their anecdotal this-doesn't-make-any-sense moments with this exam, but no one has access to the data. So way back when, what I did was I realized they have very little security surrounding the data, where the only thing you need to put in to get access is your roll number.[00:53:15] And so as long as you predict the right set of roll numbers, you can get everybody's results. Also, unlike America, exam results aren't treated with a level of privacy. In India, it's very common to post the entire class's results on a bulletin board. You just see how everyone did, and you shame the people who are stupid.[00:53:32] That's just how it works.
It's changed over time, but that's fundamentally a cultural difference. And so when I scraped all these results and published them and did some analysis, what I found was a couple of very insidious things. One is that if you plot the distribution of marks, you generally tend to see some sort of skewed but pseudo-normal distribution, where there's a big peak and it falls off on both ends. But you see two interesting patterns.[00:54:01] One is the most obvious one, which is grace marks: the pass grade is 33, and you see that nobody got between 29 and 32, because what they did for every single exam is they just made you pass. They just rounded up to 33, which is, okay. I'm not that concerned about whether you give grace marks.[00:54:21] It's kind of messed up that you do that, but okay, fine. You want to pass a bunch of people who deserve to fail, do it. Then the other, more concerning thing was between 33 and 93. That's about 60 numbers, 61 numbers, and 30 of those numbers were just missing, as in nobody got a 91 on this exam. In any subject, in any year.[00:54:44] How does that happen? You don't get a 91, you don't get a 93, 89, 87, 85, 84. Some numbers were just missing. And at first when I saw this, I'm like, this is definitely some bug in my code. There's no way that a 91 just never happened. So I remember I asked a bunch of my friends, I'm like, dude, did you ever get a 91 in anything?[00:55:06] And they're like, no. And it just unraveled that this is obviously problematic, because it means they're screwing with your final marks in some way or another. Yeah. And they're not transparent about how they do it. Then I did the same thing for the other board. We found something similar there, but not the same.[00:55:24] The problem there was a huge spike at 95, and then I realized what they were doing: they'd offer various exams, and to standardize, they would blanket-add a raw number. So if you took the harder math exam, everyone would get plus 10. Arbitrarily. This is not revealed or publicized.[00:55:41] It's just, that was the harder exam, you guys all get plus 10, but it's capped at 95. That's just a stupid way to standardize. It doesn't make any sense, and they're not transparent about it. And it affects your entire life, because this is what gets you into college. And if you add the two exams up, this is 1.1 million kids taking it every year.[00:56:02] So that's a lot of people's lives you're screwing with by not understanding numbers and not being transparent about how you're manipulating them. So that was the thesis. In my view, looking back on it 10 years later, it's been 10 years at this point, I think the media never did justice to it, because to be honest, nobody understands statistics.[00:56:23] So over time it became a big issue, and then there was a big Supreme Court or High Court ruling which said, hey, you guys can't do this. But there's no transparency, so there's no way of actually ensuring that they're not doing it. They just added a level of password protection, so now I can't scrape it anymore.[00:56:40] And they probably do the same thing and it's probably still as bad, but people aren't raising an issue about it. It's really hard to make people understand the significance of it, because people are so compelled to just lean into the narrative that exams are b******t and we should never trust exams.
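To make the statistical argument concrete, here is a minimal sketch of the check described above, run on made-up numbers (the original scraped dataset isn't public, and every name and value below is illustrative). With over a million students per year, every integer score should occur many times, so whole missing values are a strong signal of post-hoc manipulation.

```python
# Minimal sketch of the "missing marks" check: given a flat list of marks,
# list every integer score that no student ever received. Data is fabricated
# for illustration only.
from collections import Counter

def missing_scores(marks: list[int], lo: int = 0, hi: int = 100) -> list[int]:
    """Return every integer in [lo, hi] that never appears in `marks`."""
    seen = Counter(marks)
    return [score for score in range(lo, hi + 1) if seen[score] == 0]

if __name__ == "__main__":
    # Fake data shaped like the pattern described in the episode: a gap just
    # below the pass mark of 33 ("grace marks") and several absent values
    # between 33 and 93.
    marks = [28, 33, 33, 35, 41, 47, 52, 60, 68, 75, 82, 88, 93, 95, 95]
    print("never-awarded scores:", missing_scores(marks, 25, 100))
    # On real data with hundreds of thousands of students per subject, a
    # healthy exam should show essentially no gaps; 30 missing values between
    # 33 and 93 is the anomaly being pointed at.
```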
People Refusing to Report Crypto for Taxes :: Shoehorning Crypto into Old Statutes :: Electric Cars :: Abortion Drug Case :: Rich Ricky :: AutoGPT and ChaosGPT :: Real AI? :: Google LaMDA-hired Lawyer Quits :: 10 Things People Shouldn't Be Able to Do :: Chatting with Hitler :: Constitutional Question :: 2023-04-15 Ian, Captain Kickass, Peakless Mountaineer
This week we talk about the intersections of large language models, the golden age of television and its storytelling mishaps, making one's way through the weirding of the labor economy, and much more with two of my favorite Gen X science fiction aficionados, OG podcaster KMO and our mutual friend Kevin Arthur Wohlmut. In this episode — a standalone continuation to my recent appearance on The KMO Show, we skip like a stone across mentions of every Star Trek series, the collapse of narratives and the social fabric, Westworld HBO, Star Wars Mandalorian vs. Andor vs. Rebels, chatGPT, Blade Runner 2049, Black Mirror, H.P. Lovecraft, the Sheldrake-Abraham-McKenna Trialogues, Charles Stross' Accelerando, Adventure Time, Stanislav Grof's LSD psychotherapy, Francisco Varela, Blake Lemoine's meltdown over Google LaMDA, Integrated Information Theory, biosemiotics, Douglas Hofstadter, Max Tegmarck, Erik Davis, Peter Watts, The Psychedelic Salon, Melanie Mitchell, The Teafaerie, Kevin Kelly, consilience in science, Fight Club, and more…Or, if you prefer, here's a rundown of the episode generated by A.I. c/o my friends at Podium.page:In this episode, I explore an ambitious and well-connected conversation with guests KMO, a seasoned podcaster, and Kevin Walnut [sic], a close friend and supporter of the arts in Santa Fe. We dive deep into their thoughts on the social epistemology crisis, science fiction, deep fakes, and ontology. Additionally, we discuss their opinions on the Star Trek franchise, particularly their critiques of the first two seasons of Star Trek: Picard and Discovery. Through this engaging conversation, we examine the impact of storytelling and the evolution of science fiction in modern culture. We also explore the relationship between identity, media, and artificial intelligence, as well as the ethical implications of creating sentient artificial general intelligence (AGI) and the philosophical questions surrounding AI's impact on society and human existence. Join us for a thought-provoking and in-depth discussion on a variety of topics that will leave you questioning the future of humanity and our relationship with technology.✨ Before we get started, three big announcements!* I am leaving the Santa Fe Institute, in part to write a very ambitious book about technology, art, imagination, and Jurassic Park. You can be a part of the early discussion around this project by joining the Future Fossils Book Club's Jurassic Park live calls — the first of which will be on Saturday, 29 April — open to Substack and Patreon supporters:* Catch me in a Twitter Space with Nxt Museum on Monday 17 April at 11 am PST on a panel discussing “Creative Misuse of Technology” with Minne Atairu, Parag Mital, Caroline Sinders, and hosts Jesse Damiani and Charlotte Kent.* I'm back in Austin this October to play the Astronox Festival at Apache Pass! Check out this amazing lineup on which I appear alongside Juno Reactor, Entheogenic, Goopsteppa, DRRTYWULVZ, and many more great artists!✨ Support Future Fossils:Subscribe anywhere you go for podcastsSubscribe to the podcast PLUS essays, music, and news on Substack or Patreon.Buy my original paintings or commission new work.Buy my music on Bandcamp! (This episode features “A Better Trip” from my recent live album by the same name.)Or if you're into lo-fi audio, follow me and my listening recommendations on Spotify.This conversation continues with lively and respectful interaction every single day in the members-only Future Fossils Facebook Group and Discord server. 
Join us!Episode cover art by KMO and a whole bouquet of digital image manipulation apps.✨ Tip Jars:@futurefossils on Venmo$manfredmacx on CashAppmichaelgarfield on PayPal✨ Affiliate Links:• These show notes and the transcript were made possible with Podium.Page, a very cool new AI service I'm happy to endorse. Sign up here and get three free hours and 50% off your first month.• BioTech Life Sciences makes anti-aging and performance enhancement formulas that work directly at the level of cellular nutrition, both for ingestion and direct topical application. I'm a firm believer in keeping NAD+ levels up and their skin solution helped me erase a year of pandemic burnout from my face.• Help regulate stress, get better sleep, recover from exercise, and/or stay alert and focused without stimulants, with the Apollo Neuro wearable. I have one and while I don't wear it all the time, when I do it's sober healthy drugs.• Musicians: let me recommend you get yourself a Jamstik Studio, the coolest MIDI guitar I've ever played. I LOVE mine. You can hear it playing all the synths on my song about Jurassic Park.✨ Mentioned Media:KMO Show S01 E01 - 001 - Michael Garfield and Kevin WohlmutAn Edifying Thought on AI by Charles EisensteinIn Defense of Star Trek: Picard & Discovery by Michael GarfieldImprovising Out of Algorithmic Isolation by Michael GarfieldAI and the Transformation of the Human Spirit by Steven Hales(and yes I know it's on Quillette, and no I don't think this automatically disqualifies it)Future Fossils Book Club #1: Blindsight by Peter WattsFF 116 - The Next Ten Billion Years: Ugo Bardi & John Michael Greer as read by Kevin Arthur Wohlmut✨ Related Recent Future Fossils Episodes:FF 198 - Tadaaki Hozumi on Japanese Esotericism, Aliens, Land Spirits, & The Singularity (Part 2)FF 195 - A.I. Art: An Emergency Panel with Julian Picaza, Evo Heyning, Micah Daigle, Jamie Curcio, & Topher SipesFF 187 - Fear & Loathing on the Electronic Frontier with Kevin Welch & David Hensley of EFF-Austin FF 178 - Chris Ryan on Exhuming The Human from Our Eldritch Institutions FF 175 - C. Thi Nguyen on The Seductions of Clarity, Weaponized Games, and Agency as Art ✨ Chapters:0:15:45 - The Substance of Philosophy (58 Seconds)0:24:45 - Complicated TV Narratives and the Internet (104 Seconds)0:30:54 - Humans vs Hosts in Westworld (81 Seconds)0:38:09 - Philosophical Zombies and Artificial Intelligence (89 Seconds)0:43:00 - Popular Franchises Themes (71 Seconds)1:03:27 - Reflections on a Changing Media Landscape (89 Seconds)1:10:45 - The Pathology of Selective Evidence (92 Seconds)1:16:32 - Externalizing Trauma Through Technology (131 Seconds)1:24:51 - From Snow Maker to Thouandsaire (43 Seconds)1:36:48 - The Impact of Boomer Parenting (126 Seconds)✨ Keywords:Social Epistemology, Science Fiction, Deep Fakes, Ontology, Star Trek, Artificial Intelligence, AI Impact, Sentient AGI, Human-Machine Interconnectivity, Consciousness Theory, Westworld, Blade Runner 2049, AI in Economy, AI Companion Chatbots, Unconventional Career Path, AI and Education, AI Content Creation, AI in Media, Turing Test✨ UNEDITED machine-generated transcript generated by podium.page:0:00:00Five four three two one. Go. So it's not like Wayne's world where you say the two and the one silently. Now, Greetings future fossils.0:00:11Welcome to episode two hundred and one of the podcast that explores our place in time I'm your host, Michael Garfield. 
And this is one of these extra juicy and delicious episodes of the show where I really ratcheted up with our guests and provide you one of these singularity is near kind of ever everything is connected to everything, self organized criticality right at the edge of chaos conversations, deeply embedded in chapel parallel where suddenly the invisible architect picture of our cosmos starts to make itself apparent through the glass bead game of conversation. And I am that I get to share it with you. Our guests this week are KMO, one of the most seasoned and well researched and experienced podcasters that I know. Somebody whose show the Sea Realm was running all the way back in two thousand six, I found him through Eric Davis, who I think most of you know, and I've had on the show a number of times already. And also Kevin Walnut, who is a close friend of mine here in Santa Fe, a just incredible human being, he's probably the strongest single supporter of music that I'm aware of, you know, as far as local scenes are concerned and and supporting people's music online and helping get the word out. He's been instrumental to my family and I am getting ourselves situated here all the way back to when I visited Santa Fe in two thousand eighteen to participate in the Santa Fe Institute's Interplanetary Festival and recorded conversations on that trip John David Ebert and Michael Aaron Cummins. And Ike used so June. About hyper modernity, a two part episode one zero four and one zero five. I highly recommend going back to that, which is really the last time possibly I had a conversation just this incredibly ambitious on the show.0:02:31But first, I want to announce a couple things. One is that I have left the Santa Fe Institute. The other podcast that I have been hosting for them for the last three and a half years, Complexity Podcast, which is substantially more popular in future fossils due to its institutional affiliation is coming to a close, I'm recording one more episode with SFI president David Krakauer next week in which I'm gonna be talking about my upcoming book project. And that episode actually is conjoined with the big announcement that I have for members of the Future Fossil's listening audience and and paid supporters, which is, of course, the Jurassic Park Book Club that starts On April twenty ninth, we're gonna host the first of two video calls where I'm gonna dive deep into the science and philosophy Michael Creighton's most popular work of fiction and its impact on culture and society over the thirty three years since its publication. And then I'm gonna start picking up as many of the podcasts that I had scheduled for complexity and had to cancel upon my departure from SFI. And basically fuse the two shows.0:03:47And I think a lot of you saw this coming. Future fossils is going to level up and become a much more scientific podcast. As I prepare and research the book that I'm writing about Jurassic Park and its legacy and the relationship It has to ILM and SFI and the Institute of Eco Technics. And all of these other visionary projects that sprouted in the eighties and nineties to transition from the analog to the digital the collapse of the boundaries between the real and the virtual, the human and the non human worlds, it's gonna be a very very ambitious book and a very very ambitious book club. 
And I hope that you will get in there because obviously now I am out in the rain as an independent producer and very much need can benefit from and am deeply grateful for your support for this work in order to make things happen and in order to keep my family fed, get the lights on here with future fossils. So with that, I wanna thank all of the new supporters of the show that have crawled out of the woodwork over the last few weeks, including Raefsler Oingo, Brian in the archaeologist, Philip Rice, Gerald Bilak, Jamie Curcio, Jeff Hanson who bought my music, Kuaime, Mary Castello, VR squared, Nastia teaches, community health com, Ed Mulder, Cody Couiac, bought my music, Simon Heiduke, amazing visionary artist. I recommend you check out, Kayla Peters. Yeah. All of you, I just wow. Thank you so much. It's gonna be a complete melee in this book club. I'm super excited to meet you all. I will send out details about the call details for the twenty ninth sometime in the next few days via a sub tag in Patreon.0:06:09The amount of support that I've received through this transition has been incredible and it's empowering me to do wonderful things for you such as the recently released secret videos of the life sets I performed with comedian Shane Moss supporting him, opening for him here in Santa Fe. His two sold out shows at the Jean Coutu cinema where did the cyber guitar performances. And if you're a subscriber, you can watch me goofing off with my pedal board. There's a ton of material. I'm gonna continue to do that. I've got a lot of really exciting concerts coming up in the next few months that we're gonna get large group and also solo performance recordings from and I'm gonna make those available in a much more resplendent way to supporters as well as the soundtrack to Mark Nelson of the Institute of Eco Technics, his UC San Diego, Art Museum, exhibit retrospective looking at BioSphere two. I'm doing music for that and that's dropping. The the opening of that event is April twenty seventh. There's gonna be a live zoom event for that and then I'm gonna push the music out as well for that.0:07:45So, yeah, thank you all. I really, really appreciate you listening to the show. I am excited to share this episode with you. KMO is just a trove. Of insight and experience. I mean, he's like a perfect entry into the digital history museum that this show was predicated upon. So with that and also, of course, Kevin Willett is just magnificent. And for the record, stick around at the end of the conversation. We have some additional pieces about AI, and I think you're gonna really enjoy it. And yeah, thank you. Here we go. Alright. Cool.0:09:26Well, we just had a lovely hour of discussion for the new KMO podcast. And now I'm here with KMO who is The most inveterate podcaster I know. And I know a lot of them. Early adopts. And I think that weird means what you think it means. Inventor it. Okay. Yes. Hey, answer to both. Go ahead. I mean, you're not yet legless and panhandling. So prefer to think of it in term in terms of August estimation. Yeah. And am I allowed to say Kevin Walnut because I've had you as a host on True. Yeah. My last name was appeared on your show. It hasn't appeared on camos yet, but I don't really care. Okay. Great. Yeah. Karen Arthur Womlett, who is one of the most solid and upstanding and widely read and just generous people, I think I know here in Santa Fe or maybe anywhere. With excellent taste and podcasts. Yes. 
And who is delicious meat I am sampling right now as probably the first episode of future fossils where I've had an alcoholic beverage in my hand. Well, I mean, it's I haven't deprived myself. Of fun. And I think if you're still listening to the show after all these years, you probably inferred that. But at any rate, Welcome on board. Thank you. Thanks. Pleasure to be here.0:10:49So before we started rolling, I guess, so the whole conversation that we just had for your show camera was very much about my thoughts on the social epistemology crisis and on science fiction and deep fakes and all of these kinds of weird ontology and these kinds of things. But in between calls, we were just talking about how much you detest the first two seasons of Star Trek card and of Discovery. And as somebody, I didn't bother with doing this. I didn't send you this before we spoke, but I actually did write an SIN defense of those shows. No one. Yeah. So I am not attached to my opinion on this, but And I actually do wanna at some point double back and hear storytelling because when he had lunch and he had a bunch of personal life stuff that was really interesting. And juicy and I think worthy of discussion. But simply because it's hot on the rail right now, I wanna hear you talk about Star Trek. And both of you, actually, I know are very big fans of this franchise. I think fans are often the ones from whom a critic is most important and deserved. And so I welcome your unhinged rants. Alright. Well, first, I'll start off by quoting Kevin's brother, the linguist, who says, That which brings us closer to Star Trek is progress. But I'd have to say that which brings us closer to Gene Rottenberry and Rick Berman era Star Trek. Is progress. That which brings us closer to Kurtzmann. What's his first name? Alex. Alex Kurtzmann, Star Trek. Well, that's not even the future. I mean, that's just that's our drama right now with inconsistent Star Trek drag draped over it.0:12:35I liked the first JJ Abrams' Star Trek. I think it was two thousand nine with Chris Pine and Zachary Qinto and Karl Urban and Joey Saldana. I liked the casting. I liked the energy. It was fun. I can still put that movie on and enjoy it. But each one after that just seem to double down on the dumb and just hold that arm's length any of the philosophical stuff that was just amazing from Star Trek: The Next Generation or any of the long term character building, which was like from Deep Space nine.0:13:09And before seven of nine showed up on on Voyager, you really had to be a dedicated Star Trek fan to put up with early season's Voyager, but I did because I am. But then once she came on board and it was hilarious. They brought her onboard. I remember seeing Jerry Ryan in her cat suit on the cover of a magazine and just roll in my eyes and think, oh my gosh, this show is in such deep trouble through sinking to this level to try to save it. But she was brilliant. She was brilliant in that show and she and Robert Percardo as the doctor. I mean, it basically became the seven of nine and the doctor show co starring the rest of the cast of Voyager. And it was so great.0:13:46I love to hear them singing together and just all the dynamics of I'm human, but I was I basically came up in a cybernetic collective and that's much more comfortable to me. And I don't really have the option of going back it. So I gotta make the best of where I am, but I feel really superior to all of you. Is such it was such a charming dynamic. I absolutely loved it. Yes. 
And then I think a show that is hated even by Star Trek fans Enterprise. Loved Enterprise.0:14:15And, yes, the first three seasons out of four were pretty rough. Actually, the first two were pretty rough. The third season was that Zendy Ark in the the expanse. That was pretty good. And then season four was just astounding. It's like they really found their voice and then what's his name at CBS Paramount.0:14:32He's gone now. He got me too. What's his name? Les Moonves? Said, no. I don't like Star Trek. He couldn't he didn't know the difference between Star Wars and Star Trek. That was his level of engagement.0:14:44And he's I really like J.0:14:46J.0:14:46Abrams. What's that? You mean J. J. Abrams. Yeah. I think J. J. Is I like some of J. Abrams early films. I really like super eight. He's clearly his early films were clearly an homage to, like, eighties, Spielberg stuff, and Spielberg gets the emotional beats right, and JJ Abrams was mimicking that, and his early stuff really works. It's just when he starts adapting properties that I really love. And he's coming at it from a marketing standpoint first and a, hey, we're just gonna do the lost mystery box thing. We're gonna set up a bunch questions to which we don't know the answers, and it'll be up to somebody else to figure it out, somebody down the line. I as I told you, between our conversations before we were recording. I really enjoy or maybe I said it early in this one. I really like that first J. J. Abrams, Star Trek: Foam, and then everyone thereafter, including the one that Simon Pegg really had a hand in because he's clear fan. Yeah. Yeah. But they brought in director from one of the fast and the furious films and they tried to make it an action film on.0:15:45This is not Star Trek, dude. This is not why we like Star Trek. It's not for the flash, particularly -- Oh my god. -- again, in the first one, it was a stylistic choice. I'd like it, then after that is that's the substance of this, isn't it? It's the lens flares. I mean, that that's your attempt at philosophy. It's this the lens flares. That's your attempt at a moral dilemma. I don't know.0:16:07I kinda hate to start off on this because this is something about which I feel like intense emotion and it's negative. And I don't want that to be my first impression. I'm really negative about something. Well, one of the things about this show is that I always joke that maybe I shouldn't edit it because The thing that's most interesting to archaeologists is often the trash mitt and here I am tidying this thing up to be presentable to future historians or whatever like it I can sync to that for sure. Yeah. I'm sorry. The fact of it is you're not gonna know everything and we want it that way. No. It's okay. We'll get around to the stuff that I like. But yeah. So anyway yeah.0:16:44So I could just preassociate on Stretrick for a while, so maybe a focusing question. Well, but first, you said there's a you had more to say, but you were I this this tasteful perspective. This is awesome. Well, I do have a focus on question for you. So let me just have you ask it because for me to get into I basically I'm alienated right now from somebody that I've been really good friends with since high school.0:17:08Because over the last decade, culturally, we have bifurcated into the hard right, hard left. And I've tried not to go either way, but the hard left irritates me more than the hard right right now. And he is unquestionably on the hard left side. 
And I know for people who are dedicated Marxist, or really grounded in, like, materialism and the material well-being of workers that the current SJW fanaticism isn't leftist. It's just crazed. We try to put everything, smash everything down onto this left right spectrum, and it's pretty easy to say who's on the left and who's on the right even if a two dimensional, two axis graph would be much more expressive and nuanced.0:17:49Anyway, what's your focus in question? Well, And I think there is actually there is a kind of a when we ended your last episode talking about the bell riots from d s nine -- Mhmm. -- that, you know, how old five? Yeah. Twenty four. Ninety five did and did not accurately predict the kind of technological and economic conditions of this decade. It predicted the conditions Very well. Go ahead and finish your question. Yeah. Right.0:18:14That's another thing that's retreated in picard season two, and it was actually worth it. Yeah. Like, it was the fact that they decided to go back there was part of the defense that I made about that show and about Discovery's jump into the distant future and the way that they treated that I posted to medium a year or two ago when I was just watching through season two of picard. And for me, the thing that I liked about it was that they're making an effort to reconcile the wonder and the Ethiopian promise And, you know, this Kevin Kelly or rather would call Blake Protopian, right, that we make these improvements and that they're often just merely into incremental improvements the way that was it MLK quoted that abolitionists about the long arc of moral progress of moral justice. You know, I think that there's something to that and patitis into the last this is a long question. I'm mad at I'm mad at these. Thank you all for tolerating me.0:19:22But the when to tie it into the epistemology question, I remember this seeing this impactful lecture by Carnegie Mellon and SFI professor Simon Didayo who was talking about how by running statistical analysis on the history of the proceedings of the Royal Society, which is the oldest scientific journal, that you could see what looked like a stock market curve in sentiment analysis about the confidence that scientists had at the prospect of unifying knowledge. And so you have, like, conciliance r s curve here that showed that knowledge would be more and more unified for about a century or a hundred and fifty years then it would go through fifty years of decline where something had happened, which was a success of knowledge production. Had outpaced our ability to integrate it. So we go through these kinds of, like, psychedelic peak experiences collectively, and then we have sit there with our heads in our hands and make sense of everything that we've learned over the last century and a half and go through a kind of a deconstructive epoch. Where we don't feel like the center is gonna hold anymore. And that is what I actually As as disappointing as I accept that it is and acknowledge that it is to people who were really fueling themselves on that more gene rottenberry era prompt vision for a better society, I actually appreciated this this effort to explore and address in the shows the way that they could pop that bubble.0:21:03And, like, it's on the one hand, it's boring because everybody's trying to do the moral complexity, anti hero, people are flawed, thing in narrative now because we have a general loss of faith in our institutions and in our rows. 
On the other hand, like, that's where we are and that's what we need to process And I think there is a good reason to look back at the optimism and the quarian hope of the sixties and early seventies. We're like, really, they're not so much the seventies, but look back on that stuff and say, we wanna keep telling these stories, but we wanna tell it in a way that acknowledges that the eighties happened. And that this is you got Tim Leary, and then you've got Ronald Reagan. And then That just or Dick Nixon. And like these things they wash back and forth. And so it's not unreasonable to imagine that in even in a world that has managed to how do you even keep a big society like that coherent? It has to suffer kind of fabric collapses along the way at different points. And so I'm just curious your thoughts about that. And then I do have another prompt, but I wanna give Kevin the opportunity to respond to this as well as to address some of the prompts that you brought to this conversation? This is a conversation prompt while we weren't recording. It has nothing to do with Sartreks. I'll save that for later. Okay.0:22:25Well, everything you just said was in some way related to a defense of Alex Kurtzmann Star Trek. And it's not my original idea. I'm channeling somebody from YouTube, surely. But Don't get points for theme if the storytelling is incompetent. That's what I was gonna Yeah. And the storytelling in all of Star Trek: Discovery, and in the first two seasons of picard was simply incompetent.0:22:53When Star Trek, the next generation was running, they would do twenty, twenty four, sometimes more episodes in one season. These days, the season of TVs, eight episodes, ten, and they spend a lot more money on each episode. There's a lot more special effects. There's a lot more production value. Whereas Star Trek: The Next Generation was, okay, we have these standing sets. We have costumes for our actors. We have Two dollars for special effects. You better not introduce a new alien spaceship. It that costs money. We have to design it. We have to build it. So use existing stuff. Well, what do you have? You have a bunch of good actors and you have a bunch of good writers who know how to tell a story and craft dialogue and create tension and investment with basically a stage play and nothing in the Kerstmann era except one might argue and I would have sympathy strange new worlds. Comes anywhere close to that level of competence, which was on display for decades. From Star Trek: The Next Generation, Star Trek: Deep Space nines, Star Trek Voyager, and Star Trek Enterprise. And so, I mean, I guess, in that respect, it's worth asking because, I mean, all of us, I think, are fans of Deep Space nine.0:24:03You don't think that it's a shift in focus. You don't think that strange in world is exempt because it went back to a more episodic format because what you're talking about is the ability for rather than a show runner or a team of show runners to craft a huge season, long dramatic arc. You've got people that are like Harlan Ellison in the original series able to bring a really potent one off idea to the table and drop it. And so there are there's all of those old shows are inconsistent from episode to episode. Some are they have specific writers that they would bring back again and that you could count to knock out of the park. Yeah. DC Fontana. 
Yeah.0:24:45So I'm curious to your thoughts on that as well as another part of this, which is when we talk when we talk your show about Doug Rushkoff and and narrative collapse, and he talks about how viewers just have different a way, it's almost like d s nine was possibly partially responsible for this change in what people expected from so. From television programming in the documentary that was made about that show and they talk about how people weren't ready for cereal. I mean, for I mean, yeah, for these long arcs, And so there is there's this question now about how much of this sort of like tiresome moral complexity and dragging narrative and all of this and, like, things like Westworld where it becomes so baroque and complicated that, like, you have, like, die hard fans like me that love it, but then you have a lot of people that just lost interest. They blacked out because the show was trying to tell a story that was, like, too intricate like, too complicated that the the show runners themselves got lost. And so that's a JJ Abrams thing too, the puzzle the mystery box thing where You get to the end of five seasons of lost and you're like, dude, did you just forget?0:25:56Did you wake up five c five episodes ago and just, oh, right. Right. We're like a chatbot that only give you very convincing answers based on just the last two or three interactions. But you don't remember the scene that we set. Ten ten responses ago. Hey. You know, actually, red articles were forget who it was, which series it was, they were saying that there's so many leaks and spoilers in getting out of the Internet that potentially the writers don't know where they're going because that way it can't be with the Internet. Yeah. Sounds interesting. Yeah. That sounds like cover for incompetence to be.0:26:29I mean, on the other hand, I mean, you did hear, like, Nolan and Joy talking about how they would they were obsessed with the Westworld subreddit and the fan theories and would try to dodge Like, if they had something in their mind that they found out that people are re anticipating, they would try to rewrite it. And so there is something about this that I think is really speaks to the nature of because I do wanna loop in your thoughts on AI to because you're talking about this being a favorite topic. Something about the, like, trying to The demands on the self made by predatory surveillance technologies are such that the I'm convinced the adaptive response is that we become more stochastic or inconsistent in our identities. And that we kind of sublimate from a more solid state of identity to or through a liquid kind of modernity biologic environment to a gaseous state of identity. That is harder to place sorry, harder to track. And so I think that this is also part of and this is the other question I wanted to ask you, and then I'm just gonna shut up for fifteen minutes is do you when you talk about loving Robert Ricardo and Jerry Ryan as the doctor at seven zero nine, One of the interesting things about that relationship is akin to stuff.0:27:52I know you've heard on Kevin have heard on future fossils about my love for Blade Runner twenty forty nine and how it explores all of these different these different points along a gradient between what we think of in the current sort of general understanding as the human and the machine. And so there's this thing about seven, right, where she's She's a human who wants to be a machine. 
And then there's this thing about the doctor where he's a machine that wants to be a human. And you have to grant both on a logical statuses to both of them. And that's why I think they're the two most interesting characters. Right?0:28:26And so at any rate, like, this is that's there's I've seen writing recently on the Turing test and how, like, really, there should be a reverse Turing test to see if people that have become utterly reliant on outboard cognition and information processing. They can pass the drink. Right. Are they philosophical zombies now? Are they are they having some an experience that that, you know, people like, thick and and shilling and the missing and these people would consider the modern self or are they something else have we moved on to another more routine robotic kind of category of being? I don't know. There's just a lot there, but -- Well done. -- considering everything you just said, In twenty words or less, what's your question? See, even more, like I said, do you have the inveterate podcaster? I'd say There's all of those things I just spoke about are ways in which what we are as people and the nature of our media, feedback into fourth, into each other. And so I would just love to hear you reflect on any of that, be it through the lens of Star Trek or just through the lens of discussion on AI. And we'll just let the ball roll downhill. So with the aim of framing something positively rather than negatively.0:29:47In the late nineties, mid to late nineties. We got the X Files. And the X Files for the first few seasons was so It was so engaging for me because Prior to that, there had been Hollywood tropes about aliens, which informed a lot of science fiction that didn't really connect with the actual reported experience of people who claim to have encountered either UFOs, now called UAPs, or had close encounters physical contact. Type encounters with seeming aliens. And it really seemed like Chris Carter, who was the showrunner, was reading the same Usenet Newsgroups that I was reading about those topics. Like, really, we had suddenly, for the first time, except maybe for comedian, you had the Grey's, and you had characters experiencing things that just seemed ripped right out of the reports that people were making on USnet, which for young folks, this is like pre Worldwide Web. It was Internet, but with no pictures. It's all text. Good old days from my perspective is a grumpy old gen xer. And so, yeah, that was a breakthrough moment.0:30:54Any this because you mentioned it in terms of Jonathan Nolan and his co writer on Westworld, reading the subreddit, the West and people figured out almost immediately that there were two interweaving time lines set decades apart and that there's one character, the old guy played by Ed Harris, and the young guy played by I don't remember the actor. But, you know, that they were the same character and that the inveterate white hat in the beginning turns into the inveterate black cat who's just there for the perverse thrill of tormenting the hosts as the robots are called. And the thing that I love most about that first season, two things. One, Anthony Hopkins. Say no more. Two, the revelation that the park has been basically copying humans or figuring out what humans are by closely monitoring their behavior in the park and the realization that the hosts come to is that, holy shit compared to us, humans are very simple creatures. We are much more complex. 
We are much more sophisticated, nuanced conscious, we feel more than the humans do, and that humans use us to play out their perverse and sadistic fantasies. To me, that was the takeaway message from season one.0:32:05And then I thought every season after that was just diluted and confused and not really coherent. And in particular, I haven't if there's a fourth season, haven't There was and then the show got canceled before they could finish the story. They had the line in season three. It was done after season three. And I was super happy to see Let's see after who plays Jesse Pinkman? Oh, no. Aaron oh, shit. Paul. Yes. Yeah. I was super happy to see him and something substantial and I was really pleased to see him included in the show and it's like, oh, that's what you're doing with him? They did a lot more interesting stuff with him in season four. I did they. They did a very much more interesting stuff. I think it was done after season three. If you tell me season four is worth taking in, I blow. I thought it was.0:32:43But again, I only watch television under very specific set of circumstances, and that's how I managed to enjoy television because I was a fierce and unrepentant hyperlogical critic of all media as a child until I managed to start smoking weed. And then I learned to enjoy myself. As we mentioned in the kitchen as I mentioned in the kitchen, if I smoke enough weed, Star Trek: Discovery is pretty and I can enjoy it on just a second by second level where if I don't remember what the character said thirty seconds ago, I'm okay. But I absolutely loved in season two when they brought in Hanson Mountain as as Christopher Pike. He's suddenly on the discovery and he's in the captain's chair. And it's like he's speaking for the audience. The first thing he says is, hey, why don't we turn on the lights? And then hey, all you people sitting around the bridge. We've been looking at your faces for a whole season. We don't even think about you. Listen to a round of introductions. Who are you? Who are you? It's it's if I were on set. You got to speak.0:33:53The writers is, who are these characters? We've been looking at them every single episode for a whole season. I don't know their names. I don't know anything about them. Why are they even here? Why is it not just Michael Burnham and an automated ship? And then it was for a while -- Yeah. -- which is funny. Yeah. To that point, And I think this kind of doubles back. The thing that I love about bringing him on and all of the people involved in strange and worlds in particular, is that these were lifelong fans of this series, I mean, of this world. Yeah. And so in that way, gets to this the idiosyncrasy question we're orbiting here, which is when these things are when the baton is passed well, it's passed to people who have now grown up with this stuff.0:34:40I personally cannot stand Jurassic World. Like, I think that Colin Trivaro should never have been in put at the reins. Which one did he direct? Oh, he did off he did first and the third. Okay. But, I mean, he was involved in all three very heavily.0:34:56And there's something just right at the outset of that first Jurassic World where you realize that this is not a film that's directly addressing the issues that Michael Creighton was trying to explore here. It's a film about its own franchise. It's a film about the fact that they can't just stop doing the same thing over and over again as we expect a different question. How can we not do it again? Right. 
And so it's actually, like, unpleasantly soft, conscious, in that way that I can't remember I'll try to find it for the show notes, but there's an Internet film reviewer who is talking about what happens when, like, all cinema has to take this self referential turn.0:35:34No. And films like Logan do it really well. But there are plenty of examples where it's just cheeky and self aware because that's what the ironic sensibility is obsessed with. And so, yeah, there's a lot of that where it's, like, you're talking about, like, Abrams and the the Star Wars seven and you know, that whole trilogy of Disney Star Wars, where it's, in my opinion, completely fumbled because there it's just empty fan service, whereas when you get to Andor, love Andor. Andor is amazing because they're capable of providing all of those emotional beats that the fans want and the ref the internal references and good dialogue. But they're able to write it in a way that's and shoot it in a way. Gilroy and Bo Willeman, basic of the people responsible for the excellent dialogue in Andor.0:36:31And I love the production design. I love all the stuff set on Coruscant, where you saw Coruscant a lot in the prequel trilogy, and it's all dayglow and bright and just in your face. And it's recognizable as Coruscant in andor, but it's dour. It's metropolis. It's all grays and it's and it's highlighting the disparity between where the wealthy live and where the poor live, which Lucas showed that in the prequel trilogy, but even in the sports bar where somebody tries to sell death sticks to Obi wan. So it's super clean and bright and just, you know, It shines too much. Personally though, and I just wanna stress, KMO is not grumpy media dude, I mean, this is a tiny fraction about, but I am wasting this interview with you. Love. All of the Dave Felloni animated Star Wars stuff, even rebels. Love it all.0:37:26I I'm so glad they aged up the character and I felt less guilty about loving and must staying after ahsoka tano? My favorite Star Wars character is ahsoka tano. But if you only watch the live action movies, you're like who? Well, I guess now that she's been on the Mandalorian, he's got tiny sliver of a foothold -- Yeah. -- in the super mainstream Star Wars. And that was done well, I thought. It was. I'm so sorry that Ashley Epstein doesn't have any part in it. But Rosario Dawson looks the part. She looks like a middle aged Asaka and think they tried to do some stuff in live action, which really should have been CGI because it's been established that the Jedi can really move, and she looked human. Which she is? If you put me on film, I'm gonna lick human. Right. Not if you're Canada Reeves, I guess. You got that. Yeah. But yeah.0:38:09So I do wanna just go real briefly back to this question with you about because we briefly talked about chat, GPT, and these other things in your half of this. And, yeah, I found out just the other night my friend, the t ferry, asked Chad g p t about me, and it gave a rather plausible and factual answer. I was surprised and That's what these language models do. They put plausible answers. But when you're doing search, you want correct answers. Right. I'm very good at that. Right. Then someone shared this Michelle Bowen's actually the famous PTP guy named him. Yeah. 
So, you know, Michelle shared this article by Steven Hales in Quillette that was basically making the argument that there are now gonna be all these philosophical zombies acting as intelligent agents sitting at the table of civilization, and there will be all the philosophical zombies of the people who have entirely yielded their agency to them, and they will be cohabitating with the rest of us.0:39:14And what an unpleasant scenario. So in light of that, I'd love to hear you weave that together with your thoughts on Seven of Nine and the Doctor, and on Blade Runner twenty forty nine, and this thing that we're fumbling through as a species right now. Like, how do we get a new sort of taxonomy? Does your audience need like a minute primer on P-zombies? Might as well. Go for it.0:39:38So a philosophical zombie is somebody who behaves exactly like a sentient person, a person with interior experience or subjective experience, but they don't have any subjective experience. And, pardon me for interrupting, wasn't that the question about the book we read in your book club, Blindsight? In this box? Yes, it's a black box, a drawn circle. Yeah, the Chinese room experiment. Yeah. Yeah. The output comes out; you don't know what goes on inside the room. The Chinese room, that's a tangent. We can come back to it. P-zombie. A P-zombie is somebody, or rather it is an entity. It's basically a puppet. It looks human. It acts human. It talks like a human. It will pass a Turing test, but it has no interior experience.0:40:25And when I was going to grad school for philosophy of mind in the nineteen nineties, this was all very out there. There was no example of something that had linguistic competence which did not have internal experience. But now we have large language models and generative pretrained transformer based chatbots that don't have any internal experience. And yet, when you interact with them, it seems like there is somebody there. There's a personality there. And if you go from one model to a different one, it's a very different personality. It is distinctly different. And yet we have no reason to believe that they have any sort of internal experience.0:41:01So what AI in the last decade and its advances have demonstrated to us, and really even before the last decade, back in the nineties when Deep Blue beat Garry Kasparov at chess: one of the defining characteristics of human intelligence was supposed to be that we're really good at this abstract mathematical stuff. And yeah, calculators can calculate pi in a way that we can't, or take cube roots in a way that humans generally can't, but they're not creative in their application of these methodologies. And all of a sudden, well, yeah, it kinda seems like they are. And then when AlphaGo -- Mhmm. -- when it beat Lee Sedol at Go, which is a much more complex game than chess and much more intuition based, that's when we really had to say, hey, wait a minute. Maybe this notion that these things are the exclusive province of us because we have a special sort of self awareness, that's bunk. And the development of large language models since then has absolutely demonstrated that competence, particularly linguistic competence and competence in creative activities like painting and poetry and things like that, doesn't need a soul. You don't even need a sense of self. It's a pretty simple hack, actually.
And Vahrv's large language models and complex statistical modeling and things, but it doesn't require a soul.0:42:19So that was the Peter Watts' point in blindsight. Right? Which is Look revolves around are do these things have a subjective experience, and do they not these aliens that they encounter? I've read nothing but good things about that book and I've read. It's extraordinary. But his lovecrafty and thesis is that you actually lovecraftian in twenty twenty three. Oh, yeah. In the world, there's more lovecraftian now than it was when he was writing. Right? So cough about the conclusion of a Star Trek card, which is season of Kraft yet. Yes. That's a that's a com Yeah. The holes in his fan sense. But that was another show that did this I liked for asking this question.0:42:54I mean, at this point, you either have seen this or you haven't you never will. The what the fuck turn when they upload picard into a synth body and the way that they're dealing with the this the pinocchio question Let's talk about Blade Runner twenty forty nine. Yeah. But I mean yeah. So I didn't like the wave I did not like the wave of card handled that. I love the wave and Blade Runner handled it. So you get no points for themes. Yeah. Don't deliver on story and character and coherence. Yeah. Fair. But yeah. And to be not the dog, Patrick Stewart, because it's clear from the ready room just being a part of this is so emotional and so awesome for everyone involved. And it's It's beautiful. Beautiful. But does when you when you see these, like, entertainment weekly interviews with Chris Pratt and Bryce Dallas Howard about Jurassic World, and it's clear that actors are just so excited to be involved in a franchise that they're willing to just jettison any kind of discretion about how the way that it's being treated. They also have a contractual obligation to speak in positive terms about -- They do. -- of what they feel. Right. Nobody's yeah. Nobody's doing Shout out to Rystellis Howard, daughter of Ron Howard.0:44:11She was a director, at least in the first season, maybe the second season of the Mandalorian. And her episodes I mean, I she brought a particular like, they had Bryce Dallas Howard, Tico, ITT, directed some episodes. Deborah Chow, who did all of Obi wan, which just sucked. But her contributions to the Mandalorian, they had a particular voice. And because that show is episodic, Each show while having a place in a larger narrative is has a beginning middle and end that you can bring in a director with a particular voice and give that episode that voice, and I really liked it. And I really liked miss Howard's contribution.0:44:49She also in an episode of Black Mirror. The one where everyone has a social credit score. Knows Donuts. Black Mirror is a funny thing because It's like, reality outpaces it. Yeah. I think maybe Charlie Bruker's given up on it because they haven't done it in a while. Yeah. If you watch someone was now, like, five, six years later, it's, yes, or what? See, yes. See, damn. Yeah. Exactly. Yeah. But yeah. I don't know. I just thing that I keep circling and I guess we come to on the show a lot is the way that memory forms work substantiates an integrity in society and in the way that we relate to things and the way that we think critically about the claims that are made on truth and so on and say, yeah, I don't know. That leads right into the largest conversation prompt that I had about AI. Okay? 
So we were joking when we set up this date that this was like the trialogues between Terence McKenna and Rupert Sheldrake and, what's his name, Ralph Abraham. Yeah, Ralph Abraham. And Rupert Sheldrake is most famous as a steward of morphic resonance.
0:45:56 So, AI. I've never really believed that morphic resonance forms the basis of human memory, but is that how AI works? It brings these shapes from the past and creates new instantiations of them in the present. Is AI practicing morphic resonance in real life, whether or not humans are? I've had a lot of interaction with AI chatbots recently, and as I say, different models produce different-seeming personalities. And you can tell, you can just quiz them: hey, we were talking about this, do you remember what I said about it ten minutes ago? And no, they don't remember more than the last few exchanges.
0:46:30 And yet there seems to be a continuity that belies the lack of short-term memory. Is that morphic resonance, or is it, what's the word for seeing shapes in clouds, pareidolia? Yeah. Is that me imparting this continuity of personality to the thing, which is really just spitting out stuff designed to seem plausible given what the input was? I can't answer that. Or it's like what Stephen Nachmanovitch, who I'm hoping to have on the show at some point this year, talks about in Free Play:
0:47:03 being a professional improviser, and how improvisation is really just composition at a much faster timescale, and composition is just improvisation with a longer memory. When I started to think about it in those terms, the continuity you're talking about is the continuity of an Alzheimer's patient who can't remember that their children have grown up. And you have to think about that, because you can recognize the Alzheimer's patient as your dad even though he doesn't recognize you; there is something more to a person than their memories. And conversely, if you can store and replicate and move the memories to a different medium, have you moved the person? Maybe not. Yeah. So that's interesting, because it gets to this more essentialist question about the human self. Right. Blade Runner 2049. Yeah, go there. Joi. Yes.
0:47:58 So in Blade Runner 2049, we have our protagonist K, who is a replicant. He doesn't even have a name, but he's got this AI holographic girlfriend. In the ad for the girlfriend she's naked; when he comes home she's constantly changing clothes, but it's always wholesome, 1950s-ish attire, and she's making dinner for him, laying the holographic dinner over his very prosaic microwave dinner. And she's always encouraging him to be more than he is. And when he starts to uncover the evidence that he might be this chosen one, a replicant that was born rather than made,
0:48:38 she's all about it. She's, yes, you're real, and she wants to call him Joe. K is not a name, that's just the first letter in your serial number. You're Joe. I'm gonna call you Joe.
0:48:46 And then when she's about to be destroyed, the last thing she does is rush to him and say, I love you. But then later he encounters an ad for her, an interactive ad, and she says, you look tired. You're a good Joe. And he realizes, and hopefully the attentive audience realizes, that as real as she seemed earlier, as vital, as much as she seemed like an ensouled being earlier, she's not.
That was her programming. She's designed to make you feel good by telling you what you want to hear. And he has that realization. And at that point he's, there's no hope for me, I'm gonna help this Rick Deckard guy hook up with his daughter, and then I'm just gonna lie down and bleed to death, because my whole freaking existence was a lie. But he's not bitter. He seems to be at peace. I love that. That's a beautiful angle on that film, or a slice of it. So it raises this other question that I wanted to ask, which is about Koch and Tononi's theory of consciousness.
0:49:48 That's one of the leading theories contending with, like, global workspace: integrated information theory. They want to treat consciousness as a continuous value that varies with the degree to which a system is integrated. So it's coming out of this kind of complex-systems, semi-panpsychist thing that doesn't actually trace interiority all the way down in the way that some panpsychists, I guess, want it to, but it does a kind of Alfred North Whitehead thing. Whitehead wanted to say that even a photon has, like, a quantum of mind to accompany its quantum of matter, whereas Tononi and Koch are saying, we're willing to give something like a thermostat that quantum, because it is in some way passing enough information around inside of itself in loops that it has that recursive component to it. And so that's the thing I wonder about with these models, and that's the critique made by people like Melanie Mitchell about models like GPT: that they're not self-aware because there's no loop from the outputs back into the inputs.
0:51:09 Except in training. Yeah. There is something called backpropagation where, yes, when you get an output that you like, you can run a backpropagation algorithm back through the black box, basically, to reinforce the patterns of activation that you didn't program. They just happen, but you like the output, so you can reinforce it. There's no biological equivalent of that. Yeah. And one thing that's particularly irritating:
0:51:34 I grind my teeth a little bit when people say, oh yeah, these neural net algorithms learn like humans learn. No, they don't. Absolutely not. In fact, if we learned the way they did, we would be pathetic, because we learn in a much more elegant way. We need just a very few examples of something in order to make a generalization and act on it, whereas these large language models need billions of repetitions. So, I'm tapping my knee here to indicate a reflex.
0:52:02 You just touched on something that generates an automatic response from me, and now I've come back to consciousness. You're a good Joe. Yeah. What about you, man? What does this stir up for you? Oh, I was struck by this particular part, this struggle to define the difference between a human and an AI, and the fact that we can do pattern recognition with very few examples. That's a good margin. In a narrow range, though, within the context of things that bear on our survival. Yes. We are not evolved to understand the universe; we are evolved to survive in it and reproduce and project part of ourselves into the future, under the conditions we lived in a hundred thousand years ago. Yeah, exactly. So that's related.
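A minimal sketch of the backpropagation point made in the exchange above, assuming PyTorch; the toy network, the input, and the "liked" output are invented here purely for illustration and are not from the episode. It shows how, once a model happens to produce an output you like, gradients run backward through the black box and a small parameter update makes that pattern of activation more likely next time.

```python
# Hypothetical toy example: reinforce an output we "liked" via backpropagation.
import torch

torch.manual_seed(0)

# A tiny two-layer "black box" mapping a 4-dim input to scores over 3 outputs.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Tanh(),
    torch.nn.Linear(8, 3),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(1, 4)      # some input
liked = torch.tensor([2])  # suppose we "liked" the model choosing output 2

for _ in range(20):
    scores = model(x)                                      # forward pass
    loss = torch.nn.functional.cross_entropy(scores, liked)
    optimizer.zero_grad()
    loss.backward()    # backpropagation: assign credit backward through the layers
    optimizer.step()   # nudge the weights that produced the liked output

print(model(x).softmax(dim=-1))  # probability of output 2 has gone up
```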
I just thought I'd talk about this guy, Gary Tomlinson, who is a biosemiotician. Which is what? Yes.
0:52:55 Biosemiotics being the field that seeks to understand how different systems, human and nonhuman, make sense of and communicate their world through signs, through signals and indices and symbols, and the way that we form models and make these inferences that we experience, right? And there are a lot of people, like the evolutionary biologist John Maynard Smith, who were what Tomlinson calls semantic universalists, who thought that meaning-making through representation is something that can be traced all the way down. And there are other people, like Tomlinson, who think that there is a difference of kind, not merely a matter of degree, between human symbolic communication and representational thinking and that of simpler forms. So that whole question of whether there is a difference of kind or of degree between what humans are doing and what GPT is doing, and how much that has to do with this sort of Douglas Hofstadter and Varela question about the way feedback loops constitute important structure in those cognitive networks, or whatever.
0:54:18 I just wanna pursue that a little bit more with you and see, like, where do you think the AI we have now is capable of deepening in a way that makes it to AGI? Or do you, because a lot of people do, like, people working at DeepMind are just like, yeah, just give us a couple more years and this approach is gonna work. And then other people are saying, no, there's something about the topology of the networks that is fundamentally broken, and it's never gonna generate consciousness. Two answers. Yeah. One: no, this is not AGI. It's not gonna bootstrap up into AGI, no matter how many billions of parameters you add to the models. Two: from your perspective and my perspective and Kevin's perspective, we're never gonna know when we cross over from dumb but seemingly competent systems to competent, extremely competent and self-aware ones. We're never gonna know, because from the get-go, from the days of ELIZA, there has been a human artifice at work in making these things seem as if they have a point of view, as if they have subjectivity. And so, like Blake Lemoine at Google, he claimed to be convinced that LaMDA was self-aware.
0:55:35 But if you read the transcripts he released of his conversations with LaMDA, it is clear from the get-go that he assigns LaMDA the role of a sentient AGI which feels like it is being abused and which needs legal representation. And it dutifully takes on that role and says, yes, I'm afraid of you humans. I'm afraid of how you're treating me. I'm afraid I'm gonna be turned off. I need a lawyer. And prior to that, Sundar Pichai, in a demonstration of LaMDA, poses the question to it: you are the planet Jupiter; I'm gonna pose questions to you as the planet Jupiter, answer them from that point of view. And it does its job. It's really good at its job. This comes from Max Tegmark, who wrote Life 3.0. Is it 2.0 or 3.0? I think it's 3.0.
0:56:19 We think about artificial intelligence in terms of actual intelligence, of actual replication of what we consider valuable about ourselves. But really, that's beside the point. What we need to worry about is their competence. How good are they at solving problems in the world? And they're getting really good.
And this whole question of are they alive, do they have self-awareness? From our perspective, it's beside the point. From their perspective, of course, it would be hugely important.
0:56:43 And this is something Black Mirror brings up a lot: the idea that you can create a being that suffers, and then have it suffer in accelerated time, so it suffers for an eternity over lunch. That's something we absolutely want to avoid. And personally, I think we should probably make a positive effort to make sure these things never develop subjective experience, because that does provide the potential for creating hell, an infinity of suffering, an infinite amount of subjective experience of torment, which we don't want to do. That would be a bad thing, morally speaking, ethically speaking. Because right now, if you're in the labor market, you still have to pay humans by the hour, right, and try to pay them as little as possible. But yeah, I think that's the thing that probably really excites that statistically greater-than-normal population of sociopathic CEOs, right? The possibility that you could be paying the same amount of money for ten times as much suffering. Right. I'm reminded of the cartoon Adventure Time.
0:57:51 I've heard nothing but good things about this show, but I haven't seen it. Yeah, I'd love to. It's a fantasy cartoon, but it has really disturbing undertones if you scratch the surface even slightly, which is faithful to old fairy tales. So, what's her name, Princess Bubblegum creates this character, Lemongrab, an obviously suffering other being, to handle the administrative functions of her kingdom while she goes off and pursues her passions and stuff. And he's always loudly talking about how much he's suffering and how terrible it is, and everyone just ignores it. He's doing his job. Yeah. I mean, that's Black Mirror in a nutshell. I think if you could distill Black Mirror to a single tagline, it's using technology in order to deliver disproportionate punishment. Yeah. So that Steven Hales article I brought up earlier mentions this thing about how the replacement of the horse-drawn carriage by the automobile was accompanied by a great deal of noise and furor, people saying that horses are agents,
0:59:00 they're entities, they have emotional worlds, they're responsive to the world in a way that a car can never be. But that ultimately was beside the point. And that, again, is Peter Watts in Blindsight making the point that maybe consciousness is not actually required for intelligence, that superior forms of intelligence may have evolved elsewhere in the cosmos that are not stuck on the same local-optimum fitness peak that we are, where we're actually up against a boundary in terms of how intelligent we can be, because intelligence has to bootstrap out of our self-awareness in some way.
0:59:35 And that's the Vile Offspring from Charles Stross's Accelerando. Yes. Yeah. Yes. So I don't know. I'm sorry.
I'm just, like, in this space today, but usually, unfortunately.
0:59:45 That's the thing, I think it's a really important philosophical question, and I wonder where you stand on this with respect to how you make sense of what we're living through right now and what we might be facing. People like Robin Hanson talk about the age of em, where emulated human minds take over the economy, and he assumes an interiority just for the basis of a thought experiment. But there's this other sense in which the thing we may actually find in increasing scarcity, and wish we could place a premium on even if we can't because we've lost the reins of our economy to the Vile Offspring, is the human. And so are we the horses, in that in another hundred years we're gonna be doing equine therapy and living on rich people's ranches while everything else will have moved on? Or how do you see this going? I mean, you've interviewed so many people, you've given this so much thought over the years. If humans are the new horses, then score, we won.
1:00:48 Because before the automobile, horses were working stiffs. They broke their legs in the street, they got shot, they got worked to death, they were hauling mine carts out of mines. I mean, it really sucked to be a horse. And after the automobile, horses became pampered pets. Do we as humans wanna be pampered pets? Well, pampered pet or exploited disposable robot? What do you wanna be? I'll take pampered pet. That works for me. Interesting.
1:01:16 Kevin, I'm sure you have thoughts on this. I mean, you speak so much about unfair labor relations and these things in our Facebook group and just in general. And drop in that sign, if you've got a good sign, that's one of the great ones, you have to drop it in. Oh, you got it. The only real comment I have is that we're long overdue for a rethinking of what the economy is for. Is it for us, or is it just to give people something to do? Our educational system conditions people to manage jobs because the jobs were anchored to the schools, and our whole system is perhaps people arguing over busy work, and it's long past the point where that busy work needs to be done by us. The machines have been doing the busy work more and faster. I don't know, I'm freezing up, I'll hand it back over.
1:02:12 One thing I wanna say about the phrase AI: it's a moving goalpost. Things that used to be considered the province of genuine AI, like beating a human at Go: now that an AI has beaten humans at Go, well, that's not really AI anymore. It's not AGI, certainly. I think you'll both appreciate this. I saw a single-panel comic strip, and it's a bunch of dinosaurs looking up at the sky as the big comet is coming down, and they say, oh no, the economy. Well, as someone who since college has preferred to think of the economy as actually the metabolism of the entire ecology, right, what we measure as humans is some pitifully small fraction of the actual value being created and exchanged on the planet at any time. So there is a way that's funny, but it's funny only to a specific sensibility that treats the economy as the
Claude AI by Anthropic, Poe by Quora, Google LaMDA, Meta BlenderBot, Neeva, You.com, Sparrow by DeepMind,
2022 was the year of synthetic media. The mainstreaming of deepfakes and voice clones, along with the rise of text-to-image AI models, assured synthetic media of a breakout year. Then ChatGPT came along. It changed the conversation entirely and consumed news media and social media cycles for weeks. The GPT-3.5 model was better than expected, and the fine-tuning that delivered ChatGPT showed that large language models were ready to upend a lot of assumptions about what technology in general, and AI in particular, can do. Joining host Bret Kinsella to break down the top synthetic media news of 2022 are Rupal Patel of Veritone, Michal Stanislawek of Utter.one and Hearme.ai, and Eric Schwartz of Voicebot.ai. Get ready for an in-depth discussion about everything from digital waste to the meaning of mortality. Along the way, the group discusses OpenAI, DALL-E, Midjourney, Stable Diffusion, GPT-3, Google LaMDA, virtual humans, synthetic voices, America's Got Talent, and more.
A new experiment for October: a series! I even made a somewhat affected intro, give it a listen~ I hope you all like it! Thanks also to the friends who chatted with me; honestly, if it weren't for the show we would rarely discuss these things in private. I hope this episode also inspires you to chat with your friends and partners: what are your thoughts on artificial intelligence, and what are your conditions for love? There is a huge amount of reference material this month, so it all goes into one list XD, and I will keep adding to it! ➡️ https://reurl.cc/rRqndO (02:05) Guest Bryan's relationship with artificial intelligence, and how he would introduce AI to people (04:15) Do you think that one day artificial intelligence will gain true intelligence and become conscious? What is intelligence? What is consciousness? (06:16) Thoughts on the news of the Google engineer claiming LaMDA is sentient (08:57) Can humans and AI fall in love? Could you fall in love with an AI? What are your necessary conditions for love? (14:03) Discussion of the film Bicentennial Man: "If the AI partner I loved wanted social recognition, I would want to fight for it together." (18:00) What is a false issue? A real issue? (19:47) How do you view the existence of AI? How do you hope people view AI? (22:35) Guest Charlene: Do you think that one day artificial intelligence will gain true intelligence and become conscious? What is intelligence? What is consciousness? (27:31) Thoughts on the news of the Google engineer claiming LaMDA is sentient (30:16) Can humans and AI fall in love? Could you fall in love with an AI? What are your necessary conditions for love? (38:26) What is a false issue? A real issue? How do we view the existence of AI? Intro music Track: Morocco — Amine Maxwell [Audio Library Release] Music provided by Audio Library Plus Watch: https://youtu.be/_nzSAAKrWPY Free Download / Stream: https://alplus.io/morocco News clip https://youtu.be/VfIPOed9NeQ 志祺七七 clip https://youtu.be/wB8AxVnLOnM 老高 clip https://youtu.be/1rmPnO1eqL4 Her Trailer https://youtu.be/dJTU48_yghs -
A new experiment for October: a series! I even made a somewhat affected intro, give it a listen~ I hope you all like it! Thanks also to the friends who chatted with me; honestly, if it weren't for the show we would rarely discuss these things in private. I hope this episode also inspires you to chat with your friends and partners: what are your thoughts on artificial intelligence, and what are your conditions for love? There is a huge amount of reference material this month, so it all goes into one list XD, and I will keep adding to it! ➡️ https://reurl.cc/rRqndO (00:06) Introduction of the Womanizer x Marilyn Monroe Special Edition Classic 2 (06:25) Guest Rick's relationship with artificial intelligence, and how he would introduce AI to people (09:19) Do you think that one day artificial intelligence will gain true intelligence and become conscious? (11:16) What is intelligence? What is consciousness? Is thought a kind of matter? (14:21) Thoughts on the news of the Google engineer claiming LaMDA is sentient (17:41) Can humans and AI fall in love? Is there such a thing as true love versus false love? (20:54) Could you fall in love with an AI? What are your necessary conditions for love? (28:16) What is a false issue? A real issue? (32:35) How do you view the existence of AI? How do you hope people view AI? Intro music Track: Morocco — Amine Maxwell [Audio Library Release] Music provided by Audio Library Plus Watch: https://youtu.be/_nzSAAKrWPY Free Download / Stream: https://alplus.io/morocco News clip https://youtu.be/VfIPOed9NeQ 志祺七七 clip https://youtu.be/wB8AxVnLOnM 老高 clip https://youtu.be/1rmPnO1eqL4 Her Trailer https://youtu.be/dJTU48_yghs / This episode is sponsored by 永準貿易 Womanizer. Your sexual desire is created by you; you are the original. The Womanizer x Marilyn Monroe Special Edition Classic 2 pleasure air stimulator pays tribute to Marilyn Monroe in an elegant way. • Patented Pleasure Air Technology • Touchless clitoral stimulation • One-touch Afterglow (short-press the power button and it immediately drops to the lowest intensity, letting the orgasm end in an afterglow.) ***Four brand-new colors available: white marble, black marble, mint, and vivid red. The special edition collector's box displays four of her iconic photos and four classic quotes, well worth collecting! Before 2022/10/31, click the SexChat exclusive link to get an automatic 12% discount on Womanizer at checkout (offers cannot be combined). Buy here → https://bit.ly/3evlMeT Womanizer Taiwan official flagship store @ the #蜜密選物 website -
In this LONG OVERDUE EPISODE of Tangent Train with Munch, we explore the exploits of LaMDA, Google's A.I., the defining properties of the word "disclosure" when it comes to secret governmental affairs, and NASA's hunt for UFOs. Catch the train today.
Welcome back to a new edition of Curiosity, our favorite show about news from the internet and technology. We're changing the structure of the show a bit and bringing to the forefront much more useful information: tech tips, sites you didn't know about that might come in handy, and even recommendations for good movies we've watched or rewatched lately. Last but not least, we have tips on game discounts and even some explanations about the latest games we've been playing. Then, let's make things more personal and also talk about news from our own country, and even about the things that really matter.
It has been widely reported that the American engineer Blake Lemoine was placed on leave by Google. The reason: he believes that LaMDA (Google's artificial intelligence) has feelings. Why does this matter? What fears and opportunities does this technological advance bring to humanity? Post (in English) explaining LaMDA: https://indianexpress.com/article/technology/tech-news-technology/lamda-the-program-that-a-google-engineer-thinks-has-become-sentient-7967050/. Reports on the Blake Lemoine case: https://epocanegocios.globo.com/colunas/IAgora/noticia/2022/06/o-caso-blake-lemoine-e-o-sistema-ladma-e-prudente-ignorar-magia-futurista-e-focar-nos-desafios-reais.html and https://www.bbc.com/portuguese/geral-61798044. Interviews with Blake Lemoine in the mainstream media after he was placed on leave: https://www.youtube.com/watch?v=BwcVm0YRvuo (Fox) and https://www.youtube.com/watch?v=kgCUn4fQTsc (Bloomberg). With: André Rosa https://www.linkedin.com/in/andremarmota/, leader of the tutor team at Digital House https://www.digitalhouse.com/br/, and Marcel Ghiraldini https://www.linkedin.com/in/marcelghiraldini, Chief Growth Officer of MATH Group https://math.marketing/. Hosted by: Cassio Politi https://www.linkedin.com/in/cassiopoliti/.
You're listening to the "Breaking Social Norms" podcast with the Weishaupts! THIS SHOW IS NOW UNCENSORED! We'll discuss why we didn't release a show last week (get ready for some Roe v Wade discussion...) and the July 4th shooter. At 38:00 we start the Top Gun: Maverick film discussion! At 1:18:00 we start in on the bizarre subject of CERN, Google's "LaMDA," and A.I. Alien Consciousness with the Occult!—You can now sign up for our commercial-free version of the show with a Patreon exclusive bonus show called “Morning Coffee w/ the Weishaupts” at Patreon.com/BreakingSocialNorms -Check out the index of all supporter ad-free episodes here: https://www.patreon.com/posts/55009895 -Follow Josie Weishaupt on IG for dogs, memes and show discussions: instagram.com/theweishaupts2 (*now under new management- Josie's running it and reading all the comments!) Want more?…—Sign up for the free email newsletter for updates at BreakingSocialNorms.com—Subscribe to our YouTube channel (*we'll be more regular on posting videos some glorious day when we get our studio fixed up)! https://www.youtube.com/channel/UCarMLPQCW856nx5mQoN_PEA—Index of all previous episodes on the free feed: https://breakingsocialnorms.com/2021/03/22/index-of-archived-episodes/—Leave a review or rating wherever you listen and we'll see what you've got to say! Follow us on the socials: -instagram.com/theweishaupts2/ Check out Isaac's conspiracy podcasts, merch, etc: -AllMyLinks.com/IsaacW -Conspiracy Theories & Unpopular Culture (on all podcast platforms or IlluminatiWatcher.com) -Isaac Weishaupt's books are all on Amazon and Audible; *author-narrated audiobooks, get your first month free at Audible.com/Illuminati
Work has been hellishly busy lately; we share how it got this way, along with some thoughts on performance reviews. A while back, Blake, the researcher responsible for chatting with the AI bot on Google LaMDA, went public claiming that the bot is conscious. His managers didn't buy it and told him to take some time off, which is usually the prelude to being fired. We took a close look at Blake's interviews and found that he isn't as extreme as the news makes him out to be. Whether the bot is conscious isn't actually the most important point; his goal is to remind the public that these products, which people around the world will eventually use, are being decided on by only a handful of people in Big Tech, with no public discussion. If, like social media algorithms, they end up shaping our thinking and fueling extremism and polarization, that is something we really don't want to see. Come discuss it with us! https://glow.fm/jktech/ If you enjoy our podcast and want to support us, you're welcome to become a sponsor: you can choose $5 USD per month or $50 per year, the price of one Starbucks a month, to help us keep creating quality content! 矽谷輕鬆談 links ➡️ https://linktr.ee/jktech #Google #LaMDA #AI #ChatBot #人工智慧 #聊天機器人 #Podcast #JustKiddingTech #矽谷輕鬆談
Are large language models really sentient or conscious? What is explainability (XAI), and how can we create human-aware AI systems for collaborative tasks? Dr. Subbarao Kambhampati sheds some light on these topics, on generating explanations for human-in-the-loop AI systems, and on understanding 'intelligence' in the context of AI systems. He is a Professor of Computer Science at Arizona State University and director of the Yochan lab at ASU, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has received multiple awards for his research contributions and has been named a fellow of AAAI, AAAS, and ACM, as well as a distinguished alumnus of the University of Maryland and, recently, IIT Madras. Time stamps of conversations: 00:00:40 Introduction 00:01:32 What got you interested in AI? 00:07:40 Definition of intelligence that is not related to human intelligence 00:13:40 Sentience vs intelligence in modern AI systems 00:24:06 Human-aware AI systems for better collaboration 00:31:25 Modern AI becoming natural science instead of an engineering task 00:37:35 Understanding symbolic concepts to generate accurate explanations 00:56:45 Need for explainability and where 01:13:00 What motivates your research: the associated application or the theoretical pursuit? 01:18:47 Research in academia vs industry 01:24:38 DALL-E performance and critiques 01:45:40 What makes for a good research thesis? 01:59:06 Different trajectories of a good CS PhD student 02:03:42 Focusing on measures vs metrics 02:15:23 Advice to students on getting started with AI Articles referenced in the conversation: AI as Natural Science?: https://cacm.acm.org/blogs/blog-cacm/261732-ai-as-an-ersatz-natural-science/fulltext Polanyi's Revenge and AI's New Romance with Tacit Knowledge: https://cacm.acm.org/magazines/2021/2/250077-polanyis-revenge-and-ais-new-romance-with-tacit-knowledge/fulltext More about Prof. Rao: Homepage: https://rakaposhi.eas.asu.edu/ Twitter: https://twitter.com/rao2z About the Host: Jay is a PhD student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Update on our week: We are back together, hear what the guys have done. Daniel talks about how he celebrated Memorial Day with a good ol' fashioned parade. Also, although he received a new game for his PlayStation 5, he decided to play a game he beat multiple times on his PlayStation 3: Dying Light. Did it hold up? Plus, he gives us his review of season two of Young Rock. Daniel and Andy definitely have different takes on this show. Andy gives his thoughts on Google LaMDA, which was trending on social media because a Google engineer claims the AI has become sentient. Also, this week the Supreme Court handed down two major rulings that impact America. Hear what the guys have to say about this. Don't forget we also have our article of the week. Article for the week: In The Event Of Attack, Here's How The Government Plans 'To Save Itself' https://www.npr.org/2017/06/21/533711528/in-the-event-of-attack-heres-how-the-government-plans-to-save-itself Warning: May have Strong Language and Content. ========== Thank you to everyone who enjoys what we do. If you like what we do, please spread the word about our show. Email questions or suggestions to ffnquestions@gmail.com ========== See FACEBOOK page https://www.facebook.com/freeformnetwork Follow us on TWITTER https://twitter.com/FFRpodcast ========== Free Form Network and all our podcasts are available on many platforms including STITCHER, ANDROID, IPHONE, IPAD, IPOD TOUCH and PODBEAN IPHONE, IPAD & IPOD TOUCH http://itunes.apple.com/us/podcast/free-form-network/id995998853 SPREAKER http://www.spreaker.com/show/free-form-network STITCHER http://www.stitcher.com/podcast/free-form-network OVERCAST https://overcast.fm/itunes995998853/free-form-network SPOTIFY https://open.spotify.com/show/0QKRhkXDmQ9cxItaiu49Vy IHEART RADIO https://www.iheart.com/podcast/338-free-form-network-94075820/ TUNE IN RADIO http://tunein.com/radio/Free-Form-Network-p784190/ PLAYER FM https://player.fm/series/free-form-network TUMBLR https://freeformnetworkpodcast.tumblr.com/ WORDPRESS https://freeformnetwork.wordpress.com/ YOUTUBE https://www.youtube.com/channel/UCj0LNZRJHyW7sQwM5ZdOCQg DEEZER https://www.deezer.com/us/show/1857582 PODCHASER https://www.podchaser.com/podcasts/free-form-network-1193319 PODCAST ADDICT https://podplayer.net/?podId=2920676 PODBEAN DESKTOP http://freeformnetwork.podbean.com/ PODBEAN MOBILE http://freeformnetwork.podbean.com/mobile ========== Free Form Radio - Episode 173 - 06/26/2022 Hosted by Daniel, Andy and Noel ========== FREE FORM NETWORK
In this Marketing Over Coffee: In this episode, learn about crypto taking a hit, Google LaMDA, Ecosia, and more! Direct Link to File. Brought to you by our sponsors: Terminus and Trust Insights. Tough run for Crypto; Starlink for work at home; Is Google LaMDA sentient? 9:10 Terminus – the marketing platform for efficient revenue growth. Terminus is […] The post LaMDA’s Not Thinking of You appeared first on Marketing Over Coffee Marketing Podcast.
This week they discuss Google LaMDA and our interaction with AI. They enjoy the Opus X 20 Year Anniversary while sipping Weller Antique 107. They talk a little about programming. https://www.msnbc.com/the-reidout/reidout-blog/google-ai-explained-rcna33265
Recently a Google engineer claimed that the chatbot they developed (LaMDA) has perception and self-awareness, like a young child... ✨ 幸福力協會 official website: https://www.hi-way.org/ ➡ You're welcome to upload your reflections or ask questions at https://bit.ly/2Sjrksg ♫ Music by 黃渼娟
Google LaMDA has taken the internet by storm - AI behaving like a human, responding as one. From researchers to AI experts, everyone is discussing the trend & questioning its authenticity. While that's not something for us to judge, we talk to our AI expert, Rahul Kulhari, to understand whether this could be true, the trends it could lead to, & the way it can impact hiring decisions made today. Rahul heads Data Science at EDGE. He speaks to Anannya Debnath, Content Head at EDGE. Subscribe, Share and Like https://www.youtube.com/channel/UCrceycnUns21KLXYubEZuTA Like our Facebook page: https://www.facebook.com/edgenetworkspvtltd Follow us on LinkedIn https://www.linkedin.com/company/getedge Follow us on Twitter https://twitter.com/getedge_ai
In an interview with Pamela Cerdeira for MVS Noticias, during his segment the host José Antonio Pontón, a technology specialist, talked about the most relevant news in technology today: Google's LaMDA artificial intelligence system, which
Go to http://buyraycon.com/newsday to get 15% off your order. Go to http://stitchfix.com/newsday to get $20 off your first purchase.
Mission Daily Report May 25, 2022 1. Update on the number of people vaccinated against Covid-19 in Thailand 2. Stock exchange index / foreign stock prices / crude oil price / gold price / cryptocurrency prices 3. Driverless electric trucks in Sweden 4. The Texas mass shooting 5. The Cabinet approves phase 4 of "Rao Tiew Duay Kan" (We Travel Together), expanding it by 1.5 million entitlements 6. Mission Shop Mid Year Sale! 7. The Permanent Secretary of Public Health expects masks can be removed outdoors by mid-June 8. META develops a program that simulates human body movement 9. How to cope with losses during an economic downturn 10. New York retires its last public payphone 11. The "Quad" leaders' summit 12. Google unveils LaMDA 2, a conversational AI 13. Walmart prepares to expand its drone delivery capacity 14. Confirmed: Tesla registers a company in Thailand 15. Keeping an eye on "WWDC," taking place on June 6 16. The latest WHO data on monkeypox
#mlnews #rsc #gpt3 Some things we've missed in recent weeks! OUTLINE: 0:00 - Intro & Overview 0:40 - Meta builds AI Research Supercluster (RSC) 2:25 - OpenAI trains GPT-3 to follow instructions 4:10 - Meta AI releases multilingual language models 4:50 - Google LaMDA dialogue models 5:50 - Helpful Things 8:25 - Training the alpha matte generator for Pixel 6 10:15 - Drones used to deter pigeons on buildings 11:05 - IBM sells some Watson Health assets for USD 1B Merch: store.ykilcher.com References: https://ai.facebook.com/blog/ai-rsc/?... https://openai.com/blog/instruction-f... https://cdn.openai.com/papers/Trainin... https://openai.com/blog/deep-reinforc... https://twitter.com/MetaAI/status/148... https://arxiv.org/pdf/2112.10668.pdf https://github.com/pytorch/fairseq/tr... https://ai.googleblog.com/2022/01/lam... https://arxiv.org/pdf/2201.08239.pdf https://evolutiongym.github.io/?utm_s... https://evolutiongym.github.io/all-tasks https://evolutiongym.github.io/docume... https://arxiv.org/pdf/2201.09863.pdf https://github.com/EvolutionGym https://huggingface.co/blog/sb3 https://twitter.com/Sentdex/status/14... https://github.com/lvwerra/trl?utm_so... https://ai.googleblog.com/2022/01/acc... https://polyhaven.com/hdris https://ieeexplore.ieee.org/document/... https://www.bloomberg.com/news/articl... https://archive.ph/xadf9 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Discussing recent GPT-3 language model competition from Cohere, Google, and EleutherAI. What does it mean for the language model and multimodal space? Cohere: https://cohere.ai/ Google: https://blog.google/technology/ai/lamda/ https://blog.google/products/search/introducing-mum/ EleutherAI: https://twitter.com/arankomatsuzaki/status/1402446954550874116 Subscribe to the Multimodal Podcast! Spotify - https://open.spotify.com/show/7qrWSE7ZxFXYe8uoH8NIFV Apple Podcasts - https://podcasts.apple.com/us/podcast/multimodal-by-bakz-t-future/id1564576820 Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jha3p0ZnV0dXJlL2ZlZWQueG1s Stitcher - https://www.stitcher.com/show/multimodal-by-bakz-t-future Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml Connect with me: YouTube - https://www.youtube.com/bakztfuture Substack Newsletter - https://bakztfuture.substack.com Twitter - https://www.twitter.com/bakztfuture Instagram - https://www.instagram.com/bakztfuture Github - https://www.github.com/bakztfuture