Canary Cry News Talk #605 - 03.24.2023 - Recorded Live to Tape
SACRED SILICA | TikTok Trap, Holy Taiwanese Microchips, All Seeing Orb, Pray AI
A Podcast that Deconstructs Mainstream Media News from a Biblical Worldview

We Operate Value 4 Value: http://CanaryCry.Support
Submit Articles: http://CanaryCry.Report
Join Supply Drop: http://CanaryCrySupplyDrop.com
Join the Tee Shirt Council: http://CanaryCryTShirtCouncil.com
Resource: Index of MSM Ownership (Harvard.edu)
Resource: Aliens Demons Doc (feat. Dr. Heiser, Unseen Realm)
All the links: http://CanaryCry.Party

This Episode was Produced By:

Executive Producers
Morgan E***

Producers
Lady Knight Little Wing, Michael G, Kalub G, Brenton B, JonnyNI, Jamie B, Malik, Sir Morv Knight of the Burning Chariots, Sir LX Protocol V2 Knight of the Berrean Protocol, Runksmash, Joey, Dame Gail Canary Whisperer and Lady of X's and O's, Sir Casey the Shield Knight, Veronica D, DrWhoDunDat, Sir Scott Knight of Truth

CanaryCry.ART Submissions
LittleOwen, JonathanF, Sir Dove Knight of Rusbeltia

Microfiction
Runksmash - That life began as a normal American boy's, but in his early years he was faced with a choice; the man whose life Mike was sharing chose to follow Jesus, but Mike refused, that is, until his journey showed him the truth of the world he was immersed in.

Stephen S - Dr. Diablo receives a call while relaxing on his privately owned island. "You need a new variant released next week? … Oh, no, it won't be a problem… Although I'm in retirement, it doesn't mean I'm not prepared. I have a big red AI easy button."

CLIP PRODUCERS
Emsworth, FaeLivrin, Joelms, Laura

TIMESTAMPERS
Jade Bouncerson, Christine C, Pocojo

SOCIAL MEDIA DOERS
Dame MissG of the OV and Deep Rivers

LINKS HELP JAM
REMINDERS
Clankoniphius

SHOW NOTES
Podcast - T - 4:01 from Rumble by Pocojo and Jade Bouncerson

HELLO, RUN DOWN 6:54 V / 2:53 P

TIKTOK/CHINA 9:06 V / 5:05 P
Utah Law Could Curb Use of TikTok and Instagram by Children and Teens (NY Times)
→ TikTok attacked for China ties as US lawmakers push for ban (Reuters/MSN)
→ From Facebook intern to TikTok CEO: Who is Singaporean Chew Shou Zi? (The Straits Times)
→ Inside the Influencer Campaign to Save TikTok in Washington (Time)
→ WH turning to TikTok stars to take message to a younger audience (Oct. 2022, NPR)
→ TikTok's Secret 'Heating' Button Can Make Anyone Go Viral (Jan. 2023, Forbes)

DAY JINGLE/PERSONAL/EXEC. 32:25 V / 28:24 P

FLIPPY 38:36 V / 34:35 P
Engaging robots could be roaming Disney parks in near future (Orlando Sentinel)

DATA/BEAST SYSTEM/BEING WATCHED 45:26 V / 41:25 P
JPMorgan Test Will Ditch Cards to Let Consumers Pay with Palm or Face Instead (Bloomberg)

CRYPTO 54:44 V / 50:43 P
OpenAI's Sam Altman wants to convince billions to scan eyes to prove personhood (Fortune)
→ Worldcoin introduces ID SDK to bring Proof of Personhood (Worldcoin.org)
→ Sam Altman is tech's next household name — if we survive the killer robots (NBC, Feb. 2023)
→ Worldcoin SDK lets you prove you're human online (Biometric Update)

MONEY 1:20:38 V / 1:16:37 P
It's Not a Crisis. This Is the New Normal. (Mother Jones)

PARTY TIME: http://CANARYCRY.PARTY 1:42:33 V / 1:38:32 P

BREAK 1: TREASURE: https://CanaryCryRadio.com/Support 1:43:21 V / 1:39:20 P

BEAST SYSTEM/BIBLICAL 2:01:54 V / 1:57:53 P
I Saw the Face of God in a Semiconductor Factory (Wired)
→ Nano-Silicon from Beach Sand study

BREAK 3: TALENT 2:51:25 V / 2:47:24 P

BIBLICAL/AI 3:02:44 V / 2:58:43 P
Pray the Bible App Launches to Transform Prayer Life (Faith News)
→ No supporters on Patreon

BREAK 4: TIME 3:14:58 V / 3:10:57 P

END
Twitter is finally sunsetting its legacy verified program. OpenAI rolls out ChatGPT plugins. Do Kwon has been detained and is facing formal charges here in the US. The FTC's "click to cancel" proposal. And, of course, the weekend longreads suggestions.

Sponsors:
Headspace.com/RIDE30DAY

Links:
Twitter to Revoke 'Legacy' Verified Badges in April, Leaving Only Paying Subscribers With Blue Check-Marks (Variety)
OpenAI is massively expanding ChatGPT's capabilities to let it browse the web and more (The Verge)
Do Kwon Charged With Fraud by US Prosecutors in New York (Bloomberg)
The FTC wants to ban those tough-to-cancel gym and cable subscriptions (The Verge)

Weekend Longreads Suggestions:
Cheating is All You Need (Sourcegraph Blog)
The Age of AI has begun (GatesNotes/Bill Gates)
The secret history of Elon Musk, Sam Altman, and OpenAI (Semafor)
The case for slowing down AI (Vox)
Epic's new motion-capture animation tech has to be seen to be believed (Ars Technica)
You may have heard about the arrival of GPT-4, OpenAI's latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts than previous releases, and it is multimodal, meaning it was trained on both images and text. We don't yet fully understand its capabilities, yet it has already been deployed to the public.

At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing.

AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is: if our dystopia is bad enough, it won't matter how good the utopia we want to create is. We only get one shot, and we need to move at the speed of getting it right.

RECOMMENDED MEDIA
AI 'race to recklessness' could have dire consequences, tech experts warn in new interview: Tristan Harris and Aza Raskin sit down with Lester Holt to discuss the dangers of developing AI without regulation
The Day After (1983): This made-for-television movie explored the effects of a devastating nuclear holocaust on small-town residents of Kansas
The Day After discussion panel: Moderated by journalist Ted Koppel, a panel of present and former US officials, scientists, and writers discussed nuclear weapons policies live on television after the film aired
Zia Cora - Submarines: "Submarines" is a collaboration between musician Zia Cora (Alice Liu) and Aza Raskin. The music video was created by Aza in less than 48 hours using AI technology and published in early 2022

RECOMMENDED YUA EPISODES
Synthetic Humanity: AI & What's At Stake
A Conversation with Facebook Whistleblower Frances Haugen
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We open by chatting about how daylight saving time cost us an hour of sleep, which is pretty inconvenient. Then we discuss this week's big story, the collapse of Silicon Valley Bank (SVB); the latest development is that the US government has stepped in to ensure that SVB customers can access their funds on Monday. Continuing from the last episode, we pick up Sam Altman's founding story, focusing on the major reforms he pushed at YC, how OpenAI came to be founded, his thoughts on ChatGPT's viral success, and why now is the best time to start a company.

https://glow.fm/jktech/
If our podcast brings you laughter and knowledge, we'd welcome your support as a sponsor. For the price of one Starbucks a month, you can help us keep creating quality content!

矽谷輕鬆談 links ➡️ https://linktr.ee/jktech
For the first episode of the Newcomer podcast, I sat down with Reid Hoffman — the PayPal mafia member, LinkedIn co-founder, Greylock partner, and Microsoft board member. Hoffman had just stepped off OpenAI's board of directors. Hoffman traced his interest in artificial intelligence back to a conversation with Elon Musk.

"This kicked off, actually, in fact, with a dinner with Elon Musk years ago," Hoffman said. Musk told Hoffman that he needed to dive into artificial intelligence during conversations about a decade ago. "This is part of how I operate," Hoffman remembers. "Smart people from my network tell me things, and I go and do things. And so I dug into it and I'm like, 'Oh, yes, we have another wave coming.'"

This episode of Newcomer is brought to you by Vanta.

Security is no longer a cost center — it's a strategic growth engine that sets your business apart. That means it's more important than ever to prove you handle customer data with the utmost integrity. But demonstrating your security and compliance can be time-consuming, tedious, and expensive. Until you use Vanta.

Vanta's enterprise-ready Trust Management Platform empowers you to:
* Centralize and scale your security program
* Automate compliance for the most sought-after frameworks, including SOC 2, ISO 27001, and GDPR
* Earn and maintain the trust of customers and vendors alike

With Vanta, you can save up to 400 hours and 85% of costs. Win more deals and enable growth quickly, easily, and without breaking the bank.

For a limited time, Newcomer listeners get $1,000 off Vanta. Go to vanta.com/newcomer to get started.

Why I Wanted to Talk to Reid Hoffman & What I Took Away

Hoffman is a social network personified. Even his journey to something as wonky as artificial intelligence is told through his connections with people. In a world of algorithms and code, Hoffman is upfront about the extent to which human connections decide Silicon Valley's trajectory. (Of course, they are paired with profound technological developments that are far larger than any one person or network.)

When it comes to the rapidly developing future powered by large language models, a big question in my mind is: who exactly decides how these language models work? Sydney appeared in Microsoft Bing and then disappeared. Microsoft executives can dispatch our favorite hallucinations without public input. Meanwhile, masses of images can be gobbled up without asking their creators, and then the resulting image generation tools can be open-sourced to the world. It feels like AI superpowers come and go with little notice.

It's a world full of contradictions. There's constant talk of utopias and dystopias, and yet startups are raising conventional venture capital financing. The most prominent player in artificial intelligence — OpenAI — is a non-profit that raised from Tiger Global. It celebrates its openness in its name and yet competes with companies whose technology is actually open-sourced. OpenAI's governance structure and priorities largely remain a mystery.

Finally, unlike tech's conservative billionaires who throw their money into politics, in Hoffman's case, here is a tech overlord I seem to mostly agree with politically. I wanted to know what that would be like. Is it just good marketing? And where exactly are his heart and political head at right now?

I thought he delivered. I didn't feel like he was dodging my questions, even in a world where maintaining such a wide network requires diplomacy.
Hoffman seemed eager and open — even if he started to bristle at what he called my "edgy words."

Some Favorite Quotes

We covered a lot of ground in our conversation. We talked about AI sentience and humans' failures to identify consciousness within non-human beings. We talked about the coming rise in AI cloud compute spending and how Microsoft, Google, and Amazon are positioned in the AI race.

Hoffman said he had one major condition for getting involved in OpenAI back in the early days, when Musk was still on board.

"My price for participation was to ask Elon to stop saying the word 'robocalypse,'" Hoffman told me. "Because I thought that the problem was it's very catchy and it evokes fear."

I asked Hoffman why he thought Musk got involved in artificial intelligence in the first place when Musk seems so worried about how it might develop. Why get the ball rolling down the hill at all, I wondered?

Hoffman replied that many people in the field of artificial intelligence had "messiah complexes."

"It's the I am the one who must bring this — Prometheus, the fire to humanity," Hoffman said. "And you're like, 'Okay, I kind of think it should be us versus an individual.'" He went on, "Now, us can't be 8 billion people — us is a small group. But I think, more or less, you see the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there? And then let's make sure that we're having the conversations with the right communities."

I raised the possibility that this merely suggested oligarchic control of artificial intelligence rather than dictatorial control. We also discussed Hoffman's politics, including his thoughts on Joe Biden and "woke" politics. I asked him about the state of his friendship with fellow PayPal mafia member Peter Thiel.

"I basically am sympathetic to people as long as they are legitimately and earnestly committed to the dialogue and discussion of truth between them and not committed otherwise," Hoffman said. "There are folks from the PayPal years that I don't really spend much time talking to. There are others that I do continue because that conversation about discovering who we are and who we should be is really important. And you can't allow your own position to be the definer."

I suggested that Thiel's public views sometimes seemed insincere.

"Oh, that's totally corrosive," Hoffman said. "And as much as that's happening, it's terrible. And that's one of the things that in conversations I have, I push people, including Peter, on a lot."

Give it a listen.

Find the Podcast

Read the Transcript

Eric: Reid, thank you so much for coming on the show. I'm very excited for this conversation. You know, I'm getting ready for my own AI conference at the end of this month, so hopefully this is sort of a prep; by the end of this conversation, we'll all be super smart and ready for that. I feel like there've been so many rounds of AI as sort of the buzzword of the day. This clearly seems the hottest. When did you get into this moment of it? I mean, obviously you just stepped off the OpenAI board. You were on that board. Like, when did you start to see this movement that we're experiencing right now coming?

Reid: Well, it's funny, because my undergraduate major was artificial intelligence and cognitive science.
So I've been around the hoop for multiple waves for a long time, and I think this kicked off, actually, in fact, with a dinner with Elon Musk years ago. You know, 10-ish years ago, Elon and I would have dinner about once a quarter, and he's like, well, are you paying attention to this AI stuff? And I'm like, well, I majored in it and, you know, I know about this stuff. He's like, no, you need to get back involved. And I was like, all right. This is part of how I operate: smart people from my network tell me things and I go and do things. And so I dug into it and I went, oh yes, we have another wave coming.

And this was probably about seven or eight years ago, when I saw the beginning of the wave, or the seismic event. Maybe it was a seismic event out at sea, and I was like, okay, there's gonna be a tsunami here and we should start getting ready, because the tsunami is actually gonna be amazingly great and interesting.

Eric: And that — is that the beginning of OpenAI?

Reid: OpenAI is later. What I did is I went and made connections with kind of the heads of every AI lab and major company, because I concluded that the AI revolution would be primarily driven by large companies initially, because of the scale compute requirements. And so, you know, I talked to Demis Hassabis, met Mustafa Suleyman, talked to Yann LeCun, talked to Jeff Dean, you know, all these kind of folks, and kind of, you know, built all that. And then it was later, in conversations with Sam and Elon, that I said, look, we need to do something that's a pro-humanity effort, not just a commercial effort. And my price for participation, because I thought it was a great idea, was to ask Elon to stop saying the word "robocalypse." Because I thought that the problem was that it's very catchy and it evokes fear. And actually, in fact, one of the things I think about this whole area is that it's so much more interesting and has so much amazing opportunity for humanity.

A little bit like, I don't know if you saw the Atlantic article I wrote, that we evolve ourselves through technology. And I'm, you know, going to be doing some writing describing AI as augmented intelligence versus artificial intelligence. And I wanted to kind of build that positive, optimistic case that I think is the higher-probability outcome, one that I think we can shape towards, and so forth. So it's like, okay, I'm in, but no more "robocalypse."

Eric: I appreciate that, as the ultimate network person, you tell the story through people. I always appreciate when the origin stories of technology actually come through the human beings. With Elon in particular, I'm sort of confused by his position, because it seems like he's very afraid of AI. And if that's the case, why would you want to do anything to sort of get the ball rolling down the hill? Like, isn't there a sort of just, stay away from it, man, if you think it's so bad? How do you see his thinking? And I'm sure it's evolved.

Reid: Well, I think his instinct for the good and the challenge of this is that he tends to think AI will only be good if he's the one who's in control.

Eric: Sort of, yeah.

Reid: Yeah. And this is actually somewhat replete within the modern AI field. Not everybody, but it's there.
And Elon is a public enough figure that I think, you know, making this comment about him is not talking out of school. With other people it would be. There's a surprising number of messiah complexes in the field of AI, and it's the "I am the one who must bring this," you know, Prometheus, you know, the fire to humanity. And you're like, okay, I kind of think it should be us, right? Versus an individual. Now, us can't be 8 billion people; us is a small group. But I think, more or less, you see the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there? And then let's make sure that we're having the conversations with the right communities.

Like, if you say, well, is this going to, you know, institutionalize ongoing, um, you know, power structures or racial bias or something else? Well, we're talking to the people to make sure that we're going to minimize that, especially over time, and navigate it as a real issue. And so that's the kind of anti-messiah complex, which is more or less the kind of effort that I tend to get involved in.

Eric: Right. At least a sort of oligarchy of AI control instead of just a dictatorship of it.

Reid: Well, yeah, and it depends a little bit, even on oligarchy. Look, things are built by small numbers of people. It's just a fact, right? Like, there aren't more than, you know, a couple of founders, maybe maximum five, in any particular thing. There are reasons why, when you have a construction project, you have a head of construction, right? Et cetera. The important thing is to make sure a person is accountable; that's why you have a CEO, why you have a board of directors. You say, well, do we have the right thing, where a person is accountable to a broader group, and that broader group takes its governance responsibility seriously? So oligarchy is a...

Eric: A charged...

Reid: ...is a charged word. And I...

Eric: There's a logic to it. I'm not using it to say it doesn't make sense that you want the people who really understand it around it. Um, I mean, specifically with OpenAI: you just stepped off the board. You're also on the board of Microsoft, which is obviously a very significant player in this future. I mean, it's hard to be open. I get a little frustrated with the "open" in "OpenAI" because I feel like there's a lot that I don't understand. I'm like, maybe they should change the name a little bit. But is it still a charity in your mind? I mean, it's obviously raised from Tiger Global, the ultimate profit maker. Like, how should we think about the core ambitions of OpenAI?

Reid: Well, um, one, the board I was on was a fine one, and they've been very diligent about making sure that all of the controls, including for the subsidiary company, are from the 501(c)(3) and diligent to its mission, which is staffed by people on the 501(c)(3) board with the responsibilities of being on a 501 board, which is being in service of the mission, not doing, you know, private inurement and other kinds of things.

And so I actually think it is fundamentally still a 501(c)(3). The challenge is if you look at this and say, well, in order to be a successful player in modern-scale AI, you need to have billions of dollars of compute. Where do you get those billions of dollars?
Because, you know, the foundations and the philanthropy industry are, generally speaking, bad at tech, and bad at anything other than little tiny checks in tech. And so you said, well, it's really important to do this. So part of what, you know, Sam and that group of folks came up with was this kind of clever thing to say: well, look, we're about beneficial AI, we're about AI for humanity, we're about making, and I'll make a comment on "open" in a second, but we are gonna generate some commercially valuable things. What if we struck a commercial deal? So you can have the commercial things, or you can share the commercial things. You invest in us in order to do this, and then we make sure that the AI has the right characteristics.

And then the "open": you know, all short names have, you know, some simplicities to them. The idea is open to the world in terms of being able to use it and benefit from it. It doesn't mean the same thing as open source, because AI is actually one of those things where, if you do open source, you could actually be creating something dangerous. As a modern example, last year OpenAI deliberately held back: DALL·E 2 was ready four months before it went out. I know because I was playing with it. They used the four months to do safety training, and the kind of safety training is: well, let's make sure that individuals can't be libeled. Let's make sure, as best we can, you can't create child sexual material. Let's make sure you can't do revenge porn. And we'll serve it through the API, and we'll make it unchangeable on that. And then the open source people come out and go, do whatever you want, and then, wow, you get all this crazy, terrible stuff. So "open" is openness of availability, but still with safety and still with, kind of, call it the pro-human controls. And that's part of what OpenAI means in this.

Eric: I wrote a sort of mini essay in the newsletter about tech fatalism, and it fits into the messiah complex you're talking about. If I'm a young or new startup entrepreneur, it's like, this is my moment; if I hold back, you know, there's a sense that somebody else is gonna do it anyway. This isn't necessarily research, some of the tools are findable, so I need to do it. If somebody's going to, it's easy, if you're using your own personhood, to say, I'm better than that guy! Even if I have questions about it, I should do it. So I think we see that over and over again. Obviously the stakes with AI, I think we both agree, are much larger.

On the other hand, with AI there's actually, in my view, been a little bit more restraint. I mean, Google has been a little slower. Facebook seems a little worried. Like, I don't know. Do you agree with that sort of view of tech fatalism? Is there anything to be done about it, or is it just sort of: if it's possible, it's gonna happen, so the best guy, the best team, should do it? Or how do you think about that sense of inevitability, that if it's possible, it'll be built?

Reid: Well, one thing is, you like edgy words. What you describe as tech fatalism, I might describe as something more like tech inevitability or tech destiny. And part of it is, I guess what I would say is, for example, we are now in an AI moment and era. There's global competition for it. It's scale compute. It's not something that even somebody like a Google or someone else can have any kind of real ball control on.
But the way I look at it is: hey, look, there are utopic outcomes and dystopic outcomes, and it's within our control to steer it, um, and even to steer it at speed, even under competition. For example, obviously the general discourse within media is, oh my God, what's happening with the data, and what's gonna happen with the bias, and what's gonna happen with the crazy conversations with Bing Chat and all the rest of this stuff. And you're like, well, what am I obsessed about? I'm obsessed about the fact that I have line of sight to an AI tutor and an AI doctor on every cell phone. And think about, if you delay that, whatever number of years you delay that, what your human cost is of delaying that, right? And it's like, how do we get that?

And for example, people say, wow, the real issue is that the Bing Chat model is gonna go off the rails and have a drunken cocktail party conversation because it's provoked to do so and can't run away from the person who's provoking it. And you say, well, is that the real issue? Or is the real issue, let's make sure that as many people as we can have access to that AI doctor, have access to that AI tutor? Because obviously technology, because it's expensive, initially benefits elites and rich people. And by the way, that's a natural way of how our capitalist system and all the rest works. But let's try to get it to everyone else as quickly as possible, right?

Eric: I a hundred percent agree with that. So I don't want my sort of cynical take, like, oh my God, on this version. I'd also extend it. You know, I think you're referencing maybe the Sydney situation, where you have Kevin Roose in The New York Times, you know, communicating with Bing's version of ChatGPT and finding this character who sort of goes by Sydney from the origin story. And Ben Thompson had a similar experience. And I would almost say it's sad for the world to be deprived of that too. You know, there's a certain paranoia. It's like, oh, I wanna meet this sort of seemingly intelligent character. I don't know. What do you make of that whole episode? I mean, people, Ben Thompson, smart tech writers, really latched onto this as something that they found moving. Is there anything you take away from that saga, and do you think we'll see those sort of, I don't know, intelligent characters again?

Reid: Well, for sure. I think 2023 will be at least the first year of the so-called chatbot, not just because of ChatGPT. And I think we will have a bunch of different chatbots. I think we'll have chatbots that are there to be, you know, entertainment companions, witty dialogue participants. I think we'll have chatbots that are there to be information, like insta-Wikipedia kind of things. I think we'll have chatbots that are there to just be someone to talk to. So I think there'll be a whole range of things, and I think we will have all that experience.

And I think part of the thing is to say, look, what are the parameters by which you should say the bots should absolutely not do X? And it's fine if these people want a bot that's, like, you know, smack-talking, and these people want something that, you know, goes, oh heck, right? You know, like, what's the range of that?
And obviously children get in the mix, and the questions around things that we already encounter a lot with search, which is, like, could a chatbot enable self-harm in a way that would be really bad? Let's really try to make sure that someone who's depressed doesn't figure out a way to harm themselves, either with search or with chatbots.

Eric: Is there a psychologically persuasive element? So it's not just the information provided; it's the sense that they might be, like, walking you towards something less serious.

Reid: And they are! This is the thing that's amazing, and it's part of the reason why, like, everyone should have some interaction with these in some emotional, tangible way. We are really passing the Turing test. This is the thing that I had visibility on a few years ago, because I was like, okay, we kind of judge, you know, intelligence and sentience like that Google engineer did: he asked if it was conscious, and it said it was, because we use language as a way of doing that. And you're like, well, but look, that tells you that your language use is not quite fully there. And part of what's really amazing about "hallucinations" — and I'm probably gonna do a fireside chat with the Greymatter thing on hallucinations, maybe later this week — is that on one hand it says this amazingly accurate, wonderful thing, very persuasively, and then it says this other thing really persuasively that's total fiction, right? And you're like, wow, you sound very persuasive in both cases. But that one's true and that one's fiction.

And that's part of the reason why I kind of go back to the augmented intelligence, and all the things that I see going on in 2023 are much less replacement and much more augmentation. It's not zero replacement, but it's much more augmentation in terms of how this plays. And that is super exciting.

Eric: Yeah. I mean, to some degree it reflects the weakness in human beings' own abilities to read what's happening. Ahead of this interview, I was talking to the publicly available ChatGPT. I don't know if you saw, but I was asking it for questions, and I felt like it delivered a very reasonable set of questions. You know, you've written about Blitzscaling, so [ChatGPT] is like, let's ask about that; it's, you know, asked in the context of Microsoft. But when I was like, have you [ChatGPT] ever watched Joe Rogan? Have you ever been on a podcast? Sometimes maybe you should have a long sort of... you should have a statement, like I'm doing right now, where I sort of have some things I'm saying, then I ask a question. Other times it should be short and sweet. Sometimes it, you know, annoys you and says "oligarchy." Like, explaining to the chatbot that [in an interview, a journalist] can't just ask a list of, like, straightforward questions, it felt like it didn't really even get that. And I get that we're starting to have a conversation now, with companies like Jasper, where it's almost like the language prompting itself... I think Sam Altman was maybe saying it's almost like a form of plain-language coding, because you have to figure out how to get what you want out of them. And maybe it was just my failure to explain it, but for replacing a journalist's questions, I didn't find the current model of ChatGPT really capable of that.

Reid: No, that's actually one of the things I find with ChatGPT: like, for example, you ask what questions to ask Reid Hoffman in a podcast interview, and you'll get some generic ones.
It'll say, like, well, what's going on with new technologies like AI, and what's going on in Silicon Valley? And, you know, you're like, okay, sure. But those aren't the really interesting questions. That's not what makes you a great journalist, which is kind of a lens on something that people can learn from, and that will evolve and change, that'll get better. But that's, again, one of the reasons why I think it's people plus machine. Because, for example, if I were to say, hey, what should I ask Eric about, or what should I talk to Eric about, and go to it, yeah, it'd give me some generic stuff. Now, if I said, oh, give me a briefing on, um, call it, um, UN governance systems as they apply to AI, because I want to be able to talk about this (I didn't do this, but it would give me kind of a quick Wikipedia briefing), that would make my conversation more interesting, and I might be able to ask a question about the governance system or something, you know, as a way of doing it. And that's, I think, why the combo is so great.

Um, and anyway, so that's what we should be aiming towards. That isn't to say, by the way, that replacement is never a good thing. For example, you go to autonomous vehicles and say, hey, look, if we could wave a wand and every car on the road today would be an autonomous vehicle, we'd probably go from 40,000 deaths in the US per year to, you know, maybe a thousand or two thousand. You're saving 38,000 lives a year in doing this. It's a good thing. And, you know, it will have a positive vector on gridlock and climate change and all the rest of the stuff. And you go, okay, that replacement, yes, we have to navigate truck jobs and all the rest, but that replacement's good. But I think a lot of it is going to end up being, you know, various forms of amplification. Like, if you get to journalists, you go, oh, it'll help me figure out which interesting questions to ask. Not because it'll just go, here's your script of questions to ask, but because you can get better information to prep your thinking on it.

Eric: Yeah. I'm glad you brought up the self-driving car case. And, you know, are you still on the board of Aurora?

Reid: I am.

Eric: I've, you know, I covered Uber, so I was in their self-driving cars very early, and they made a lot of promises. Lyft made a lot of promises. I mean, I feel like part of my excitement about this generative AI movement is that it feels like it doesn't require completeness in the same way that self-driving cars do. You know? And that's been a barrier to self-driving cars. On the flip side, you know, sometimes we sort of wave away the inaccuracy, and then we say, you know, we sort of manage it. I think that's what we were sort of talking about earlier. You imagine some of the completeness that could come. So I guess the question here is just: do you think what I'm calling the completeness problem, just the idea that it needs to be fully capable, will be an issue with the large language models? Or do you think you have this sort of augmented model, where it could stop now and still be extremely useful to much of society?

Reid: I think it could stop now and be extremely useful. I've got line of sight on current technology for a tutor, for a doctor, for a bunch of other stuff.
One of the things my partner and I wrote last year was that within five years, there's gonna be a co-pilot for every profession. The way to think about that is what professionals do: they process information, they take some kind of action. Sometimes that's generating other information, just like you see with Microsoft's Copilot product for engineers. And what you can see happening with DALL·E and other image generation for graphic designers, you'll see for every professional: there will be a co-pilot, on today's technology, that can be built. That's really amazing.

I do think that as you continue to make progress, you can potentially make them even more amazing, because part of what happened when you moved from, you know, GPT-3 to 3.5 is that all of a sudden it can write sonnets. Right? You didn't really know that it was gonna be able to write sonnets. That's giving people superpowers. Most people, including myself — I mean, look, I could write a sonnet if you gave me a couple of days and a lot of coffee and a lot of attempts to really try.

Eric: But you wouldn't.

Reid: Yeah, I wouldn't. But now I can go, oh, you know, I'd like to, um, write a sonnet about my friend Sam Altman. And I can sit there and kind of type, you know, duh da, and I can generate: well, I don't like that one; oh, but then I like this one, you know, and da da da. And that gives you superpowers. I mean, think about what you can do for writing, for a whole variety of things, with that. And more and more completeness, the word you were using, is, I think, also a powerful thing, even though what we have right now is amazing.

Eric: Is GPT-4 a big improvement over what we have? I assume you've seen a fair bit of unreleased stuff. Like, how hyped should we be about the improvement level?

Reid: I have. I'm not really allowed to say very much about it because, you know, part of the responsibilities of former board members is confidentiality. But I do think it will be a nice... I think people will look at it and go, ooh, that's cool. And it will be another iteration, another thing as amazing as ChatGPT has been, and obviously ChatGPT has, in the last few months, kind of taken the world by storm, opening up this vista of imagination and so forth. I think GPT-4 will be another step forward where people will go, ooh, that's another cool thing. I can't be more specific than that, but watch this space, because it'll be cool.

Eric: Throughout this conversation, we've danced around this artificial general intelligence question, starting with the discussion of Elon and, eventually, the creation of OpenAI. I'm curious how close you think we are to AGI. I mean, people define it so many different ways: you know, it's more sophisticated than humans in some tasks, many tasks, whatever. How far do you think we are from that? Or how do you see that playing out?

Reid: Personally, amongst a lot of the people who are in the field, I'm probably on the we're-much-further-than-we-think end. Now, some of that's because I've lived through this before, with my undergraduate degree, and, you know, the pattern generally is: oh my God, we've gotten this computer to do this amazing thing that we thought was formerly the province of only these cognitive human beings. And it could do that. So then, by the way, in 10 years it'll be solving new science problems like fusion and all the rest.
And if you go back to the seventies, you saw that same dialogue. I mean, it's an ongoing thing. Now, we do have a more amazing set of cognitive capabilities than we did before, and there are some reasons to argue that it could be in a decade or two. Because you say, well, these large language models can enable coding, and that coding can then be self-reflective and generative, and that can then make something go. But when I look at the coding and how that works right now, it doesn't generate the kind of code that's like, oh, that's amazing new code. It helps with the, oh, I want to do a parser for quicksort, right? You know, like that kind of stuff. And it's like, okay, that's great. Or a systems integration use of an API, or calling in an API for a spellchecker or whatever. Like, it's really helpful stuff for engineers, but it's not like, oh my God, it's now inventing new kinds of techniques for training large-scale models.

And so I think even some of the great optimists, the great, like, believers that it'll be soon, will tell you there's one major invention to go. And the thing is, once you get to one major invention: is that one major invention? Is that three major inventions? Is it 10 major inventions? Like, I think we are some number of major inventions away. I certainly don't think it's impossible to get there.

Eric: Sorry, the major inventions are us human beings building things into the system, or...?

Reid: Yeah. Like, for example, a classic critique of a lot of large language models is: can it do common-sense reasoning?

Eric: Gary Marcus is very...

Reid: Exactly. Right. Exactly. And, you know, the short answer right now is that the large language models are approximating common-sense reasoning. Now, they're doing it in a powerful and interesting enough way that you're like, well, that's pretty useful, it's pretty helpful in what it's doing, but I agree that it's not yet doing all of that. And also you get problems like, you know, what's called one-shot learning: can you learn from one instance of something? Because currently the training requires lots and lots of compute processing over days, or in self-play. Can you have an accurate memory store that you update? Like, for example, you say, now fact X has happened; update your entire world based on fact X. Look, there's a bunch of this stuff still to go. And the question is, is that one major invention? Is that, you know, five major inventions? And by the way, major inventions are major inventions, even with all the amazing stuff we've done over the last five to 10 years. Major inventions on major inventions.

So I myself tend to think two things on the AGI question. I tend to think it's further than most people think, and I don't know if that further is 10 years versus five, or 20 years versus 10, or 50 years versus 20. I don't really know.

Eric: In your lifetime, do you think?

Reid: It's possible, although I don't know. But let me give two other lenses, I think, on the AGI question, because the other thing that people tend to do is they tend to go: there's this AI, which is technique machine learning, and that's totally just great, it's augmented intelligence; and then there's AGI, and who knows what happens with AGI. And you say, well, first, AGI is a whole range of possible things. Like, what if you said: hey, I can build something that's the equivalent of a decent engineer or a decent doctor, but to run it costs me $200 an hour, and I have AGI? But it's $200 an hour.
And you're like, okay, well, that's cool, and that means we can get as many of them as we need. But it's expensive. And so it isn't like all of a sudden, you know, Terminator, or, you know, inventing fusion or something like that, is AGI, or a potential version of AGI. So "what is AGI" is the squishy thing that people then go, magic. The second thing is, the way that I've looked at the progress in the last five to eight years is that we're building a set of iteratively better savants, right? Just like the chess player was a savant. Um, and the savants are interestingly different now. When does a savant become a general intelligence, and when might a savant become a general superintelligence? I don't know. It's obviously a superintelligence already in some ways. Like, for example, I wouldn't want to try to play Go against it and win, try to win. It's a superintelligence when it comes to Go, right? But, like, okay, that's great, because from our perspective, having some savants like this that are superintelligences is really helpful to us. So the whole AGI discussion, I think, tends to go a little bit Hollywood-esque. You know, it's not Terminator.

Eric: I mean, there's a sort of argument that could be made. I mean, you know, humans are very human-centric about our beliefs and our intelligence, right? We don't have a theory of mind for other animals. It's very hard for us to prove that other species, you know, have some experience of consciousness, like qualia or whatever.

Reid: A very philosophically good use of the term, by the way.

Eric: Thank you. Um, I studied philosophy, though I've forgotten more than I remember. But, um, you know, I mean...

Reid: Someday we'll figure out what it's like to be a bat. Probably not this time.

Eric: Right, right, exactly. That's Nagel. If the machine's better than me at chess and Go, there's a level of... you know, here I am saying it doesn't have an experience, but it's so much smarter than me in certain domains. The question is just, like: it seems like humans are not capable of seeing what it's like to be a bat, so will we ever really be able to convince ourselves that there's something that it's like to be, um, an AGI system?

Reid: Well, I think the answer is, um, yes, but it will require a bunch of sophistication. Like, one of the things I think is really interesting, as we anthropomorphize the world a little bit (and I think some of this machine intelligence stuff will enable us to do that), is: well, what does it mean to understand X, or know X, or experience X, or have qualia, or whatever else? And right now, what we do is we say, well, it's some kind of shadowy image of being human. So we tend to undercount, like, animals' intelligence. And people tend to be surprised: like, look, you know, some animals mate for life and everything else; they clearly have a theory of the world, and it's clearly stuff we're doing. We go, ah, they don't have the same kind of consciousness we do. And you're like, well, they certainly don't have the same kind of consciousness, but we're not doing a very good job of studying where it's similar and where it's different. And I think we're gonna need to broaden that out to start saying, well, when you compare us and an eagle, or a dolphin, or a whale, or a chimpanzee, or a lion, you know, what are the similarities and differences, and how does this work? And, um, I think that will also then be: well, what happens when it's a silicon substrate? You know?
Do we think that consciousness requires a biological substrate? If so, why? Um, and, you know, part of how, of course, we get to understand each other's consciousness is we get this depth of experience, where I realize it isn't that you're just a puppet.

Eric: [laughs] I am, I am just a puppet.

Reid: Well, we're talking to each other through Riverside, so, you know, who knows, right? You know, deepfakes and all that.

Eric: The AI's already ahead of you. You know, I'm just... it's already, no.

Reid: Yeah. I think we're gonna have to get more sophisticated on that question now. I think it's too trivial to say that because it can mimic language in particularly interesting ways, and it says, yes, I'm conscious, that that makes it conscious. Like, that's not what we use as an instance. And part of it is, like, part of how we've come to understand each other's consciousness is we realize that we experience things in similar ways. We feel joy in similar ways, we feel pain in similar ways, and that kind of stuff. And that's part of how we begin to understand. And I think it'll be really good that this may kick off us being slightly less, call it, narcissistically anthropocentric in this, and having a broader concept as we look at this.

Eric: You know, I was talking to my therapist the other day, and I was saying, you know, oh, I did this kind gesture, but I didn't feel something profound. Like, it just seemed like the right thing to do, so I did it. It felt like I did the right thing. Shouldn't I feel, you know, more around it? And her perspective was much more like, oh, what matters is doing the thing, not sort of your internal states about it. Which, to me, would go to: if the machine can do all the things we expect from sort of a caring-type machine, why do we need to spend all this time on it, when we don't even expect humans to always feel the right feelings?

Reid: I totally agree with you. Look, I think the real question is what you do. Now, that being said, part of how we predict what you do is that, you know, um, you may not have, at that moment, gone, haha, I think of myself as really good because I've done this kind thing; which, by the way, might be a better human thing, as opposed to, like, I'm doing this because I'm better than most people.

Eric: Right.

Reid: Yeah, but it's the pattern in which you engage in these things, and part of the feelings and so forth is because that creates a kind of reliability of pattern: do you see other people? Do you have the aspiration to have not just yourself but the people around you leading better and improving lives? And obviously, if that's the behavior that we're seeing from these things, then that's a lot of it. And the only question is, what's the forward-looking momentum on it? And I think amongst humans that comes to an intention, a model of the world, and so forth. You know, amongst machines, that may just be: no, no, we're aligned; we've done a really good alignment with human progress.

Eric: Do you think there will be a point in time where it's an ethical problem to unplug it? Like, I think of a bear, right? A bear is dangerous. You know, there are circumstances where we're pretty comfortable killing the bear. But if the bear, like, hasn't actually done anything, and we've taken it under our care... like, we don't just shoot bears at zoos, you know?
And it costs us money to sustain the bear at a zoo. Do you think there are cases where we might say, oh man, now there's an ethical question around unplugging it?

Reid: I think it's a when, not an if.

Eric: Yeah.

Reid: Right? I mean, it may be a when that, once again, just like AGI, is a fair ways out. But it's a when, not an if. And by the way, I think that's again part of the progress that we make, because we think about, like, how should we be treating it? Because, you know, for example, if you go back a hundred, 150 years, the whole concept of animal rights doesn't exist in humans. You know, it's like, hey, you want to torture animal X to death? You know, like, you're strange, but you're allowed to do that; that's an odd thing for you to do. And maybe it's kind of, like, distasteful, grungy, bad in some way, but, you know, it's like, okay. Whereas now you're like, oh, that person is going out to try to torture animals! We should get them in an institution, right? Like, that's not okay. You know, what is that further progress for rights and lives? And I think it will ultimately come, when it gets to things that have their own agency and their own consciousness and sets of existence, that we should be including all of that in some grand or elevated, you know, kind of rights conception.

Eric: All right, so back to my listeners who, you know, wanna know where to invest and make money off this, and, you know...

Reid: [laughs] It isn't from qualia and consciousness. Oh, wait.

Eric: Who do you think are the key players? The key players in the models. Then obviously there are more sort of, I don't know if we're calling them vertical solutions or product-oriented or whatever, however you think about them. But starting with the models: who do you see as the real players right now? Are you counting out a Google, or do you think they'll still, you know, sort of show?

Reid: Oh, no, I think Google will show up. And obviously, you know, OpenAI, Microsoft has done a ton of stuff. I co-founded Inflection last year with Mustafa Suleyman. We have a just amazing team, and I do see a lot of teams, so I'm...

Eric: And that's to build sort of the foundational...

Reid: Yeah, well, they're building their own models, and they're gonna build some things off those models. We haven't really said what they are yet, but that's obviously going to be kind of new models. Adept, another Greylock investment, is building its own models. Character is building its own models. Anthropic is building its own models. And Anthropic is, you know, Dario and the crew, smart folks from OpenAI; they're doing stuff within a similar research program to what OpenAI is doing. And so I think those are the ones that I probably most track.

Eric: Character's an interesting case, and, you know, we're still learning more about that company. You know, I was first to report they're looking to raise $250 million. My understanding is that what's interesting is they're building the models, but then for a particular use case, right? Or, like, it's really a question of leverage: do people need to build the models to be competitive, or do you think there will be... can you build a great business on top of Stability or OpenAI, or do you need to do it yourself?

Reid: I think you can, but the way you do it is you can't say it's because I have unique access to the model.
It has to be, you know: I have a business that has network effects, or I'm well integrated in enterprise, or I have another deep stack of technology that I'm bringing into it. It can't just be, I'm a lightweight front end to it, because then other people can be the lightweight front end. So you can build great businesses with it, I think. I do think that people will build businesses off, you know, things like the OpenAI APIs, and I think people will also train models. Because I think one of the things that will definitely happen is that not just will large models be built in ways that are interesting and compelling, but a bunch of smaller models will be built that are specifically tuned and so forth. And there are all kinds of reasons: everything from, you can build them to do something very specific, but also, like, inference cost: does it run on a low-compute or low-power footprint? You know, et cetera, et cetera. You know, AI doctor, AI tutor, um, you know, on a cell phone. And so, you know, I think the short answer to this is: all of the above.

Eric: Right. Do you think we are in a compute arms race still? Do you think this is gonna continue, where if you can raise a billion dollars to buy GPU access, basically from Microsoft or Amazon or Google, you're gonna be pretty far ahead? Or how do you think about how the money and compute race is shaping up?

Reid: So I kind of think about it as two lines of trends. There's one line, which is the larger and larger models. By the way, you say, well, okay, so the scale compute goes from 1x flops to 2x flops; does your performance function go up by that much? And it doesn't have to go up by a hundred percent, or 2x, or plus 1x. It could go up by 25%, but sometimes that really matters: coding, doctors, you know, legal, other things. It's like, actually, in fact, even though the compute is twice as expensive, the 25% increase in performance is worth it. And I think you then have a set of things that go along with the large-scale models and need to be using the large-scale models.

Then I think there's a set of things that don't have that need. And for example, that's one of the reasons I wasn't really surprised at all by the profusion of image generation, because those are, you know, generally speaking, trainable for a million to $10 million. I think there's gonna be a range of those. I think, you know, maybe someone will figure out how to do, you know, a hundred-million-dollar version, and once they figure out how to do a hundred-million-dollar version, someone will also figure out how to do the 30-million-dollar version of that hundred-million-dollar version. And there's a second line going on where all of these other smaller models will fit into interesting businesses. And then I think a lot of people will either deploy an open source model that they're using themselves, train their own model, get a special deal with, like, a model provider, or something else as a way of doing it.

And so I think the short answer is there will be both, and you have to be looking at this from: what's the specific thing that this business is doing? You know, the classic issues of, you know, how do you go to market? How do you create a competitive moat?
What are the things that give you real, enduring value that people will pay for in some way in a business?All of the, those questions still apply, but the, but, but there's gonna be a panoply of answers, depending on the different models of how it playsEric: Do you think spend on this space in terms of computing will be larger in ‘24 and then larger in 25?Reid: Yes. Unquestionably,Eric: We're on the, we're still on the rise.Reid: Oh, yes. Unquestionably.Eric: That's great for a certain company that you're on the board of.Reid: Well look, and it's not just great for Microsoft. There are these other ones, you know, AWS, Google, but…Eric: Right. It does feel like Amazon's somewhat sleepy here. Do you have any view there?Reid: Well, I think they have begun to realize, what I've heard from the market is that they've begun to realize that they should have some stuff here. I don't think they've yet gotten fully underway. I think they are trying to train some large language models themselves. I don't know if they've even realized that there is a skill to training those large language models, cause like, you know, sometimes people say, well, you just turn on and you run the, run the large language model, the, the training regime that you read in the papers and then you make stuff.We've seen a lot of failures, of people trying to build these things and failing to do so, so, you know, there's, there's an expertise that you learn in doing it as well. And so I think—Eric: Sorry to interrupt—if Microsoft is around Open AI and Google is around Anthropic, is Amazon gonna be around stability? That's sort of the question that I'll put out to the world. I don't know if you have.Reid: I certainly don't know anything. And in the case of, you know, very, very, very, um, a politely said, um, Anthropic and OpenAI have scale with huge models. Stability is all small models, so, hmm.Eric: Yeah. Interesting. I, I don't think I've asked you sort of directly about sort of stepping off the Open AI board. I mean, I would assume you would prefer to be on the board or…?Reid: Yeah. Well, so look, it was a funny thing because, um, you know, I was getting more and more requests from various Greylock portfolio companies cause we've been investing in AI stuff for over five years. Like real AI, not just the, we call it “software AI”, but actual AI companies.For a while and I was getting more and more requests to do it and I was like oh, you know, what I did before was, well here's the channel. Like here is the guy who, the person who handles the API request goes, go talk to them. Like, why can't you help me? I was like, well, I'm on the board.I have a responsibility to not be doing that. And then I realized that, oh s**t, it's gonna look more and more. Um, I might have a real conflict of interest here, even as we're really carefully navigating it and, and it was really important cause you know various forces are gonna kind of try to question the frankly, super deep integrity of Open AI.It's like, look, I, Sam, I think it might be best even though I remain a fan, an ally, um, to helping, I think it may be best for Open AI. And generally to step off a board to avoid a conflict of interest. And we talked about a bunch and said, okay, fine, we'll do it. And you know, I had dinner with Sam last night and most of what we were talking about was kind of the range of what's going on and what are the important things that open eyes need to solve? And how should we be interfacing with governments so that governments understand? 
what the key things are that should be in the mix, and what great future things for humanity are really important not to fumble while everyone is going, oh, I'm worrying. And then I said, oh, I've got a question for you. He's like, yeah, okay. I'm like, now that I'm no longer on the board, could I ask you to personally look at unblocking my portfolio company's access to the API? Because I could never ask you that question before; it would have been unethical. But now I'm not on the board, so can I ask the question? He's like, sure, I'll look into it. I'm like, great, right? And that's the substance of it, which I never would have done before. But that wasn't why. I mean, obviously I love Sam and the OpenAI team.

Eric: Was the fact that you're a Democratic super donor part of the calculus? Because, I mean, we are seeing Republican... well, I didn't think that at all coming into this conversation, but just hearing what you're saying and looking at it now, it feels like Republicans are trying to find something to be angry about.

Reid: Well...

Eric: These AI things, I don't quite...

Reid: The unfortunate thing about the most vociferous of the Republican media ecosystem is that they just invent fiction; they hallucinate full out.

Eric: Right.

Reid: I mean, the amount of 2020 election denial and all the rest. You can tell, from having their texts released from Fox News, that here are these people who are on camera raising questions about what happened in the election, and they're texting each other going, oh my God, this is insane, this is a coup, da da da. And you're like, okay. So they don't require truth to generate heat and friction. So no, that wasn't it. It's really the question of, when you're serving on a board, you have to understand what your mission is very deeply, and navigate it. And part of serving on 501(c)(3) boards is to say, look, obviously I contribute by being a board member, helping navigate various circumstances and all the rest, and I can continue to be a counselor and an aide to the company without being on the board. And one of the things I think is going to be very important for the next X years, for the entire world to know, is that OpenAI takes its ethics super seriously.

Eric: Right.

Reid: As do I.

Eric: Does that fit with having to invest? I mean, there are lots of companies that do great things; they have investors. I believe in companies, probably more than I personally believe in charities, to accomplish things. But the duality of OpenAI is extremely confusing. Did Greylock itself invest a lot, or did you invest early as an angel?

Reid: I was the founding investor, as an angel, as a program-related investment from my foundation. I was among the first people to make a philanthropic donation to OpenAI, just straight out: here's a grant by Wednesday. Then Sam and crew came up with this idea for the commercial LP, and I said, look, I'll help, and I have no idea if this will be an interesting economic investment. They didn't have a business plan, they didn't have a revenue plan, they didn't have a product plan. I brought it to Greylock.
We talked about it, and they said, look, we think this will be possibly a really interesting technology, but part of our responsibility to our LPs, which include a whole bunch of universities and others, is that we invest in businesses, and there is no business plan.

Eric: Is that what Khosla did? Khosla's like, we invest in wild things; we don't care. That's what Vinod wants to project anyway, so, yeah.

Reid: You know, yes, it's exactly the same. So I put in 50, and then he put in... I think he was the only venture fund investing in that round. But there was no business plan, there was no revenue model, there was no go-to-market...

Eric: Well, Sam basically says, someday we're going to have AGI, and we're going to ask it how to make a bunch of money. That's a joke, right? Or how much is he joking?

Reid: It's definitely not a 100% joke, and it's not a 0% joke. The mission is really about how we get to AGI, or as close to AGI as is useful, and make it useful for humanity. And by the way, the closer you get to AGI, the more interesting technologies fall out, including the ability to have the technology itself solve various problems. So if you said, we have a business model problem, it's like, well, ask the thing. Now, if you currently sit down and ask ChatGPT what the business model is, you'll get something pretty vague and generic that wouldn't get you a meeting with a venture capitalist, because it's like "we will have ad support"... you're like, okay. Right.

Eric: Don't you have a company that's trying to do pitch decks now, or something?

Reid: Oh yeah, Tome. And it's awesome, but by the way, that's the right kind of thing. Because what it does is, you say, hey, give me a set of tiles, together with images and graphics, arguing X, and then you start working with the AI to improve it. You say, oh, I need a slide that does this, and I need a catchier headline here, and so on. And then, obviously, you can edit it yourself. So that's the kind of amplification. You don't say, give me my business model.

Eric: You're like, I have this business model; articulate it.

Reid: Exactly.

Eric: Politics. I mean, I feel like we lived through such a... you know what I mean. I feel like Silicon Valley worked on the idea that everybody can get along; there's competition, but then you still stay close to everybody. And you especially; you're in the PayPal mafia with a lot of people who are very conservative now. The Trump years broke that in some ways. So how did you maintain those relationships? I see headlines that say you're friends with Peter Thiel. What's the state of your friendship with Peter Thiel, and how did it survive the Trump years, I guess, is the question.

Reid: Well, I think the thing that Peter and I learned when we were undergraduates at Stanford together is that it's very important to maintain conversation and to argue things. I was a lefty, he was a righty, and we'd argue a lot. It's difficult to argue about things that feel existential and ethically charged, as things around Trump are. Trump feels to me to be a corrosive acid upon our democracy, one that is disfiguring us and staining us in the eyes of the world.
And so to have a dispassionate argument about it is challenging, and it ends up on some uneven ground, with statements like, I can't believe you're f*****g saying that, as part of the dialogue. But on the other hand, maintaining dialogue is, I think, part of how we make progress as a society. And I'm basically sympathetic to people as long as they are legitimately and earnestly committed to the dialogue and the discussion of truth between them. And so, you know, there are folks from the PayPal years that I don't really spend much time talking to. There are others that I do, because that conversation about discovering who we are and who we should be is really important, and you can't allow your own position to be the definer. It almost goes back to what we were talking about on the AI side: make sure you're talking to other smart people who challenge you, to make sure you're doing the right thing. And that's, I think, a good general life principle.

Eric: Well, you know, part of my dream of the Silicon Valley world is that we have these open forums, Twitter being the open forum, where we're having sincere, on-the-level debates. But then you see something like, you know, the...

Reid: You don't think it's the modern Seinfeld show? Well, not Seinfeld. Springer, Jerry Springer.

Eric: Yeah, right. But whether the arguments are on the level is my problem with some of the Peter Thiel arguments: that he's not actually publicly advancing his beliefs in a sincere way, and that that's almost more corrosive.

Reid: Oh, that's totally corrosive. And as much as that's happening, it's terrible. And that's one of the things that I push people on, including Peter, in a lot of conversations.

Eric: Yeah. Are you still going to donate a lot? Are you as animated about the Democratic Party and working through donor channels at the moment?

Reid: Well, what I would say is that I think we have a responsibility to try to help. It's kind of the Spider-Man ethic: with power comes responsibility, with wealth comes responsibility, and you have to try to contribute to the better society that we should be living and navigating in. And so I stay committed on that basis. And I do think there are some really amazing people in the administration. I think Biden is kind of a good everyday guy.

Eric: Yeah.

Reid: In fact, good for trying to build bridges in the country. I think there are people like Secretary Raimondo and Secretary Buttigieg who are thinking intensely about technology and what should be done in the future. Now, I do think there are a bunch of folks on the Democratic side who are more concerned with their demagoguery than with the right thing for society, and I tend to be unsympathetic to them.

Eric: I know Michael Moritz, of Sequoia, wrote that op-ed criticizing San Francisco government, and there's certainly this "woke" critique of the Democratic Party. I'm curious if there's a piece of it, outside of governance, that you're...

Reid: Well, the interesting thing about "woke" is, people say, we're anti-woke. And you're like, well, don't you think being awake is a good thing? I mean, it's kind of a funny thing.
Eric: And the ill-defined nature of "woke" is key to the allegation, because it's like, what's the substantive thing you're saying there? And, you know, we're seeing Elon tweet about race right now, which is sort of terrifying anyway.

Reid: Yeah. I think the question on this stuff is to say, look, people have a lot of different views, and some of those views are bad, especially held in the minority, and need to be advocated against in various ways. Part of why we like democracy is to have discourse. I'm very concerned about the status of public discourse. Obviously most people tend to focus that around social media, which obviously has some legitimate things we need to talk about. But on the other hand, they don't track, say, these opinion shows on Fox News that implicitly represent themselves as news shows, saying things like, there was election fraud in 2020, and then, when they're sued for various forms of defamation, say, we're just an entertainment show; we don't do anything like news. So we are already struggling on a variety of these issues within society, and I think we need to sort them all out.

Eric: Is there anything on the AI front that we missed or that you wanted to make sure to talk about? I think we covered so much great ground.

Reid: And we can do it again, right? It's all great.

Eric: I love it. This was all the things you're interested in and I'm interested in, so great. I really enjoyed having you on the podcast, and thanks.

Reid: Likewise. And, you know, I follow the stuff you do, and it's cool; keep doing it. Get full access to Newcomer at www.newcomer.co/subscribe
Now we know why Henry Kissinger looks the way he does. He peered into the true face of the New God // New Technology. He is paying the price for this forbidden glance – cursed to live for eternity in a shriveled flesh husk, damned to preach about its power, along with Eric Schmidt and Daniel Huttenlocher. Their reverence is tempered by vague warnings of risk and responsibility. Their hysterical register is offset by the hushed tones of sober reflection. Stuff we reference: ••• ChatGPT Heralds an Intellectual Revolution | Henry Kissinger, Eric Schmidt, Daniel Huttenlocher https://www.wsj.com/articles/chatgpt-heralds-an-intellectual-revolution-enlightenment-artificial-intelligence-homo-technicus-technology-cognition-morality-philosophy-774331c6 ••• How the Enlightenment Ends | Henry Kissinger https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/ ••• Planning for AGI and beyond | Sam Altman https://openai.com/blog/planning-for-agi-and-beyond/ ••• The Myth of Artificial Intelligence | Meredith Whittaker, Lucy Suchman https://prospect.org/culture/books/myth-of-artificial-intelligence-kissinger-schmidt-huttenlocher/ Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)
吃茶三千, which has only one flagship store in all of Taiwan, has finally opened a location near us, so we start by sharing that small moment of happiness. Today we want to talk about the story of Sam Altman, the father of ChatGPT. His story is genuinely fascinating and worth telling carefully, so we are splitting it into two episodes. This one covers Sam's background, his coming out in high school, entering Stanford and then dropping out to found Loopt, a location-based social app, and how he became CEO of Y Combinator. At the end, there's an Apple Podcasts Q&A! https://glow.fm/jktech/ If our podcast brings you laughter and knowledge, please consider becoming a supporting partner; for the price of one Starbucks a month, you can help us keep creating quality content! 矽谷輕鬆談 links ➡️ https://linktr.ee/jktech (00:47) 吃茶三千 (08:13) Sam Altman's startup story (15:22) Coming out at 16 (18:49) Dropping out of Stanford to found Loopt (23:46) His ties to Y Combinator (28:31) Jessica, the mother of YC (41:08) Q&A
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How popular is ChatGPT? Part 2: slower growth than Pokémon GO, published by Richard Korzekwa on March 3, 2023 on LessWrong. Rick Korzekwa, March 3, 2023 A major theme in reporting on ChatGPT is the rapid growth of its user base. A commonly stated claim is that it broke records, with over 1 million users in less than a week and 100 million users in less than two months. It seems not to have broken the record, though I do think ChatGPT's growth is an outlier. Checking the claims ChatGPT growth From what I can tell, the only source for the claim that ChatGPT had 1 million users in less than a week is a tweet by Sam Altman, the CEO of OpenAI. I don't see any reason to strongly doubt this is accurate, but keep in mind it is an imprecise statement from a single person with an incentive to promote a product, so it could be wrong or misleading. The claim that it reached 100 million users within two months has been reported by many news outlets, which all seem to bottom out in data from Similarweb. I was not able to find a detailed report, but it looks like they have more data behind a paywall. I think it's reasonable to accept this claim for now, but, again, it might be different in some way from what the media is reporting[1]. Setting records and growth of other apps Claims of record setting I saw people sharing graphs that showed the number of users over time for various apps and services, including one rather hyperbolic example. That's an impressive curve, and it reflects a notable event. But it's missing some important data and context. The claim that this set a record seems to originate from a comment by an analyst at investment bank UBS, who said "We cannot remember an app scaling at this pace", which strikes me as a reasonable, hedged thing to say. The stronger claim that it set an outright record seems to be misreporting. Data on other apps I found data on monthly users for all of these apps except Spotify[2]. I also searched lists of very popular apps for good leads on something with faster user growth. You can see the full set of data, with sources, here[3]. I give more details on the data and my methods in the appendix. From what I can tell, that graph is reasonably accurate, but it's missing Pokémon GO, which was substantially faster. It's also missing the Android release of Instagram, which is arguably a new app release, and surpassed 1M within the first day. Here's a table summarizing the numbers I was able to find, listed in chronological order:

Service | Date launched | Days to 1M | Days to 10M | Days to 100M
Netflix subscribers (all) | 1997-08-29 | 3669 | 4185 | 7337
Facebook | 2004-02-04 | 331 | 950 | 1608
Twitter | 2006-07-15 | 670 | 955 | 1903
Netflix subscribers (streaming) | 2007-01-15 | 1889 | 2351 | 3910
Instagram (all) | 2010-10-06 | 61 | 362 | 854
Instagram (Android) | 2012-04-03 | 1 | - | -
Pokemon Go (downloads) | 2016-07-05 | - | 7 | 27
ChatGPT | 2022-11-30 | 4 | - | 61

It's a little hard to compare early numbers for ChatGPT and Pokémon GO, since I couldn't find the days to 1M for Pokémon GO or the days to 10M for ChatGPT, but it seems unlikely that ChatGPT was faster for either. Analysis Scaling by population of Internet users The total number of people with access to the Internet has been growing rapidly over the last few decades. Additionally, the growth of social networking sites makes it easier for people to share apps with each other. Both of these should make it easier for an app to spread.
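The milestone arithmetic behind a table like this is straightforward to reproduce. Below is a minimal sketch (not from the original post) that computes days-to-milestone from launch and milestone dates; the dates are illustrative assumptions chosen to match the table above, and the post's linked spreadsheet remains the authoritative source.

```python
from datetime import date

# Illustrative dates only; see the post's data spreadsheet for sources.
launches = {
    "ChatGPT": date(2022, 11, 30),
    "Pokemon Go": date(2016, 7, 5),
}

# (service, user milestone) -> date the milestone was reportedly crossed
milestones = {
    ("ChatGPT", 1_000_000): date(2022, 12, 4),
    ("ChatGPT", 100_000_000): date(2023, 1, 30),
    ("Pokemon Go", 10_000_000): date(2016, 7, 12),
    ("Pokemon Go", 100_000_000): date(2016, 8, 1),
}

for (service, users), crossed in sorted(milestones.items()):
    days = (crossed - launches[service]).days
    print(f"{service}: {users:,} users in {days} days")
```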
With that in mind, here's a graph showing the fraction of all Internet users who are using each app over time (note the logarithmic vertical axis): In general, it looks like these curves have initial slopes that are increasing with time, suggesting that how quickly an app can spread is influenced by more than just an increase in the number of people with access to the Internet. But Pokémon GO and ChatGPT just look like vertical lines of different heights, so here's anoth...
ChatGPT is all the rage. It's also the reason why you now matter even more than ever. As an oversimplified definition, ChatGPT uses artificial intelligence like Siri on your iPhone or your Alexa smart speaker. It is just much more powerful. GPT is short for Generative Pretrained Transformer. We'll get into a deeper definition in a bit. What I want you to understand is the impact it will have on your podcast. NOT ALL INFO Your podcast cannot simply be information. ChatGPT has nearly all the information anyone could ever need. It is the depth of the internet with the conversation of Alexa. It was an early Monday morning in March of 1995. I had just started my new job as Program Director of an alternative radio station in Lincoln, Nebraska. I was standing in the jock lounge. It was basically an open room with a countertop around the perimeter. All the DJs kept their stuff in there. Sitting on the countertop was a big, bulky desktop computer. It was primarily used to schedule music logs for the stations. However, this particular computer was connected to the World Wide Web. The mid-90s was when the internet really started taking off. We would pull up a site called Webcrawler. It was the first search engine to be widely used. It was also the first to fully index the content on web pages. One of the primary investors in Webcrawler was Paul Allen of Microsoft. But we'll get to that connection in a minute. As we played with Webcrawler, we could find anything we wanted. I typed in all sorts of words and phrases to see what would come up. Baseball, bullfrogs, blues music. It was all there. IT'S ABOUT TO CHANGE And that's when I realized the world was about to change. The Encyclopedia Britannica set and the World Books we had in the basement of my mom's house were no longer relevant. Why would I search the encyclopedia when I could use Webcrawler? Now, I know you're probably thinking the use of an encyclopedia sounds ludicrous. Or I just sound old. Either way, it was the dawn of a new day. This also meant my radio show could no longer be the interesting bits of trivia or music news I typically shared. I would need to serve my listeners something Webcrawler couldn't. That something turned out to be me, my story, and my personality. Webcrawler couldn't copy that. Rather than sharing the tidbit that Bob Mould was once a member of Hüsker Dü and then of Sugar, I needed to talk about the strange sounds coming from the apartment next door last night or the time Ozzy Osbourne wouldn't stop talking to my girlfriend. Thanks to Webcrawler and the World Wide Web in 1995, it was indeed a different world and time for a new approach. HERE WE GO AGAIN And that's where we are again today. ChatGPT has the information. If you are only serving information on your podcast, you are the new version of the Encyclopedia Britannica. This new artificial intelligence tool can serve up the exact same information you are delivering. Only ChatGPT does it in less time. Let's say you teach how to write code for computers. I can ask ChatGPT how to write computer code. ChatGPT can not only write code, it can also debug it. You need to move into your new world. Share your story. Give listeners your personality. Build relationships. Offer something more that ChatGPT cannot give your audience. WHAT IS CHATGPT? So, what is ChatGPT? ChatGPT is an artificial intelligence chatbot developed by OpenAI, a startup based in San Francisco. The company was co-founded in 2015 by Elon Musk and Sam Altman.
OpenAI also has other backers and investors. One of those investors happens to be Microsoft, just like Webcrawler. The OpenAI website says, "We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." In regular language, the tool is like Alexa on steroids. It is capable of taking inputs from users and producing human-like responses. The thing that makes it different is the ability of ChatGPT to learn and adjust according to the conversation. CNBC asked ChatGPT to give its own description. ChatGPT said it is "an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner." Microsoft isn't simply an investor in the company. According to Fox Business, Microsoft has added the technology to its products, including its search engine Bing. GARBAGE IN, GARBAGE OUT ChatGPT does have some serious limitations. The biggest concerns are misinformation and infringement of intellectual property. ChatGPT is trained on a vast compilation of articles, websites, and social media posts scraped from the internet, as well as real-time conversations. As you know, the information on the internet isn't always perfect. Therefore, the information coming out of ChatGPT also isn't flawless. According to Business Insider, chatbots like GPT are powered by large amounts of data and computing techniques to make predictions. Those predictions string words together in a meaningful way. These chatbots not only tap into a vast amount of vocabulary and information, but also understand words in context. This helps them mimic speech patterns while offering up encyclopedic knowledge. It's just like my day with Webcrawler. Unlike most chatbots and your Alexa, ChatGPT remembers previous prompts given to it in the same conversation. It learns as it goes. ChatGPT can log context from earlier messages in a thread, then use that information to form responses later in the conversation. Inputs are filtered, so potentially racist or sexist prompts are dismissed. OpenAI believes this should prevent offensive outputs from being presented to and produced by ChatGPT. Although the core function of a chatbot is to mimic human conversation, ChatGPT is versatile. For example, it can write and debug computer programs, compose music, and write student essays. ChatGPT can answer test questions, write poetry, and simulate an ATM. Can you see where the concern might come in? The tool has sparked concerns over potential abuses in many of these areas. Students have already used ChatGPT to generate entire essays, while hackers have used it to write code for the bad guys. GETTING BIGGER It's only getting bigger. ChatGPT is growing faster than any other app. By January 2023, ChatGPT had amassed 100 million monthly active users, only two months into its launch. That skyrocketing growth also made ChatGPT the fastest-growing consumer application in history, according to UBS. It took TikTok nine months to reach 100 million users. Instagram didn't hit 100 million for two and a half years. If you have an OpenAI account, you can try ChatGPT for free while they test it and it learns. Find OpenAI at OpenAI.com.
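That "remembers previous prompts" behavior is worth making concrete: the underlying model is stateless, and conversational memory comes from resending the whole thread with each request. Here is a minimal sketch assuming the early-2023 openai Python package (the v0.27-era ChatCompletion API) and an OPENAI_API_KEY environment variable; treat it as an illustration of the idea, not the only way to call the service.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The model has no memory of its own; "context" is just the message
# history we accumulate and resend with every call.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # earlier turns ride along as context
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Give me one podcast episode idea about storytelling."))
print(ask("Now write a title for it."))  # "it" only resolves via the history
```

Drop the history list and the second request loses all context, which is exactly the difference between ChatGPT and a one-shot assistant like Alexa.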
MORE YOU So, how do you stay relevant on your show? There are 3 ways. STORIES First, tell stories. Everything interesting is about people. Stop teaching your six steps to success. Be you. Share something as if you were telling your best friend. Stories sell. People remember stories. Stories make you human. Storytelling is also the most powerful way to build relationships. PERSONALITY Next, let your personality shine. You don't need to be Howard Stern or Gary Vaynerchuk. You just need to be you. Stand for something and stand out. If I asked a group to rate you on a one-to-five scale and they all gave you a three, you would be dead in the water. Three means I could take it or leave it; I really have no preference. Lots of fives and lots of ones mean you are making people care. Get noticed. AUTHENTIC Finally, be authentic. Don't try to be someone or something you are not. When I was coming up in radio, I learned this the hard way. It was a few years before the Webcrawler incident. I was doing nights at that same radio station. We had just signed it on a few months earlier. It was late afternoon, and I was sitting in the office of my Program Director for my weekly show review. We would review a recording of my show once a week to help me improve. Her office was right next to the studio. My show started at 7. We were meeting at 4. As the tape played, it was a typical show. Nothing crazy. The same sorts of breaks I always did. Melinda sat there listening, not saying anything. She was just taking it in. Finally, she reached over and turned it off. She looked at me and said, "When are you going to start being yourself?" I asked her what she meant. She said, "You are using all these phrases and words and cliches that the guys on the rock station use. That's not you. It's not even our station. Why don't you leave that to them and just start being real?" Now, I had worked at the rock station before moving over to this one. So I still had a little of the rock in me. But the truth is... that wasn't me then either. That was the night I started sharing my authentic self on the radio. It was also the day my radio career started to take off. Instead of being a poor imitation of some other DJ, I was now crafting my own personality. It was something nobody could copy. I was becoming one of a kind. YOUR CHOICE You can do it as well. Just be true to yourself. So, now you have a choice. You can continue to deliver information episode after episode and end up fading away like the Encyclopedia Britannica. Or you can share a little bit of you in every episode and build long-lasting, powerful relationships with your listeners. If you would like help developing stories for your show, grab my Story Development Worksheet at www.PodcastTalentCoach.com/story. Developing your personality is a little more involved. I would love to help you walk through that process. We can talk about it during your Podcast Strategy call. It is my gift to you. No charge. We just develop a powerful strategy for your show. Go to www.PodcastTalentCoach.com/apply, click the button, and apply to have a chat with me. We will develop your plan and see how I can help and support you to achieve your podcast goals.
Train Derailment & Environmental Fallout In East Palestine Leads To Political & Legal Frenzy The train derailment in East Palestine, Ohio has led to a frenzy of political activity, criticisms, lawsuits, investigations, advocacy demands, and conspiracy theories as the fallout from the derailment continues to maintain prominence in the national conversation. The derailment has prompted criticism of both the Biden and former Trump administrations, ensnared politicians like Gov. DeWine and Secretary Buttigieg, and has led to numerous lawsuits, criticism of the EPA, and many other activities. One nonprofit public interest law firm, We The Patriots USA (WTP USA), "will host a press conference in Akron to discuss litigation against the Environmental Protection Agency," according to local reporting from WKYC. Americans are increasingly sensitive to environmental disasters, and this incident could refocus public scrutiny on environmental regulation and potentially spur increasing attention toward nonprofit environmental advocacy and intervention efforts. Read more ➝ Summary Many Ukrainian refugees in US are sponsored by ordinary Americans | USA TODAY IRS working with nonprofit New America to deliver online direct file tax system study | FedScoop The nonprofits accelerating Sam Altman's AI vision | TechCrunch Together We Rise becomes Foster Love
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman: "Planning for AGI and beyond", published by Lawrence Chan on February 24, 2023 on The AI Alignment Forum. (OpenAI releases a blog post detailing their AGI roadmap. I'm copying the text below, though see the linked blog post for a better-formatted version.) Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity. On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right. AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt. Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most: We want AGI to empower humanity to maximally flourish in the universe. We don't expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity. We want the benefits of, access to, and governance of AGI to be widely and fairly shared. We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize "one shot to get it right" scenarios. The short term There are several things we think are important to do now to prepare for AGI. First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it's better to adjust to this incrementally. A gradual transition gives people, policymakers, and institutions time to understand what's happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in...
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the class of 1956 from MIT, where he got his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and, of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles over the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create. One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best vendor when picking a tool. Another of Gartner's graphical design patterns for displaying technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases. * The first is the Technology Trigger, which is when a breakthrough is found and PoCs, or proofs-of-concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable yet, but it shows promise. * The second is the Peak of Inflated Expectations, when the press picks up the story, companies are born, capital is invested, and a large number of projects around the new technology fail. * The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment. * The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to produce real productivity gains. Every company or IT department now runs a pilot, and expectations are lower, but now achievable. * The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders.
The mainstream industries embrace the new technology, and case studies prove the promised productivity increases. Provided there's enough of a market, companies now find success. There are issues with the hype cycle. Not all technologies will follow the cycle. The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype gets separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term "cycle" denotes a series of events when it should in fact be cyclical: out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to 1942, with Alan Turing, and with Isaac Asimov's "Runaround", where the three laws of robotics first emerged. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm he called "temporal-difference learning" to play checkers. Academics around the world worked on similar projects, and by 1956 John McCarthy introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered, and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's "ELIZA" debuted. ELIZA was a computer program that used early forms of natural language processing to run what was called a "DOCTOR" script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment. Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were then able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another movement, called connectionism, based mostly on node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel focused on the idea of convolutional neural networks in human vision, which culminated in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the original deep learning paper from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" (1962), and work done behind the Iron Curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism (which, when paired with machine learning, would be called deep learning once Rina Dechter coined the term in 1986) went through a similar trough of disillusionment that kicked off in 1970.
Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren't just seen in the United States. The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made relative to the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding was drastically cut for AI research around the UK. Turing, von Neumann, McCarthy, and others had, intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, in the 1950s the New York Times claimed Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, the Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools, labs like Stanford's began to look for ways to buy commercially built computers ideal for use as Lisp machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle began in 1983, when the US Department of Defense pumped a billion dollars into AI; that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems. These attempt to emulate the ways humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle and ran into barriers with CPUs, though by the 1980s those got fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility Among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like "Fifth Generation Computer Systems" in 1982, a 10-year project to build massively parallel computing systems. IBM spent around the same amount on its own projects. However, while these types of projects helped to improve computing, they didn't live up to expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some researchers in AI began to use new terms, after generations of artificial intelligence projects had led to successive AI winters.
Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). As with the ACM, they have a number of special interest groups, including natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops drawing hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions that achieved English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic, if you will. RNNs, CNNs, DNNs, GANs. Preparing training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI. This brings us to the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included: * Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist. * Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley. * Jessica Livingston, founding partner at Y Combinator. * Greg Brockman, former CTO of Stripe, who had studied at Harvard and MIT. OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that's more human than previous models. Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text, so people don't have to hand-label training examples, thus automating the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when it switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly.
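Before the story reaches ChatGPT, it helps to see what "generative pre-trained transformer" means mechanically: the model repeatedly predicts the next token given everything so far. Here is a minimal sketch using the small, open GPT-2 model through the Hugging Face transformers library as a stand-in (an assumption for illustration; OpenAI's own models are served through their API, not this path).

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, open ancestor of the GPT family; generation is the
# same idea throughout: append one sampled next token at a time.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The hype cycle begins when",
    max_new_tokens=30,  # how many tokens to append
    do_sample=True,     # sample rather than always taking the top token
    temperature=0.8,    # soften the next-token distribution
)
print(result[0]["generated_text"])
```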
Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of world-changing technological breakthrough than most other hype cycles for any industry in recent memory - this with GPT-4 slated for release later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit the mainstream media. The AI winter following each cycle seems to scale with the reach of the audience and the depth of the expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead the media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. Those are the smart ones.
A re-broadcast of Greylock general partner Reid Hoffman's interview with OpenAI CEO Sam Altman, recorded during Greylock's Intelligent Future summit in August 2022. Founded in 2015, OpenAI has recently released several high-profile products in quick succession: its generative transformer model GPT-3 – which uses deep learning to produce human-like text – its image-creation platform DALL-E – and most recently, ChatGPT. Trained on massive large language models, the highly sophisticated chatbot can mimic human conversation and speak on a wide range of topics. You can watch a video of the interview here: https://youtu.be/WHoWGNQRXb0 You can read a transcript of this interview here: https://greylock.com/greymatter/sam-altman-ai-for-the-next-era/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 Predictions, published by Stephen McAleese on February 17, 2023 on LessWrong. Introduction GPT-4 is OpenAI's next major language model, which is expected to be released at some point in 2023. My goal here is to get some idea of when it will be released and what it will be capable of. I also think it will be interesting in retrospect to see how accurate my predictions were. This post is partially inspired by Matthew Barnett's GPT-4 Twitter thread, which I recommend reading. Background of GPT models GPT-1, GPT-2, GPT-3 GPT stands for generative pre-trained transformer and is a family of language models that were created by OpenAI. GPT was released in 2018, GPT-2 in 2019, and GPT-3 in 2020. All three models have used a similar architecture with some relatively minor variations: a dense, text-only, decoder transformer language model that's trained using unsupervised learning to predict missing words in its text training set. InstructGPT, GPT-3.5, ChatGPT Arguably one of the biggest changes in the series in terms of architecture and behavior was the release of InstructGPT in January 2022, which used supervised fine-tuning on model answers and reinforcement learning with human feedback, where model responses are ranked, in addition to the standard unsupervised pre-training. The GPT-3.5 models finished training and were released in 2022, and demonstrated better-quality answers than GPT-3. In late 2022, OpenAI released ChatGPT, which is based on GPT-3.5 and fine-tuned for conversation. When will GPT-4 be released? Sam Altman, the CEO of OpenAI, was interviewed by StrictlyVC in January 2023. When asked when GPT-4 would come out, he replied, "It will come out at some point when we are confident that we can do it safely and responsibly." Metaculus predicts a 50% chance that GPT-4 will be released by May 2023 and a ~93% chance that it will be released by the end of 2023. It seems like there's still quite a lot of uncertainty here, but I think we can be quite confident that GPT-4 will be released at some point in 2023. What will GPT-4 be like? Altman revealed some more details about GPT-4 at an AC10 meetup Q&A. He said: GPT-4 will be a text-only model like GPT-3. GPT-4 won't be much bigger than GPT-3 but will use much more compute and have much better performance. GPT-4 will have a longer context window. How capable will GPT-4 be? Scaling laws According to the paper Scaling Laws for Neural Language Models (2020), model performance as measured by cross-entropy loss can be calculated from three factors: the number of parameters in the model, the amount of compute used during training, and the amount of training data. There is a power-law relationship between these three factors and the loss. Roughly speaking, the loss falls by a fixed amount each time compute, data, and parameters are multiplied by 10: a 100x increase buys twice the reduction of a 10x increase, and so on. The authors of the paper recommended training very large models on relatively small amounts of data, and recommended investing compute into more parameters over more training steps or data to minimize loss, as shown in a diagram in the paper. For every 10x increase in compute, the paper approximately recommends increasing the number of parameters by 5x, the number of training tokens by 2x, and the number of serial training steps by 1.2x.
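To make the power-law relationship concrete, here is a minimal sketch of the parameter-count scaling law from Kaplan et al. (2020). The constant and exponent below (Nc ≈ 8.8e13, alpha ≈ 0.076) are rough values reported in that paper, used here as illustrative assumptions rather than precise predictions.

```python
# Loss as a function of parameter count, in the style of
# Kaplan et al. (2020): L(N) = (N_c / N) ** alpha_N.
N_C = 8.8e13     # fitted constant from the paper, in parameters
ALPHA_N = 0.076  # fitted exponent from the paper

def loss_from_params(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA_N

# Each 10x in parameters shaves a roughly constant amount off the loss.
for n in (1.5e9, 1.5e10, 175e9, 1e12):
    print(f"{n:9.1e} params -> predicted loss {loss_from_params(n):.2f}")
```

The flatness of that curve is the power law's point: every constant multiple of parameters buys only a constant decrement of loss, which is why compute and data matter so much alongside raw size.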
This explains why the original GPT-3 model and other models such as Megatron and PaLM were so large. However, the new scaling laws from DeepMind's 2022 paper Training Compute-Optimal Large Language Models instead emphasize the importance of training data for minimizing loss. Instead of prioritizing more parameters, the paper recommends scaling the number of parameters and training tokens equally. DeepMind originally trained a large 280B parameter model named Gopher but then found a 70B mo...
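For comparison, here is a sketch of the Chinchilla-style allocation the DeepMind paper argues for. It leans on two standard rules of thumb that are not stated in the post itself: training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, and roughly 20 training tokens per parameter; scaling N and D equally then means both grow with the square root of the compute budget.

```python
import math

# Chinchilla-style compute-optimal allocation (rough rules of thumb):
# training compute C ~ 6 * N * D FLOPs, with D ~ 20 * N tokens.
# Both constants are common approximations of the 2022 DeepMind results,
# not exact values from the paper.

TOKENS_PER_PARAM = 20.0

def compute_optimal(compute_flops):
    """Return (params, tokens) that roughly minimize loss for a FLOP budget."""
    # C = 6 * N * D and D = 20 * N  =>  C = 120 * N**2  =>  N = sqrt(C / 120)
    n_params = math.sqrt(compute_flops / (6.0 * TOKENS_PER_PARAM))
    n_tokens = TOKENS_PER_PARAM * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # A Chinchilla-scale budget: ~70B params trained on ~1.4T tokens.
    budget = 6.0 * 70e9 * 1.4e12  # ~5.9e23 FLOPs
    n, d = compute_optimal(budget)
    print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Run at that budget, the sketch recovers roughly 70B parameters and 1.4T tokens, which matches the compute-optimal model DeepMind actually trained in place of the larger Gopher.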
Artificial Intelligence was once the realm of science fiction. But over the last several years, advances in machine learning and deep neural networks have moved us closer to a reality where computers can learn and solve problems independently, the way a human does. From art and music to medicine and politics, the potential applications of AI are nearly endless, and the technology just keeps getting better. This week on How I Built This Lab, Guy talks with one of the leaders in the field of AI development, Sam Altman. Sam talks about his journey from Stanford dropout and teenage entrepreneur to president of the legendary startup incubator Y Combinator and co-founder of the nonprofit OpenAI. Plus, Sam shares his hopes and fears for the future of AI and how his company is working to ensure it ultimately benefits all of humanity. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Has 声动活泼 itself been experimenting with ChatGPT for creative work? Could education really be one of the first fields where generative AI takes hold? The ChatGPT frenzy on social media is fueling a wave of enthusiasm for learning about generative AI. From novels to homework assignments, there seems to be nothing ChatGPT cannot write. To keep students from using generative AI to cheat, New York City's education department has decided to restrict ChatGPT in its school system, and several American universities, including UCLA, are considering whether to bar students from using ChatGPT in formal exams and essay writing. Facing educators' unease, OpenAI founder Sam Altman has used the role of the computer in math education to explain ChatGPT's positive impact on education: once computers appeared, people no longer had to do every calculation by hand, and math exams changed accordingly. So in the future, the education system will surely find a way to coexist amicably with generative AI. What is the essence of learning? How will ChatGPT and other generative AI tools change the way humans pass on knowledge, and even shape the future division of labor? This episode is a crossover between 「科技早知道」 and 「声东击西」 about the shock ChatGPT is currently delivering to the education sector, and ways to think about that change. Guests: Diane, co-founder of 声动活泼 and host of 「科技早知道」; 徐涛, co-founder of 声动活泼 and host of 「声东击西」; Jill Li, founder and CEO of Silicon Valley-based Parent Lab (育见科技). Main topics: [03:25] Honest impressions from testing ChatGPT [14:17] ChatGPT wraps a human-computer interface around the underlying GPT technology [25:33] Every person has something unique, and it is hard for an AI to generate genuine goals and values of its own [35:11] Education as practiced today carries the function of labeling children, but that is not its essence [48:47] In early education, children should not be handed tools they cannot control. Further reading: - S6E47 | Year-end review 3: AIGC may change humanity's future, but does it know where its own future lies? (https://guiguzaozhidao.fireside.fm/20220148) - S6E40 | From single-modal to multimodal: when will "artificial stupidity" stop being useless? (https://guiguzaozhidao.fireside.fm/20220139) - ChatGPT Is a Blurry JPEG of the Web (https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web) - StrictlyVC in conversation with Sam Altman, part two (OpenAI) (https://www.youtube.com/watch?v=ebjkD1Om4uw) - Don't Ban ChatGPT in Schools. Teach With It. (https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html) Listening note: click the episode's listening link to hear the full episode; no listening app required. Welcome to join the 声动胡同 membership program (https://sourl.cn/iCVg6n). Subscription options: domestic payment channel (annual): join the 声动胡同 membership program (https://sourl.cn/GWqwsa); international payment channel (monthly): join the 声动胡同 membership program (https://sourl.cn/UXhR7j). 「声动胡同」 takes its name from Qianyongkang Hutong, home of 声动活泼's Beijing office, and is a community product built for listeners who enjoy our work. We hope that in this community, young people who share our values can find a steady stream of food for thought and support one another as they grow. If you join as a neighbor, you will get these closer connections with us: - A weekly "Letter from the Hutong" newsletter with stories and information beyond the shows. Sample letter: Beyond Nike: how did Portland become the global R&D hub for athletic shoes? (https://sourl.cn/DEichH) - A quarterly online or offline event, such as the three editions so far of the "Open-air Speaker's Corner" (https://sourl.cn/BzAURb) and the "Lucid Dreams in the Hutong" pop-up. - You can also discuss or ask us anything by email; we reply to every message. Music: Transit - Wendel Scherer. Production: supervising producers 刘灿 and 信宇; post-production: Luke; operations: Babs; design: 饭团. About the show: formerly 「硅谷早知道」, relaunched as 「What's Next|科技早知道」, taking a global view, focusing on technology, and tracking shifts in the business landscape. Business inquiries: 声动活泼 business cooperation (https://sourl.cn/6vdmQT). About 声动活泼: "Colliding with the world through sound," 声动活泼 is dedicated to providing people with a steady stream of food for thought. - Our other podcasts: 声东击西 (https://etw.fm/episodes), What's Next|科技早知道 (https://guiguzaozhidao.fireside.fm/episodes), 声动早咖啡 (https://sheng-espresso.fireside.fm/), 商业WHY酱 (https://msbussinesswhy.fireside.fm/), 跳进兔子洞 (https://therabbithole.fireside.fm/), 反潮流俱乐部 (https://fanchaoliuclub.fireside.fm/), 泡腾 VC (https://popvc.fireside.fm/) - For transcripts of popular episodes, follow the WeChat public account 声动活泼 - To chat with us, find us on Jike (https://okjk.co/Qd43ia) - We also welcome email at ting@sheng.fm - If you enjoy our shows, please consider a donation (https://etw.fm/donation) or recommending us to friends. Special Guest: Jill Li.
The ChatGPT topic could hardly be hotter: AI can already assist with or even replace people's actual work in many settings, and people in every industry are wondering how ChatGPT and AI will affect their jobs. There is no need to avoid the topic, but what we really want to discuss is the field we have always cared about: education. This episode is a crossover between 「科技早知道」 and 「声东击西」. Hosts 丁教 and 徐涛 invited Jill Li, founder and CEO of Silicon Valley-based Parent Lab (育见科技), to discuss how AI will shape the future of education and of us. As the founder of a technology company, Jill shares where AI can affect our work and careers, her experience innovating in early education, and how she thinks AI can be used to tie early education and family relationships together. If you are curious about the future, we hope this episode gives you some inspiration. Guests: Diane, co-founder of 声动活泼 and host of 「科技早知道」; 徐涛, co-founder of 声动活泼 and host of 「声东击西」; Jill Li, founder and CEO of Silicon Valley-based Parent Lab (育见科技). Main topics: [03:25] Honest impressions from testing ChatGPT [14:17] ChatGPT wraps a human-computer interface around the underlying GPT technology [25:33] Every person has something unique, and it is hard for an AI to generate genuine goals and values of its own [35:11] Education as practiced today carries the function of labeling children, but that is not its essence [48:47] In early education, children should not be handed tools they cannot control. Join us: 声动活泼 is hiring a show supervising producer; see the details at this link (https://mp.weixin.qq.com/s/Cbg2wM0O6rkTD8X-0BCvIQ). If you are ready to apply your skills and energy to content, get in touch. Further reading: - 科技早知道: S6E47 | Year-end review 3: AIGC may change humanity's future, but does it know where its own future lies? (https://guiguzaozhidao.fireside.fm/20220148) - 科技早知道: S6E40 | From single-modal to multimodal: when will "artificial stupidity" stop being useless? (https://guiguzaozhidao.fireside.fm/20220139) - ChatGPT Is a Blurry JPEG of the Web (https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web) - StrictlyVC in conversation with Sam Altman, part two (OpenAI) (https://www.youtube.com/watch?v=ebjkD1Om4uw) - Don't Ban ChatGPT in Schools. Teach With It. (https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html) Welcome to join the 声动胡同 membership program (https://sourl.cn/iCVg6n). Subscription options: domestic payment channel (annual): join the 声动胡同 membership program (https://sourl.cn/GWqwsa); international payment channel (monthly): join the 声动胡同 membership program (https://sourl.cn/UXhR7j). 「声动胡同」 takes its name from Qianyongkang Hutong, home of 声动活泼's Beijing office, and is a community product built for listeners who enjoy our work. We hope that in this community, young people who share our values can find a steady stream of food for thought and support one another as they grow. If you join as a neighbor, you will get these closer connections with us: - A weekly "Letter from the Hutong" newsletter with stories and information beyond the shows. Sample letter: Beyond Nike: how did Portland become the global R&D hub for athletic shoes? (https://sourl.cn/DEichH) - A quarterly online or offline event, such as the three editions so far of the "Open-air Speaker's Corner" (https://sourl.cn/BzAURb) and the "Lucid Dreams in the Hutong" pop-up. - You can also discuss or ask us anything by email; we reply to every message. Music: - Book Bag-E's Jammy Jams. Production: supervising producers 刘灿 and 信宇; post-production: 赛德 and Luke; operations: Babs; design: 饭团. About the show: Bigger Than Us, hungry for diverse perspectives, exploring the world by asking questions. Business inquiries: 声动活泼 business cooperation (https://sourl.cn/6vdmQT). About 声动活泼: "Colliding with the world through sound," 声动活泼 is dedicated to providing people with a steady stream of food for thought. Our other podcasts: 声动早咖啡 (https://sheng-espresso.fireside.fm/), What's Next|科技早知道 (https://guiguzaozhidao.fireside.fm/episodes), 反潮流俱乐部 (https://fanchaoliuclub.fireside.fm/), 泡腾 VC (https://popvc.fireside.fm/), 商业WHY酱 (https://msbussinesswhy.fireside.fm/), 跳进兔子洞 (https://therabbithole.fireside.fm/). Find us on Jike (https://okjk.co/Qd43ia), Weibo, and other social media by searching 声动活泼. We also welcome email at ting@sheng.fm. If you enjoy our shows, please consider a donation (https://etw.fm/donation) or recommending us to a friend or two. Special Guest: Jill Li.
Margaret O'Mara, Scott and Dorothy Bullitt Chair of American history and professor at the University of Washington, leads the conversation on big tech and global order. CASA: Welcome to today's session of the Winter/Spring 2023 CFR Academic Webinar Series. I'm Maria Casa, director of the National Program and Outreach at CFR. Thank you all for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Margaret O'Mara with us to discuss big tech and global order. Dr. O'Mara is the Scott and Dorothy Bullitt Chair of American history and professor at the University of Washington. She writes and teaches about the growth of the high-tech economy, the history of American politics, and the connections between the two. Dr. O'Mara is an Organization of American Historians distinguished lecturer and has received the University of Washington Distinguished Teaching Award for Innovation with Technology. Previously, she served as a fellow with the Center for Advanced Study in the Behavioral Sciences, the American Council of Learned Societies, and the National Forum on the Future of Liberal Education. From 1993 to 1997, Dr. O'Mara served in the Clinton administration as an economic and social policy aide in the White House and in the U.S. Department of Health and Human Services. She is the author of several books and an editor of the Politics and Society in Modern America series at Princeton University Press. Welcome, Margaret. Thank you very much for speaking with us today. O'MARA: Thank you so much, Maria, and thank you all for being here today. I'm setting my supercomputer on my wrist timer so I—to time my talk to you, and which is very apropos and it's really—it's great to be here. I have a few slides I wanted to share as I talk through, and I thought that since we had some really interesting meaty present tense readings from Foreign Affairs as background for this conversation as well as the recent review essay that I wrote last year, I thought I would set the scene a little more with a little more history and how we got to now and thinking in broad terms about how the technology industry relates to geopolitics and the global order as this very distinctive set of very powerful companies now. So I will share accordingly, and, Maria, I hope that this is showing up on your screen as it should. So I knew I—today I needed to, of course, talk—open with something in the news, this—the current—the ongoing questions around what has—what was in the sky and what is being shot down in addition to a Chinese spy balloon, which is really kind of getting to a question that's at the center of all of my work. I write at the intersection of economic history and political history and I do that because I'm interested in questions of power. Who has power? What do they value? This is the kind of the question of the U.S.-China—the operative question of the U.S.-China rivalry and the—and concern about China, what are the values, what are the—and Chinese technology and Chinese technology companies, particularly consumer-facing ones. 
And this is also an operative question about the extraordinary concentration of wealth and power in a few large platform companies that are based on the West Coast of the United States—(laughs)—a couple in my town of Seattle where I am right now talking to you, and others in Silicon Valley. It's very interesting when one does a Google image search to find a publicly available image and puts in Silicon Valley the images that come up are either the title cards of the HBO television comedy, which I was tempted to add, but the—really, the iconic shot of the valley as place is the Apple headquarters—the Spaceship, as it's called in Cupertino—that opened a few years ago in the middle of suburbia. And this is—you know, the questions of concentrated power in the Q&A among the background readings, you know, this was noted by several of the experts consulted about what is the threat of big tech geopolitically and concentrated power, whether that's good, bad, if that's an advantage geopolitically or not. It was something that many of those folks brought up as did the other readings as well. And this question of power—who has power and taking power—has been an animating question of the modern technology industry and there's an irony in this that if you think about the ideological granddaddy of Apple itself is the Whole Earth Catalog, which I—and this is—I quote from this in the opening to my review essay that was part of the background readings and I just thought I would pop this up in full for us to think about. This is Stewart Brand. This is the first issue of the Whole Earth Catalog. The full issue is digitized at the Internet Archive as are so many other wonderful artifacts and primary source materials about this world, and this is right here on the—you know, you turn—open the cover and here is the purpose: “We are as gods and might as well get used to it. So far, remotely done power and glory as via government, big business, formal education, and church has succeeded to the point where gross defects obscure actual gains. In response to this dilemma and to these gains a realm of intimate personal power is developing—power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested. Tools that aid this process are sought and promoted by the Whole Earth Catalog.” The audience of the Whole Earth Catalog was not a bunch of techies, per se. It was back to the landers, people who were going and founding communes and the catalog was—you know, which was more a piece of art than it was an actual shopping guide, had all sorts of things from books by Buckminster Fuller to camp stoves and to the occasional Hewlett Packard scientific calculator, making this kind of statement that these tools could actually be used for empowerment of the individual because, of course, the world of 1968 is one in which computers and AI are in the hands of the establishment. We see this playing out in multiple scales including Hollywood films like Kubrick's 2001: A Space Odyssey, which, of course, follows, what, four years earlier Dr. Strangelove, which was also a satiric commentary on concentrated power of the military industrial complex, and computers were, indeed, things that were used by large government agencies, by the Pentagon, by Fortune 50 companies. And so the countercultural computer or personal computer movement is very much about individual power and taking this away from the global order, so to speak.
This is the taking—using these tools as a way to connect people at the individual level, put a computer on every desk, connect everyone via computer networks to one another, and that is how the future will be changed. That is how the inequities of the world would be remedied. The notion of ultimate connectivity as a positive good was not something that originated with Facebook but, indeed, has much, much deeper origins and that's worth thinking about as we consider where we are in 2023 and where things are going from there. It's also worth thinking about the way in which global—the global order and particularly national security and government spending has played a role—an instrumental role—in the growth of the technology industry as it is. Take, for example, the original venture-backed startup, Fairchild Semiconductor, which is legendary as really starting the silicon semiconductor industry in the valley. It is the—it puts the silicon in the valley, and the eight co-founders known as the Traitorous Eight because they all quit en masse their previous job at Shockley Semiconductor working for William Shockley, the co-inventor of the transistor, and they went off and did something that one does not—did not do in 1957 very often, which was start your own company. This was something that you did if you were weird and you couldn't work for people. That's what one old timer told me, reflecting back on this moment. But they, indeed, started their own company, found outside financing, and this group contains Robert Noyce and Gordon Moore, the two co-founders of Intel, as well as Gene Kleiner, co-founder of Kleiner Perkins, the venture capital firm. This is really the—you know, the original—where it all began, and yes, this is a story of free-market entrepreneurialism but it also is a story of the national security state. This is a—Fairchild is founded at a moment when most of the business in the Santa Clara Valley of California, later known as Silicon Valley, was defense related. This is where the jobs were. This is the business they were doing, by and large. There was not a significant commercial market for their products. A month after they're incorporated—in September '57 is when Fairchild incorporates itself. October 1957 Sputnik goes into orbit. The consequent wave of space spending is really what is the literal rocket ship that gets Silicon Valley's chip business going. The integrated circuits made by Fairchild and other chip makers in the valley go into the Apollo guidance system. NASA is buying these chips at a time that there is not a commercial market for them and that enables these companies to scale up production to create a commodity that can be delivered to the enterprise. And so by the time you get to the 1970s you are not talking about defense contractors in any way. These are companies that are putting their chips in cars and in other—all sorts of onetime mechanical equipment is becoming transistorized. And Intel is Intel, still one of the most important and consequential—globally consequential tech companies around at the center of the action in the CHIPS Act of last year, not to mention others. But this longer history and this intertwining with the military industrial complex and with broader geopolitics—because, of course, the space program and the Apollo program was a Cold War effort. It was about beating the Soviets to the moon, not just doing it because we could.
But that really kind of dissipates and fades from collective memory in the Valley and beyond with the rise of these entrepreneurs like Steve Jobs, Steve Wozniak, Bill Gates, young, new-time CEOs that are presenting a very, very different face of business and really being consciously apolitical, presenting themselves as something so far apart from Washington, D.C. And this notion of tech, big or little, being something separate from government and governance is perpetuated by leaders of both parties, not just Ronald Reagan but also by Democrats of a younger generation that in the early 1980s there was a brief moment in which lawmakers like Tim Wirth and Gary Hart were referred to as Atari Democrats because they were so bullish on high-tech industries as the United States' economic future. And the way in which politicians and lawmakers from the 1980s forward talked about tech was very much in the same key as that of people like Steve Jobs, which is that this is a revolutionary—the tools have been taken from the establishment, and this is something that is apart from politics, that transcends the old global order and is a new one. And, in fact, in the speech in May 1988 in Moscow at the end of his presidency Ronald Reagan delivers a—you know, really frames the post-Cold War future as one in which the microchip is the revolutionary instrument of freedom: “Standing here before a mural of your revolution”—and a very large bust of Lenin—“I talk about a very different revolution that is taking place right now. Its effects are peaceful but they will fundamentally alter our world, and it is—the tiny silicon chip is the agent of that, no bigger than a fingerprint.” This is really remarkable, if we sit back and take a deep breath and think about it, and particularly thinking about what happens after that. What happens after that are decades in which, again, leaders of both parties in the United States and world leaders elsewhere are framing the internet and understanding the internet as this tool for freedom and liberation, a tool that will advance democracy. Bill Clinton, towards the end of his presidency, famously kind of said, effectively, that I'm not worried about China because the internet is going to bring—you know, internet is going to make it very hard to have anything but democracy. And this notion of a post-Cold War and beyond the end of history and tech and big tech being central to that that, in fact, aided the rise of big tech. That was a rationale for a light regulatory hand in the United States, allowing these companies to grow and flourish and so big, indeed, they have become. But I want to end on a note just thinking about the—you know, why this history is important, why this connective tissue between past and present actually does matter. It isn't just that, oh, this is nice to know. This is useful. Lawrence Preston Gise was the second—sorry, the first deputy administrator of DARPA in 1958, created in the wake of the Sputnik—post-Sputnik panic, originally called ARPA, now DARPA. He later ran the entire Western Division of the Atomic Energy Commission—Los Alamos, Livermore, et cetera. Longtime government public servant. In his retirement he retired to his farm in west Texas and his young grandson came and lived with him every summer. And his grandson throughout his life has talked about how—what a profound influence his grandfather was on him, showing him how to be a self-sufficient rancher, how to wrangle cattle and to build a barbed wire fence. 
But the grandson—you know, what the grandson didn't mention that much because it wasn't really relevant to his personal experience was who his grandfather was and what he had done. But when that grandson, Jeff Bezos—a few years ago when there was—when Google employees were writing their open letter to CEO Sundar Pichai saying, we are not in the defense business. We are—we don't like the fact that you are doing work with the Pentagon, and pressuring Google successfully and other companies to get out of doing work with the Pentagon, Bezos reflected, no, I think we're—I think this is our patriotic duty to do work—do this kind of work. And as I listened to him say that on a stage in an interview I thought, ah, that's his grandfather talking because this little boy, of course, was Jeff Bezos, the grandson of Lawrence Preston Gise, and those—that connective tissue—familial connective tissue as well as corporate and political connective tissue, I think, is very relevant to what we have before us today. So I'll leave it there. Thanks. CASA: Thank you, Margaret, for that very interesting introduction. Let's open up to questions. (Gives queuing instructions.) While our participants are gathering their thoughts would you start us off by providing a few examples of emerging technologies that are affecting higher education? O'MARA: Yeah. Well, we've had a very interesting last three years in which the debate over online learning versus in-person learning very quickly was not necessarily resolved. We did this mass real-time experiment, and I think it made—put into sharp relief the way in which different technologies are shaping the way that higher education institutions are working and this question of who's controlling the—who controls the platforms and how we mediate what learning we do. Even though I now teach in person again almost everything that I do in terms of assignments and communication is through electronic learning management systems. The one we use at UW is Canvas. But, of course, there are these broader questions—ethical questions and substantive questions—about how AI-enabled technologies including, notably, the star of the moment, ChatGPT, are going to change the way in which—it's mostly been around how are students going to cheat more effectively. But I think it also has these bigger questions about how you learn and where knowledge, where the human—where the human is necessary. My take on it is, aside from the kind of feeling pretty confident in my having such arcane prompts for my midterm essay questions and research projects that ChatGPT, I think, would have a very hard time doing a good job with them, although I'm looking forward to many a form letter being filled by that technology in the future, I think that there is a—you know, this has a history, too. The concern about the robot overlords is a very deep one. It extends from—you know, predates the digital age, and the anxiety about whether computers are becoming too powerful. Of course, this question of artificial intelligence or augmented intelligence kind of is the computer augmenting what a human can do rather than replacing what a human can do or pretending to have the nuance and the complexity that a human might be able to convey. I think there's, you know, these bigger questions and I'm sure—I imagine there are going to be some other questions about AI.
Really, you know, this is a—I think this is a very good learning moment, quite frankly, to think more—you know, one of the things I teach about a lot is kind of the information that is on the internet and who's created it and how it is architected and how it is findable and how those platforms have been developed over time. And what ChatGPT and other AIs like them are doing is they're scraping this extraordinary bounteous ocean of information and it is as good as the—it's as good as its source, right. So whatever you're able to do with it you have—your source materials are going to determine it. So if there is bias in the sources, if there is inaccuracy in the sources, there is—that will be replicated. It cannot be—you know, I think what it is is it's a really good rough draft, first draft, for then someone with tacit knowledge and understanding to come into, and I like to think of digital tools as ones that reveal where things that only people can do that cannot be replicated, that this—where human knowledge cannot be, where a machine still—even though a machine is informed by things that humans do and now does it at remarkable speed and scale it still is—there is—we are able to identify where humanity makes a difference. And then my one last caution is I do—you know, the one thing you can't do with these new—any of these new technologies is do them well really fast, and the rush to it is a little anxiety inducing. CASA: Thank you. Our first question is from Michael Leong from the—he's a graduate student at the University of Arizona. Michael, would you like to unmute and ask your question? Q: Yeah. Hi, Dr. O'Mara. Hi, Ms. Casa. Sorry for any background noise. I just had a, like, general question about your thoughts on the role big tech plays in geopolitics. Specifically, we've seen with SpaceX and Starlink especially with what's going on in Ukraine and how much support that has been provided to the Ukrainian Armed Forces, and potentially holding that over—(inaudible)—forces. So, basically, do we expect to see private companies having more leverage over geopolitical events? And how can we go forward with that? O'MARA: Yeah. That's a really—that's a really great question. And you know, I think that there's—it's interesting because the way—there's always been public-private partnerships in American state building and American geopolitics, and that's something—it's worth kind of just noting that. Like, from the very beginning the United States has used private entities as instruments of policy, as parastatal entities, whether it be through, you know, land grants and transcontinental railroad building in the nineteenth century all the way through to Starlink and Ukraine because, of course, the Pentagon is involved, too—you know, that SpaceX is in a very—is a significant government contractor as ones before it. I think that where there's a really interesting departure from the norm is that what we've seen, particularly in the last, you know, the last forty years but in this sort of post-Cold War moment has been and particularly in the last ten to fifteen years a real push by the Pentagon to go to commercial enterprises for technology and kind of a different model of contracting and, I should say, more broadly, national security agencies. 
And this is something, you know, a real—including the push under—when Ash Carter was in charge of DOD to really go to Silicon Valley and say, you guys have the best technology and a lot of it is commercial, and we need to update our systems and our software and do this. But I think that the SpaceX partnership is one piece of that. But there has been a real—you know, as the government has, perhaps, not gotten smaller but done less than it used to do and there's been more privatization, there have been—there's been a vacuum left that private companies have stepped into and I think Ian Bremmer's piece was really—made some really important points in this regard that there are things that these platform companies are doing that the state used to do or states used to do and that does give them an inordinate amount of power. You know, and these companies are structurally—often a lot of the control over these companies is in the hands of very, very few, including an inordinate unusual amount of founder power, and Silicon Valley, although there's plenty of political opinionating coming out of there now, which is really a departure from the norm, this kind of partisan statements of such—you know, declarations of the—of recent years are something that really didn't—you didn't see very much before. These are not folks who are—you know, their expertise lies in other domains. So that's where my concern—some concern lies where you have these parastatal actors that are becoming, effectively, states and head of states then and they are not, indeed, speaking for—you know, they're not sovereign powers in the same way and they are speaking for themselves and speaking from their own knowledge base rather than a broader sense of—you know, they're not speaking for the public. That's not their job. CASA: Our next question is from Michael Raisinghani from Texas Woman's University. Michael, if you could unmute. Q: Thank you, Ms. Casa and Dr. O'Mara. A very insightful discussion. Thank you for that. I just thought maybe if you could maybe offer some clarity around the generative AI, whether it's ChatGPT or Wordtune or any of this in terms of the future. If you look, let's say, five, ten years ahead, if that's not too long, what would your thoughts be in this OpenAI playground? O'MARA: Mmm hmm. Well, with the first—with the caveat that the first rule of history is that you can't predict the future—(laughs)—and (it's true ?); we are historians, we like to look backwards rather than forwards—I will then wade into the waters of prediction, or at least what I think the implications are. I mean, one thing about ChatGPT as a product, for example, which has been really—I mean, what a—kudos for a sort of fabulous rollout and marketing and all of a sudden kind of jumping into our public consciousness and being able to release what they did in part because it wasn't a research arm of a very large company where things are more being kept closer because they might be used for that company's purposes. Google, for example, kind of, you know, has very in short order followed on with the reveal of what they have but they kind of were beaten to the punch by OpenAI because OpenAI wasn't—you know, it was a different sort of company, a different sort of enterprise. You know, a lot of it are things that are already out there in the world. If we've, you know, made an airline reservation and had a back and forth with a chatbot, like, that's—that's an example of some of that that's already out in the world. 
If you're working on a Google doc and doing what absolutely drives me bonkers, which is that Google's kind of completing my sentences for me, but that predictive text, those—you know, many things that we are—that consumers are already interacting with and that enterprises are using are components of this and this is just kind of bringing it together. I think that we should be very cautious about the potential of and the accuracy of and the revolutionary nature of ChatGPT or any of these whether it be Bard or Ernie or, you know, name your prospective chatbot. It is what it is. Again, it's coming from the—it's got the source material it has, it's working with, which is not—you know, this is not human intelligence. This is kind of compilation and doing it very rapidly and remarkably and in a way that presents with, you know, literacy. So I'm not—you know, does very cool stuff. But where the future goes, I mean, clearly, look, these company—the big platform companies have a lot of money and they have a great deal of motivation and need to be there for the next big thing and, you know, if we dial back eighteen months ago there were many in tech who were saying crypto and Web3 was the next big thing and that did not—has not played out as some might have hoped. But there is a real desire for, you know, not being left behind. Again, this is where my worry is for the next five years. If this is driven by market pressures to kind of be the—have the best search, have the best—embed this technology in your products at scale that is going to come with a lot of hazards. It is going to replicate the algorithmic bias, the problems with—extant problems with the internet. I worry when I see Google saying publicly, we are going to move quickly on this and it may not be perfect but we're going to move quickly when Google itself has been grappling with and called out on its kind of looking the other way with some of the real ethical dilemmas and the exclusions and biases that are inherent in some of the incredibly powerful LLMs—the models that they are creating. So that's my concern. This is a genie that is—you know, letting this genie out of the bottle and letting it become a mass consumer product, and if—you know, OpenAI, to its credit, if you go to ChatGPT's website it has a lot of disclaimers first about this is not the full story, effectively, and in the Microsoft rollout of their embedding the technology in Bing last week Microsoft leaders, as well as Sam Altman of OpenAI, were kind of—their talking points were very careful to say this is not everything. But it does present—it's very alluring and I think we're going to see it in a lot more places. Is it going to change everything? I think everyone's waiting for, like, another internet to change everything and I don't know if—I don't know. The jury's out. I don't know. CASA: Thank you. Our next question is a written one. It comes from Denis Fred Simon, clinical professor of global business and technology at the University of North Carolina at Chapel Hill. He asked, technology developments have brought to the surface the evolving tension between the drive for security with the desire for privacy. The U.S. represents one model while China represents another model. How do societies resolve this tension and is there some preferred equilibrium point? O'MARA: That is a—that's the billion-dollar question and it's—I think it's a relevant one that goes way back. (Laughs.)
I mean, there are many moments in the kind of evolution of all of these technologies where the question of who should know what and what's allowable. If we go back to 1994 and the controversy over the Clipper chip, which was NSA wanting to build a backdoor into commercially available software, and that was something that the industry squashed because it would, among other things, have made it very difficult for a company like Microsoft to sell their products in China or other places if you had a—knew that the U.S. national security agencies were going to have a window into it. And, of course, that all comes roaring back in 2013 with Snowden's revelations that, indeed, the NSA was using social media platforms and other commercial platforms—consumer-facing platforms—to gather data on individuals. You know, what is the perfect balance? I mean, this is—I wish I had this nice answer. (Laughs.) I would probably have a really nice second career consulting and advising. But I think there is a—what is clear is that part of what has enabled the American technology industry to do what it has done and to generate companies that have produced, whether you think the transformations on balance are good or bad, transformative products, right. So everything we're using to facilitate this conversation that all of us are having right now is coming from that font. And democratic capitalism was really critical to that and having a free—mostly free flow of information and not having large-scale censorship. I mean, the postscript to the Clipper chip—you know, Clipper chip controversy is two years later the Telecom Act of 1996, which was, on the one hand, designed to ensure the economic growth of what were then very small industries in the internet sector and not—and prevent the telecoms from ruling it all but also were—you know, this was a kind of making a call about, OK, in terms when it comes to the speech on the internet we are going to let the companies regulate that and not be penalized for private—when private companies decide that they want to take someone down, which is really what Section 230 is. It's not about free speech in a constitutional sense. It's about the right of a company to censor or to moderate content. It's often the opposite of the way that it's kind of understood or interpreted or spun in some ways. But it is clear that the institutions of—that encourage free movement of people and capital have been—are pretty critical in fueling innovation writ large or the development and the deployment and scaling of new technologies, particularly digital technologies. But I think you can see that playing out in other things, too. So that has been, I think, a real tension and a real—there's a market dimension to this, not just in terms of an ethical dimension or political dimension that there does need to be some kind of unfettered ability of people to build companies and to grow them in certain ways. But it's a fine balance. I mean, this sort of, like, when does regulation—when does it—when do you need to have the state come in and in what dimension and which state. And this goes back to that core question of like, OK, the powerful entities, what are their values? What are they fighting for? Who are they fighting for? I don't know. I'm not giving you a terribly good answer because I think it's a really central question to which many have grappled for that answer for a very long time. CASA: Thank you. Our next question comes from Ahmuan Williams, a graduate student at the University of Oklahoma. 
Ahmuan? Q: Thank you. Hi. I'm wondering about ChatGPT, about the regulation side of that. It seems like it's Microsoft that has kind of invested itself into ChatGPT. Microsoft had before gotten the Pentagon contract just a few years back. So it's kind of a two-part question. So, first of all, how does that—what does that say about government's interest in artificial intelligence and what can be done? I know the Council on Foreign Relations also reported that the Council of Europe is actually planning an AI convention to figure out how, you know, a framework of some type of AI convention in terms of treaties will work out. But what should we be worried about when it comes to government and the use of AI in political advertisements and campaigns, about, basically, them flooding opinions with, you know, one candidate's ideas and, therefore, them being able to win because they're manipulating our opinions? So what would you say would be kind of a regulation scheme that might come out of these type—new flourishing AI devices? O'MARA: Mmm hmm. Mmm hmm. That's a good question. I think there's sort of different layers to it. I mean, I see that, you know, the Pentagon contract—the JEDI contract—being awarded to Microsoft, much to Amazon's distress—(laughs)—and litigious distress, is a kind of a separate stream from its decision to invest 10 billion (dollars) in OpenAI. I think that's a commercial decision. I think that's a recognition that Microsoft Research was not producing the—you know, Microsoft didn't have something in house that was comparable. Microsoft saw an opportunity to at last do a—you know, knock Google off of its dominant pedestal in search and make Bing—long kind of a punch line—no longer a punch line but actually something that was a product that people would actively seek out and not just use because it was preinstalled on their Microsoft devices. That is—so I see that as a market decision kind of separate from. The bigger AI question, the question of AI frameworks, yes, and this, again, has a longer history and, you know, I kind of liken AI to the Pacific Ocean. It's an enormous category that contains multitudes. Like, it's—you know, we can—oftentimes when we talk about AI or the AI that we see and we experience, it's machine learning. And part of why we have such extraordinary advances in machine learning in the last decade has—because of the harvesting of individual data on these platforms that we as individuals use, whether it be Google or Meta or others, that that has just put so much out there that now these companies can create something that—you know, that the state of the art has accelerated vastly. Government often is playing catch up, not just in tech but just in business regulation, generally. The other—you know, another example of this in the United States cases with the—in the late nineteenth century, early twentieth century, with what were then new high-tech tech-driven industries of railroads and oil and steel that grew to enormous size and then government regulators played catch up and created the institutions that to this day are the regulators, like the FTC, created in 1914. Like, you know, that's—of that vintage.
So, I think that it depends on—when it comes to—the question about electoral politics, which I think is less about government entities—this is about entities, people and organizations that want to be in charge of government or governments—that is, you know, AI—new technologies of all kinds that incorporate ever more sophisticated kind of, essentially, disinformation, that—information that presents as real and it is not. The increased volume of that and the scale of that and the sophistication of that and the undetectability of it does create a real challenge to free and fair elections and also to preventing, in the American context, international and foreign intervention in and manipulation of elections but true in every context. That is, you know, getting good information before voters and allowing bad actors to exploit existing prejudices or misassumptions. That is an existing problem that probably will be accelerated by it. I think there's—there's a strong case to be made, at least in the U.S. context, for much stronger regulation of campaign advertising that extends to the internet in a much stricter form. In that domain there's—I think we have pretty good evidence that that has not been—you know, having that back end has made the existing restrictions on other types of campaign speech and other media kind of made them moot because you can just go on a social platform and do other things. So there's—you know, this is—I think the other thing that compromises this is the rapidly changing nature of the technology and the digital—and the global reach of these digital technologies that extends any other product made—you know, any other kind of product. It just is borderless that—in a kind of overwhelming way. That doesn't mean government should give up. But I think there's a sort of supranational level of frameworks, and then there are all sorts of subnational kind of domain-specific frameworks that could occur to do something as a countervailing force or at least slow the roll of developers and companies in moving forward in these products. CASA: Thank you. Our next question is a written one. It comes from Prashant Hosur, assistant professor of humanities and social sciences at Clarkson University. He asks, how do you—or she. I'm sorry. I'm not sure. How do you think big tech is likely to affect conventional wisdom around issues of great power rivalry and power transitions? O'MARA: Hmm. I don't—well, I think there are a—these are always—these definitions are always being redefined and who the great powers are and what gives them power is always being reshuffled and—but, of course, markets and economic resources and wealth and—are implicated in this for millennia. I think that tech companies do have this—American tech companies and the tech platforms, which I should preface this by saying, you know, none of the companies we're talking about now are going to rule forever. Maybe that just goes without—it's worth just noting, you know, this is—we will have the rise and fall. Every firm will be a dinosaur. Detroit was the most innovative city in the world a hundred and ten years ago. There's still a lot of innovation and great stuff coming out of Detroit, but if you—if I queried anyone here and said, what's the capital of innovation I don't know if you would say Detroit. But back in the heyday of the American auto industry it was, and I think it's a good reminder. We aren't always going to be talking about this place in northern California and north Seattle in this way.
But what we have right now are these companies that their products, unlike the products of Henry Ford or General Motors, are ones that are—go across borders with—you know, the same product goes across borders seamlessly and effortlessly, unlike an automobile where a—to sell in a certain country you have to meet that country's fuel standards and, you know, safety standards, et cetera, et cetera. You have a different model for a different market. Instead, here, you know, a Facebook goes where it goes, Google goes where it goes, YouTube goes where it goes, and that has been kind of extraordinary in terms of internationalizing politics, political trends. I think what we've seen globally is very—you know, the role of the internet in that has been extraordinary, both for good and for ill, in the last fifteen years. And then the kind of—the immense—the great deal of power that they have in the many different domains and, again, Ian Bremmer also observed this kind of the—all the different things they do and that is something that is different from twenty-five years ago where you now have companies that are based on the West Coast of the United States with products designed by a small group of people from a kind of narrow, homogenous band of experience who are doing things like transforming taxis and hotels and, I mean, you name it, kind of going everywhere in a way that in the day of the—you know, the first Macintosh, which was like this cool thing on your desk, that was—yes, it was a transformative product. It was a big deal and Silicon Valley was—became a household word and a phrase in the 1980s and the dot.com era, too. That was—you know, everyone's getting online with their AOL discs they got in the mail. But what's happened in the twenty-first century is at a scale and—a global scale and an influence across many different domains, and politics, this very deliberate kind of we are a platform for politics that has really reshaped the global order in ways that are quite profound. This is not to say that everything that has to do with big tech is at the root of everything. But let's put it in context and let's, you know—and also recognize that these are not companies that were designed to do this stuff. They've been wildly successful at what they set out to do and they have a high-growth tech-driven model that is designed to move fast and, yes, indeed, it breaks things and that has—you know, that has been—they are driven by quarterly earnings. They are driven by other things, as they should be. They are for-profit companies, many of them publicly traded. But the—but because, I think, in part they have been presenting themselves as, you know, we're changing the world, we're not evil, we're something different, we're a kinder, gentler capitalism, there has been so much hope hung on them as the answer for a lot of things, and that has meant kind of giving states and state power something of a pass on getting their act together, when instead states need to step up. CASA: Our next question is from Alex Grigor. He's a PhD candidate from University of Cambridge. Alex? Q: Hello. Yes. Thank you. Can you hear me? O'MARA: Yes. CASA: Yes. Q: Yeah. Hi. Thank you, Ms. O'Mara. Very insightful and, in fact, a lot of these questions are very good as well. So they've touched upon a lot of what I was going to ask and so I'll narrow it down slightly. My research is looking at cyber warfare and sort of international conflict particularly between the U.S.
and China but beyond, and I was wondering—you started with the sort of military industrial complex and industry sort of breaking away from that. Do you see attempts, perhaps, because of China and the—that the technology industry and the military are so closely entwined that there's an attempt by the U.S. and, indeed, other countries. You see increase in defense spending in Japan and Germany. But it seems to be specifically focused, according to my research, on the technologies that are coming out of that, looking to reengage that sort of relationship. They might get that a little bit by regulation. Perhaps the current downsizing of technology companies is an opportunity for governments to finally be able to recruit some good computer scientists that they haven't been able to—(laughs)—(inaudible). Perhaps it's ASML and semiconductor sort of things. Do you see that as part of the tension a conscious attempt at moving towards reintegrating a lot of these technologies back into government? O'MARA: Yeah. I think we're at a really interesting moment. I mean, one thing that's—you know, that's important to note about the U.S. defense industry is it never went away from the tech sector. It just kind of went underground. Lockheed, the major defense contractor, now Lockheed Martin, was the biggest numerical employer in the valley through the end of the Cold War through the end of the 1980s. So well into the commercial PC era and—but very—you know, kind of most of what was going on there was top secret stuff. So no one was on the cover of Forbes magazine trumpeting what they've done. And there has been—but there has been a real renewed push, particularly with the kind of—to get made in Silicon Valley or, you know, made in the commercial sector software being deployed for military use and national security use and, of course, this is very—completely bound up in the questions of cyber warfare and these existing commercial networks, and commercial platforms and products are ones that are being used and deployed by state actors and nonstate actors as tools for cyber terrorism and cyber warfare. So, yes, I think it's just going to get tighter and closer and the great—you know, the stark reality of American politics, particularly in the twentieth and into the twenty-first centuries, is the one place that the U.S. is willing to spend lots of money in the discretionary budget is on defense and the one place where kind of it creates a rationale for this unfettered—largely, unfettered spending or spending with kind of a willingness to spend a lot of money on things that don't have an immediately measurable or commercializable outcome is in national security writ large. That's why the U.S. spent so much money on the space program and created this incredible opportunity for these young companies making chips that only—making this device that only—only they were making the things that the space program needed, and this willingness to fail and the willingness to waste money, quite frankly. And so now we're entering into this sort of fresh—this interesting—you know, the geopolitical competition with China between the U.S. has this two dimensions in a way and the very—my kind of blunt way of thinking about it it's kind of like the Soviet Union and Japan all wrapped up in one, Japan meaning the competition in the 1980s with Japan, which stimulated a great deal of energy among—led by Silicon Valley chip makers for the U.S. 
to do something to help them compete and one of those outcomes was SEMATECH, the consortium to develop advanced semiconductor technology, whose funding—it was important but its funding was a fraction of the wave of money that just was authorized through last year's legislation, the CHIPS Act as well as Inflation Reduction Act and others. So I'm seeing, you know, this kind of turn to hardware and military hardware and that a lot of the commercial—the government subsidized or incentivized commercial development of green technology and advanced semiconductor, particularly in military but other semiconductor technology and bringing semiconductor manufacturing home to the United States, that is—even those dimensions that are nonmilitary, that are civilian, it's kind of like the Apollo program. That was a civilian program but it was done for these broader geopolitical goals to advance the economic strength and, hence, the broader geopolitical strength of the United States against a competitor that was seen as quite dangerous. So that's my way of saying you're right, that this is where this is all going and so I think that's why this sort of having a healthy sense of this long-term relationship is healthy. It's healthy for the private sector to recognize the government's always been there. So it isn't as though you had some innovative secret that the government is going to take away by being involved. And to also think about what are the broader goals that—you know, who is benefiting from them and what is the purpose and recognize often that, you know, many of the advanced technologies we have in the United States are thanks to U.S. military funding for R&D back in the day. CASA: Our next question is written. It's from Damian Odunze, who is an assistant professor at Delta State University. Regarding cybersecurity, do you think tech companies should take greater responsibility since they develop the hardware and software packages? Can the government mandate them, for instance, to have inbuilt security systems? O'MARA: Hmm. Yeah. I think—look, with great power comes great responsibility is a useful reminder for the people at the top of these companies that for—that are so remarkably powerful at the moment and because their platforms are so ubiquitous. There are—you see, for example, Microsoft has really—is a—I think what they've done in terms of partnering with the White House and its occupants and being—kind of acting as an NSA first alert system of sorts and kind of being open about that I think that's been good for them from a public relations perspective, and also—but I think it also reflects this acknowledgement of that responsibility and that it also is bad for their business if these systems are exploited. Yeah, I think that, again, regulation is something that—you know, it's like saying Voldemort in Silicon Valley. Like, some people are, like, oh, regulation, you know.
But there's really—there can be a really generative and important role that regulation can play, and the current industry has grown up in such a lightly-regulated fashion you just kind of get used to having all that freedom, and when it comes to cybersecurity and to these issues of national security importance and sort of global importance and importance to the users of the products and the companies that make them there's, I think, a mutual interest in having some sort of rules of the road and that—and I think any company that's operating at a certain scale is—understands that it's in their market interest to be—you know, not to be a renegade, that they are working with. But I think having—you know, there can be a willingness to work with but they're—having a knowledge and an understanding and a respect for your government partners, your state partners, whether they be U.S. or non-U.S. or supranational is really critically important and sometimes tech folks are a little too, like, oh, politics, they don't know what they're doing, you know. We know better. And I think there needs to be a little more mutual exchange of information and some more—yes, some more technical people being able to be successfully recruited into government would probably be a help, too, so there's—on both sides of the table you have technically savvy people who really understand the inner workings of how this stuff is made and don't have simplistic answers of like, oh, we'll just take all the China-made technology out of it. You're, like, well, there's—like, it's kind of deep in the system. You know, so having technologists in the conversation at all points is important. CASA: Thank you. I think we have time for one more question. We'll take that from Louis Esparza, assistant professor at California State University in Los Angeles. Q: Hi. Thank you for your very interesting talk. So I'm coming at this from the social movements literature and I'm coming into this conversation because I'm interested in the censorship and influence of big tech that you seem to be, you know, more literate in. So my question is do you think that this—the recent trends with big tech and collaboration with federal agencies is a rupture with the origin story of the 1960s that you talked about in your talk or do you think it's a continuity of it? O'MARA: Yeah. That's a great way to put it. The answer is, is it both? Well, it's something of a rupture. I mean, look, this—you know, you have this—you have an industry that grows up as intensely—you know, that those that are writing and reading the Whole Earth Catalog in 1968 the military industrial complex is all around them. It is paying for their education sort of effectively or paying for the facilities where they're going to college at Berkeley or Stanford or name your research university—University of Washington. It is the available jobs to them. It is paying for the computers that they learn to code on and that they're doing their work on. It is everywhere and it is—and when you are kind of rebelling against that establishment, when you see that establishment is waging war in Vietnam as being a power—not a power for good but a power for evil or for a malevolent—a government you don't trust whose power, whose motivations you don't trust, then you—you know, you want to really push back against that and that is very much what the personal computer movement that then becomes an industry is. 
That's why all those people who were sitting around in the 1970s in Xerox Palo Alto Research Center—Xerox PARC—just spitballing ideas, they just did not want to have anything to do with military technology. So that's still there, and then that—and that ethos also suffused other actors in, you know, American government and culture in the 1980s forward, the sort of anti-government sentiment, and the concerns about concentrated power continue to animate all of this. And the great irony is that has enabled the growth of these private companies to the power of states. (Laughs.) So it's kind of both of those things are happening and I think, in some ways, wanting to completely revolutionize the whole system was something that was not quite possible to do, although many—it is extraordinary how much it has done. CASA: Margaret, thank you very much for this fascinating discussion and to all of you for your questions and comments. I hope you will follow Margaret on Twitter at @margaretomara. Our next Academic Webinar will take place on Wednesday, March 1, at 1:00 p.m. Eastern Time. Chris Li, director of research of the Asia Pacific Initiative and fellow at the Belfer Center for Science and International Affairs at Harvard University, will lead a conversation on U.S. strategy in East Asia. In the meantime, I encourage you to learn about CFR's paid internships for students and fellowships for professors at CFR.org/Careers. Follow at @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Thank you again for joining us today. We look forward to you tuning in for our webinar on March 1. Bye. (END)
Microsoft recently released a new version of Bing, its search engine that has long been kind of a punchline in the tech world. The company billed this Bing — which is powered by artificial intelligence software from OpenAI, the maker of the popular chatbot ChatGPT — as a reinvention of how billions of people search the internet. How does that claim hold up? Guest: Kevin Roose, a technology columnist for The New York Times and host of the Times podcast “Hard Fork.” Background reading: When Microsoft released the new Bing, it was billed as a landmark event and the company's “iPhone moment.” On the latest episode of “Hard Fork,” OpenAI's chief executive, Sam Altman, and Microsoft's chief technology officer, Kevin Scott, talk about an A.I.-powered Bing. For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
ChatGPT has recently become a hot topic of public conversation, with everyone discussing how to apply this AI technology to business and bring humanity the next technological revolution after the internet. Without a doubt, artificial intelligence has completed an extremely important evolution, and it will continue to profoundly change our lives in the future. As humans, are we ready? In the latest episode of the “柠檬变成柠檬水” podcast, hosts 俞骅 (Yu Hua) and Poy Zhong invite digital marketing analyst Raina Meng to chat with listeners about ChatGPT's great potential and its enormous business impact on Google and Facebook. How to listen: search for “柠檬变成柠檬水” on Apple Podcasts, 小宇宙 APP, Spotify, iHeart Radio, Google Podcasts, Amazon Music, and other platforms.
Microsoft's release of a ChatGPT-powered Bing signifies a new era in search. Then, a disastrous preview of Bard — Google's answer to ChatGPT — caused the company's stock to slide 7 percent. The A.I. arms race is on. Plus: What “Nothing, Forever,” the 24/7, A.I.-generated “Seinfeld” parody, says about bias in A.I. On today's episode: Sam Altman is the chief executive of OpenAI. Kevin Scott is the chief technology officer of Microsoft. Additional reading: Microsoft integrated OpenAI's technology into its search engine and kicked off an A.I. arms race. Google released Bard, a rival chatbot to ChatGPT. “Nothing, Forever” was temporarily banned on Twitch.
(0:00) Intro (1:48) Welcome Matt Mochary (9:52) Resonating with people of such diverse personality types (22:08) Fear leads to people making the wrong decision (24:36) Matt's Process (30:30) 4 Zones of Genius (40:09) CEO Tactics (46:27) Philosophies (57:04) Offboarding (1:10:19) Having a hard conversation (1:14:16) Boards are the death of every great investor (1:22:20) Quick hitters Mixed and edited: Justin Hrabovsky Produced: Andrew Nadeau and Rashad Assir Executive Producer: Josh Machiz Music: Griff Lawson
Erichsen Geld & Gold, the podcast for successful investing
Actually, no further episode on AI / artificial intelligence was planned for today. But when Sam Altman, the head of OpenAI, just backed by Microsoft with a further 10 billion, says he is preparing for the future by buying weapons, gold, and gas masks, plus a piece of land … then we do need to talk about AI after all. ► Check out the Rendite-Spezialisten's new offer here: https://www.rendite-spezialisten.de/aktion ► TIP: Get my weekly tips on gold, stocks, ETFs & co. – 100% free: https://erichsen-report.de/ Enjoy listening. I would be delighted by a rating and a comment. Every rating matters, because it helps make the podcast better known, so that even more people understand how to invest their money for a good return. ► My YouTube channel: http://youtube.com/ErichsenGeld ► Follow me on Facebook: https://www.facebook.com/ErichsenGeld/ ► Follow my Instagram account: https://www.instagram.com/erichsenlars The music used was licensed via www.soundtaxi.net. One important closing note: for legal reasons I am not allowed to give individual financial advice. The opinions I express are in no way a call to action. They are not a solicitation to buy or sell securities.
In this episode we talk about one of the big ideas of the season: hybrid systems are software services. We also talk about Sam Altman, director of OpenAI, best known for ChatGPT. Finally, don't miss the big business ideas that use artificial intelligence.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are short timelines actually bad?, published by joshc on February 5, 2023 on LessWrong. Sam Altman recently posted the following: I have seen very little serious discussion about whether short timelines are actually bad. This is surprising given that nearly everyone I talk to in the AI risk community seems to think that they are. Of course, the questions "was the founding of OpenAI net positive?" and "would it be good to accelerate capabilities in 2023?" are different questions. I'm leaning towards yes on the first and no on the second. I've listed arguments that factor into these questions below. Reasons one might try to accelerate progress: Avoid/delay a race with China. If the language model boom happened 10 years from now, China might be a bigger player. Global coordination seems harder than domestic coordination. A lot harder. Perhaps the U.S. will have to shake hands with China eventually, but the more time we have to experiment with powerful systems before then the better. That corresponds to time demonstrating dangers and iterating on solutions, which is way more valuable than "think about things in front of a whiteboard" time. Smooth out takeoff. FLOPS get cheaper over time. Data is accumulating. Architectures continue to improve. The longer it takes for companies to invest ungodly amounts of money, the greater the potential overhang. Shortening timelines in 2015 may have slowed takeoff, which again corresponds to more time of the type that matters most. Keep the good guys in the lead. We're lucky that the dominant AGI companies respect safety as much as they do. Sam Altman recently commented that "the bad case — and I think this is important to say — is, like, lights out for all of us." I'm impressed that he said this given how bad this sort of thing could be for business -- and this doesn't seem like a PR move. AI x-risk isn't really in the Overton window yet. The leading companies set an example. Maybe OpenAI's hesitance to release GPT-4 has set a public expectation. It might now be easier to shame companies who don't follow suit for being too fast and reckless. Reasons to hit the brakes: There are lots of research agendas that don't particularly depend on having powerful systems in front of us. Transparency research is the cat's meow these days and has thus far been studied for relatively tiny models; in the extreme case, two-layer attention-only transformers that I can run on my laptop. It takes time to lay foundations. In general, research progress is quite serial. Results build on previous results. It might take years before any of the junior conceptual researchers could get into an argument with Eliezer without mostly eliciting cached responses. The type of work that can be done in advance is also typically the type of work that requires the most serial time. Language models are already sophisticated enough for empirical work to be useful. GPT-3-level models have made empirically grounded investigations of deception possible. Soon it might also make sense to say that a large language model has a goal or is situationally aware. If slowing down takeoff is important, we should hit the brakes. Takeoff has already begun. We need time for field-building. The technical AI safety research community is growing very rapidly right now (28% per year according to this post).
There are two cruxes here: How useful is this field building in expectation? Is this growth mostly driven by AGI nearness? I'm pretty confused about what's useful right now so I'll skip the first question. As for the second, I'll guess that most of the growth is coming from increasing the number of AI safety research positions and university field building. The growth in AI safety jobs is probably contingent on the existence of maybe-useful empirical research. The release of GPT-3 may have been an imp...
Attain Your Potential - Download the FREE Impact 90 Challenge Start Pack: bit.ly/3hr3zBi Click here to download your FREE guide to 100x YOUR EFFICIENCY IN 10 EASY STEPS: https://bit.ly/3F8qOJL AI is going to obliterate your job. And that's fantastic news. It will finally free you up to make real money. If you don't panic, that is. Most people are going to panic. Don't let that be you. Stick with me. I'll explain. If you do a search on google about AI you're going to find a lot of terrifying things. You'll see art that's indistinguishable from the best artists on the planet. You'll hear how AI can pass college exams and get a high score on an IQ test. For many of you, you'll even see examples of how AI can do your job better than you can. And this is all with a clunky phase one product. Wait until AI has been on the market for 6 months. Or more terrifyingly, 6 years. No one is safe. It will be that disruptive. But as I always tell people, moments of disruption present the biggest opportunity. But you're going to have to be aggressive when everyone else is freaking out. So… is AI really THAT revolutionary? And if so, how do you really make money with it? The short answer is yes. AI will be the biggest change not just in your lifetime, but in anyone's lifetime. It is the ultimate force multiplier. Right now, humans are limited by the rate at which they can think. This determines the rate at which they can solve problems. And as Elon Musk says, people are paid in direct proportion to the difficulty of the problems they solve. If you try to beat AI, you will lose. What I want to convince you of in this video is that you don't need to beat AI, you need to use it. It almost doesn't matter what industry you're in, you're going to be able to do your job better with AI. But you need to get the first mover advantage. To do that, you need to stop researching AI and start using it. At my company, Impact Theory, we've already integrated AI into our marketing funnels, our copywriting pipeline, for art concepting, final image generation, creative ideation, and human voice generation. And that's all just in the last few months. I've been watching AI closely for a while now, and we've reached the elbow of the exponential curve. Things are only going to start moving faster from here. The key is to not get left behind. So don't waste a single minute lamenting about how things are changing. Change is inevitable, and change at this speed is dangerous if you're not paying attention. Given how much AI has already altered our systems, over the next few years I'm expecting it to majorly accelerate our ability to test and learn. And whoever learns the fastest is going to win. This is all happening in plain sight. Everyone is talking about it. But to take advantage of this moment, I need you to do three things: Reframe your thinking around AI. Don't see it as the enemy. See it as a tool. It really is a tool. You're not going to be replaced by AI, at least not yet. You're going to be replaced by a human using AI. Be that human that replaces others. Figure out how AI is going to disrupt you. Face it head on. Don't run. Don't hide. Identify your vulnerabilities. Identify all of the AI tools that are relevant to you and master them. Learn absolutely everything you can. Remember, this is the very beginning of a very aggressive revolution. Moving quickly gives you two advantages: You can rocket ahead of other people by mastering the tools. If you master the tools, people are going to turn to you because you're able to more efficiently solve problems. Going back to the Elon Musk quote - if you can solve harder problems faster, you're going to get paid more. And in these early days where most people are stuck in the “deer in headlights” mode, you have an unfair advantage. The second advantage that AI gives you is an almost unimaginable amount of efficiency in certain tasks. Don't get me wrong, AI isn't a panacea. There are plenty of problems that right now AI sucks at. Sam Altman, the founder of OpenAI, maker of the ubiquitous ChatGPT, has himself said that people are getting so hyped up that they're going to be disappointed. If you think that this is Terminator 2 already and ChatGPT is going to turn into liquid metal and save you from space aliens, yes, you're going to be disappointed. Take action. Learn. Build. Create. Leverage AI and see what you can do together. Follow Tom Bilyeu: Website: https://impacttheory.com/ Twitter: https://twitter.com/TomBilyeu Facebook: https://www.facebook.com/tombilyeu Instagram: https://www.instagram.com/tombilyeu/ Sponsors: Get $500 off Peloton Tread Packages that come with accessories like a heart rate band, workout mat and non-slip dumbbells. Just go to onepeleton.com to get the deal. Join Prince EA on the hot seat, and check out Sauna Sessions wherever you get your podcasts. Right now, you can save up to fifty percent at http://bluenile.com! Follow Deep Purpose on Apple Podcasts, Spotify or your favorite listening app!
“I imagine a world in which AI is going to make us work more productively, live longer, and have cleaner energy.” – Fei-Fei Li Jason A. Duprat, Entrepreneur, Healthcare Practitioner, and Host of the Healthcare Entrepreneur Academy podcast, talks about the rapid acceleration of Artificial Intelligence, the revolutionary wave it's going to create, and how you could ride that wave to skyrocket your growth. In this episode, Jason shares his insights about OpenAI's system called ChatGPT, how it works, and how people have utilized it to create amazing outputs. 3 KEY POINTS: ChatGPT is a Large Language Model developed by OpenAI. Large Language Models are revolutionizing Artificial Intelligence. Artificial Intelligence is revolutionizing the world. EPISODE HIGHLIGHTS: OpenAI released their ChatGPT model for free, but will eventually convert it to a paid service. Large Language Models (LLMs) are revolutionizing Artificial Intelligence. You don't have to be a developer or understand any code to interact with and use OpenAI's Large Language Models. As users interact with it, it gets smarter and learns how to create better output based on the prompts being input. An example of how it works: software developers can create entire software programs by prompting the A.I. step-by-step (see the sketch below). In the next five years, Sam Altman envisions that the app will be capable of doing even more advanced tasks, such as writing code for apps without having to feed it step-by-step prompts. The prediction is that, after years, this technology will bring forth the elimination of several jobs. OpenAI has also developed an A.I. system called DALL·E capable of creating images and art. A.I. is capable of creating outputs at the level of a human professional with decades of experience. Sam Altman said that there are probably opportunities to create billion and trillion-dollar companies by fine-tuning their existing models for specific industries. If you're interested in trying out the A.I. system for yourself, go to https://chat.openai.com/chat TWEETABLE QUOTES: "Artificial Intelligence will change a lot of industries drastically." – Jason A. Duprat "Look at how you can leverage Artificial Intelligence to skyrocket your productivity, increase your patient satisfaction, and provide better patient care." – Jason A. Duprat CONNECT WITH JASON DUPRAT LinkedIn | Facebook | Instagram | Youtube Email: support@jasonduprat.com Join our Facebook group: https://jasonduprat.com/group RESOURCES Want to become an IV Nutritional Therapy provider? JOIN our FREE masterclass: https://ivtherapyacademy.com/podcast Sign up for one of our free business start-up Masterclasses by heading over to https://jasonduprat.com/freemasterclass Have a healthcare business question? Want to request a podcast topic? Text me at 407-972-0084 and I'll add you to my contacts. Occasionally, I'll share important announcements and answer your questions as well. I'm excited to connect with you! Do you enjoy our podcast? Leave a rating and review: https://lovethepodcast.com/hea Don't want to miss an episode? Subscribe and follow: https://followthepodcast.com/hea #HealthcareEntrepreneurAcademy #healthcare #HealthcareBoss #entrepreneur #entrepreneurship #podcast #businessgrowth #teamgrowth #digitalbusiness
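The step-by-step prompting workflow mentioned in the episode highlights can be pictured with a minimal sketch. This is not from the episode: it assumes the openai Python package as it existed in early 2023 (the 0.x ChatCompletion API), an OPENAI_API_KEY environment variable, and prompts and a model name that are illustrative only.

import os
import openai

# Assumption: the API key is provided via the environment, never hard-coded.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Build a program step by step: each reply is appended to the running
# conversation so the model can extend its own earlier output.
steps = [
    "Write a Python function that validates an email address.",  # hypothetical prompt
    "Now add a unit test for that function.",                    # hypothetical prompt
    "Now add a docstring and type hints.",                       # hypothetical prompt
]
messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print(answer)

Each loop iteration plays the role of the developer refining the output, which is the "prompting the A.I. step-by-step" pattern the episode describes.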
In today's episode: Founder of ChatGPT and OpenAI, Sam Altman, talks about how the very best people will birth our new god. Project Veritas releases hidden-camera video where an exec discusses "directed evolution." Connect with Be Reasonable: https://linktr.ee/imyourmoderator Hear the show when it's released. Become a paid subscriber at imyourmoderator.substack.com Other ways to support the work: ko-fi.com/imyourmoderator btc via coinbase: 3MEh9J5sRvMfkWd4EWczrFr1iP3DBMcKk5 Merch site: https://cancelcouture.myspreadshop.com/ Follow the podcast info stream: t.me/imyourmoderator Other social platforms: Twitter, Truth Social, Gab, Rumble, or Gettr - @imyourmoderator Become a member at https://plus.acast.com/s/be-reasonable-with-your-moderator-chris-paul. Hosted on Acast. See acast.com/privacy for more information.
ChatGPT was one of the most notable innovations of 2022. The company responsible was OpenAI, which seeks to develop a "friendly" artificial intelligence and, judging by what one can experience, it succeeded. ChatGPT is a kind of chatbot with which one can hold a conversation and ask complex questions, such as asking it to explain in simple terms what quantum computing is. But the most striking thing is that one can also ask it to write: a speech on a given topic highlighting this or that point, an essay contrasting two views, a story about a little animal lost in the woods that learns a valuable lesson, or even an introduction for a radio interview. The result is not always optimal and usually needs adjustments, but it is surprising nonetheless. And it set off alarms, for example in the world of education: in New York, the state Department of Education banned it, worried about the impact on learning if students asked the chat to do their homework. Others have raised red flags about the risk that such a powerful tool poses to democracy. For example, it would lower the cost of lobbying, because one would not need extensive legal knowledge to weigh in on bills; asking the chat for assistance would be enough. The company OpenAI has tried to play this down: its CEO, Sam Altman, wrote on Twitter that ChatGPT is "incredibly limited" but "good enough at some things to create a misleading impression," and warned: "It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do." However much work remains, fundamental questions can already be raised. To discuss this topic, we now bring you a special panel with philosopher Maybeth Garcés, professor of digital ethics at the Universidad Católica; Carolina Aguerre, PhD in social sciences, lecturer and researcher specializing in the governance of digital technologies; and Federico Lecumberry, PhD in electrical engineering, professor of signal processing and machine learning at the Facultad de Ingeniería of the Udelar.
With ChatGPT in the news, I thought it was high time we take a look at OpenAI -- the company behind the controversial chatbot. From its founding in 2015 to its shift to a "capped-profit" company, we look at the organization founded with the goal of creating AI that's beneficial for humanity.See omnystudio.com/listener for privacy information.
Microsoft has announced a new multi-billion-dollar investment into the AI company which owns ChatGPT. It's the third installment of a set of investments into OpenAI, which was co-founded by Elon Musk and investor Sam Altman. Microsoft made the announcement via a blog post on Monday, and Microsoft's chief executive Satya Nadella says the technology will be integrated globally in months. Wedbush Securities equity research analyst Dan Ives spoke to Guyon Espiner.
Microsoft has announced a multi-year, multibillion-dollar investment in artificial intelligence (AI) as it extends its partnership with OpenAI. OpenAI is the creator of popular image generation tool Dall-E and the chatbot ChatGPT. In 2019 Microsoft invested one billion dollars in the company, founded by Elon Musk and tech investor Sam Altman. The Windows and Xbox maker plans up to 10,000 redundancies, but said it would still hire in key strategic areas.