Podcast appearances and mentions of Miles Brundage

  • 28 PODCASTS
  • 32 EPISODES
  • 57m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Sep 26, 2025 LATEST

POPULARITY (chart: 2017–2024)


Best podcasts about Miles Brundage

Latest podcast episodes about Miles Brundage

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers,

Artificial intelligence may prove to be one of the most transformative technologies in history, but like any tool, its immense power for good comes with a unique array of risks, both large and small. Today on Faster, Please! — The Podcast, I chat with Miles Brundage about extracting the most out of AI's potential while mitigating harms. We discuss the evolving expectations for AI development and how to reconcile with the technology's most daunting challenges.

Brundage is an AI policy researcher. He is a non-resident fellow at the Institute for Progress, and formerly held a number of senior roles at OpenAI. He is also the author of his own Substack.

In This Episode
* Setting expectations (1:18)
* Maximizing the benefits (7:21)
* Recognizing the risks (13:23)
* Pacing true progress (19:04)
* Considering national security (21:39)
* Grounds for optimism and pessimism (27:15)

Below is a lightly edited transcript of our conversation.

Setting expectations (1:18)

It seems to me like there are multiple vibe shifts happening at different cadences and in different directions.

Pethokoukis: Earlier this year I was moderating a discussion between an economist here at AEI and a CEO of a leading AI company, and when I asked each of them how AI might impact our lives, our economist said, "Well, I could imagine, for instance, a doctor's productivity increasing because AI could accurately and deeply translate and transcribe an appointment with a patient in a way that's far better than what's currently available." So that was his scenario. And then I asked the same question of the AI company CEO, who said, by contrast, "Well, I think within a decade, all human death will be optional thanks to AI-driven medical advances." On that rather broad spectrum — more efficient doctor appointments and immortality — how do you see the potential of this technology?

Brundage: It's a good question. I don't think those are necessarily mutually exclusive. I think, in general, AI can both augment productivity and substitute for human labor, and the ratio of those things is kind of hard to predict and might be very policy dependent and social-norm dependent. What I will say is that, in general, it seems to me like the pace of progress is very fast, and so both augmentation and substitution seem to be picking up steam.

It's kind of interesting watching the debate between AI researchers and economists, and I have a colleague who has said that the AI researchers sometimes underestimate the practical challenges in deployment at scale. Conversely, the economists sometimes underestimate just how quickly the technology is advancing. I think there's maybe some happy middle to be found, or perhaps one of the more extreme perspectives is true. But personally, I am not an economist, I can't really speak to all of the details of substitution, and augmentation, and all the policy variables here. What I will say is that at least the technical potential for very significant amounts of augmentation of human labor, as well as substitution for human labor, seems pretty likely on timescales of well less than 10 years — but certainly within 10 years things will change a lot.

It seems to me that the vibe has shifted a bit. When I talk to people from the Bay Area and I give them the Washington or Wall Street economist view, to them I sound unbelievably gloomy and cautious.
But it seems the vibe has shifted, at least recently, to where a lot of people think that major advancements like superintelligence are further out than they previously thought — like we should be viewing AI as an important technology, but more like what we've seen before with the Internet and the PC.

It's hard for me to comment. It seems to me like there are multiple vibe shifts happening at different cadences and in different directions. It seems like several years ago there was more of a consensus that what people today would call AGI was decades away or more, and it does seem like that kind of timeframe has shifted closer to the present. There's still debate between the "next few years" crowd versus the "more like 10 years" crowd. But that is a much narrower range than we saw several years ago when there was a wider range of expert opinions. People who used to be seen as on one end of the spectrum, for example, Gary Marcus and François Chollet, who were seen as kind of the skeptics of AI progress, even they now are saying, "Oh, it's like maybe 10 years or so, maybe five years for very high levels of capability." So I think there's been some compression in that respect. That's one thing that's going on.

There's also a way in which people are starting to think less abstractly and more concretely about the applications of AI, seeing it less as this kind of mysterious thing that might happen suddenly and thinking of it more as incremental, more as something that requires some work to apply in various parts of the economy, something that there's some friction associated with.

Both of these aren't inconsistent, they're just kind of different vibe shifts that are happening. So getting back to the question of is this just a normal technology, I would say that, at the very least, it does seem faster in some respects than some other technological changes that we've seen. So I think ChatGPT's adoption going from zero to double-digit percentages of use across many professions in the US in a matter of a high number of months, low number of years, is quite stark.

Would you be surprised if, five years from now, we viewed AI as something much more important than just another incremental technological advance, something far more transformative than technologies that have come before?

No, I wouldn't be surprised by that at all. If I understand your question correctly, my baseline expectation is that it will be seen as one of the most important technologies ever. I'm not sure that there's a standard consensus on how to rate the internet versus electricity, et cetera, but it does seem to me like it's of the same caliber as electricity in the sense of essentially converting one kind of energy into various kinds of useful economic work.
Similarly, AI is converting various types of electricity into cognitive work, and I think that's a huge deal.

Maximizing the benefits (7:21)

There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications.

However you want to define society or the aspect of society that you focus on — government, businesses, individuals — are we collectively doing what we need to do to fully exploit the upsides of this technology over the next half-decade to decade, as well as minimizing potential downsides?

I think we are not. Something that I sometimes find frustrating about the way that the debate plays out is that there's sometimes this zero-sum mentality of doomers versus boomers — a term that Karen Hao uses — and this idea that there's this inherent tension between mitigating the risks and maximizing the benefits. There are some tensions, but I don't think that we are on the Pareto frontier, so to speak, of those issues.

Right now, I think there's a lot of value being left on the table in terms of fairly low-cost risk mitigations. There's also a lot of value being left on the table in terms of finding new ways to exploit the upsides and accelerate particularly beneficial applications. I'll give just one example, because I write a lot about the risks, but I also am very interested in maximizing the upside: protecting critical infrastructure and improving the cybersecurity of various parts of critical infrastructure in the US. Hospitals, for example, get attacked with ransomware all the time, and this causes real harm to patients because machines get bricked, essentially, and they have one or two people on the IT team, and they're kind of overwhelmed by these, not even always that sophisticated, but perhaps more-sophisticated, hackers. That's a huge problem. It matters for national security in addition to patients' lives, and it matters for national security in the sense that this is something that China and Russia and others could hold at risk in the context of a war. They could threaten this critical infrastructure as part of a bargaining strategy.

And I don't think that there's that much interest among the Big Tech companies in helping hospitals have a better automated cybersecurity engineer helper — because there aren't that many hospital administrators... I'm not sure if it would meet the technical definition of market failure, but it's at least a national security failure in that it's a kind of fragmented market. There's a water plant here, a hospital administrator there.

I recently put out a report with the Institute for Progress arguing that philanthropists and government could put some additional gasoline in the tank of cybersecurity by incentivizing innovation that specifically helps these under-resourced defenders more so than the usual customers of cybersecurity companies like Fortune 500 companies.

I'm confident that companies and entrepreneurs will figure out how to extract value from AI and create new products and new services, barring any regulatory slowdowns. But since you mentioned low-hanging fruit, what are some examples of that?

I would say that transparency is one of the areas where a lot of AI policy experts seem to be in pretty strong agreement.
Obviously there is still some debate and disagreement about the details of what should be required, but just to give you some illustration, it is typical for the leading AI companies, sometimes called frontier AI companies, to put out some kind of documentation about the safety steps that they've taken. It's typical for them to say, here's our safety strategy, and here's some evidence that we're following this strategy. This includes things like assessing whether their systems can be used for cyber-attacks, assessing whether they could be used to create biological weapons, or assessing the extent to which they make up facts and make mistakes but state them very confidently in a way that could pose risks to users of the technology.

That tends to be totally voluntary, and there started to be some momentum as a result of various voluntary commitments that were made in recent years. But as the technology gets more high-stakes, and there's more cutthroat competition, and there are maybe more lawsuits where companies might be tempted to retreat a bit in terms of the information that they share, I think that things could kind of backslide, and at the very least not advance as far as I would like from the perspective of making sure that there's sharing of lessons learned from one company to another, as well as making sure that investors and users of the technology can make informed decisions about, okay, do I purchase the services of OpenAI, or Google, or Anthropic? Making these informed decisions, and making informed capital investments, seems to require transparency to some degree.

This is something that is actively being debated in a few contexts. For example, in California there's a bill, SB-53, that has that and a few other things. But in general, we're at a bit of a fork in the road in terms of how certain regulations will be implemented, such as in the EU. Is it going to become an actually adaptive, nimble approach to risk mitigation, or is it going to become a compliance checklist that just kind of makes Big Four accounting firms richer? So there are those implementation questions, and then there are just "does the law pass or not?" kind of questions here.

Recognizing the risks (13:23)

. . . I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing . . .

In my probably overly simplistic way of looking at it, I think of two buckets, and you have issues like, are these things biased? Are they giving misinformation? Are they interacting with young people in a way that's bad for their mental health? And I feel like we have a lot of rules and we have a huge legal system for liability that can probably handle those.

Then, in the other bucket, are what may, for the moment, be science-fictional kinds of existential risks, whether it's machines taking over or just being able to give humans the ability to do very bad things in a way we couldn't before. Within that second bucket, I think, it sort of needs to be flexible.
Right now, I'm pretty happy with voluntary standards, and market discipline, and maybe the government creating some benchmarks, but I can imagine the technology advancing to where the voluntary aspect seems less viable and there might need to be actual mandates about transparency, or testing, or red teaming, or whatever you want to call it.

I think that's a reasonable distinction, in the sense that there are risks at different scales. There are some that are kind of these large-scale catastrophic risks that might have lower likelihood but higher magnitude of impact. And then there are things that are, I would say, literally happening millions of times a day, like ChatGPT making up citations to articles that don't exist, or Claude saying that it fixed your code when actually it didn't fix the code and the user's too lazy to notice, and so forth.

So there are these different kinds of risks. I personally don't make a super strong distinction between them in terms of different time horizons, precisely because I think things are going so quickly. I think science fiction is becoming science fact very much sooner than many people expected. But in any case, I think that similar logic applies: let's make sure that there's transparency, even if we don't know exactly what the right risk thresholds are, and we want to allow a fair degree of flexibility in what measures companies take.

It seems good that they share what they're doing and, in my opinion, ideally go another step further and allow third parties to audit their practices and make sure that if they say, "Well, we did a rigorous test for hallucination or something like that," that that's actually true. And so that's what I would like to see for both what you might call the mundane and the more science fiction risks. But again, I think it's kind of hard to say how things will play out, and different people have different perspectives on these things. I happen to be on the more aggressive end of the spectrum.

I am worried about the spread of the apocalyptic, high-risk AI narrative that we heard so much about when ChatGPT first rolled out. That seems to have quieted, but I worry about it ramping up again and stifling innovation in an attempt to reduce risk.

These are very fair concerns, and I will say that there are lots of bills and laws out there that have, in fact, slowed down innovation in certain contexts. The EU, I think, has gone too far in some areas around social media platforms. I do think at least some of the state bills that have been floated would lead to a lot of red tape and burdens for small businesses. I personally think this is avoidable.

There are going to be mistakes. I don't want to be misleading about how high-quality policymakers' understanding of some of these issues is. There will be mistakes, even in cases where, for example, in California there was a kind of blue-ribbon commission of AI experts producing a report over several months, and then that directly informing legislation, and a lot of industry back-and-forth and negotiation over the details. I would say SB-53 is probably the high-water mark of fairly stakeholder- and expert-informed legislation.
Even there, I'm sure there'll be some things that we look back on and say it's not ideal, but in my opinion, it's better to do something that is as informed as we can do, because it does seem like there are these kind of market failures and incentive problems that are going to arise if we do nothing, such as companies retrenching and holding back information, which makes it hard for the field as a whole to tackle these issues.

I'll just make one more point, which is that adapting to the compliance capability of different companies (how rich are they, how expensive are the models they're training) is, I think, a key factor in the legislation that I tend to be more sympathetic to. Just to make a contrast, there's a bill in Colorado that was kind of one-size-fits-all, regulate all the kinds of algorithms, and that, I think, is very burdensome to small businesses. Compare that with something like SB-53, which says, okay, if you can afford to train an AI system for $100 million, you can probably afford to put out a dozen pages about your safety and security practices.

Pacing true progress (19:04)

. . . some people . . . kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress . . . there's quite rapid progress happening still.

Hopefully Grok did not create this tweet of yours, but if it did, well, there we go. You won't have to answer it, but I just want to understand what you meant by it: "A lot of AI safety people really, really want to find evidence that we have a lot of time for AGI." What does that mean?

What I was trying to get at is that — and I guess this is not necessarily just AI safety people, but I sometimes kind of try to poke at people in my social network who I'm often on the same side of, but also try to be a friendly critic to, and that includes people who are working on AI safety. I think there's a common tendency to kind of grasp at what I would consider straws when reading papers and interpreting product launches in a way that kind of suggests, well, we've hit a wall, AI is slowing down, this was a flop, who cares?

I'm doing my kind of maybe uncharitable psychoanalysis. What I was getting at is that I think one reason why some people might be tempted to do that is that it makes things seem easier and less scary: "Well, we don't have to worry about really powerful AI-enabled cyber-attacks for another five years, or biological weapons for another two years, or whatever." Maybe, maybe not.

I think the specific example that sparked that was GPT-5, where there were a lot of people who, in my opinion, were reading the tea leaves in a particular way and missing important parts of the context. For example, GPT-5 wasn't a much larger or more expensive-to-train model than GPT-4, which may be surprising given the name.
And I think OpenAI did kind of screw up the naming and gave people the wrong impression, but from my perspective, there was nothing particularly surprising. To some people, though, it was kind of a flop, and they kind of wanted to say, "Well, things are slowing down." But in my opinion, if you look at more objective measures of progress, like scores on math, and coding, and the reduction in the rate of hallucinations, and solving chemistry and biology problems, and designing new chips, and so forth, there's quite rapid progress happening still.

Considering national security (21:39)

I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

I'm not sure if you're familiar with some of the work being done by former Google CEO Eric Schmidt, who's been doing a lot of work on national security and AI. His work doesn't use the word AGI, but it talks about AI certainly smart enough to have certain capabilities which our national security establishment should be aware of and should be planning for, and those capabilities, I think to most people, would seem sort of science fictional: being able to launch incredibly sophisticated cyber-attacks, or being able to improve itself, or being able to create some other sorts of capabilities. And from that, I'm like, whether or not you think that's possible, to me, the odds of that being possible are not zero, and if they're not zero, some bit of the bandwidth of the Pentagon should be thinking about that. I mean, is that sensible?

Yeah, it's totally sensible. I'm not going to argue with you there. In fact, I've done some collaboration with the RAND Corporation, which has a pretty heavy investment in what they call the geopolitics of AGI and kind of studying what the scenarios are, including AI and AGI being used to produce "wonder weapons" and super-weapons of some kind.

Basically, I think this is super important, and in fact, I have a paper coming out pretty soon that was written in collaboration with some folks there. I won't spoil all the details, but if you search "Miles Brundage US China," you'll see some things that I've discussed there. And basically my perspective is we need to strike a balance between competing vigorously on the commercial side with countries like China and Russia on AI — more so China; Russia is less of a threat on the commercial side, at least — and also making sure that we're fielding national security applications of AI in a responsible way, while also recognizing that there are these ways in which things could spiral out of control in a scenario with totally unbridled competition. I want to avoid a scenario like the Cuban Missile Crisis, or ways in which that could have been much worse than the actual Cuban Missile Crisis, happening as a result of AI and AGI.

If you think that, again, the odds are not zero that a technology which is fast-evolving, and that we have no previous experience with because it's fast-evolving, could create the kinds of doomsday scenarios that there are new books out about and that people are talking about.
And so if you think, okay, not a zero percent chance that could happen, but it is kind of a zero percent chance that we're going to stop AI and smash the GPUs. As someone who cares about policy, are you just hoping for the best, or are the kinds of things we've already talked about — transparency, testing, maybe that testing becoming mandatory at some point — enough?

It's hard to say what's enough, and I agree that . . . I don't know if I give it zero; maybe if there's some major pandemic caused by AI, and then Xi Jinping and Trump get together and say, okay, this is getting out of control, maybe things could change. But yeah, it does seem like continued investment in, and large-scale deployment of, AI is the most likely scenario.

Generally, the way that I see this playing out is that there are kind of three pillars of a solution. There's some degree of safety and security standards. Maybe we won't agree on everything, but we should at least be able to agree that you don't want to lose control of your AI system, you don't want it to get stolen, you don't want a $10 billion AI system to be stolen by a $10 million-scale hacking effort. So I think there are sensible standards you can come up with around safety and security. I think you can have evidence produced or required that companies are following these things. That includes transparency.

It also includes, I would say, third-party auditing, where there are third parties checking the claims and making sure that these standards are being followed. And then you need some incentives to actually participate in this regime and follow it. I think the incentives part is tricky, particularly at an international scale. What incentive does China have to play ball, other than obviously they don't want to have their AI kill them or overthrow their government or whatever? So where exactly are the interests aligned or not? Is there some kind of system of export control policies or sanctions or something that would drive compliance, or is there some other approach? I think that's the tricky part, but to me, those are kind of the rough outlines of a solution. Maybe that's enough, but I think right now it's not even really clear what the rough rules of the road are, or who's playing by the rules, and we're relying a lot on goodwill and voluntary reporting. I think we could do better, but is that enough? That's harder to say.

Grounds for optimism and pessimism (27:15)

. . . it seems to me like there is at least some room for learning from experience . . . So in that sense, I'm more optimistic. . . . I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table.

Did your experience at OpenAI make you more optimistic or more worried that, when we look back 10 years from now, AI will have, overall on net, made the world a better place?

I am sorry to not give you a simpler answer here, and maybe I should sit on this one and come up with a kind of clearer, more optimistic or more pessimistic answer, but I'll give you kind of two updates in different directions, and I think they're not totally inconsistent.

I would say that I have gotten more optimistic about the solvability of the problem in the following sense.
I think that things were very fuzzy five, 10 years ago, and when I joined OpenAI almost seven years ago now, there was a lot of concern that it could kind of come about suddenly — that one day you don't have AI, the next day you have AGI, and then on the third day you have artificial superintelligence, and so forth.

But we don't live to see the fourth day.

Exactly. And so it seems more gradual to me now, and I think that is a good thing. It also means that — and this is where I differ from some of the more extreme voices in terms of shutting it all down — it seems to me like there is at least some room for learning from experience, iterating, kind of taking the lessons from GPT-5 and translating them into GPT-6, rather than it being something that we have to get 100 percent right on the first shot with no room for error. So in that sense, I'm more optimistic.

I would say, in another respect, I'm maybe more pessimistic in that I am seeing value being left on the table. It seems to me like, as I said, we're not on the Pareto frontier. It seems like there are pretty straightforward things that could be done for a very small fraction of, say, the US federal budget, or a very small fraction of billionaires' personal philanthropy or whatever, that, in my opinion, would dramatically reduce the likelihood of an AI-enabled pandemic or various other issues, and would dramatically increase the benefits of AI.

It's been a bit sad to continuously see those opportunities being neglected. I hope that as AI becomes more of a salient issue to more people, and people start to appreciate, okay, this is a real thing, the benefits are real, the risks are real, there will be more of an efficient policy market and people will take those opportunities, but right now it seems pretty inefficient to me. That's where my pessimism comes from. It's not that it's unsolvable, it's just, okay, from a political economy and kind of public-choice perspective, are the policymakers going to make the right decisions?

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Radio Wnet
WHEN AND IN WHAT WILL AI OUTPACE US. Episode 48 of the Limes inferior program

Radio Wnet

Play Episode Listen Later Jun 30, 2025 26:00


At the end of last year, two gentlemen, both important figures in the AI world, made a bet about the development of artificial intelligence. Will AI, by 2027, be able to:
• independently analyze and discuss a film and a novel;
• write an engaging biography or obituary without confabulation;
• master any new video game within minutes, or at most hours;
• write legal briefs without errors in the cited cases;
• write bug-free code of more than 10,000 lines based on instructions from a person who is not an expert;
• write a novel or work of nonfiction worthy of a Pulitzer Prize;
• create a screenplay worthy of an Oscar;
• make a breakthrough scientific discovery worthy of a Nobel Prize;
• convert mathematical proofs written in natural language into symbolic form?
The parties to the bet are Gary Marcus, an American scientist (a psychologist trained at MIT), entrepreneur, and author working on artificial intelligence (and the bet's originator), and Miles Brundage, an independent AI researcher who worked at OpenAI from 2018 to 2024, including in the senior role of advisor on AGI (artificial general intelligence) readiness. Marcus bets that AI will not be able to complete more than 4 of the tasks on this list; Brundage bets that it will manage at least 8. The 10 AI tasks, and each side's chances, are discussed by Limes inferior's guest Robert Trypuz, an AI researcher and enthusiast and a specialist in the Semantic Web and data engineering. He holds a doctorate in computer science and telecommunications from the University of Trento and a habilitation in philosophy from KUL. For more than 15 years he has been involved in implementing artificial intelligence solutions, working with numerous companies in the ICT sector. He is the author of over 80 scientific publications and a winner of prestigious awards. Privately he is a Christian, a husband and father, and the owner of a dog and a canary. He was previously a guest on our program when his popular-science book "Prosto o AI. Jak działa i myśli sztuczna inteligencja?" premiered.

The AI Policy Podcast
AI, Cybersecurity, and Securing Model Weights with Miles Brundage and Chris Rohlf

The AI Policy Podcast

Play Episode Listen Later Jun 27, 2025 77:13


In this episode, we're joined by Miles Brundage, independent AI policy researcher and former Head of Policy Research at OpenAI, and Chris Rohlf, Security Engineer at Meta and cybersecurity expert. We cover the fundamentals of cybersecurity today (9:20), whether AI is tipping the offense-defense balance (21:00), the critical challenge of securing AI model weights (34:55), the debate over “AI security doomerism” (1:03:15), and how policymakers can strengthen incentives to secure AI systems (1:08:46).

ChinaTalk
EMERGENCY POD: DeepSeek R1 and the Future of AI Competition with Miles Brundage

ChinaTalk

Play Episode Listen Later Jan 24, 2025 32:33


Miles Brundage, a six-year OpenAI vet who ran its Policy Research and AGI Readiness arms, discusses why all your DeepSeek takes are so terrible. Outro music: The Departure, Max Richter https://www.youtube.com/watch?v=8R5Ppb9wqjY Learn more about your ad choices. Visit megaphone.fm/adchoices

Mixture of Experts
Episode 36: OpenAI o3, DeepSeek-V3, and the Brundage/Marcus AI bet

Mixture of Experts

Play Episode Listen Later Jan 3, 2025 39:19


Is deep learning hitting a wall? It's 2025 and Mixture of Experts is back and better than ever. In episode 36, host Tim Hwang is joined by Chris Hay, Kate Soule and Kush Varshney to debrief one of the biggest releases of 2024, OpenAI o3. Next, DeepSeek-V3 is here! Finally, will AGI exist in 2027? The experts dissect the AI bet between Miles Brundage and Gary Marcus. All that and more on the first Mixture of Experts of 2025. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
00:00 — Intro
00:49 — OpenAI o3
14:40 — DeepSeek-V3
28:00 — The Brundage/Marcus bet

The Retort AI Podcast
The Retort's biggest AI stories of 2024

The Retort AI Podcast

Play Episode Listen Later Dec 6, 2024 47:45


We're back! Tom and Nate catch up after the Thanksgiving holiday. Our main question was -- what were the biggest AI stories of the year? We touch on the core themes of the show: infrastructure, AI realities, and antitrust. The power buildout to scale out AI is going to have very real long-term impacts.
Some links this week:
* Ben Thompson's The End of the Beginning: https://stratechery.com/2020/the-end-of-the-beginning/
* Miles Brundage's Substack: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im
* Stochastic Parrots paper: https://dl.acm.org/doi/10.1145/3442188.3445922
Thanks for listening! Get The Retort (https://retortai.com/)
… on YouTube: https://www.youtube.com/@TheRetortAIPodcast
… on Spotify: https://open.spotify.com/show/0FDjH8ujv7p8ELZGkBvrfv?si=fa17a4d408f245ee
… on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-retort-ai-podcast/id1706223190
… Follow Interconnects: https://www.interconnects.ai/
… email us: mail@retortai.com

Sway
Billionaire Game Theory + We Are Not Ready for A.G.I. + Election Betting Markets Get Weird

Sway

Play Episode Listen Later Nov 1, 2024 71:28


Last week, Jeff Bezos canceled the Washington Post editorial board's plan to endorse Kamala Harris. Are tech billionaires hedging their bets in case Donald Trump wins? Then, Miles Brundage, a former OpenAI senior adviser on artificial general intelligence readiness, stops by to tell us how his old company is doing when it comes to being ready for superintelligence, and whether we should all keep saving for retirement. And finally, David Yaffe-Bellany, a Times technology reporter, joins us to explore the rise of Polymarket, a crypto-powered betting platform, and discuss whether prediction markets can tell us who is going to win the election.
Guests:
Miles Brundage, former OpenAI senior adviser for A.G.I. readiness.
David Yaffe-Bellany, technology reporter for The New York Times.
Additional Reading:
Jeff Bezos, Elon Musk and the Billions of Ways to Influence an Election
Miles Brundage on Why He's Leaving OpenAI
The Crypto Website Where the Election Odds Swing in Trump's Favor
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Learning Tech Talks
Weekly Update | OpenAI Safety Dismantled | Meta Lawsuit | AI Wrongful Death | Chanel AI Mishap | AI vs. Humans Research

Learning Tech Talks

Play Episode Listen Later Nov 1, 2024 58:47


Happy Friday, everyone, and congratulations on making it through another week. What better way to kick off November 2024 than a rundown on the latest happenings at the intersection of business, technology, and human experience? As usual, I picked five of my favorites. With that, let's get into it.
OpenAI Safety Team Disbands, Again - OpenAI is making headlines as their safety team falls apart yet again after losing executive Miles Brundage. While some of the noise around it is likely just noise, his cryptic warning that OpenAI is not ready for what it's created has some folks rightfully perking up their eyes and ears.
Meta Social Media Lawsuits - While big tech companies keep trying to use Section 230 as an immunity shield from the negative impact of social media, a judge has determined lawsuits will be allowed. What exactly that will mean for Meta and other big tech companies is still TBD, but they will see their day in court.
Google & Character.AI Sued - It's tragic whenever someone takes their life. It's even more tragic when it's a teenager fueled to take the path by an AI bot. While AI bots are promoted as "for entertainment purposes only," it's obvious entertainment isn't the only outcome. We continue seeing new legal precedents being established, and it's just the beginning.
GenAI Bias Flub with Chanel - I'm not exactly sure what Chanel's CEO Leena Nair expected when she asked AI to create an image of her executive team, or why on earth anyone at Microsoft moved forward with the request during her headquarters visit. However, it demonstrated how far we still have to go in mitigating bias in AI training data and why it's so important to use AI properly.
AI vs. Humans Research - Where is AI better than humans and vice versa? A recent study tried to answer that question. Unfortunately, while the data validates many of the things we already know, it also is ripe for cherry-picking, depending on the story you're trying to tell. While there were some interesting findings, I won't be retracting any of my previous statements based on the results.
#ai #ethicalAI #Meta #Microsoft #lawsuit

The Marketing AI Show
#121: New Claude 3.5 Sonnet and Computer Use, Wild OpenAI "Orion" Rumors, Dark Side of AI Companions & Ex-OpenAI Researcher Sounds Alarm on AGI

The Marketing AI Show

Play Episode Listen Later Oct 29, 2024 76:19


Next-gen models emerge while safety concerns reach a boiling point. Join Mike Kaput and Paul Roetzer as they unpack last week's wave of AI updates, including Anthropic's Claude 3.5 models and computer use capabilities, plus the brewing rumors about OpenAI's "Orion" and Google's Gemini 2.0. In our other main topics, we review the tragic Florida case raising alarms about AI companion apps, and ex-OpenAI researcher Miles Brundage's stark warnings about AGI preparedness.
Today's episode is brought to you by rasa.io. Rasa.io makes staying in front of your audience easy. Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant but compelling. Visit rasa.io/maii and sign up with the code 5MAII for an exclusive 5% discount for podcast listeners.
Today's episode is also brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Wednesday, November 20. Visit www.aiforagencies.com and use the code POD100 for $100 off your ticket.
00:05:04 — AI Model Releases and Rumors: New Claude Model + Computer Use, Claude Analysis Tool, OpenAI Doubles Down on AI for Code, Perplexity Pro Reasoning Update, Runway Act-One, Eleven Labs Voice Design, Stable Diffusion 3.5, The Rumors
00:27:07 — The Dark Side of AI Companions
00:39:29 — Ex-OpenAI Researcher Sounds Alarm on AGI Preparedness
00:47:57 — AI + National Security
00:53:14 — Microsoft vs. Salesforce Over Agents
00:57:08 — Disney AI Initiative
01:00:17 — Apple Intelligence Photos
01:03:03 — Google Open Sourcing SynthID
01:06:32 — OpenAI + Fair Use
01:10:43 — Using Gemini to Prep for Public Speaking
Want to receive our videos faster? SUBSCRIBE to our channel!
Visit our website: https://www.marketingaiinstitute.com
Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription
Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar
Come to our next Marketing AI Conference: www.MAICON.ai
Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home
Join our community:
Slack: https://www.marketingaiinstitute.com/slack-group-form
LinkedIn: https://www.linkedin.com/company/mktgai
Twitter: https://twitter.com/MktgAi
Instagram: https://www.instagram.com/marketing.ai/
Facebook: https://www.facebook.com/marketingAIinstitute

Let's Talk AI
#187 - Anthropic Agents, Mochi1, 3.4B data center, OpenAI's FAST image gen

Let's Talk AI

Play Episode Listen Later Oct 28, 2024 129:38


Our 187th episode with a summary and discussion of last week's big AI news, now with Jeremie co-hosting once again! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
Read our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Timestamps + Links:
(00:00:00) Intro / Banter
(00:03:07) Response to listener comments / corrections
(00:05:13) Sponsor Read
Tools & Apps
(00:06:22) Anthropic's latest AI update can use a computer on its own
(00:18:09) AI video startup Genmo launches Mochi 1, an open source rival to Runway, Kling, and others
(00:20:37) Canva has a shiny new text-to-image generator
(00:23:35) Canvas Beta brings Remix, Extend, and Magic Fill to Ideogram users
(00:26:16) StabilityAI releases Stable Diffusion 3.5
(00:28:27) Bringing Agentic Workflows into Inflection for Enterprise
Applications & Business
(00:32:35) Crusoe's $3.4B joint venture to build AI data center campus with up to 100,000 GPUs
(00:39:08) Anthropic reportedly in early talks to raise new funding on up to $40B valuation
(00:45:47) Longtime policy researcher Miles Brundage leaves OpenAI
(00:49:53) NVIDIA's Blackwell GB200 AI Servers Ready For Mass Deployment In December
(00:52:41) Foxconn building Nvidia superchip facility in Mexico, executives say
(00:55:27) xAI, Elon Musk's AI startup, launches an API
Projects & Open Source
(00:58:32) INTELLECT-1: The First Decentralized 10-Billion-Parameter AI Model Training
(01:06:34) Meta FAIR Releases Eight New AI Research Artifacts—Models, Datasets, and Tools to Inspire the AI Community
(01:10:02) Google DeepMind is making its AI text watermark open source
Research & Advancements
(01:13:21) OpenAI researchers develop new model that speeds up media generation by 50X
(01:17:54) How much AI compute is out there, and who owns it?
(01:25:28) Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning
(01:33:30) Inference Scaling for Long-Context Retrieval Augmented Generation
Policy & Safety
(01:41:50) Announcing our updated Responsible Scaling Policy
(01:48:52) Anthropic is testing AI's capacity for sabotage
(01:56:30) OpenAI asked US to approve energy-guzzling 5GW data centers, report says
(02:00:05) US Probes TSMC's Dealings with Huawei
(02:03:03) TikTok owner ByteDance taps TSMC to make its own AI GPUs to stop relying on Nvidia — the company has reportedly spent over $2 billion on Nvidia AI GPUs
(02:06:37) Outro

Grumpy Old Geeks
671: Lorum Ipsum Is My Sister

Grumpy Old Geeks

Play Episode Listen Later Oct 26, 2024 67:34


San Fran embracing self-driving cars; not-Bitcoin creator in hiding; i h8 ai; anti-AI artist open letter; X updates their policies; more people leave OpenAI; SynthID; 23andMe and your genetic data; no more fake online reviews; private equity acquires Squarespace; right to repair; Tesla Blade Runner AI ripoff; Star Trek frogs; the Riker Maneuver; de-extinction; a whole slew of great new shows dropping - Star Trek, Dune, Silo & more; Good Omens season 3 now just a movie; Cruel World and nostalgia fatigue; are we not retired? we are Devo; Fresco, free; Penguin adds a robots.txt file to their books.
Sponsors:
HelloFresh - Get 10 FREE meals at HelloFresh.com/freegog
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password
DeleteMe - Head over to JoinDeleteMe.com/GOG and use the code "GOG" for 20% off.
1Password Extended Access Management - Check it out at 1Password.com/grumpyoldgeeks. Secure every sign-in for every app on every device.
Show notes at https://gog.show/671
FOLLOW UP
How San Francisco Learned to Love Self-Driving Cars
Peter Todd Is in Hiding After a Documentary Named Him as Bitcoin's Creator
IN THE NEWS
i h8 ai
More than 10,500 artists sign open letter protesting unlicensed AI training
X updates its privacy policy to allow third parties to train AI models with its data
Former OpenAI Researcher Says the Company Broke Copyright Law
OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism
ByteDance intern fired for planting malicious code in AI models
Longtime policy researcher Miles Brundage leaves OpenAI
Google offers its AI watermarking tech as free open source toolkit
23andMe faces an uncertain future — so does your genetic data
A federal ban on fake online reviews is now in effect
Private Equity Firm Permira Acquires Squarespace for $7.2 Billion
The Feds Are Coming for John Deere Over the Right to Repair
Meta bans private jet tracking accounts on Instagram and Threads
Elon Musk, Tesla and WBD sued over alleged 'Blade Runner 2049' AI ripoff for Cybercab promotion
Seven newly named frog species make whistles that sound like Star Trek
Riker sits down
De-extinction company provides a progress report on thylacine efforts
MEDIA CANDY
Shrinking S2 - Out now
The Diplomat S2 - Oct 31
Star Trek: Lower Decks S5 - Oct 24
Silo S2 - Nov 15
Dune: Prophecy - Nov 17
Star Trek: Section 31 - Jan 25, 2025
Star Trek: Strange New Worlds S3 - 2025
'Star Trek: Starfleet Academy' Gets Early Season 2 Renewal, Adds Tatiana Maslany As Recurring
'Black Mirror': 'Outer Banks' & 'She Hulk' Actor Nicholas Cirillo Joins Cast Of Season 7
'Good Omens' To End With One 90-Minute Episode As Neil Gaiman Exits Following Sexual Assault Allegations
Midnight Mass
Buffy the Vampire Slayer Is Finally Streaming for Free in Time for Halloween
The Lincoln Lawyer Season 3
Lioness | Season 2 Sneak Peek | Paramount+ - Oct 27th
Cruel World Fest
Devo Has the Uncontrollable Urge to Retire
APPS & DOODADS
Adobe made its painting app completely free to take on Procreate
Midjourney launches AI image editor: how to use it
Startup School: Gen AI
AI in Marketing: Fast-track your skills
Perplexity AI app for macOS now available on the Mac App Store
Bluesky Teases Creator Payments While New Sign-Ups Explode After Elon Musk's Destruction of Twitter
New AirPods Pro 2 firmware now available for iOS 18.1's hearing health features
Apple's macOS Sequoia lets you snap windows into position — here's how
Web Design Museum
Diff Text - Compare Text Online
SetApp
JOIN TIMBALAND AND DISCOVER HOW SUNO CAN ELEVATE YOUR SOUND
San Francisco to pay $212 million to end reliance on 5.25-inch floppy disks
AT THE LIBRARY
Penguin Adds a Do-Not-Scrape-for-AI Page to Its Books
Bookcase by Astropad
CLOSING SHOUT-OUTS
Philip G. Zimbardo, the Stanford psychologist behind the controversial 'Stanford Prison Experiment' dies at 91
Ward Christensen, BBS inventor and architect of our online age, dies at age 78
Dodgers icon Fernando Valenzuela is gone. But 'Fernandomania' will live forever.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Tech Update | BNR
'OpenAI to release successor to the GPT-4 language model in December'

Tech Update | BNR

Play Episode Listen Later Oct 25, 2024 6:11


On November 30 it will be two years since ChatGPT launched, back then with the GPT-3.5 language model. Since then ChatGPT, now running GPT-4, has grown considerably in performance, but before the end of the year OpenAI is reportedly set to present the next generation of the language model. That is what tech site The Verge reports, based on sources. Whether it will be called 'GPT-5' is not known, but internally OpenAI is said to be working under the codename Orion, named after a constellation that is generally visible between November and February. The expectation is that the new language model will be presented around ChatGPT's anniversary, but will not be immediately available to the general public. It will first be tested further internally with partners. The model is reportedly already being trained on 'synthetic data', data generated by current language models. OpenAI is in a hectic period. The company recently raised 6.6 billion dollars in investment and plans to become a for-profit company. At the same time, a lot of top people have left the company in recent months. Just last week AGI adviser Miles Brundage departed; he thinks nobody is ready for AGI, not even OpenAI itself. Also in this Tech Update: the British competition watchdog CMA opens a formal investigation into the deal between Google parent Alphabet and AI startup Anthropic; Apple teases a 'week full of exciting announcements', possibly including a new iMac, MacBook Pro and Mac Mini. See omnystudio.com/listener for privacy information.

Philosophical Disquisitions
108 - Miles Brundage (Head of Policy Research at Open AI) on the speed of AI development and the risks and opportunities of GPT

Philosophical Disquisitions

Play Episode Listen Later May 3, 2023


[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be. Subscribe to the newsletter

The Nonlinear Library
EA - Future Matters #4: AI timelines, AGI risk, and existential risk from climate change by Pablo

The Nonlinear Library

Play Episode Listen Later Aug 8, 2022 28:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #4: AI timelines, AGI risk, and existential risk from climate change, published by Pablo on August 8, 2022 on The Effective Altruism Forum. But if it is held that each generation can by its own deliberate acts determine for good or evil the destinies of the race, then our duties towards others reach out through time as well as through space, and our contemporaries are only a negligible fraction of the “neighbours” to whom we owe obligations. The ethical end may still be formulated, with the Utilitarians, as the greatest happiness of the greatest number [...] This extension of the moral code, if it is not yet conspicuous in treatises on Ethics, has in late years been obtaining recognition in practice. John Bagnell Bury Future Matters is a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Research Jacob Steinhardt's AI forecasting: one year in reports and discusses the results of a forecasting contest on AI progress that the author launched a year ago. Steinhardt's main finding is that progress on all three capability benchmarks occurred much faster than the forecasters predicted. Moreover, although the forecasters performed poorly, they would—in Steinhardt's estimate—probably have outperformed the median AI researcher. That is, the forecasters in the tournament appear to have had more aggressive forecasts than the experts did, yet their forecasts turned out to be insufficiently, rather than excessively, aggressive. The contest is still ongoing; you can participate here. Tom Davidson's Social returns to productivity growth estimates the long-run welfare benefits of increasing productivity via R&D funding to determine whether it might be competitive with other global health and wellbeing interventions, such as cash transfers or malaria nets. Davidson's toy model suggests that average returns to R&D are roughly 20 times lower than Open Philanthropy's minimum bar for funding in this space. He emphasizes that only very tentative conclusions should be drawn from this work, given substantial limitations to his modelling. Miles Brundage discusses Why AGI timeline research/discourse might be overrated. He suggests that more work on the issue has diminishing returns, and is unlikely to narrow our uncertainty or persuade many more relevant actors that AGI could arrive soon. Moreover, Brundage is somewhat skeptical of the value of timelines information for decision-making by important actors. In the comments, Adam Gleave reports finding such information useful for prioritizing within technical AI safety research, and Carl Shulman points to numerous large philanthropic decisions whose cost-benefit depends heavily on AI timelines. In Two-year update on my personal AI timelines, Ajeya Cotra outlines how her forecasts for transformative AI (TAI) have changed since 2020. Her timelines have gotten considerably shorter: she now puts ~35% probability density on TAI by 2036 (vs. 15% previously) and her median TAI date is now 2040 (vs. 2050). One of the drivers of this update is a somewhat lowered threshold for TAI. 
While Cotra was previously imagining that a TAI model would have to be able to automate most of scientific research, she now believes that AI systems able to automate most of AI/ML research specifically would be sufficient to set off an explosive feedback loop of accelerating capabilities. Back in 2016, Katja Grace and collaborators ran a survey of machine learning researchers, the main results of which were published the following year. Grace's What do ML researchers think about AI in 2022? reports on the preliminary re...

SuperDataScience
SDS 597: A.I. Policy at OpenAI

SuperDataScience

Play Episode Listen Later Aug 2, 2022 83:17


Dr. Miles Brundage, Head of Policy Research at OpenAI, joins Jon Krohn this week to discuss AI model production, policy, safety, and alignment. Tune in to hear him speak on GPT-3, DALL-E, Codex, and CLIP as well. In this episode you will learn: • Miles' role as Head of Policy Research at OpenAI [4:35] • OpenAI's DALL-E model [7:20] • OpenAI's natural language model GPT-3 [30:43] • OpenAI's automated software-writing model Codex [36:57] • OpenAI's CLIP model [44:01] • What sets AI policy, AI safety, and AI alignment apart from each other [1:07:03] • How A.I. will likely augment more professions than it displaces [1:12:06] Additional materials: www.superdatascience.com/597

The Nonlinear Library
EA - Why AGI Timeline Research/Discourse Might Be Overrated by Miles Brundage

The Nonlinear Library

Play Episode Listen Later Jul 3, 2022 16:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI Timeline Research/Discourse Might Be Overrated, published by Miles Brundage on July 3, 2022 on The Effective Altruism Forum. TL;DR: Research and discourse on AGI timelines aren't as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked. Introduction A very common subject of discussion among EAs is "AGI timelines." Roughly, AGI timelines, as a research or discussion topic, refer to the time that it will take before very general AI systems meriting the moniker "AGI" are built, deployed, etc. (one could flesh this definition out and poke at it in various ways, but I don't think the details matter much for my thesis here—see "What this post isn't about" below). After giving some context and scoping, I argue below that while important in absolute terms, improving the quality of AGI timelines isn't as useful as it may first appear. Just in the past few months, a lot of digital ink has been spilled, and countless in-person conversations have occurred, about whether recent developments in AI (e.g. DALL-E 2.0, Imagen, PALM, Minerva) suggest a need for updating one's AGI timelines to be shorter. Interest in timelines has informed a lot of investment in surveys, research on variables which may be correlated with timelines like compute, etc. At least dozens of smart-person-years have been spent on this question; possibly the number is more like hundreds or thousands. AGI timelines are, at least a priori, very important to reduce uncertainty about, to the extent that's possible. Whether one's timelines are "long" or "short" could be relevant to how one makes career investments—e.g. "exploiting" by trying to maximize influence over AI outcomes in the near-term, or "exploring" by building up skills that can be leveraged later. Timelines could also be relevant to what kinds of alignment research directions are useful, and which policy levers to consider (e.g. whether a plan that may take decades to pan out is worth seriously thinking about, or whether the "ship will have sailed" before then). I buy those arguments to an extent, and indeed I have spent some time myself working on this topic. I've written or co-authored various papers and blog posts related to AI progress and its conceptualization/measurement, I've contributed to papers and reports that explicitly made forecasts about what capabilities were plausible on a given time horizon, and I have participated in numerous surveys/scenario exercises/workshops/conferences etc. where timelines loomed large. And being confused/intrigued by people's widely varying timelines is part of how I first got involved in AI, so it has a special place in my heart. I'll certainly keep doing some things related to timelines myself, and think some others with special knowledge and skills should also continue to do so. But I think that, as with many research and discussion topics, there are diminishing returns on trying to understand AGI timelines better and talking widely about them. A lot of the low-hanging fruit from researching timelines has already been plucked, and even much higher levels of certainty on this question (if that were possible) wouldn't have all the benefits that might naively be suspected.
I'm not sure exactly how much is currently being invested in timeline research, so I am deliberately vague here as to how big of a correction, if any, is actually needed compared to the current level of investment. As a result of feedback on this post, I may find out that there's actually less work on this than I thought, that some of my arguments are weaker than I thought, etc. and update my views. But currently, while I think timelines should be valued very highly compared to a random research topic, I suspect that many reading thi...

Papers Read on AI
Evaluating Large Language Models Trained on Code

Papers Read on AI

Play Episode Listen Later Jun 28, 2022 53:01


We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics. 2021: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, F. Such, D. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, I. Babuschkin, S. Balaji, Shantanu Jain, A. Carr, J. Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, M. Knight, Miles Brundage, Mira Murati, Katie Mayer, P. Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba https://arxiv.org/pdf/2107.03374v2.pdf
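For listeners curious how functional-correctness numbers like "70.2% with 100 samples per problem" are scored: the paper estimates a pass@k-style metric from n generated samples per problem, c of which pass the unit tests. The short Python sketch below shows the standard unbiased estimator in that form; it is an illustrative snippet under those assumptions, not the paper's actual evaluation harness, and the variable names are ours.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimate of the probability that at least one of k samples,
    # drawn without replacement from n generated solutions (c of which pass
    # the unit tests), is correct: 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 100 samples per problem, 35 of which pass, scored at k=1 and k=10.
print(pass_at_k(n=100, c=35, k=1))   # 0.35
print(pass_at_k(n=100, c=35, k=10))  # roughly 0.99

The product form avoids computing large binomial coefficients directly, which keeps the estimate numerically stable even for large n.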

The Gradient Podcast
Miles Brundage on AI Misuse and Trustworthy AI

The Gradient Podcast

Play Episode Listen Later Nov 23, 2021 54:03


In episode 17 of The Gradient Podcast, we talk to Miles Brundage, Head of Policy Research at OpenAI and a researcher passionate about the responsible governance of artificial intelligence. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Links: Will Technology Make Work Better for Everyone? | Economic Possibilities for Our Children: Artificial Intelligence and the Future of Work, Education, and Leisure | Taking Superintelligence Seriously | The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation | Release Strategies and the Social Impact of Language Models | All the News that's Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation | Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. Timeline: (00:00) Intro | (01:05) How did you get started in AI | (07:05) Writing about AI on Slate | (09:20) Start of PhD | (13:00) AI and the End of Scarcity | (18:12) Malicious Uses of AI | (28:00) GPT-2 and Publication Norms | (33:30) AI-Generated Text for Misinformation | (37:05) State of AI Misinformation | (41:30) Trustworthy AI | (48:50) OpenAI Policy Research Team | (53:15) Outro. Miles is a researcher and research manager, and is passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before that, he was a Research Fellow at the University of Oxford's Future of Humanity Institute, where he is still a Research Affiliate. He also serves as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology from Arizona State University in 2019. Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Hosted by Andrey Kurenkov (@andrey_kurenkov), a PhD student with the Stanford Vision and Learning Lab working on learning techniques for robotic manipulation and search. Get full access to The Gradient at thegradientpub.substack.com/subscribe

Papers Read on AI
Evaluating Large Language Models Trained on Code

Papers Read on AI

Play Episode Listen Later Aug 22, 2021 52:47


We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. 2021: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, J. Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, F. Such, D. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, I. Babuschkin, S. Balaji, Shantanu Jain, A. Carr, J. Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, M. Knight, Miles Brundage, Mira Murati, Katie Mayer, P. Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba https://arxiv.org/pdf/2107.03374.pdf

Gradient Dissent - A Machine Learning Podcast by W&B
Societal Impacts of AI with Miles Brundage

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Jun 30, 2020 62:25


Miles Brundage researches the societal impacts of artificial intelligence and how to make sure they go well. In 2018, he joined OpenAI, as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford's Future of Humanity Institute and served as a member of Axon's AI and Policing Technology Ethics Board. Keep up with Miles on his website: https://www.milesbrundage.com/ and on Twitter: https://twitter.com/miles_brundage Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

80,000 Hours Podcast with Rob Wiblin
#54 - OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Mar 19, 2019 173:39


OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show? In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map and perceive what's there by moving around – metaphorically 'touching' the space. Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it. Links to learn more, summary and full transcript. This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development will likely have unpredictable consequences, heightening the huge challenges that already exist in AI policy. Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2. We discuss: • What are the most significant changes in the AI policy world over the last year or two? • What capabilities are likely to develop over the next five, 10, 15, 20 years? • How much should we focus on the next couple of years, versus the next couple of decades? • How should we approach possible malicious uses of AI? • What are some of the potential ways OpenAI could make things worse, and how can they be avoided? • Publication norms for AI research • Where do we stand in terms of arms races between countries or different AI labs? • The case for creating newsletters • Should the AI community have a closer relationship to the military? • Working at OpenAI vs. working in the US government • How valuable is Twitter in the AI policy world? Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss: • The reaction to OpenAI's release of GPT-2 • Jack’s critique of our US AI policy article • How valuable are roles in government? • Where do you start if you want to write content for a specific audience? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Dissecting the Controversy around OpenAI's New Language Model - TWiML Talk #234

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Feb 25, 2019 66:22


If you’re listening to this podcast, you’ve likely seen some of the press coverage and discussion surrounding the release, or lack thereof, of OpenAI’s new GPT-2 Language Model. The announcement caused quite a stir, with reactions spanning confusion, frustration, concern, and many points in between. Several days later, many open questions remained about the model and the way the release was handled. Seeing the continued robust discourse, and wanting to offer the community a forum for exploring this topic with more nuance than Twitter’s 280 characters allow, we convened the inaugural “TWiML Live” panel. I was joined on the panel by Amanda Askell and Miles Brundage of OpenAI, Anima Anandkumar of NVIDIA and CalTech, Robert Munro of Lilt, and Stephen Merity, a group that included some of the most outspoken voices in the online discussion of this issue. Our discussion thoroughly explored the many issues surrounding the GPT-2 release controversy. We cover the basics, like what language models are, why they’re important, and why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many. The discussion initially aired via YouTube Live and we’re happy to share it with you via the podcast as well. To be clear, both the panel discussion and live stream format were a bit of an experiment for us and we’d love to hear your thoughts on it. Would you like to see, or hear, more of these TWiML Live conversations? If so, what issues would you like us to take on? If you have feedback for us on the format or if you’d like to join the discussion around OpenAI’s GPT-2 model, head to the show notes page for this show at twimlai.com/talk/234 and leave us a comment.

Eye On A.I.
Episode 5 - Miles Brundage

Eye On A.I.

Play Episode Listen Later Nov 6, 2018 22:33


In this episode of Eye on AI, I talk to Miles Brundage, who studies the societal impacts of artificial intelligence and works on the policy team of OpenAI, the nonprofit A.I. research company founded by Elon Musk. When I spoke to Miles, he was a research fellow at the University of Oxford’s Future of Humanity Institute, where he remains an associate. We talked about the policy side of AI security and whether he is optimistic that regulations can steer machine learning applications away from the nightmare scenarios popularly imagined. I hope you find Miles as interesting as I did.

Top of Mind with Julie Rose
Hawaii's Erupting Volcano, Motherhood and Politics, Carbon Neutrality

Top of Mind with Julie Rose

Play Episode Listen Later May 23, 2018 101:52


Estelle Chaussard explains why Kilauea keeps erupting. Debra Schilling Wolfe of the Univ of PA explains why homeless youth are victims of human trafficking. Laurel Elder of Hartwick College points out a new emphasis on motherhood in campaigns. Miles Brundage of Arizona State Univ discusses the future of AI. Storyteller Sam Payne of The Apple Seed. Lera Boroditsky of UCSD argues that language shapes the way we think. Nobel Prize winner William Moomaw questions EPA statement on carbon neutrality.

Y Combinator
#72 - Miles Brundage and Tim Hwang

Y Combinator

Play Episode Listen Later Apr 25, 2018 46:18


Miles Brundage is an AI Policy Research Fellow with the Strategic AI Research Center at the Future of Humanity Institute. He is also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. Miles recently co-authored The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Tim Hwang is the Director of the Harvard-MIT Ethics and Governance of AI Initiative. He is also a Visiting Associate at the Oxford Internet Institute and a Fellow at the Knight-Stanford Project on Democracy and the Internet. This is Tim's second time on the podcast; he was also on episode 11. The YC podcast is hosted by Craig Cannon.

The Cyberlaw Podcast
Interview with Miles Brundage and Shahar Avin

The Cyberlaw Podcast

Play Episode Listen Later Mar 6, 2018 57:08


Our interview features an excellent and mostly grounded exploration of how artificial intelligence could become a threat as a result of the cybersecurity arms race. Maury Shenk does much of the interviewing in London. He talks to Miles Brundage, AI Policy Research Fellow at the Future of Humanity Institute at Oxford and Shahar Avin of the Centre for the Study of Existential Risk and Research Associate at Cambridge. They are principal authors of a paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation.” The discussion was mostly grounded, as I said, but I did manage to work in a reference to the all-too-plausible threat of a hacking, bargaining AI sent by aliens from other star systems. In the news roundup, semi-regular contributor Gus Hurwitz does a post-mortem on the oral argument in the Microsoft-Ireland case. Maury notes that Google has issued its most detailed report yet on how it's implementing the right to be forgotten. My takeaway: Apart from censoring media in their own countries, everyone's favorite censorship targets seem to be U.S. sites. I am not comforted that 90 percent of the censorship stays home, since the rest of it seems aimed at keeping true facts from, well, me. Gus evaluates the latest Securities and Exchange Commission cybersecurity guidance. Bottom line: no surprises, but a good thing nonetheless. I do a quick recap of the CFIUS butcher's bill for Chinese deals. It's every bit as ugly as you'd expect. The Xcerra and Cogint deals have collapsed over chip and personal data worries. The Genworth deal is on the bubble. And CFIUS is taking unprecedented action to intervene in the Qualcomm-Broadcom proxy fight. A new contributor, Megan Reiss of the R Street Institute, unpacks a couple of new security industry reports covering the emergence of false flags at the Olympics and the increasingly blurred line between criminal and state cyberespionage. Maury covers the latest EU effort to wrongfoot Big Tech over scrubbing terrorist content. And I try to broaden the point, noting that the idea of a tech “platform” immunity has begun to fray even in the US, the land of its birth. For those listeners afraid to traverse the feverswamps of conservative media, I bring back a story that shows why the loss of Big Tech platform immunity is shaping up as a bipartisan issue. Would you believe that CNN has bought an industrial washing machine so that it can spin stories more efficiently before airing them?  Do you need Snopes.com to tell you that's satire? Does anyone need an anonymous Big Tech finger-wagger to tell you it's fake news and threaten the site with penalties for repeat offenses? If not, you can see the right is uncomfortable with Big Tech as media gatekeeper. Finally, as a bit of comic relief, last week Edward Snowden took to Twitter to criticize Apple for posing as a protector of privacy while actually cozying up to a dictatorship. Really. You can't make this stuff up. As always The Cyberlaw Podcast is open to feedback.  Send your questions, suggestions for interview candidates or topics to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785. The Cyberlaw Podcast is thinking of hiring a part-time intern for our Washington, DC offices. If you are interested, visit our website in the next week or so at Steptoe.com/careers. Download the 206th Episode (mp3). Subscribe to The Cyberlaw Podcast here.  We are also on iTunes, Pocket Casts, and Google Play (available for Android and Google Chrome)! 
The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

The Cyberlaw Podcast
Interview with Miles Brundage and Shahar Avin

The Cyberlaw Podcast

Play Episode Listen Later Mar 5, 2018 57:07


In our 206th episode of The Cyberlaw Podcast, Stewart Baker, Maury Shenk, Megan Reiss and Gus Hurwitz discuss: evaluating the oral argument in Microsoft’s Ireland case; Google issues a report on how it’s implementing the Right To Be Forgotten; the Securities and Exchange Commission issues cybersecurity guidance; CFIUS: Chinese bodies keep piling up: Xcerra deal fails; Cogint fails too; and Genworth is on the bubble; next steps in attribution: false flags at the Olympics; Facebook, Google get one hour from the European Union to scrub terror content; related: Section 230 “platform” immunity begins to fray in the land of its birth; why this will end in tears; the story; the apology; blurred line between criminal and state cyberespionage; Edward Snowden criticizes Apple for posing as a protector of privacy while actually cozying up to a dictatorship. Words fail me; should we be worried about interstellar hacks? Our guest interview features Miles Brundage, AI Policy Research Fellow at the Future of Humanity Institute at Oxford, and Shahar Avin of the Centre for the Study of Existential Risk and Research Associate at Cambridge, who discuss their newly released paper The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

Algocracy and Transhumanism Podcast
Episode #35 – Brundage on the Case for Conditional Optimism about AI

Algocracy and Transhumanism Podcast

Play Episode Listen Later Jan 15, 2018


In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford’s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI).

Philosophical Disquisitions
Episode #35 - Brundage on the Case for Conditional Optimism about AI

Philosophical Disquisitions

Play Episode Listen Later Jan 15, 2018


In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford's Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here). Show Notes: 0:00 - Introduction | 1:00 - Why did Miles write the conditional case for AI optimism? | 5:07 - What is AI anyway? | 8:26 - The difference between broad and narrow forms of AI | 12:00 - Is the current excitement around AI hype or reality? | 16:13 - What is the conditional case for AI conditional upon? | 22:00 - The First Argument: The Value of Task Expedition | 29:30 - The downsides of task expedition and the problem of speed mismatches | 33:28 - How AI changes our cognitive ecology | 36:00 - The Second Argument: The Value of Improved Coordination | 40:50 - Wouldn't AI be used for malicious purposes too? | 45:00 - Can we create safe AI in the absence of global coordination? | 48:03 - The Third Argument: The Value of a Leisure Society | 52:30 - Would a leisure society really be utopian? | 56:24 - How were Miles's arguments received when presented at the EU parliament? Relevant Links: Miles's Homepage | Miles's past publications | Miles at the Future of Humanity Institute | Video of Miles's presentation to the EU Parliament (starts at approx 10:05:19, or 1 hour and 1 minute into the video) | Olle Haggstrom's write-up about the EU parliament event | 'Cognitive Scarcity and Artificial Intelligence' by Miles Brundage and John Danaher

EARadio
EAG 2017 SF: Working in AI (multiple speakers)

EARadio

Play Episode Listen Later Nov 3, 2017 49:34


Working in AI with Jan Leike, Andrew Snyder-Beattie, Malo Bourgon, Miles Brundage, and Helen Toner. Source: Effective Altruism Global (video).

80,000 Hours Podcast with Rob Wiblin
#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 5, 2017 55:15


Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do. This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy. Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.