Podcast appearances and mentions of Miles Brundage

  • 24 PODCASTS
  • 28 EPISODES
  • 59m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jan 24, 2025 LATEST

POPULARITY

2017–2024


Best podcasts about Miles Brundage

Latest podcast episodes about Miles Brundage

ChinaTalk
EMERGENCY POD: DeepSeek R1 and the Future of AI Competition with Miles Brundage

ChinaTalk

Play Episode Listen Later Jan 24, 2025 32:33


Miles Brundage, a six-year OpenAI vet who ran its Policy Research and AGI Readiness arms, discusses why all your DeepSeek takes are so terrible. Outro music: The Departure, Max Richter https://www.youtube.com/watch?v=8R5Ppb9wqjY Learn more about your ad choices. Visit megaphone.fm/adchoices

Mixture of Experts
Episode 36: OpenAI o3, DeepSeek-V3, and the Brundage/Marcus AI bet

Mixture of Experts

Play Episode Listen Later Jan 3, 2025 39:19


Is deep learning hitting a wall? It's 2025 and Mixture of Experts is back and better than ever. In episode 36, host Tim Hwang is joined by Chris Hay, Kate Soule and Kush Varshney to debrief one of the biggest releases of 2024, OpenAI o3. Next, DeepSeek-V3 is here! Finally, will AI exist in 2027? The experts dissect the AI bet between Miles Brundage and Gary Marcus. All that and more on the first Mixture of Experts of 2025. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. 00:00 — Intro; 00:49 — OpenAI o3; 14:40 — DeepSeek-V3; 28:00 — The Brundage/Marcus bet

The Retort AI Podcast
The Retort's biggest AI stories of 2024

The Retort AI Podcast

Play Episode Listen Later Dec 6, 2024 47:45


We're back! Tom and Nate catch up after the Thanksgiving holiday. Our main question was -- what were the biggest AI stories of the year? We touch on the core themes of the show: infrastructure, AI realities, and antitrust. The power buildout to scale out AI is going to have very real long-term impacts. Some links this week: * Ben Thompson's The End of the Beginning: https://stratechery.com/2020/the-end-of-the-beginning/ * Miles Brundage's Substack: https://milesbrundage.substack.com/p/why-im-leaving-openai-and-what-im * Stochastic Parrots paper: https://dl.acm.org/doi/10.1145/3442188.3445922 Thanks for listening! Get The Retort (https://retortai.com/) … on YouTube: https://www.youtube.com/@TheRetortAIPodcast … on Spotify: https://open.spotify.com/show/0FDjH8ujv7p8ELZGkBvrfv?si=fa17a4d408f245ee … on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-retort-ai-podcast/id1706223190 … Follow Interconnects: https://www.interconnects.ai/ … email us: mail@retortai.com

Sway
Billionaire Game Theory + We Are Not Ready for A.G.I. + Election Betting Markets Get Weird

Sway

Play Episode Listen Later Nov 1, 2024 71:28


Last week, Jeff Bezos canceled the Washington Post editorial board's plan to endorse Kamala Harris. Are tech billionaires hedging their bets in case Donald Trump wins? Then, Miles Brundage, a former OpenAI senior adviser on artificial general intelligence readiness, stops by to tell us how his old company is doing when it comes to being ready for superintelligence, and whether we should all keep saving for retirement. And finally, David Yaffe-Bellany, a Times technology reporter, joins us to explore the rise of Polymarket, a crypto-powered betting platform, and discuss whether prediction markets can tell us who is going to win the election. Guests: Miles Brundage, former OpenAI senior adviser for A.G.I. readiness; David Yaffe-Bellany, technology reporter for The New York Times. Additional Reading: Jeff Bezos, Elon Musk and the Billions of Ways to Influence an Election; Miles Brundage on Why He's Leaving OpenAI; The Crypto Website Where the Election Odds Swing in Trump's Favor. We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Learning Tech Talks
Weekly Update | OpenAI Safety Dismantled | Meta Lawsuit | AI Wrongful Death | Chanel AI Mishap | AI vs. Humans Research

Learning Tech Talks

Play Episode Listen Later Nov 1, 2024 58:47


Happy Friday, everyone, and congratulations on making it through another week. What better way to kick off November 2024 than a rundown on the latest happenings at the intersection of business, technology, and human experience? As usual, I picked five of my favorites. With that, let's get into it. OpenAI Safety Team Disbands, Again - OpenAI is making headlines as their safety team falls apart yet again after losing executive Miles Brundage. While some of the noise around it is likely just noise, his cryptic warning that OpenAI is not ready for what it's created has some folks rightfully perking up their eyes and ears. Meta Social Media Lawsuits - While big tech companies keep trying to use Section 230 as an immunity shield from the negative impact of social media, a judge has determined lawsuits will be allowed. What exactly that will mean for Meta and other big tech companies is still TBD, but they will see their day in court. Google & Character.AI Sued - It's tragic whenever someone takes their life. It's even more tragic when it's a teenager fueled to take the path by an AI bot. While AI bots are promoted as "for entertainment purposes only," it's obvious entertainment isn't the only outcome. We continue seeing new legal precedents being established, and it's just the beginning. GenAI Bias Flub with Chanel - I'm not exactly sure what Chanel's CEO Leena Nair expected when she asked AI to create an image of her executive team or why on earth anyone at Microsoft moved forward with the request during her headquarters visit. However, it demonstrated how far we still have to go in mitigating bias in AI training data and why it's so important to use AI properly. AI vs. Humans Research - Where is AI better than humans and vice versa? A recent study tried to answer that question. Unfortunately, while the data validates many of the things we already know, it also is ripe for cherry-picking, depending on the story you're trying to tell. While there were some interesting findings, I won't be retracting any of my previous statements based on the results. #ai #ethicalAI #Meta #Microsoft #lawsuit

The Marketing AI Show
#121: New Claude 3.5 Sonnet and Computer Use, Wild OpenAI "Orion" Rumors, Dark Side of AI Companions & Ex-OpenAI Researcher Sounds Alarm on AGI

The Marketing AI Show

Play Episode Listen Later Oct 29, 2024 76:19


Next-gen models emerge while safety concerns reach a boiling point. Join Mike Kaput and Paul Roetzer as they unpack last week's wave of AI updates, including Anthropic's Claude 3.5 models and computer use capabilities, plus the brewing rumors about OpenAI's "Orion" and Google's Gemini 2.0. In our other main topics, we review the tragic Florida case raising alarms about AI companion apps, and ex-OpenAI researcher Miles Brundage's stark warnings about AGI preparedness. Today's episode is brought to you by rasa.io. Rasa.io makes staying in front of your audience easy. Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant but compelling. Visit rasa.io/maii and sign up with the code 5MAII for an exclusive 5% discount for podcast listeners. Today's episode is also brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Wednesday, November 20. Visit www.aiforagencies.com and use the code POD100 for $100 off your ticket. 00:05:04 — AI Model Releases and Rumors: New Claude Model + Computer Use, Claude Analysis Tool, OpenAI Doubles Down on AI for Code, Perplexity Pro Reasoning Update, Runway Act-One, Eleven Labs Voice Design, Stable Diffusion 3.5, The Rumors 00:27:07 — The Dark Side of AI Companions 00:39:29 — Ex-OpenAI Researcher Sounds Alarm on AGI Preparedness 00:47:57 — AI + National Security 00:53:14 — Microsoft vs. Salesforce Over Agents 00:57:08 — Disney AI Initiative 01:00:17 — Apple Intelligence Photos 01:03:03 — Google Open Sourcing SynthID 01:06:32 — OpenAI + Fair Use 01:10:43 — Using Gemini to Prep for Public Speaking. Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute

Let's Talk AI
#187 - Anthropic Agents, Mochi1, 3.4B data center, OpenAI's FAST image gen

Let's Talk AI

Play Episode Listen Later Oct 28, 2024 129:38


Our 187th episode with a summary and discussion of last week's big AI news, now with Jeremie co-hosting once again! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris) Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Timestamps + Links: (00:00:00) Intro / Banter (00:03:07) Response to listener comments / corrections (00:05:13) Sponsor Read. Tools & Apps: (00:06:22) Anthropic's latest AI update can use a computer on its own (00:18:09) AI video startup Genmo launches Mochi 1, an open source rival to Runway, Kling, and others (00:20:37) Canva has a shiny new text-to-image generator (00:23:35) Canvas Beta brings Remix, Extend, and Magic Fill to Ideogram users (00:26:16) StabilityAI releases Stable Diffusion 3.5 (00:28:27) Bringing Agentic Workflows into Inflection for Enterprise. Applications & Business: (00:32:35) Crusoe's $3.4B joint venture to build AI data center campus with up to 100,000 GPUs (00:39:08) Anthropic reportedly in early talks to raise new funding on up to $40B valuation (00:45:47) Longtime policy researcher Miles Brundage leaves OpenAI (00:49:53) NVIDIA's Blackwell GB200 AI Servers Ready For Mass Deployment In December (00:52:41) Foxconn building Nvidia superchip facility in Mexico, executives say (00:55:27) xAI, Elon Musk's AI startup, launches an API. Projects & Open Source: (00:58:32) INTELLECT-1: The First Decentralized 10-Billion-Parameter AI Model Training (01:06:34) Meta FAIR Releases Eight New AI Research Artifacts—Models, Datasets, and Tools to Inspire the AI Community (01:10:02) Google DeepMind is making its AI text watermark open source. Research & Advancements: (01:13:21) OpenAI researchers develop new model that speeds up media generation by 50X (01:17:54) How much AI compute is out there, and who owns it? (01:25:28) Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning (01:33:30) Inference Scaling for Long-Context Retrieval Augmented Generation. Policy & Safety: (01:41:50) Announcing our updated Responsible Scaling Policy (01:48:52) Anthropic is testing AI's capacity for sabotage (01:56:30) OpenAI asked US to approve energy-guzzling 5GW data centers, report says (02:00:05) US Probes TSMC's Dealings with Huawei (02:03:03) TikTok owner ByteDance taps TSMC to make its own AI GPUs to stop relying on Nvidia — the company has reportedly spent over $2 billion on Nvidia AI GPUs (02:06:37) Outro

Grumpy Old Geeks
671: Lorum Ipsum Is My Sister

Grumpy Old Geeks

Play Episode Listen Later Oct 26, 2024 67:34


San Fran embracing self-driving cars; not-Bitcoin creator in hiding; i h8 ai; anti-AI artist open letter; X updates their policies; more people leave OpenAI; SynthID; 23andMe and your genetic data; no more fake online reviews; private equity acquires Squarespace; right to repair; Tesla Blade Runner AI ripoff; Star Trek frogs; the Riker Maneuver; de-extinction; a whole slew of great new shows dropping - Star Trek, Dune, Silo & more; Good Omens season 3 now just a movie; Cruel World and nostalgia fatigue; are we not retired? we are Devo; Fresco, free; Penguin adds a robots.txt file to their books.
Sponsors: HelloFresh - Get 10 FREE meals at HelloFresh.com/freegog. Private Internet Access - Go to GOG.Show/vpn and sign up today; for a limited time only, you can get OUR favorite VPN for as little as $2.03 a month. SetApp - With a single monthly subscription you get 240+ apps for your Mac; go to SetApp and get started today! 1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password. DeleteMe - Head over to JoinDeleteMe.com/GOG and use the code "GOG" for 20% off. 1Password Extended Access Management - Check it out at 1Password.com/grumpyoldgeeks; secure every sign-in for every app on every device.
Show notes at https://gog.show/671
FOLLOW UP: How San Francisco Learned to Love Self-Driving Cars; Peter Todd Is in Hiding After a Documentary Named Him as Bitcoin's Creator
IN THE NEWS: i h8 ai; More than 10,500 artists sign open letter protesting unlicensed AI training; X updates its privacy policy to allow third parties to train AI models with its data; Former OpenAI Researcher Says the Company Broke Copyright Law; OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism; ByteDance intern fired for planting malicious code in AI models; Longtime policy researcher Miles Brundage leaves OpenAI; Google offers its AI watermarking tech as free open source toolkit; 23andMe faces an uncertain future — so does your genetic data; A federal ban on fake online reviews is now in effect; Private Equity Firm Permira Acquires Squarespace for $7.2 Billion; The Feds Are Coming for John Deere Over the Right to Repair; Meta bans private jet tracking accounts on Instagram and Threads; Elon Musk, Tesla and WBD sued over alleged 'Blade Runner 2049' AI ripoff for Cybercab promotion; Seven newly named frog species make whistles that sound like Star Trek; Riker sits down; De-extinction company provides a progress report on thylacine efforts
MEDIA CANDY: Shrinking S2 - Out now; The Diplomat S2 - Oct 31; Star Trek: Lower Decks S5 - Oct 24; Silo S2 - Nov 15; Dune: Prophecy - Nov 17; Star Trek: Section 31 - Jan 25, 2025; Star Trek: Strange New Worlds S3 - 2025; 'Star Trek: Starfleet Academy' Gets Early Season 2 Renewal, Adds Tatiana Maslany As Recurring; 'Black Mirror': 'Outer Banks' & 'She Hulk' Actor Nicholas Cirillo Joins Cast Of Season 7; 'Good Omens' To End With One 90-Minute Episode As Neil Gaiman Exits Following Sexual Assault Allegations; Midnight Mass; Buffy the Vampire Slayer Is Finally Streaming for Free in Time for Halloween; The Lincoln Lawyer Season 3; Lioness | Season 2 Sneak Peek | Paramount+ - Oct 27th; Cruel World Fest; Devo Has the Uncontrollable Urge to Retire
APPS & DOODADS: Adobe made its painting app completely free to take on Procreate; Midjourney launches AI image editor: how to use it; Startup School: Gen AI; AI in Marketing: Fast-track your skills; Perplexity AI app for macOS now available on the Mac App Store; Bluesky Teases Creator Payments While New Sign-Ups Explode After Elon Musk's Destruction of Twitter; New AirPods Pro 2 firmware now available for iOS 18.1's hearing health features; Apple's macOS Sequoia lets you snap windows into position — here's how; Web Design Museum; Diff Text - Compare Text Online; SetApp; JOIN TIMBALAND AND DISCOVER HOW SUNO CAN ELEVATE YOUR SOUND; San Francisco to pay $212 million to end reliance on 5.25-inch floppy disks
AT THE LIBRARY: Penguin Adds a Do-Not-Scrape-for-AI Page to Its Books; Bookcase by Astropad
CLOSING SHOUT-OUTS: Philip G. Zimbardo, the Stanford psychologist behind the controversial 'Stanford Prison Experiment', dies at 91; Ward Christensen, BBS inventor and architect of our online age, dies at age 78; Dodgers icon Fernando Valenzuela is gone, but 'Fernandomania' will live forever.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Tech Update | BNR
'OpenAI komt in december met opvolger GPT-4 taalmodel'

Tech Update | BNR

Play Episode Listen Later Oct 25, 2024 6:11


On November 30 it will be two years since ChatGPT launched, back then with the GPT-3.5 language model. ChatGPT has since grown considerably in capability with GPT-4, but before the end of the year OpenAI is expected to present the next generation of its language model. That is what tech site The Verge reports, based on sources. Whether it will be called 'GPT-5' is not known, but internally OpenAI is said to be working under the codename Orion, named after a constellation that is generally visible between November and February. The expectation is that the new model will be presented around ChatGPT's anniversary, but will not be immediately available to the general public; it will first be tested further internally with partners. The model is reportedly already being trained on 'synthetic data', data generated by the current language models. OpenAI is going through a hectic period. The company recently raised $6.6 billion in investment and plans to become a for-profit company. At the same time, many senior people have left the company in recent months. Just last week AGI adviser Miles Brundage departed; he thinks nobody is ready for AGI, OpenAI itself included. Also in this Tech Update: British competition watchdog CMA opens a formal investigation into the deal between Google parent Alphabet and AI startup Anthropic; Apple teases a 'week full of exciting announcements', possibly with a new iMac, MacBook Pro and Mac Mini. See omnystudio.com/listener for privacy information.

Philosophical Disquisitions
108 - Miles Brundage (Head of Policy Research at Open AI) on the speed of AI development and the risks and opportunities of GPT

Philosophical Disquisitions

Play Episode Listen Later May 3, 2023


[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be. Subscribe to the newsletter

The Nonlinear Library
EA - Future Matters #4: AI timelines, AGI risk, and existential risk from climate change by Pablo

The Nonlinear Library

Play Episode Listen Later Aug 8, 2022 28:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #4: AI timelines, AGI risk, and existential risk from climate change, published by Pablo on August 8, 2022 on The Effective Altruism Forum. But if it is held that each generation can by its own deliberate acts determine for good or evil the destinies of the race, then our duties towards others reach out through time as well as through space, and our contemporaries are only a negligible fraction of the “neighbours” to whom we owe obligations. The ethical end may still be formulated, with the Utilitarians, as the greatest happiness of the greatest number [...] This extension of the moral code, if it is not yet conspicuous in treatises on Ethics, has in late years been obtaining recognition in practice. John Bagnell Bury Future Matters is a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Research Jacob Steinhardt's AI forecasting: one year in reports and discusses the results of a forecasting contest on AI progress that the author launched a year ago. Steinhardt's main finding is that progress on all three capability benchmarks occurred much faster than the forecasters predicted. Moreover, although the forecasters performed poorly, they would—in Steinhardt's estimate—probably have outperformed the median AI researcher. That is, the forecasters in the tournament appear to have had more aggressive forecasts than the experts did, yet their forecasts turned out to be insufficiently, rather than excessively, aggressive. The contest is still ongoing; you can participate here. Tom Davidson's Social returns to productivity growth estimates the long-run welfare benefits of increasing productivity via R&D funding to determine whether it might be competitive with other global health and wellbeing interventions, such as cash transfers or malaria nets. Davidson's toy model suggests that average returns to R&D are roughly 20 times lower than Open Philanthropy's minimum bar for funding in this space. He emphasizes that only very tentative conclusions should be drawn from this work, given substantial limitations to his modelling. Miles Brundage discusses Why AGI timeline research/discourse might be overrated. He suggests that more work on the issue has diminishing returns, and is unlikely to narrow our uncertainty or persuade many more relevant actors that AGI could arrive soon. Moreover, Brundage is somewhat skeptical of the value of timelines information for decision-making by important actors. In the comments, Adam Gleave reports finding such information useful for prioritizing within technical AI safety research, and Carl Shulman points to numerous large philanthropic decisions whose cost-benefit depends heavily on AI timelines. In Two-year update on my personal AI timelines, Ajeya Cotra outlines how her forecasts for transformative AI (TAI) have changed since 2020. Her timelines have gotten considerably shorter: she now puts ~35% probability density on TAI by 2036 (vs. 15% previously) and her median TAI date is now 2040 (vs. 2050). One of the drivers of this update is a somewhat lowered threshold for TAI. 
While Cotra was previously imagining that a TAI model would have to be able to automate most of scientific research, she now believes that AI systems able to automate most of AI/ML research specifically would be sufficient to set off an explosive feedback loop of accelerating capabilities. Back in 2016, Katja Grace and collaborators ran a survey of machine learning researchers, the main results of which were published the following year. Grace's What do ML researchers think about AI in 2022? reports on the preliminary re...

SuperDataScience
SDS 597: A.I. Policy at OpenAI

SuperDataScience

Play Episode Listen Later Aug 2, 2022 83:17


Dr. Miles Brundage, Head of Policy Research at OpenAI, joins Jon Krohn this week to discuss AI model production, policy, safety, and alignment. Tune in to hear him speak on GPT-3, DALL-E, Codex, and CLIP as well. In this episode you will learn: • Miles' role as Head of Policy Research at OpenAI [4:35] • OpenAI's DALL-E model [7:20] • OpenAI's natural language model GPT-3 [30:43] • OpenAI's automated software-writing model Codex [36:57] • OpenAI's CLIP model [44:01] • What sets AI policy, AI safety, and AI alignment apart from each other [1:07:03] • How A.I. will likely augment more professions than it displaces [1:12:06] Additional materials: www.superdatascience.com/597

The Nonlinear Library
EA - Why AGI Timeline Research/Discourse Might Be Overrated by Miles Brundage

The Nonlinear Library

Play Episode Listen Later Jul 3, 2022 16:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI Timeline Research/Discourse Might Be Overrated, published by Miles Brundage on July 3, 2022 on The Effective Altruism Forum. TL;DR: Research and discourse on AGI timelines aren't as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked. Introduction: A very common subject of discussion among EAs is “AGI timelines.” Roughly, AGI timelines, as a research or discussion topic, refer to the time that it will take before very general AI systems meriting the moniker “AGI” are built, deployed, etc. (one could flesh this definition out and poke at it in various ways, but I don't think the details matter much for my thesis here—see “What this post isn't about” below). After giving some context and scoping, I argue below that while important in absolute terms, improving the quality of AGI timelines isn't as useful as it may first appear. Just in the past few months, a lot of digital ink has been spilled, and countless in-person conversations have occurred, about whether recent developments in AI (e.g. DALL-E 2.0, Imagen, PaLM, Minerva) suggest a need for updating one's AGI timelines to be shorter. Interest in timelines has informed a lot of investment in surveys, research on variables which may be correlated with timelines like compute, etc. At least dozens of smart-person-years have been spent on this question; possibly the number is more like hundreds or thousands. AGI timelines are, at least a priori, very important to reduce uncertainty about, to the extent that's possible. Whether one's timelines are “long” or “short” could be relevant to how one makes career investments—e.g. “exploiting” by trying to maximize influence over AI outcomes in the near-term, or “exploring” by building up skills that can be leveraged later. Timelines could also be relevant to what kinds of alignment research directions are useful, and which policy levers to consider (e.g. whether a plan that may take decades to pan out is worth seriously thinking about, or whether the “ship will have sailed” before then). I buy those arguments to an extent, and indeed I have spent some time myself working on this topic. I've written or co-authored various papers and blog posts related to AI progress and its conceptualization/measurement, I've contributed to papers and reports that explicitly made forecasts about what capabilities were plausible on a given time horizon, and I have participated in numerous surveys/scenario exercises/workshops/conferences etc. where timelines loomed large. And being confused/intrigued by people's widely varying timelines is part of how I first got involved in AI, so it has a special place in my heart. I'll certainly keep doing some things related to timelines myself, and think some others with special knowledge and skills should also continue to do so. But I think that, as with many research and discussion topics, there are diminishing returns on trying to understand AGI timelines better and talking widely about them. A lot of the low-hanging fruit from researching timelines has already been plucked, and even much higher levels of certainty on this question (if that were possible) wouldn't have all the benefits that might naively be suspected.
I'm not sure exactly how much is currently being invested in timeline research, so I am deliberately vague here as to how big of a correction, if any, is actually needed compared to the current level of investment. As a result of feedback on this post, I may find out that there's actually less work on this than I thought, that some of my arguments are weaker than I thought, etc. and update my views. But currently, while I think timelines should be valued very highly compared to a random research topic, I suspect that many reading thi...

Papers Read on AI
Evaluating Large Language Models Trained on Code

Papers Read on AI

Play Episode Listen Later Jun 28, 2022 53:01


We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics. 2021: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, F. Such, D. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, I. Babuschkin, S. Balaji, Shantanu Jain, A. Carr, J. Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, M. Knight, Miles Brundage, Mira Murati, Katie Mayer, P. Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba https://arxiv.org/pdf/2107.03374v2.pdf
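
The 28.8% and 70.2% figures above are pass@k solve rates on HumanEval. A minimal sketch of the unbiased pass@k estimator the paper describes, assuming NumPy is available (the function name here is illustrative, not from the episode):

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k for one problem: n samples drawn, c of them pass the unit tests."""
        if n - c < k:
            return 1.0  # every size-k subset must contain at least one passing sample
        # Numerically stable form of 1 - C(n - c, k) / C(n, k)
        return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

    # Example: 100 samples per problem, 20 of them pass -> pass@1 is about 0.2, pass@100 is 1.0
    print(pass_at_k(100, 20, 1), pass_at_k(100, 20, 100))

Averaging this quantity over all HumanEval problems gives the headline solve rates quoted in the abstract.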

The Gradient Podcast
Miles Brundage on AI Misuse and Trustworthy AI

The Gradient Podcast

Play Episode Listen Later Nov 23, 2021 54:03


In episode 17 of The Gradient Podcast, we talk to Miles Brundage, Head of Policy Research at OpenAI and a researcher passionate about the responsible governance of artificial intelligence. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Links: Will Technology Make Work Better for Everyone? | Economic Possibilities for Our Children: Artificial Intelligence and the Future of Work, Education, and Leisure | Taking Superintelligence Seriously | The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation | Release Strategies and the Social Impact of Language Models | All the News that's Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation | Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. Timeline: (00:00) Intro (01:05) How did you get started in AI (07:05) Writing about AI on Slate (09:20) Start of PhD (13:00) AI and the End of Scarcity (18:12) Malicious Uses of AI (28:00) GPT-2 and Publication Norms (33:30) AI-Generated Text for Misinformation (37:05) State of AI Misinformation (41:30) Trustworthy AI (48:50) OpenAI Policy Research Team (53:15) Outro. Miles is a researcher and research manager, and is passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before that, he was a Research Fellow at the University of Oxford's Future of Humanity Institute, where he is still a Research Affiliate. He also serves as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology from Arizona State University in 2019. Podcast Theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Hosted by Andrey Kurenkov (@andrey_kurenkov), a PhD student with the Stanford Vision and Learning Lab working on learning techniques for robotic manipulation and search. Get full access to The Gradient at thegradientpub.substack.com/subscribe

Papers Read on AI
Evaluating Large Language Models Trained on Code

Papers Read on AI

Play Episode Listen Later Aug 22, 2021 52:47


We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. 2021: Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, J. Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, F. Such, D. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, I. Babuschkin, S. Balaji, Shantanu Jain, A. Carr, J. Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, M. Knight, Miles Brundage, Mira Murati, Katie Mayer, P. Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba https://arxiv.org/pdf/2107.03374.pdf

Gradient Dissent - A Machine Learning Podcast by W&B
Societal Impacts of AI with Miles Brundage

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Jun 30, 2020 62:25


Miles Brundage researches the societal impacts of artificial intelligence and how to make sure they go well. In 2018, he joined OpenAI as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford's Future of Humanity Institute and served as a member of Axon's AI and Policing Technology Ethics Board. Keep up with Miles on his website: https://www.milesbrundage.com/ and on Twitter: https://twitter.com/miles_brundage Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

80,000 Hours Podcast with Rob Wiblin
#54 - OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Mar 19, 2019 173:39


OpenAI's Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show? In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you're rotating an object in your hand, you sense its friction, but you don't directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map and perceive what's there by moving around – metaphorically 'touching' the space. Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it. Links to learn more, summary and full transcript. This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development will likely have unpredictable consequences, heightening the huge challenges that already exist in AI policy. Today's interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2. We discuss: • What are the most significant changes in the AI policy world over the last year or two? • What capabilities are likely to develop over the next five, 10, 15, 20 years? • How much should we focus on the next couple of years, versus the next couple of decades? • How should we approach possible malicious uses of AI? • What are some of the potential ways OpenAI could make things worse, and how can they be avoided? • Publication norms for AI research • Where do we stand in terms of arms races between countries or different AI labs? • The case for creating newsletters • Should the AI community have a closer relationship to the military? • Working at OpenAI vs. working in the US government • How valuable is Twitter in the AI policy world? Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss: • The reaction to OpenAI's release of GPT-2 • Jack's critique of our US AI policy article • How valuable are roles in government? • Where do you start if you want to write content for a specific audience? Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Dissecting the Controversy around OpenAI's New Language Model - TWiML Talk #234

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Feb 25, 2019 66:22


If you’re listening to this podcast, you’ve likely seen some of the press coverage and discussion surrounding the release, or lack thereof, of OpenAI’s new GPT-2 Language Model. The announcement caused quite a stir, with reactions spanning confusion, frustration, concern, and many points in between. Several days later, many open questions remained about the model and the way the release was handled. Seeing the continued robust discourse, and wanting to offer the community a forum for exploring this topic with more nuance than Twitter’s 280 characters allow, we convened the inaugural “TWiML Live” panel. I was joined on the panel by Amanda Askell and Miles Brundage of OpenAI, Anima Anandkumar of NVIDIA and CalTech, Robert Munro of Lilt, and Stephen Merity, the latter being some of the most outspoken voices in the online discussion of this issue. Our discussion thoroughly explored the many issues surrounding the GPT-2 release controversy. We cover the basics like what language models are and why they’re important, and why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many. The discussion initially aired via Youtube Live and we’re happy to share it with you via the podcast as well. To be clear, both the panel discussion and live stream format were a bit of an experiment for us and we’d love to hear your thoughts on it. Would you like to see, or hear, more of these TWiML Live conversations? If so, what issues would you like us to take on? If you have feedback for us on the format or if you’d like to join the discussion around OpenAI’s GPT-2 model, head to the show notes page for this show at twimlai.com/talk/234 and leave us a comment.

Eye On A.I.
Episode 5 - Miles Brundage

Eye On A.I.

Play Episode Listen Later Nov 6, 2018 22:33


In this episode of Eye on AI, I talk to Miles Brundage, who studies the societal impacts of artificial intelligence and works on the policy team of OpenAI, the nonprofit A.I. research company co-founded by Elon Musk. When I spoke to Miles, he was a research fellow at the University of Oxford's Future of Humanity Institute, where he remains an associate. We talked about the policy side of AI security and whether he is optimistic that regulations can steer machine learning applications away from the nightmare scenarios popularly imagined. I hope you find Miles as interesting as I did.

Top of Mind with Julie Rose
Hawaii's Erupting Volcano, Motherhood and Politics, Carbon Neutrality

Top of Mind with Julie Rose

Play Episode Listen Later May 23, 2018 101:52


Estelle Chaussard explains why Kilauea keeps erupting. Debra Schilling Wolfe of the Univ of PA explains why homeless youth are victims of human trafficking. Laurel Elder of Hartwick College points out a new emphasis on motherhood in campaigns. Miles Brundage of Arizona State Univ discusses the future of A.I. Storyteller Sam Payne of The Apple Seed. Lera Boroditsky of UCSD argues that language shapes the way we think. Nobel Prize winner William Moomaw questions EPA statement on carbon neutrality.

Y Combinator
#72 - Miles Brundage and Tim Hwang

Y Combinator

Play Episode Listen Later Apr 25, 2018 46:18


Miles Brundage is an AI Policy Research Fellow with the Strategic AI Research Center at the Future of Humanity Institute. He is also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. Miles recently co-authored The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Tim Hwang is the Director of the Harvard-MIT Ethics and Governance of AI Initiative. He is also a Visiting Associate at the Oxford Internet Institute and a Fellow at the Knight-Stanford Project on Democracy and the Internet. This is Tim's second time on the podcast; he was also on episode 11. The YC podcast is hosted by Craig Cannon.

The Cyberlaw Podcast
Interview with Miles Brundage and Shahar Avin

The Cyberlaw Podcast

Play Episode Listen Later Mar 5, 2018 57:07


In our 206th episode of The Cyberlaw Podcast, Stewart Baker, Maury Shenk, Megan Reiss and Gus Hurwitz discuss: evaluating the oral argument in Microsoft's Ireland case; Google issues a report on how it's implementing the Right To Be Forgotten; the Securities and Exchange Commission issues cybersecurity guidance; CFIUS: Chinese bodies keep piling up: Xcerra deal fails; Cogint fails too; and Genworth is on the bubble; next steps in attribution: false flags at the Olympics; Facebook, Google get one hour from the European Union to scrub terror content; related: Section 230 "platform" immunity begins to fray in the land of its birth; why this will end in tears; the story; the apology; blurred line between criminal and state cyberespionage; Edward Snowden criticizes Apple for posing as a protector of privacy while actually cozying up to a dictatorship. Words fail me; should we be worried about interstellar hacks? Our guest interview is with Miles Brundage, AI Policy Research Fellow at the Future of Humanity Institute at Oxford, and Shahar Avin of the Centre for the Study of Existential Risk and Research Associate at Cambridge, who discuss their newly released paper The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

Algocracy and Transhumanism Podcast
Episode #35 – Brundage on the Case for Conditional Optimism about AI

Algocracy and Transhumanism Podcast

Play Episode Listen Later Jan 15, 2018


In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford's Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI).

Philosophical Disquisitions
Episode #35 - Brundage on the Case for Conditional Optimism about AI

Philosophical Disquisitions

Play Episode Listen Later Jan 15, 2018


In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford's Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal of Responsible Innovation (JRI). His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here). Show Notes: 0:00 - Introduction; 1:00 - Why did Miles write the conditional case for AI optimism?; 5:07 - What is AI anyway?; 8:26 - The difference between broad and narrow forms of AI; 12:00 - Is the current excitement around AI hype or reality?; 16:13 - What is the conditional case for AI conditional upon?; 22:00 - The First Argument: The Value of Task Expedition; 29:30 - The downsides of task expedition and the problem of speed mismatches; 33:28 - How AI changes our cognitive ecology; 36:00 - The Second Argument: The Value of Improved Coordination; 40:50 - Wouldn't AI be used for malicious purposes too?; 45:00 - Can we create safe AI in the absence of global coordination?; 48:03 - The Third Argument: The Value of a Leisure Society; 52:30 - Would a leisure society really be utopian?; 56:24 - How were Miles's arguments received when presented at the EU parliament? Relevant Links: Miles's Homepage; Miles's past publications; Miles at the Future of Humanity Institute; Video of Miles's presentation to the EU Parliament (starts at approx 10:05:19 or 1 hour and 1 minute into the video); Olle Haggstrom's write-up about the EU parliament event; 'Cognitive Scarcity and Artificial Intelligence' by Miles Brundage and John Danaher. Subscribe to the newsletter

EARadio
EAG 2017 SF: Working in AI (multiple speakers)

EARadio

Play Episode Listen Later Nov 3, 2017 49:34


Working in AI with Jan Leike, Andrew Snyder-Beattie, Malo Bourgon, Miles Brundage, and Helen Toner. Source: Effective Altruism Global (video).

80,000 Hours Podcast with Rob Wiblin
#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jun 5, 2017 55:15


Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do. This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy. Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.