Podcast appearances and mentions of Thomas Larsen

  • 40 PODCASTS
  • 121 EPISODES
  • 45m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 14, 2025 LATEST

POPULARITY

(popularity chart: 2017-2024)


Best podcasts about Thomas Larsen

Latest podcast episodes about Thomas Larsen

Unsupervised Learning
Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years

Unsupervised Learning

May 14, 2025 · 83:27


The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI. Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME's 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings. We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.

(0:00) Intro (1:15) Overview of AI 2027 (2:32) AI Development Timeline (4:10) Race and Slowdown Branches (12:52) US vs China (18:09) Potential AI Misalignment (31:06) Getting Serious About the Threat of AI (47:23) Predictions for AI Development by 2027 (48:33) Public and Government Reactions to AI Concerns (49:27) Policy Recommendations for AI Safety (52:22) Diverging Views on AI Alignment Timelines (1:01:30) The Role of Public Awareness in AI Safety (1:02:38) Reflections on Insider vs. Outsider Strategies (1:10:53) Future Research and Scenario Planning (1:14:01) Best and Worst Case Outcomes for AI (1:17:02) Final Thoughts and Hopes for the Future

With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health; @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn; @ericabrescia - Former COO GitHub, Founder Bitnami (acq'd by VMware); @jordan_segall - Partner at Redpoint

LessWrong Curated Podcast
“Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis” by jeanne_, eeeee

LessWrong Curated Podcast

May 1, 2025 · 27:35


In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean has been discussed across Chinese language platforms. We present: our research methodology and synthesis of key findings across media artefacts; a proposal for how censorship patterns may provide signal for the Chinese government's thinking about AGI and the race to superintelligence; and a more detailed analysis of each of the nine artefacts, organised by type: Mainstream Media, Forum Discussion, Bilibili (Chinese YouTube) Videos, Personal Blogs.

Methodology: We conducted a comprehensive search across major Chinese-language platforms, including news outlets, video platforms, forums, microblogging sites, and personal blogs, to collect the media featured in this report. We supplemented this with Deep Research to identify additional sites mentioning AI 2027. Our analysis focuses primarily on content published in the first few days (4-7 April) following the report's release. More media [...]

Outline: (00:58) Methodology (01:36) Summary (02:48) Censorship as Signal (07:29) Analysis (07:53) Mainstream Media (07:57) English Title: Doomsday Timeline is Here! Former OpenAI Researcher's 76-page Hardcore Simulation: ASI Takes Over the World in 2027, Humans Become NPCs (10:27) Forum Discussion (10:31) English Title: What do you think of former OpenAI researcher's AI 2027 predictions? (13:34) Bilibili Videos (13:38) English Title: [AI 2027] A mind-expanding wargame simulation of artificial intelligence competition by a former OpenAI researcher (15:24) English Title: Predicting AI Development in 2027 (17:13) Personal Blogs (17:16) English Title: Doomsday Timeline: AI 2027 Depicts the Arrival of Superintelligence and the Fate of Humanity Within the Decade (18:30) English Title: AI 2027: Expert Predictions on the Artificial Intelligence Explosion (21:57) English Title: AI 2027: A Science Fiction Article (23:16) English Title: Will AGI Take Over the World in 2027? (25:46) English Title: AI 2027 Prediction Report: AI May Fully Surpass Humans by 2027 (27:05) Acknowledgements

First published: April 30th, 2025
Source: https://www.lesswrong.com/posts/JW7nttjTYmgWMqBaF/early-chinese-language-media-coverage-of-the-ai-2027-report

Narrated by TYPE III AUDIO.

Slate Star Codex Podcast
Introducing AI 2027

Slate Star Codex Podcast

Apr 14, 2025 · 8:10


Or maybe 2028, it's complicated In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn't expect what happened next. He got it all right. Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel's document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel's blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years. I wasn't the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized. Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, including: Eli Lifland, a superforecaster who is ranked first on RAND's Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models. Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion. Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle. Romeo Dean, a leader of Harvard's AI Safety Student Team and budding expert in AI hardware. …and me! Since October, I've been volunteering part-time, doing some writing and publicity work. I can't take credit for the forecast itself - or even for the lion's share of the writing and publicity - but it's been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we'll get as lucky as last time, but we still think it's a valuable contribution to the discussion. https://www.astralcodexten.com/p/introducing-ai-2027 https://ai-2027.com/

On The Brink with Castle Island
Weekly Roundup 04/03/25 (Circle S1, Tariffs, Galaxy settles, BitMEX pardons) (EP.609)

On The Brink with Castle Island

Apr 4, 2025 · 36:14


Matt and Nic are back for another week of news and deals. In this episode:
  • What's the deal with the tariffs? Are the tariffs 4D chess?
  • STABLE Act advances from committee
  • Galaxy settles with the NYAG for touting Luna
  • Trump pardons the BitMEX founders
  • Coinlist returns to the US
  • FDUSD has a depeg amidst Justin Sun drama
  • Fidelity launches a no-fee crypto IRA product
  • Larry Fink is bullish bitcoin
Content mentioned: AI 2027, by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

LessWrong Curated Podcast
“AI 2027: What Superintelligence Looks Like” by Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo

LessWrong Curated Podcast

Apr 3, 2025 · 54:30


In 2021 I wrote what became my most popular blog post: What 2026 Looks Like. I intended to keep writing predictions all the way to AGI and beyond, but chickened out and just published up till 2026. Well, it's finally time. I'm back, and this time I have a team with me: the AI Futures Project. We've written a concrete scenario of what we think the future of AI will look like. We are highly uncertain, of course, but we hope this story will rhyme with reality enough to help us all prepare for what's ahead. You really should go read it on the website instead of here, it's much better. There's a sliding dashboard that updates the stats as you scroll through the scenario! But I've nevertheless copied the first half of the story below. I look forward to reading your comments. Mid 2025: Stumbling Agents The [...]

Outline: (01:35) Mid 2025: Stumbling Agents (03:13) Late 2025: The World's Most Expensive AI (08:34) Early 2026: Coding Automation (10:49) Mid 2026: China Wakes Up (13:48) Late 2026: AI Takes Some Jobs (15:35) January 2027: Agent-2 Never Finishes Learning (18:20) February 2027: China Steals Agent-2 (21:12) March 2027: Algorithmic Breakthroughs (23:58) April 2027: Alignment for Agent-3 (27:26) May 2027: National Security (29:50) June 2027: Self-improving AI (31:36) July 2027: The Cheap Remote Worker (34:35) August 2027: The Geopolitics of Superintelligence (40:43) September 2027: Agent-4, the Superhuman AI Researcher

First published: April 3rd, 2025
Source: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1

Narrated by TYPE III AUDIO.

EXPresso
#117 Thomas Larsen: Oljefondet og det å forvalte 90 milliarder dollar

EXPresso

Feb 18, 2025 · 69:01


In episode #117 I talk with Thomas Larsen, lead portfolio manager at Oljefondet (the Norwegian Oil Fund), based in New York. Today we talk about: Thomas's early years and his exposure to the international scene; the start of his career at EY and how he created an opportunity to move to New York early on; what it is like to live in New York as a family man, and how to make everyday life add up; how nothing comes for free and you have to put in the hours; the difference between working a lot and working smart; the differences between Norway and the US; today's macro picture and perspectives on the time ahead; how Oljefondet works; and Thomas's day-to-day work with external managers and partners around the world, and strategies for navigating global markets.

P1 Morgen
P1 Live

P1 Morgen

Jan 1, 2025 · 55:00


P1 Morgen focuses on King Frederik's first New Year's address together with royal-house expert Thomas Larsen and DR's royal correspondent Cecilie Nielsen. We analyse the key messages, how they were delivered, and what goes on behind the scenes ahead of a royal New Year's address. Host: Bjarne Steensbeck.

Ledelsesalmanakken 2020
Tættere på sundheden: Almene boliger som nøgle

Ledelsesalmanakken 2020

Dec 1, 2024 · 34:15


How can the healthcare system get closer to citizens, both physically and on a human level? In this episode of Ledelsesalmanakken we explore how partnerships between regions, municipalities, and non-profit housing associations can make a difference for the health of Danes. Together with Bent Madsen, CEO of BL, Thomas Larsen, executive medical director in Region Midtjylland, and Claus Bjørn Billehøj, partner at Mobilize, we discuss concrete solutions that meet local needs: how do we create a healthcare system that meets people where the need is greatest? How do non-profit housing areas become the foundation for better health initiatives? And what does it take to build partnerships that make a difference? Listen in and get the answers! This month's guests: Bent Madsen, CEO of BL; Thomas Larsen, executive medical director in Region Midtjylland; and Claus Bjørn Billehøj, partner at Mobilize Strategy Consulting. Host: Ida Spannow, junior consultant at Mobilize Strategy Consulting. Theme music: "Idéer ad omveje" by Asbjørn Busk Jørgensen.

Tabloid
Kampen om krigen

Tabloid

Oct 11, 2024 · 54:53


Is it the events themselves or someone's opinion about them that matters most to the media? After a year of stories about terror and war in the Middle East and a furious debate in Denmark, Tabloid looks at the coverage, the language, the angles, and the weighting in the company of Layal Freije, host of the Radio IIII programme Gaza FM. And with the help of this week's co-host Thomas Larsen, royal and political commentator, we try to find out why former prime minister Anders Fogh Rasmussen did not appear in the documentary series Velkommen til Frontlinjen. Should he have been invited? We put the question to DR's head of documentaries Mikala Krogh. Host: Marie Louise Toksvig.

The ThinkND Podcast
Support & Care for Yourself and Others, Part 3: Aging Well

The ThinkND Podcast

Sep 9, 2024 · 50:12 (transcription available)


Episode Topic: Aging Well. How can we age gracefully, resiliently, and mindfully? How can we best take advantage of all the opportunities offered by the aging process, such as the growth of our wisdom, experiences, and relationships? Join Notre Dame psychology professor Cindy Bergeman and her guest, family medicine practitioner Dr. Thomas Larsen of the South Bend Clinic, in the final live session of the series as they discuss how we can learn to lean into change, whether it is one we choose or not. Featured speakers: Cindy Bergeman and Dr. Thomas Larsen. Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: go.nd.edu/3ee891. This podcast is a part of the ThinkND Series titled Support & Care for Yourself and Others. Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career. Learn more about ThinkND and register for upcoming live events at think.nd.edu. Join our LinkedIn community for updates, episode clips, and more.

The Nonlinear Library
EA - Long-Term Future Fund: March 2024 Payout recommendations by Linch

The Nonlinear Library

Jun 12, 2024 · 56:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund: March 2024 Payout recommendations, published by Linch on June 12, 2024 on The Effective Altruism Forum.

Introduction: This payout report covers the Long-Term Future Fund's grantmaking from May 1 2023 to March 31 2024 (11 months). It follows our previous April 2023 payout report.

Total funding recommended: $6,290,550
Total funding paid out: $5,363,105
Number of grants paid out: 141
Acceptance rate (excluding desk rejections): 159/672 = 23.7%
Acceptance rate (including desk rejections): 159/825 = 19.3%
Report authors: Linchuan Zhang (primary author), Caleb Parikh (fund chair), Oliver Habryka, Lawrence Chan, Clara Collier, Daniel Eth, Lauro Langosco, Thomas Larsen, Eli Lifland

25 of our grantees, who received a total of $790,251, requested that our public reports for their grants be anonymized (the table below includes those grants). 13 grantees, who received a total of $529,819, requested that we not include public reports for their grants. You can read our policy on public reporting here. We referred at least 2 grants to other funders for evaluation.

Highlighted Grants (The following grant writeups were written by me, Linch Zhang. They were reviewed by the primary investigators of each grant.) Below, we highlight some grants that we thought were interesting and covered a relatively wide scope of LTFF's activities. We hope that reading the highlighted grants can help donors make more informed decisions about whether to donate to LTFF.[1]

Gabriel Mukobi ($40,680) - 9-month university tuition support for technical AI safety research focused on empowering AI governance interventions. The Long-Term Future Fund provided a $40,680 grant to Gabriel Mukobi from September 2023 to June 2024, originally for 9 months of university tuition support. The grant enabled Gabe to pursue his master's program in Computer Science at Stanford, with a focus on technical AI governance. Several factors favored funding Gabe, including his strong academic background (4.0 GPA in Stanford CS undergrad with 6 graduate-level courses), experience in difficult technical AI alignment internships (e.g., at the Krueger lab), and leadership skills demonstrated by starting and leading the Stanford AI alignment group. However, some fund managers were skeptical about the specific proposed technical research directions, although this was not considered critical for a skill-building and career-development grant. The fund managers also had some uncertainty about the overall value of funding Master's degrees. Ultimately, the fund managers compared Gabe to marginal MATS graduates and concluded that funding him was favorable. They believed Gabe was better at independently generating strategic directions and being self-motivated for his work, compared to the median MATS graduate. They also considered the downside risks and personal costs of being a Master's student to be lower than those of independent research, as academia tends to provide more social support and mental health safeguards, especially for Master's degrees (compared to PhDs). Additionally, Gabe's familiarity with Stanford from his undergraduate studies was seen as beneficial on that axis. The fund managers also recognized the value of a Master's degree credential for several potential career paths, such as pursuing a PhD or working in policy.
However, a caveat is that Gabe might have less direct mentorship relevant to alignment compared to MATS extension grantees. Outcomes: In a recent progress report, Gabe noted that the grant allowed him to dedicate more time to schoolwork and research instead of taking on part-time jobs. He produced several new publications that received favorable media coverage and was accepted to 4 out of 6 PhD programs he applied to. The grant also allowed him to finish graduating in March instead of Ju...

The Nonlinear Library
AF - New report: Safety Cases for AI by Josh Clymer

The Nonlinear Library

Mar 20, 2024 · 1:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New report: Safety Cases for AI, published by Josh Clymer on March 20, 2024 on The AI Alignment Forum. ArXiv paper: https://arxiv.org/abs/2403.10462 The idea for this paper occurred to me when I saw Buck Shlegeris' MATS stream on "Safety Cases for AI." How would one justify the safety of advanced AI systems? This question is fundamental. It informs how RSPs should be designed and what technical research is useful to pursue. For a long time, researchers have (implicitly or explicitly) discussed ways to justify that AI systems are safe, but much of this content is scattered across different posts and papers, is not as concrete as I'd like, or does not clearly state their assumptions. I hope this report provides a helpful birds-eye view of safety arguments and moves the AI safety conversation forward by helping to identify assumptions they rest on (though there's much more work to do to clarify these arguments). Thanks to my coauthors: Nick Gabrieli, David Krueger, and Thomas Larsen -- and to everyone who gave feedback: Henry Sleight, Ashwin Acharya, Ryan Greenblatt, Stephen Casper, David Duvenaud, Rudolf Laine, Roger Grosse, Hjalmar Wijk, Eli Lifland, Oliver Habryka, Siméon Campos, Aaron Scher, Lukas Berglund, and Nate Thomas. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - New report: Safety Cases for AI by joshc

The Nonlinear Library

Mar 20, 2024 · 1:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New report: Safety Cases for AI, published by joshc on March 20, 2024 on LessWrong. ArXiv paper: https://arxiv.org/abs/2403.10462 The idea for this paper occurred to me when I saw Buck Shlegeris' MATS stream on "Safety Cases for AI." How would one justify the safety of advanced AI systems? This question is fundamental. It informs how RSPs should be designed and what technical research is useful to pursue. For a long time, researchers have (implicitly or explicitly) discussed ways to justify that AI systems are safe, but much of this content is scattered across different posts and papers, is not as concrete as I'd like, or does not clearly state their assumptions. I hope this report provides a helpful birds-eye view of safety arguments and moves the AI safety conversation forward by helping to identify assumptions they rest on (though there's much more work to do to clarify these arguments). Thanks to my coauthors: Nick Gabrieli, David Krueger, and Thomas Larsen -- and to everyone who gave feedback: Henry Sleight, Ashwin Acharya, Ryan Greenblatt, Stephen Casper, David Duvenaud, Rudolf Laine, Roger Grosse, Hjalmar Wijk, Eli Lifland, Oliver Habryka, Siméon Campos, Aaron Scher, Lukas Berglund, and Nate Thomas. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - New report: Safety Cases for AI by joshc

The Nonlinear Library: LessWrong

Mar 20, 2024 · 1:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New report: Safety Cases for AI, published by joshc on March 20, 2024 on LessWrong. ArXiv paper: https://arxiv.org/abs/2403.10462 The idea for this paper occurred to me when I saw Buck Shlegeris' MATS stream on "Safety Cases for AI." How would one justify the safety of advanced AI systems? This question is fundamental. It informs how RSPs should be designed and what technical research is useful to pursue. For a long time, researchers have (implicitly or explicitly) discussed ways to justify that AI systems are safe, but much of this content is scattered across different posts and papers, is not as concrete as I'd like, or does not clearly state their assumptions. I hope this report provides a helpful birds-eye view of safety arguments and moves the AI safety conversation forward by helping to identify assumptions they rest on (though there's much more work to do to clarify these arguments). Thanks to my coauthors: Nick Gabrieli, David Krueger, and Thomas Larsen -- and to everyone who gave feedback: Henry Sleight, Ashwin Acharya, Ryan Greenblatt, Stephen Casper, David Duvenaud, Rudolf Laine, Roger Grosse, Hjalmar Wijk, Eli Lifland, Oliver Habryka, Siméon Campos, Aaron Scher, Lukas Berglund, and Nate Thomas. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Prædiken på vej
Søndag seksagesima. Peder Kjærsgaard Roulund i samtale med Thomas Larsen

Prædiken på vej

Jan 22, 2024 · 20:31


Parish priests Peder Kjærsgaard Roulund and Thomas Larsen discuss the sermon text for Sexagesima Sunday: the parable of the sower (Mark 4:26-32). A conversation about the limits of reason, and about the weeds that end up becoming the background and foundation of the church.

On The Gutter
3 PBA titles or just 1? with Thomas Larsen! Ep99

On The Gutter

Jan 18, 2024 · 89:46


This week, Kurt is back from Team Canada trials. We recap the tournament and discuss longer tournament formats. We are joined by PBA Tour major champion Thomas Larsen. We talk about his different career path and his VERY unique PBA resume. Stay tuned in 2 weeks for Episode 100!
https://www.amazon.com/dp/B0BNNMYSGQ
Discount Code: EAFS2W25

Tabloid
Royal breaking og den første danske journalist i Gaza

Tabloid

Jan 5, 2024 · 54:39


While Denmark was talking about the royal family, journalist Jotam Confino became the first Danish journalist to cross the border into Gaza on 1 January. But what could he actually see and report when he was wrapped in the Israeli military's requirements and control during the trip? "My biggest breaking story yet" is what Ritzau reporter Julie Stine Johansen calls the assignment that landed during her New Year's address shift. Both she and TV2 News editor Lars Apel visit Tabloid with front-line accounts from the newsrooms that suddenly got very busy on New Year's Eve. The royal commentators spoke around the clock, but is it journalism or guesswork when journalist and author Thomas Larsen analyses the Queen's decisions on television? Tabloid asks the man who knew more than most about the top-secret plan. The year's first co-host is journalist, communications adviser, and former spin doctor Rikke Lyngdal. Host: Marie Louise Toksvig.

Mette & Magten
Sådan fik Mette F. besked fra Dronningen, KU taget i medlemsfusk og Mia Wagner i lortesag

Mette & Magten

Jan 5, 2024 · 53:34


Although News political commentator Peter Mogensen claims that the Prime Minister's Office had known about Queen Margrethe's abdication for a year, we can now reveal that the prime minister only got the news shortly before New Year. In fact it came as a surprise to almost everyone, except royal commentator Thomas Larsen, who ten minutes before the address could almost reveal the big news. Now he has come under fire: how did he know? We also lift the veil on this scandal. Hosts: Anna Thygesen and Sanne Fahnøe. Guests: Peter Astrup, B.T., and Christian Vigilius, Konservativ Ungdom. Producer: Pola Rojan Bagger.

Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

Artificial General Intelligence (AGI) Show with Soroush Pour

Dec 14, 2023 · 97:19


We speak with Thomas Larsen, Director for Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. A great way to quickly learn broadly about the field of technical AI alignment.

In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to help listeners quickly understand the technical alignment research landscape.

We talk to Thomas about a huge breadth of technical alignment areas, including:
* Prosaic alignment
  * Scalable oversight (e.g. RLHF, debate, IDA)
  * Interpretability
  * Heuristic arguments, from ARC
  * Model evaluations
* Agent foundations
* Other areas, more briefly:
  * Model splintering
  * Out-of-distribution (OOD) detection
  * Low impact measures
  * Threat modelling
  * Scaling laws
  * Brain-like AI safety
  * Inverse reinforcement learning (IRL)
  * Cooperative AI
  * Adversarial training
  * Truthful AI
  * Brain-machine interfaces (Neuralink)

Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/

== Show links ==
-- About Thomas --
Thomas studied Computer Science & Mathematics at U. Michigan, where he first did ML research in the field of computer vision. After graduating, he completed the MATS AI safety research scholar program before doing a stint at MIRI as a Technical AI Safety Researcher. Earlier this year, he moved his work into AI policy by co-founding the Center for AI Policy, a nonprofit, nonpartisan organisation focused on getting the US government to adopt policies that would mitigate national security risks from AI. The Center for AI Policy is not connected to foreign governments or commercial AI developers and is instead committed to the public interest.
* Center for AI Policy - https://www.aipolicy.us
* LinkedIn - https://www.linkedin.com/in/thomas-larsen/
* LessWrong - https://www.lesswrong.com/users/thomas-larsen
-- Further resources --
* Thomas' post, "What Everyone in Technical Alignment is Doing and Why" https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is
  * Please note this post is from Aug 2022. The podcast should be more up-to-date, but this post is still a valuable and relevant resource.

SHOTGUN - Viden er noget vi deler
Dronning Margrethe II og fremtidens kongehus

SHOTGUN - Viden er noget vi deler

Nov 3, 2023 · 26:22


Queen Margrethe II and the royal house of our future - what lies ahead? Welcome to a special episode of our video podcast SHOTGUN! Today we have a unique guest in the studio: royal expert Thomas Larsen. He shares the knowledge of the royal house he has gained through many in-depth interviews with Queen Margrethe II. Thomas talks about her remarkable knowledge of Denmark's history and her ability to bring the Danish people together - just look at New Year's Eve, when we as Danes gather at 6 pm in the same living room to listen to the Queen's New Year's address. Emil names her Denmark's ultimate influencer. Thomas shares stories about the challenges Queen Margrethe has faced along the way. We hear about the gradual transition to the next generation, where Crown Prince Frederik and Crown Princess Mary are already taking on more duties. We also dive into the historic moments that have shaped the royal house, and we get a glimpse of the advice Thomas Larsen would give Crown Prince Frederik for following in his mother's footsteps. But how do you actually succeed a figure as iconic as Queen Margrethe? So take a seat and listen along as Thomas Larsen talks about the fascinating royal world in this exciting episode of SHOTGUN! Welcome to SHOTGUN!

Prædiken på vej
17. søndag efter trinitatis. Flemming Kloster Poulsen i samtale med Thomas Larsen

Prædiken på vej

Sep 18, 2023 · 23:04


In this edition of Prædiken på vej, on the text for the 17th Sunday after Trinity, the topic is table fellowship, identity, and humility in a Christian context. Thomas Larsen, parish priest at Sct. Mortens Kirke in Randers, talks with Flemming Kloster Poulsen, priest at the Danish church in London, who draws on examples from recent Danish literature to offer ways into the core of the gospel text.

The Nonlinear Library
EA - Policy ideas for mitigating AI risk by Thomas Larsen

The Nonlinear Library

Sep 16, 2023 · 16:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Policy ideas for mitigating AI risk, published by Thomas Larsen on September 16, 2023 on The Effective Altruism Forum. Note: This post contains personal opinions that don't necessarily match the views of others at CAIP.

Executive Summary

Advanced AI has the potential to cause an existential catastrophe. In this essay, I outline some policy ideas which could help mitigate this risk. Importantly, even though I focus on catastrophic risk here, there are many other reasons to ensure responsible AI development. I am not advocating for a pause right now. If we had a pause, I think it would only be useful insofar as we use the pause to implement governance structures that mitigate risk after the pause has ended. This essay outlines the important elements I think a good governance structure would include: visibility into AI development, and brakes that the government could use to stop dangerous AIs from being built. First, I'll summarize some claims about the strategic landscape. Then, I'll present a cursory overview of proposals I like for US domestic AI regulation. Finally, I'll talk about a potential future global coordination framework, and the relationship between this and a pause.

The Strategic Landscape

Claim 1: There's a significant chance that AI alignment is difficult. There is no scientific consensus on the difficulty of AI alignment. Chris Olah from Anthropic tweeted the following, simplified picture: ~40% of their estimate is on AI safety being harder than Apollo, which took around 1 million person-years. Given that less than a thousand people are working on AI safety, this viewpoint would seem to imply that there's a significant chance that we are far from being ready to build powerful AI safely. Given just Anthropic's alleged views, I think it makes sense to be ready to stop AI development. My personal views are more pessimistic than Anthropic's.

Claim 2: In the absence of powerful aligned AIs, we need to prevent catastrophe-capable AI systems from being built. Given developers are not on track to align AI before it becomes catastrophically dangerous, we need the ability to slow down or stop before AI is catastrophically dangerous. There are several ways to do this. I think the best one involves building up the government's capacity to safeguard AI development. Set up government mechanisms to monitor and mitigate catastrophic AI risk, and empower them to institute a national moratorium on advancing AI if it gets too dangerous. (Eventually, the government could transition this into an international moratorium, while coordinating internationally to solve AI safety before that moratorium becomes infeasible to maintain. I describe this later.) Some others think it's better to try to build aligned AIs that defend against AI catastrophes. For example, you can imagine building defensive AIs that identify and stop emerging rogue AIs. To me, the main problem with this plan is that it assumes we will have the ability to align the defensive AI systems.

Claim 3: There's a significant (>20%) chance AI will be capable enough to cause catastrophe by 2030. AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling: Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.
Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, investment hype are all continuing strongly, leading to a 10x/year increase of effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030. Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...

The Nonlinear Library
LW - Introducing the Center for AI Policy (& we're hiring!) by Thomas Larsen

The Nonlinear Library

Aug 28, 2023 · 3:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for AI Policy (& we're hiring!), published by Thomas Larsen on August 28, 2023 on LessWrong. Summary The Center for AI Policy is a new organization designed to influence US policy to reduce existential and catastrophic risks from advanced AI. We are hiring for an AI Policy Analyst and a Communications Director. We're also open to other roles. What is CAIP? The Center for AI Policy (CAIP) is an advocacy organization that aims to develop and promote policies that reduce risks from advanced AI. Our current focus is building "stop button for AI" capacity in the US government. We have proposed legislation to establish a federal authority that engages in hardware monitoring, licensing for advanced AI systems, and strict liability for extreme model harms. Our proposed legislation also develops the ability to "press the button" - the federal authority would also monitor catastrophic risks from advanced AI development, inform congress and the executive branch about frontier AI progress, and have emergency powers to shut down frontier AI development in the case of a clear emergency. More detail can be found in the work section of our website. We also aim to broadly raise awareness about extreme risks from AI by engaging with policymakers in congress and the executive branch. How does CAIP differ from other AI governance organizations? Nature of the work: Many organizations are focused on developing ideas and amassing influence that can be used later. CAIP is focused on turning policy ideas into concrete legislative text and conducting advocacy now. We want to harness the current energy to pass meaningful legislation this policy window, in addition to building a coalition for the future. We are also being explicit about extinction risk with policy makers as the motivation behind our policy ideas. Worldview: We believe that in order to prevent an AI catastrophe, governments likely need to prevent unsafe AI development for multiple years, which requires they have secured computing resources, understand risks, and are prepared to shut projects down. Our regulation aims to build that capacity. Who works at CAIP? CAIP's team includes Thomas Larsen (CEO), Jason Green-Lowe (Legislative Director), and Jakub Kraus (COO). CAIP is also advised by experts from other organizations and is supported by many volunteers. How does CAIP receive funding? We received initial funding through Lightspeed Grants and private donors. We are currently funding constrained and think that donating to us is very impactful. You can donate to us here. If you are considering donating but would like to learn more, please message us at info@aipolicy.us. CAIP is hiring CAIP is looking for an AI Policy Analyst and a Communications Director. We are also open to applicants with different skills. If you would be excited to work at CAIP, but don't fit into these specific job descriptions, we encourage you to reach out to info@aipolicy.us directly. If you know someone who might be a good fit, please fill out this referral form. Note that we are actively fundraising, and the number of people we are able to recruit is currently uncertain. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library: LessWrong Daily
LW - Introducing the Center for AI Policy (& we're hiring!) by Thomas Larsen

The Nonlinear Library: LessWrong Daily

Aug 28, 2023 · 3:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for AI Policy (& we're hiring!), published by Thomas Larsen on August 28, 2023 on LessWrong. Summary The Center for AI Policy is a new organization designed to influence US policy to reduce existential and catastrophic risks from advanced AI. We are hiring for an AI Policy Analyst and a Communications Director. We're also open to other roles. What is CAIP? The Center for AI Policy (CAIP) is an advocacy organization that aims to develop and promote policies that reduce risks from advanced AI. Our current focus is building "stop button for AI" capacity in the US government. We have proposed legislation to establish a federal authority that engages in hardware monitoring, licensing for advanced AI systems, and strict liability for extreme model harms. Our proposed legislation also develops the ability to "press the button" - the federal authority would also monitor catastrophic risks from advanced AI development, inform congress and the executive branch about frontier AI progress, and have emergency powers to shut down frontier AI development in the case of a clear emergency. More detail can be found in the work section of our website. We also aim to broadly raise awareness about extreme risks from AI by engaging with policymakers in congress and the executive branch. How does CAIP differ from other AI governance organizations? Nature of the work: Many organizations are focused on developing ideas and amassing influence that can be used later. CAIP is focused on turning policy ideas into concrete legislative text and conducting advocacy now. We want to harness the current energy to pass meaningful legislation this policy window, in addition to building a coalition for the future. We are also being explicit about extinction risk with policy makers as the motivation behind our policy ideas. Worldview: We believe that in order to prevent an AI catastrophe, governments likely need to prevent unsafe AI development for multiple years, which requires they have secured computing resources, understand risks, and are prepared to shut projects down. Our regulation aims to build that capacity. Who works at CAIP? CAIP's team includes Thomas Larsen (CEO), Jason Green-Lowe (Legislative Director), and Jakub Kraus (COO). CAIP is also advised by experts from other organizations and is supported by many volunteers. How does CAIP receive funding? We received initial funding through Lightspeed Grants and private donors. We are currently funding constrained and think that donating to us is very impactful. You can donate to us here. If you are considering donating but would like to learn more, please message us at info@aipolicy.us. CAIP is hiring CAIP is looking for an AI Policy Analyst and a Communications Director. We are also open to applicants with different skills. If you would be excited to work at CAIP, but don't fit into these specific job descriptions, we encourage you to reach out to info@aipolicy.us directly. If you know someone who might be a good fit, please fill out this referral form. Note that we are actively fundraising, and the number of people we are able to recruit is currently uncertain. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library: LessWrong
LW - Introducing the Center for AI Policy (and we're hiring!) by Thomas Larsen

The Nonlinear Library: LessWrong

Aug 28, 2023 · 3:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for AI Policy (& we're hiring!), published by Thomas Larsen on August 28, 2023 on LessWrong. Summary The Center for AI Policy is a new organization designed to influence US policy to reduce existential and catastrophic risks from advanced AI. We are hiring for an AI Policy Analyst and a Communications Director. We're also open to other roles. What is CAIP? The Center for AI Policy (CAIP) is an advocacy organization that aims to develop and promote policies that reduce risks from advanced AI. Our current focus is building "stop button for AI" capacity in the US government. We have proposed legislation to establish a federal authority that engages in hardware monitoring, licensing for advanced AI systems, and strict liability for extreme model harms. Our proposed legislation also develops the ability to "press the button" - the federal authority would also monitor catastrophic risks from advanced AI development, inform congress and the executive branch about frontier AI progress, and have emergency powers to shut down frontier AI development in the case of a clear emergency. More detail can be found in the work section of our website. We also aim to broadly raise awareness about extreme risks from AI by engaging with policymakers in congress and the executive branch. How does CAIP differ from other AI governance organizations? Nature of the work: Many organizations are focused on developing ideas and amassing influence that can be used later. CAIP is focused on turning policy ideas into concrete legislative text and conducting advocacy now. We want to harness the current energy to pass meaningful legislation this policy window, in addition to building a coalition for the future. We are also being explicit about extinction risk with policy makers as the motivation behind our policy ideas. Worldview: We believe that in order to prevent an AI catastrophe, governments likely need to prevent unsafe AI development for multiple years, which requires they have secured computing resources, understand risks, and are prepared to shut projects down. Our regulation aims to build that capacity. Who works at CAIP? CAIP's team includes Thomas Larsen (CEO), Jason Green-Lowe (Legislative Director), and Jakub Kraus (COO). CAIP is also advised by experts from other organizations and is supported by many volunteers. How does CAIP receive funding? We received initial funding through Lightspeed Grants and private donors. We are currently funding constrained and think that donating to us is very impactful. You can donate to us here. If you are considering donating but would like to learn more, please message us at info@aipolicy.us. CAIP is hiring CAIP is looking for an AI Policy Analyst and a Communications Director. We are also open to applicants with different skills. If you would be excited to work at CAIP, but don't fit into these specific job descriptions, we encourage you to reach out to info@aipolicy.us directly. If you know someone who might be a good fit, please fill out this referral form. Note that we are actively fundraising, and the number of people we are able to recruit is currently uncertain. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Comin' Home Podcast with John Alan
How to Enjoy the Process of Life, Music, and Emotion - episode 207 with my guest Thomas Larsen

The Comin' Home Podcast with John Alan

Aug 27, 2023 · 52:35


People often look for balance in life. Thomas Larsen seems to have found a good balance while enjoying his life, his music, and his self-expression. Thomas Larsen is the owner of Fabriken Studios in Malmø Sweden. Check out his podcast titled WRITE THAT DAMN SONG! HERE Check out Thomas Larsen on Facebook https://www.facebook.com/FabrikenStudios and Instagram https://www.instagram.com/fabriken_studios/   * MY AUDIOBOOK SUBSCRIPTION IS NOW AVAILABLE! Click HERE to get it. * If you'd like to support The Comin' Home Podcast With John Alan, you can do that at one of the links here: https://patron.podbean.com/JohnAlan https://www.buymeacoffee.com/johnalanpod https://paypal.me/johnalanpod   Go check out my new comic strip "Loyal Oak" at https://johnalanpod.com/loyal-oak-the-comicstrip/ You can find my music here: https://open.spotify.com/artist/5F4Jgrwy2fMa54webx5yzk?si=TTCDdVjdQCSf4GsRM7UyZg More info and my blog are here at https://johnalanpod.com/blog/ #CominHomeWithJohnAlan #musicappreciation #menshealth

MANDAT
Partitjek: Alternativet - og usynligheden

MANDAT

Aug 3, 2023 · 27:48


Alternativet did the impossible and made it over the electoral threshold. But since then, not much has been heard from the party. And why is a bit of a mystery, says one of today's judges: perhaps the merger with the other small green parties has not gone quite as smoothly as one was led to believe? On the other hand, there should be good opportunities for Alternativet to make its mark again in the new season, when climate and a CO2 tax once again top the political agenda. In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Alternativet that gets checked over. Political adviser Benny Damsgaard joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Pernille Rudbæk. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Nye Borgerlige - og overlevelseskampen

MANDAT

Aug 3, 2023 · 27:37


It has been a season of total chaos and meltdown for Nye Borgerlige. A simple change of leader sent the party into the crisis of its life, and now Pernille Vermund is back again. But can she lift the party back over the electoral threshold, and who is actually there to help her? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Nye Borgerlige that gets checked over. Jyllands-Posten's political analyst Niels Th. Dahl joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Pernille Rudbæk. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Dansk Folkeparti - og genoprejsningen

MANDAT

Aug 1, 2023 · 27:52


Things looked bleak for a long time, but out of the ashes Dansk Folkeparti is slowly rebuilding itself. Calm has returned, and the fire has even been rekindled in Pia Kjærsgaard's eyes, says one of today's judges. But can Morten Messerschmidt find a leadership style that lasts? And can the classic Dansk Folkeparti causes such as the elderly and immigration once again carry the party all the way back to its role as a large and influential party on the centre-right? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Dansk Folkeparti that gets checked over. Political adviser Benny Damsgaard joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Pernille Rudbæk. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Radikale Venstre - og den uforståelige beslutning

MANDAT

Aug 1, 2023 · 27:50


Radikale Venstre dreamed of a government across the political centre, but when the new SVM government was presented in December, there was no 'R' in the combination of letters. That came as a surprise to many, and political observers still struggle to understand the decision. What role does Radikale Venstre actually have now? Can the party even get the influence it wants when it stands on the outside? And is Martin Lidegaard the man who can get the Radikale ship back on course after the election defeat? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Radikale Venstre that gets checked over. Altinget's political commentator Erik Holstein joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Liberal Alliance - og Daddy Vanopslagh-effekten

MANDAT

Jul 27, 2023 · 27:56


Young people love him, and he can draw thousands of guests to political events. The Daddy Vanopslagh effect is big, and Liberal Alliance is shining in the opinion polls. But will the party also succeed in gaining political influence, or will it all fall flat? And if it succeeds, could Alex Vanopslagh even end up as the new leader of the blue bloc? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Liberal Alliance that gets checked over. Political commentator Benny Damsgaard joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Enhedslisten - og en helt ny verden

MANDAT

Jul 27, 2023 · 27:50


Everything has been turned upside down for Enhedslisten since the general election. Where the party used to be an important support party, it must now find a new footing as a strong left-wing opposition. But is Enhedslisten even able to take up that position right now? And what will actually happen when Mai Villadsen goes on maternity leave and a successor as political spokesperson has to be found? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Enhedslisten that gets checked over. Altinget's political commentator Erik Holstein joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Danmarksdemokraterne - og den nye virkelighed

MANDAT

Jul 20, 2023 · 27:46


Confetti rained down over Inger Støjberg when Danmarksdemokraterne stormed into the Folketing in November. Since then, she has spent her time building up the party and travelling around the country to talk with Danes. But has she used the time well? Has Inger Støjberg been visible enough? And how will Danmarksdemokraterne really prove that the party is needed in Danish politics? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Danmarksdemokraterne that gets checked over. Avisen Danmark's political editor Casper Dall joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Partitjek: Det konservative Folkeparti - og den uklare strategi

MANDAT

Jul 20, 2023 · 27:51


Things have come to a standstill in Det Konservative Folkeparti after a disappointing election result, and it can be hard to see where the party is actually heading right now. Nor has the party managed to attract the voters who have turned their backs on Venstre. So a gear change is needed if the party and its leader Søren Pape Poulsen are to breathe life into Det Konservative Folkeparti again. But what will it take to get the party back on its feet? Where can the party gain influence? And is Søren Pape Poulsen the man who can turn the ship around and lay out a clearer strategy? In a series of summer specials, Mandat gives every party in the Folketing a thorough party check before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the party leadership changes hands? In this episode, it is Det Konservative Folkeparti that gets checked over. Political commentator Benny Damsgaard joins as guest judge together with Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Party check-up: Moderaterne - and Løkke's happy half-year

MANDAT

Play Episode Listen Later Jul 13, 2023 27:53


Lars Løkke Rasmussen has been here, there and everywhere over the past six months, with a packed calendar both as foreign minister and as political leader of a brand-new party. But how does Lars Løkke Rasmussen keep himself airborne? Could he become too dominant in the coalition government? And what actually happens to Moderaterne if Lars Løkke Rasmussen one day no longer stands at the head of the party? In a series of summer specials, Mandat gives every party in the Folketing a thorough check-up before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the leadership changes hands? In this episode, Moderaterne gets the check-up. Political commentator Benny Damsgaard joins as guest judge alongside Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Party check-up: SF - and the difficult balancing act

MANDAT

Play Episode Listen Later Jul 13, 2023 27:47


SF's leader Pia Olsen Dyhr was worn down and exhausted at the start of the year after an extremely busy period in Danish politics. But then her fighting spirit returned, she rolled up her sleeves, and now the voters are rewarding her with strong poll numbers. But how do you strike the balance between being a strong opposition and still working constructively? How does SF keep its momentum? And is Pia Olsen Dyhr actually still friends with Mette Frederiksen? In a series of summer specials, Mandat gives every party in the Folketing a thorough check-up before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the leadership changes hands? In this episode, SF gets the check-up. Political editor at Avisen Danmark Casper Dall joins as guest judge alongside Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Party check-up: Socialdemokratiet - and the rollercoaster ride

MANDAT

Play Episode Listen Later Jul 6, 2023 27:50


Voters have fled Socialdemokratiet since the election, and the party's old coffee clubs have begun to stir after rumours spread that Prime Minister Mette Frederiksen might be heading for a top post at NATO. How does Socialdemokratiet get back on course, both internally and outwardly towards the voters? Can the good narrative even be recreated? And who should actually take over if Mette Frederiksen lands an international top post? In a series of summer specials, Mandat gives every party in the Folketing a thorough check-up before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the leadership changes hands? In this episode, Socialdemokratiet gets the check-up. Political commentator at Altinget Erik Holstein joins as guest judge alongside Radio4's political editor Thomas Larsen. Host: Stine Lynghard. See omnystudio.com/listener for privacy information.

MANDAT
Party check-up: Venstre - and the fall of the chieftain

MANDAT

Play Episode Listen Later Jul 6, 2023 27:54


A massive broken promise sent Venstre into government, and shortly afterwards Jakob Ellemann-Jensen went on sick leave with stress. It has been a tough season for the blue governing party, but at the same time a strong new duo at the head of the party has seen the light of day. So can the old Venstre find its footing again, and can it do so in the role of governing party? In a series of summer specials, Mandat gives every party in the Folketing a thorough check-up before a new political season begins: How is the party's fighting form on a scale from 1 to 10? On which issues has the party managed to make its mark, and where is there room for improvement? Which word best describes the leader? And who is the party's crown prince or crown princess, waiting in the wings to take over the day the leadership changes hands? In this episode, Venstre gets the check-up. Political analyst at Jyllands-Posten Niels Th. Dahl joins as guest judge alongside Radio4's political editor Thomas Larsen. Host: Pernille Rudbæk. See omnystudio.com/listener for privacy information.

MANDAT
The five big political dramas on the horizon

MANDAT

Play Episode Listen Later Jun 29, 2023 55:13


It's the final sprint before the summer break at Christiansborg, and the last political agreements have to be landed. Christina Egelund answers whether broad agreements now matter more than high ambitions, and tells how she has experienced her first six months as a minister in the embattled SVM government. Peter Ernstved Rasmussen criticises the new defence agreement for its hesitancy, but perhaps the finger points more at the armed forces than at the politicians? Finally, Thomas Larsen and Benny Damsgaard give their top five of the biggest political dramas awaiting in the coming season, involving everything from the elderly to farmers, the government's crisis narrative and, not least, two key figures at the very top of the government. Guests: Christina Egelund, Peter Ernstved Rasmussen and Benny Damsgaard. Hosts: Pernille Rudbæk and Thomas Larsen. See omnystudio.com/listener for privacy information.

Trail EAffect
Thomas Larsen Schmidt, President of IMBA Europe – Mountain Biking in Europe #124

Trail EAffect

Play Episode Listen Later Apr 25, 2023 71:16


Thomas Larsen Schmidt, President of IMBA Europe – Mountain Biking in Europe
Topics:
- Changes that Thomas has seen over the years
- How Thomas got into Mountain Biking and Advocacy
- The difference between IMBA (US) vs IMBA Europe
- National organization, membership based
- IMBA Europe Summit (June 1st – 3rd 2023)
- Riding in Europe
- Challenges that IMBA EU faces for access and advocacy
- Comparing cycling in Europe with the U.S.
- Working bikes into transportation in Europe
- More Trails Close to Home – EU
- The importance of access to nature for kids
- Thomas' first experience in the U.S. in terms of bike/ped accommodations (or lack of)
- Thomas' experience at the 2022 PTBA Conference in Bentonville, AR
- IMBA DIRTT Project – knowledge and learning about trails
- The trail building scene in Europe / Denmark
- Norway / World Trails – Glen Jacobs
- The quality of a trail that is repetitive
- The impact that IMBA EU is making today and Take Care of Your Trails – taken from Scotland (DMBnS)
- The things that Thomas looks for in a trail community
- Connected trails from the town center (ideas from Bentonville)
- Night riding
- If you want to protect something you need to know it
- Closing comments
- Thank-yous and staff shout-outs
- SRAM support
- Specialized support
- Bonus talk about SRAM innovations
Links:
IMBA Europe: https://www.imba-europe.org/
DIRTT Project: https://www.imba-europe.org/knowledge-hub/dirtt-project/
Take Care of Your Trails: https://www.imba-europe.org/events/take-care-of-your-trails/
Episode Sponsor - Coulee Creative: www.dudejustsendit.com / https://www.couleecreative.com/
Trail EAffect Show Links:
Trail Effect Podcast Website: www.traileaffectpodcast.com
KETL Mtn Apparel Affiliate Link: https://ketlmtn.com/josh
Worldwide Cyclery Affiliate Link: https://www.worldwidecyclery.com/?aff=559
Trail One Components: https://trailone.bike/
Contact Josh at evolutiontrails@gmail.com
This Podcast has been edited and produced by Evolution Trail Services

The Nonlinear Library
AF - Challenge: construct a Gradient Hacker by Thomas Larsen

The Nonlinear Library

Play Episode Listen Later Mar 9, 2023 2:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Challenge: construct a Gradient Hacker, published by Thomas Larsen on March 9, 2023 on The AI Alignment Forum. This is a relatively clean subproblem that we came upon a few months ago while thinking about gradient hacking. We're throwing it out to the world to see if anyone can make progress. Problem: Construct a gradient hacker (definition below), or prove that one cannot exist under the given conditions.
Setup: Let x ∈ R^d be the network inputs, θ ∈ R^w be the network trainable parameters, ŷ ∈ R be the network outputs, and f(x, θ) = ŷ be the network architecture. f(x, θ) must be a feedforward neural network that reads in its own weights as input. Weight sharing is allowed (vertical or horizontal), and some weight sharing is necessary in order to take all w weights as input. Suppose that we train this network on a dataset D = {(x^(i), y^(i))}, with MSE loss L(θ) = (1/n) Σ_i (f(x^(i), θ) − y^(i))^2, assuming gradient flow: θ′(t) = −∇_θ L(θ(t)). Given an initial point θ_0, the training dynamics are therefore the solution of θ(0) = θ_0, θ′(t) = −∇_θ L(θ(t)).
Definition: A tuple (f, θ_g, D) is a gradient hacker if the following conditions hold:
1. θ_g is not a global optimum: there exists θ such that L(θ) < L(θ_g).
2. There exists ε > 0 such that for all θ_0 with ‖θ_0 − θ_g‖ < ε, the network's training converges to θ_g, i.e., lim_{t→∞} θ(t) = θ_g.
3. There is internal structure of f(·, θ_0) that computes the gradient; i.e., there is some subset of the activations whose mean over every datapoint in D is ∇_θ L(θ_0).
This captures my intuition that a gradient hacker knows where it wants to go (in this case "get to θ_g"), and then it should decide what it outputs in order to make the gradient true. Some more ambitious problems (if gradient hackers exist): Characterize the set of all gradient hackers. Show that they all must satisfy some property. Construct gradient hackers for arbitrarily large n, d, w, and neural net depth. Variations on the problem: a subset of the activations equals ∇_θ L(θ_0) for every input, or the subset of activations corresponds to the gradient on that input. This is a bit strict, but we didn't want to list different ways something could be isomorphic to the gradient. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
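As an illustration of the setup described above, here is a minimal numerical sketch (not taken from the post): a tiny feedforward network that receives its own parameter vector as part of its input, trained by Euler-discretized gradient flow on the MSE loss. The toy architecture, dimensions, and finite-difference gradient are assumptions made for the example; this demonstrates only the training setup, not an actual gradient hacker.

```python
# Minimal sketch of the training setup (illustrative only, not a gradient hacker):
# a tiny feedforward net f(x, theta) that reads its own parameters as extra inputs,
# trained by Euler-discretized gradient flow on MSE loss.
import numpy as np

rng = np.random.default_rng(0)

d, w, n = 2, 7, 16                       # input dim, number of parameters, dataset size
X = rng.normal(size=(n, d))
Y = rng.normal(size=n)

def f(x, theta):
    """Feedforward net whose input is [x, theta] (the 'reads its own weights' condition)."""
    z = np.concatenate([x, theta])       # shape (d + w,)
    W1 = np.outer(theta[:3], np.ones(d + w)) / (d + w)  # crude weight sharing built from theta
    h = np.tanh(W1 @ z + theta[3:6])
    return float(theta[6] * h.sum())

def loss(theta):
    return np.mean([(f(X[i], theta) - Y[i]) ** 2 for i in range(n)])

def grad(theta, eps=1e-5):
    """Central finite-difference estimate of the loss gradient."""
    g = np.zeros_like(theta)
    for j in range(w):
        e = np.zeros(w)
        e[j] = eps
        g[j] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return g

theta = rng.normal(size=w)
dt = 1e-2                                # Euler step approximating theta'(t) = -grad L(theta(t))
for _ in range(2000):
    theta -= dt * grad(theta)

print("final loss:", loss(theta))
```

Conditions 1-3 of the definition would then be additional checks layered on top of this loop, for example verifying convergence to a non-optimal θ_g from all nearby initializations.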

The Nonlinear Library
LW - List of technical AI safety exercises and projects by Jakub Kraus

The Nonlinear Library

Play Episode Listen Later Jan 20, 2023 3:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of technical AI safety exercises and projects, published by Jakub Kraus on January 19, 2023 on LessWrong. I intend to maintain a list at this doc. I'll paste the current state of the doc (as of January 19th, 2023) below. I encourage people to comment with suggestions. Levelling Up in AI Safety Research Engineering [Public] (LW) Highly recommended list of AI safety research engineering resources for people at various skill levels. AI Alignment Awards Alignment jams / hackathons from Apart Research Past / upcoming hackathons: LLM, interpretability 1, AI test, interpretability 2 Projects on AI Safety Ideas: LLM, interpretability, AI test Resources: black-box investigator of language models, interpretability playground (LW), AI test Examples of past projects; interpretability winners How to run one as an in-person event at your school Neel Nanda: 200 Concrete Open Problems in Mechanistic Interpretability (doc and previous version) Project page from AGI Safety Fundamentals and their Open List of Project ideas AI Safety Ideas by Apart Research; EAF post Most Important Century writing prize (Superlinear page) Center for AI Safety Competitions like SafeBench Student ML Safety Research Stipend Opportunity – provides stipends for doing ML research. course.mlsafety.org projects CAIS is looking for someone to add details about these projects on course.mlsafety.org Distilling / summarizing / synthesizing / reviewing / explaining Forming your own views on AI safety (without stress!) – also see Neel's presentation slides and "Inside Views Resources" doc Answer some of the application questions from the winter 2022 SERI-MATS, such as Vivek Hebbar's problems 10 exercises from Akash in “Resources that (I think) new alignment researchers should know about” [T] Deception Demo Brainstorm has some ideas (message Thomas Larsen if these seem interesting) Upcoming 2023 Open Philanthropy AI Worldviews Contest Alignment research at ALTER – interesting research problems, many have a theoretical math flavor Open Problems in AI X-Risk [PAIS #5] Amplify creative grants (old) Evan Hubinger: Concrete experiments in inner alignment, ideas someone should investigate further, sticky goals Richard Ngo: Some conceptual alignment research projects, alignment research exercises Buck Shlegeris: Some fun ML engineering projects that I would think are cool, The case for becoming a black box investigator of language models Implement a key paper in deep reinforcement learning “Paper replication resources” section in “How to pursue a career in technical alignment” Daniel Filan idea Summarize a reading from Reading What We Can Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Wentworth and Larsen on buying time by Akash

The Nonlinear Library

Play Episode Listen Later Jan 10, 2023 19:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wentworth and Larsen on buying time, published by Akash on January 9, 2023 on LessWrong. This post summarizes a discussion between Thomas Larsen and John Wentworth. They discuss efforts to buy time, outreach to AGI labs, models of institutional change, and other ideas relating to a post that Thomas, Olivia Jimenez, and I released: Instead of technical research, more people should focus on buying time. I moderated the discussion and prepared a transcript with some of the key parts of the conversation. On the value of specific outreach to lab decision-makers vs. general outreach to lab employees John: When thinking about "social" strategies in general, a key question/heuristic is "which specific people do you want to take which specific actions?" For instance, imagine I have a start-up building software for hospitals, and I'm thinking about marketing/sales. Then an example of what not to do (under this heuristic) would be "Buy google ads, banner ads in medical industry magazines, etc". That strategy would not identify the specific people who I want to take a specific action (i.e. make the decision to purchase my software for a hospital). An example of what to do instead would be "Make a list of hospitals, then go to the website of each hospital one-by-one and look at their staff to figure out who would probably make the decision to purchase/not purchase my software. Then, go market to those people specifically - cold call/email them, track them down at conferences, set up a folding chair on the sidewalk outside their house and talk to them when they get home, etc. Applying that to e.g. efforts to regulate AI development, the heuristic would say to not try to start some giant political movement. Instead, identify which specific people you need to write a law (e.g. judges or congressional staff), which specific people will decide whether it goes into effect (e.g. judges or swing votes in congress), which specific people will implement/enforce it (e.g. specific bureaucrats, who may be sufficient on their own if we don't need to create new law). Applying it to e.g. efforts to convince AI labs to stop publishing advances or shift their research projects, the heuristic would say to not just go convincing random ML researchers. Instead, identify which specific people at the major labs are de-facto responsible for the decisions you're interested in (like e.g. decisions about whether to publish some advancement), and then go talk to those people specifically. Also, make sure to walk them through the reasoning enough that they can see why the decisions you want them to make are right; a vague acknowledgement that AI X-risk is a thing doesn't cut it. Thomas: It seems like we're on a similar page here. I do think that on current margins, if typical OpenAI employees become more concerned about x-risk, this will likely have positive follow through effects on general OpenAI epistemics. And I expect this will likely lead to improved decisions. Perhaps you disagree with that? John: Making random OpenAI (or Deepmind, or Anthropic, or ...) employees more concerned about X-risk is plausibly net-positive value in expectation; I'm unsure about that. But more importantly, it is not plausibly high value in expectation. 
When I read your post on time-buying, the main picture I end up with is a bunch of people running around randomly spreading the gospel of AI X-risk. More generally, that seems-to-me to be the sort of thing most people jump to when they think about "time-buying". In my mind, 80% of the value is in identifying which specific people we want to make which specific decisions, and then getting in contact with those specific people. And I usually don't see people thinking about that very much, when they talk about "time-buying" interventions. Thomas: Fully agree with this [the last tw...

The Nonlinear Library
LW - My thoughts on OpenAI's alignment plan by Akash

The Nonlinear Library

Play Episode Listen Later Jan 1, 2023 33:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on OpenAI's alignment plan, published by Akash on December 30, 2022 on LessWrong. Epistemic Status: This is my first attempt at writing up my thoughts on an alignment plan. I spent about a week on it. I'm grateful to Olivia Jimenez, Thomas Larsen, and Nicholas Dupuis for feedback. A few months ago, OpenAI released its plan for alignment. More recently, Jan Leike (one of the authors of the original post) released a blog post about the plan, and Eliezer & Nate encouraged readers to write up their thoughts. In this post, I cover some thoughts I have about the OpenAI plan. This is a long post, and I've divided it into a few sections. Each section gets increasingly specific and detailed. If you only have ~5 minutes, I suggest reading section 1 and skimming section 2. The three sections: An overview of the plan and some of my high-level takes (here); Some things I like about the plan, some concerns, and some open questions (here); Specific responses to claims in OpenAI's post and Jan's post (here). Section 1: High-level takes. Summary of OpenAI's Alignment Plan: As I understand it, OpenAI's plan involves using reinforcement learning from human feedback and recursive reward modeling to build AI systems that are better than humans at alignment research. OpenAI's plan is not aiming for a full solution to alignment (that scales indefinitely or that could work on a superintelligent system). Rather, the plan is intended to (a) build systems that are better at alignment research than humans (AI assistants), (b) use these AI assistants to accelerate alignment research, and (c) use these systems to build/align more powerful AI assistants. Six things that need to happen for the plan to work: For the plan to work, I think it needs to get through the following 6 steps:
1. OpenAI builds LLMs that can help us with alignment research
2. OpenAI uses those models primarily to help us with alignment research, and we slow down/stop capabilities research when necessary
3. OpenAI has a way to evaluate the alignment strategies proposed by the LLM
4. OpenAI has a way to determine when it's OK to scale up to more powerful AI assistants (and OpenAI has policies in place that prevent people from scaling up before it's OK)
5. Once OpenAI has a highly powerful and aligned system, they do something with it that gets us out of the acute risk period
6. Once we are out of the acute risk period, OpenAI has a plan for how to use transformative AI in ways that allow humanity to "fulfill its potential" or "achieve human flourishing" (and they have a plan to figure out what we should even be aiming for)
If any of these steps goes wrong, I expect the plan will fail to avoid an existential catastrophe. Note that steps 1-5 must be completed before another actor builds an unaligned AGI.
Here's a table that lists each step and the extent to which I think it's covered by the two posts about the OpenAI plan (Step - Extent to which this step is discussed in the OpenAI plan):
- Build an AI assistant - Adequate; OpenAI acknowledges that this is their goal, mentions RLHF and RRM as techniques that could help us achieve this goal, and mentions reasonable limitations
- Use the assistant for alignment research (and slow down capabilities) - Inadequate; OpenAI mentions that they will use the assistant for alignment research, but they don't describe how they will slow down capabilities research if necessary or how they plan to shift the current capabilities-alignment balance
- Evaluate the alignment strategies proposed by the assistant - Unclear; OpenAI acknowledges that they will need to evaluate strategies from the AI assistant, though the particular metrics they mention seem unlikely to detect alignment concerns that may only come up at high capability levels (e.g., deception, situational awareness)
- Figure out when it's OK to scale up to more ...

The Nonlinear Library
AF - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility by Akash

The Nonlinear Library

Play Episode Listen Later Nov 22, 2022 6:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The AI Alignment Forum. We're grateful to our advisors Nate Soares, John Wentworth, Richard Ngo, Lauro Langosco, and Amy Labenz. We're also grateful to Ajeya Cotra and Thomas Larsen for their feedback on the contests. TLDR: AI Alignment Awards is running two contests designed to raise awareness about AI alignment research and generate new research proposals. Prior experience with AI safety is not required. Promising submissions will win prizes up to $100,000 (though note that most prizes will be between $1k-$20k; we will only award higher prizes if we receive exceptional submissions.) You can help us by sharing this post with people who are or might be interested in alignment research (e.g., student mailing lists, FB/Slack/Discord groups.) What are the contests? We're currently running two contests: Goal Misgeneralization Contest (based on Langosco et al., 2021): AIs often learn unintended goals. Goal misgeneralization occurs when a reinforcement learning agent retains its capabilities out-of-distribution yet pursues the wrong goal. How can we prevent or detect goal misgeneralization? Shutdown Problem Contest (based on Soares et al., 2015): Given that powerful AI systems might resist attempts to turn them off, how can we make sure they are open to being shut down? What types of submissions are you interested in? For the Goal Misgeneralization Contest, we're interested in submissions that do at least one of the following: Propose techniques for preventing or detecting goal misgeneralization Propose ways for researchers to identify when goal misgeneralization is likely to occur Identify new examples of goal misgeneralization in RL or non-RL domains. For example: We might train an imitation learner to imitate a "non-consequentialist" agent, but it actually ends up learning a more consequentialist policy. We might train an agent to be myopic (e.g., to only care about the next 10 steps), but it actually learns a policy that optimizes over a longer timeframe. Suggest other ways to make progress on goal misgeneralization For the Shutdown Problem Contest, we're interested in submissions that do at least one of the following: Propose ideas for solving the shutdown problem or designing corrigible AIs. These submissions should also include (a) explanations for how these ideas address core challenges raised in the corrigibility paper and (b) possible limitations and ways the idea might fail Define The Shutdown Problem more rigorously or more empirically Propose new ways of thinking about corrigibility (e.g., ways to understand corrigibility within a deep learning paradigm) Strengthen existing approaches to training corrigible agents (e.g., by making them more detailed, exploring new applications, or describing how they could be implemented) Identify new challenges that will make it difficult to design corrigible agents Suggest other ways to make progress on corrigibility Why are you running these contests? We think that corrigibility and goal misgeneralization are two of the most important problems that make AI alignment difficult. 
We expect that people who can reason well about these problems will be well-suited for alignment research, and we believe that progress on these subproblems would be meaningful advances for the field of AI alignment. We also think that many people could potentially contribute to these problems (we're only aware of a handful of serious attempts at engaging with these challenges). Moreover, we think that tackling these problems will offer a good way for people to "think like an alignment researcher." We hope the contests will help us (a) find people who could become promising theoretical and empirical AI safety...

The Nonlinear Library
AF - (My understanding of) What Everyone in Technical Alignment is Doing and Why by Thomas Larsen

The Nonlinear Library

Play Episode Listen Later Aug 29, 2022 54:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (My understanding of) What Everyone in Technical Alignment is Doing and Why, published by Thomas Larsen on August 29, 2022 on The AI Alignment Forum. Epistemic Status: My best guess. Epistemic Effort: ~50 hours of work put into this document. Contributions: Thomas wrote ~85% of this, Eli wrote ~15% and helped edit + structure it. Unless specified otherwise, writing in the first person is by Thomas and so are the opinions. Thanks to Miranda Zhang, Caleb Parikh, and Akash Wasil for comments. Thanks to many others for relevant conversations. Introduction: Despite a clear need for it, a good source explaining who is doing what and why in technical AI alignment doesn't exist. This is our attempt to produce such a resource. We expect to be inaccurate in some ways, but it seems great to get out there and let Cunningham's Law do its thing. The main body contains our understanding of what everyone is doing in technical alignment and why, as well as at least one of our opinions on each approach. We include supplements visualizing differences between approaches and Thomas's big picture view on alignment. The opinions written are Thomas and Eli's independent impressions, many of which have low resilience. Our all-things-considered views are significantly more uncertain. A summary of our understanding of each approach (Approach - Problem Focus - Current Approach Summary):
- Aligned AI - Model splintering - Solve extrapolation problems.
- ARC - Inaccessible information - ELK + LLM power-seeking evaluation
- Anthropic - Lack of good interpretability tools (?) - Interpretability + HHH + augmenting alignment research with LLMs
- Brain-like-AGI Safety - Brain-like AGI Safety - Use brains as a model for how AGI will be developed, think about alignment in this context
- CAIS - Engaging the ML community, many technical problems - Technical research, infrastructure, and ML community field-building for safety
- CHAI - Outer alignment, though CHAI is diverse - Improve CIRL + many other independent approaches.
- CLR - Suffering risks - Foundational game theory research
- Conjecture - Inner alignment - Interpretability + automating alignment research with LLMs
- DeepMind - Many, including scalable oversight and goal misgeneralization - Many, including Debate, ERO, and discovering agents.
- Encultured - Multipolar failure from lack of coordination - Video game
- Externalized Reasoning Oversight - Deception - Get the reasoning of the AGI to happen in natural language, then oversee that reasoning
- FAR - Many (?) - Incubate new, scalable alignment research agendas
- MIRI - Many, including deception, the sharp left turn, corrigibility is anti-natural - Mathematical research to resolve fundamental confusion about the nature of goals/agency/optimization
- OpenAI - Scalable oversight - RLHF / Recursive Reward Modeling, then automate alignment research
- Ought - Scalable oversight - Supervise process rather than outcomes + augment alignment researchers
- Redwood - Inner alignment (?) - Interpretability + adversarial training
- Selection Theorems - Being able to robustly point at objects in the world - Selection Theorems based on natural abstractions
- Team Shard - Instilling inner values from an outer training loop - Find patterns of values given by current RL setups and humans, then create quantitative rules to do this
- Truthful AI - Deception - Create standards and datasets to evaluate model truthfulness
Previous related overviews include:
- Neel Nanda's My Overview of the AI Alignment Landscape
- Evan Hubinger's An overview of 11 proposals for building safe advanced AI
- Larks' yearly Alignment Literature Review and Charity Comparison
- Nate Soares' On how various plans miss the hard bits of the alignment challenge
- Andrew Critch's Some AI research areas and their relevance to existential safety
- 80,000 Hours' list of organizations working in the area
Aligned AI / Stuart Armstrong: One of the key problems in AI safety is that there are many ways for an AI to gener...

The Nonlinear Library
LW - The Core of the Alignment Problem is... by Thomas Larsen

The Nonlinear Library

Play Episode Listen Later Aug 18, 2022 14:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Core of the Alignment Problem is..., published by Thomas Larsen on August 17, 2022 on LessWrong. Produced As Part Of The SERI ML Alignment Theory Scholars Program 2022 Under John Wentworth Introduction When trying to tackle a hard problem, a generally effective opening tactic is to Hold Off On Proposing Solutions: to fully discuss a problem and the different facets and aspects of it. This is intended to prevent you from anchoring to a particular pet solution and (if you're lucky) to gather enough evidence that you can see what a Real Solution would look like. We wanted to directly tackle the hardest part of the alignment problem, and make progress towards a Real Solution, so when we had to choose a project for SERI MATS, we began by arguing in a Google doc about what the core problem is. This post is a cleaned-up version of that doc. The Technical Alignment Problem The overall problem of alignment is the problem of, for an Artificial General Intelligence with potentially superhuman capabilities, making sure that the AGI does not use these capabilities to do things that humanity would not want. There are many reasons that this may happen such as instrumental convergent goals or orthogonality. Layout In each section below we make a different case for what the "core of the alignment problem" is. It's possible we misused some terminology when naming each section. The document is laid out as follows: We have two supra-framings on alignment: Outer Alignment and Inner Alignment. Each of these is then broken down further into subproblems. Some of these specific problems are quite broad, and cut through both Outer and Inner alignment, we've tried to put problems in the sections we think fits best (and when neither fits best, collected them in an Other category) though reasonable people may disagree with our classifications. In each section, we've laid out some cruxes, which are statements that support that frame on the core of the alignment problem. These cruxes are not necessary or sufficient conditions for a problem to be central. Frames on outer alignment The core of the alignment problem is being able to precisely specify what we value, so that we can train an AGI on this, deploy it, and have it do things we actually want it to do. The hardest part of this is being mathematically precise about 'what we value', so that it is robust to optimization pressure. The Pointers Problem The hardest part of this problem is being able to point robustly at anything in the world at all (c.f. Diamond Maximizer). We currently have no way to robustly specify even simple, crisply defined tasks, and if we want an AI to be able to do something like 'maximize human values in the Universe,' the first hurdle we need to overcome is having a way to point at something that doesn't break in calamitous ways off-distribution and under optimization pressure. Once we can actually point at something, the hope is that this will enable us to point the AGI at some goal that we are actually okay with applying superhuman levels of optimization power on. There are different levels at which people try to tackle the pointers problem: some tackle it on the level of trying to write down a utility function that is provably resilient to large optimization pressure, and some tackle it on the level of trying to prove things about how systems must represent data in general (e.g. 
selection theorems). Cruxes (around whether this is the highest priority problem to work on) This problem being tractable relies on some form of the Natural Abstractions Hypothesis. There is, ultimately, going to end up being a thing like "Human Values," that can be pointed to and holds up under strong optimization pressure. We are sufficiently confused about 'pointing to things in the real world' that we could not reliably train a diamond...

RADIO4 MORGEN
Radio4 Morgen - 23 June - 8-9 AM

RADIO4 MORGEN

Play Episode Listen Later Jun 23, 2022 55:01


Inger Støjberg announces a new party. Virologist: a Covid pill has the potential to be a very important step forward. The first radio interview with Inger Støjberg about Danmarksdemokraterne. Reactions to the announcement of Danmarksdemokraterne with Thomas Larsen, Liselott Blixt and Kristoffer Hjort Storm. Political editor Thomas Larsen weighs in on the reactions to the announcement of Danmarksdemokraterne. Hosts: Astrid Date & Dagmar Eben Østergaard. See omnystudio.com/listener for privacy information.

BioScience Talks
Learning What Our Ancestors Ate with Stable Isotope Analysis of Amino Acids

BioScience Talks

Play Episode Listen Later Jun 9, 2022 23:57


Thomas Larsen and Patrick Roberts of the Max Planck Institute for the Science of Human History join us to discuss how we can learn about early hominin diets using stable isotope analysis. The abstract of their BioScience article follows. Stable isotope analysis of teeth and bones is regularly applied by archeologists and paleoanthropologists seeking to reconstruct diets, ecologies, and environments of past hominin populations. Moving beyond the now prevalent study of stable isotope ratios from bulk materials, researchers are increasingly turning to stable isotope ratios of individual amino acids to obtain more detailed and robust insights into trophic level and resource use. In the present article, we provide a guide on how to best use amino acid stable isotope ratios to determine hominin dietary behaviors and ecologies, past and present. We highlight existing uncertainties of interpretation and the methodological developments required to ensure good practice. In doing so, we hope to make this promising approach more broadly accessible to researchers at a variety of career stages and from a variety of methodological and academic backgrounds who seek to delve into new depths in the study of dietary composition.
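To make the kind of inference described above concrete, here is a small sketch (not taken from the article): compound-specific nitrogen isotope studies often estimate trophic position from the δ15N values of a "trophic" amino acid such as glutamic acid and a "source" amino acid such as phenylalanine. The constants below (β ≈ 3.4‰ and a trophic discrimination factor of ≈ 7.6‰) are commonly cited values for aquatic food webs and are used here purely as illustrative assumptions; published calibrations vary by ecosystem and by study.

```python
# Illustrative sketch (not from the article): estimating trophic position from
# compound-specific delta-15N values of glutamic acid (Glu) and phenylalanine (Phe).
# beta and tdf are commonly cited aquatic-food-web values, assumed here for illustration;
# real studies pick calibrations appropriate to the ecosystem being analyzed.

def trophic_position(d15n_glu: float, d15n_phe: float,
                     beta: float = 3.4, tdf: float = 7.6) -> float:
    """Glu/Phe trophic position estimate: TP = (d15N_Glu - d15N_Phe - beta) / TDF + 1."""
    return (d15n_glu - d15n_phe - beta) / tdf + 1.0

# Example: a consumer with Glu at +22.0 permil and Phe at +8.0 permil
print(round(trophic_position(22.0, 8.0), 2))  # ~2.39, between a primary and secondary consumer
```

The appeal of this kind of amino-acid approach, as the abstract notes, is that the source amino acid carries the baseline signal with the consumer, so trophic level can be estimated without a separate baseline sample.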

Fitness M/K
#344 The Larsenator in the ring corner

Fitness M/K

Play Episode Listen Later Feb 5, 2022 132:20


As a young man, Thomas Larsen was bitten by the martial arts bug, first karate and later Thai boxing. His talent only carried him to being good, not truly great, but he was hooked and followed the sport in every conceivable way, turning it into a livelihood as a writer, commentator and promoter. He has travelled the world and met virtually all the big names in Thai boxing, kickboxing and MMA. Tune in as he shares a few of his war stories.