Podcast appearances and mentions of Michael Nielsen

  • 61PODCASTS
  • 102EPISODES
  • 1h 6mAVG DURATION
  • 1EPISODE EVERY OTHER WEEK
  • Mar 13, 2026LATEST
Michael Nielsen

POPULARITY

20192020202120222023202420252026


Best podcasts about Michael Nielsen

Latest podcast episodes about Michael Nielsen

Aufhebunga Bunga
/539/ Reading Club: Where's Our Flying Cars?

Aufhebunga Bunga

Play Episode Listen Later Mar 13, 2026 28:40


On the slowing rate of technological progress. Alex, George and contributing editor (and science writer) Leigh Phillips discuss David Graeber's 2012 essay, Of Flying Cars and the Declining Rate of Profit. This builds on two of this year's themes: state capitalism (how planning and growth – or their absence – intersect with technology) and the pre-political (how technology shapes •⁠  ⁠Were we right to expect jetpacks? And are we looking in the right place for technological advances today? •⁠  ⁠⁠Has technical progress actually slowed in the way Graeber says?  •⁠  ⁠⁠Are the explanations he gives for slowdown correct? •⁠  ⁠⁠What political tasks does this reality impose on us? •⁠  ⁠⁠What is the role of geopolitics and war in the rate of technological development? Links: Of Flying Cars and the Declining Rate of Profit, David Graeber, The Baffler Science Is Getting Less Bang for Its Buck, Patrick Collison & Michael Nielsen, The Atlantic /59/ Übermenschen of Capital Pt. 3 ft. Leigh Phillips & Michal Rozworski Progress is in the balance between innovation and implementation, Phil Bell, LSE Global Economic History: A Very Short Introduction (On Robert C. Allen) Engels's Second Theory: Technology, Warfare and the Growth of the State

Morning Prayer Sermonette from KFUO Radio
Meditation on Mark 7:1-23

Morning Prayer Sermonette from KFUO Radio

Play Episode Listen Later Mar 4, 2026 5:37


The Rev. Michael Nielsen gives today's sermonette based on Mark 7:1-23. Hear a guest pastor give a short sermonette based on the day's Daily Lectionary New Testament text during Morning and Evening Prayer. Submit comments or questions to: listener@kfuo.org

The Jacked Up Review Show Podcast
Tom Clancy's Splinter Cell Novels & Comic Books Overview

The Jacked Up Review Show Podcast

Play Episode Listen Later Feb 11, 2026 22:29


In this minisode, we're getting into more of the nuts and bolts that makes the beloved yet deadly Sam Fisher character such a fascinating badass to witness:   How do the recent SPLINTER CELL novels compared to the earlier yet surprisingly bland entries?   And what do the comic novels bring to the table that other tie-ins don't necessarily do?   Grab your night vision goggles and get stealthy with us!       MUSIC USED: "Abandoned Reservoir" by Kaveh Cohen and Michael Nielsen, "Jerusalem- Discovered!" by Lalo Schifrin, "Burma Fight" by Michael Richard Plowman, "Jail Fight" by Behavior.

The Jacked Up Review Show Podcast
Splinter Cell Web Series Review

The Jacked Up Review Show Podcast

Play Episode Listen Later Feb 10, 2026 21:23


In this overview, I continue the weekly tribute to the SPLINTER CELL videogame franchise: I dive into the IGN Fan-Made Web Series EXTINCTION & the recent Netflix Anime Show DEATHWATCH.   Would this fan-effort entertain non-fans who just want to see a cool indie Action movie done by "do-it-yourself" folks?   How does the Netflix cartoon handle its vision with the assistance of the JOHN WICK creator?   And more pros and cons to list for these two very suspenseful visions!       MUSIC USED: "Bunker" and "Airfield" by Kaveh Cohen and Michael Nielsen.   "Common Fight" by Michael Richard Plowman   "Jail Timed Cue" and "JBA HQ Ambient Theme 3" by Behavior.   "Chaise Lounge Long" by Apple Free-To-Use Music.

Law Abiding Biker | Street Biker Motorcycle Podcast
LAB-420-2026 Hoka Hey Motorcycle Challenge with Patron Michael Nielsen

Law Abiding Biker | Street Biker Motorcycle Podcast

Play Episode Listen Later Jan 31, 2026 110:55


In this episode, we're joined by Patron Member Michael Nielsen.  Michael is a serious rider who lays down a ton of miles every year.  This year Michael is taking on the 2026 Hoka Hey Motorcycle Challenge.  The Hoka Hey Motorcycle Challenge is a long-distance endurance ride designed to test a rider's planning, focus, and mental toughness more than outright speed. Rather than being a traditional race, the challenge centers on navigating a defined route while completing required checkpoints, documentation, and time-based objectives. Riders must balance efficient routing, smart time management, and personal stamina to successfully complete the challenge within the rules. SUPPORT US AND SHOP IN THE OFFICIAL LAW ABIDING BIKER STORE What sets the Hoka Hey Motorcycle Challenge apart is its emphasis on rider discipline and self-reliance. Participants are responsible for following the prescribed guidelines and safely managing fatigue.  The event rewards preparation and consistency—things like knowing your bike, planning fuel stops, and adapting to changing weather or traffic conditions can make or break a successful run. CHECK OUT OUR HUNDREDS OF FREE HELPFUL VIDEOS ON OUR YOUTUBE CHANNEL AND SUBSCRIBE! At its core, the Hoka Hey Motorcycle Challenge celebrates the spirit of endurance motorcycling. It attracts riders who enjoy pushing themselves, not just their machines, while experiencing one of the most iconic stretches of road in the U.S. Whether taken on as a personal achievement or a competitive benchmark against other riders' times and strategies, the challenge has earned a reputation as a true test of commitment, skill, and mindset on two wheels. The Hoka Hey organizers encourage riders to fundraise for charities.  Michael is fundraising for Mile Monsters who support young boys battling Duchenne Muscular Dystrophy (DMD).   Here are opportunities to support Michael and his charity: Hoka Hey 2026 Fundraiser PayPal Venmo CashApp Connect with Michael Facebook Instagram NEW FREE VIDEO RELEASED: The Best Motorcycle Road Isn't the Tail of the Dragon – It's This One! Step-by-Step: Adjust Your 2024 Harley's Rear Suspension Like a Pro Sponsor-Ciro 3D CLICK HERE! Innovative products for Harley-Davidson & Goldwing Affordable chrome, lighting, and comfort products Ciro 3D has a passion for design and innovation Sponsor-Butt Buffer CLICK HERE Want to ride longer? Tired of a sore and achy ass? Then fix it with a high-quality Butt Buffer seat cushion? New Patrons: Les Brooke of Jackson, California Martin Mitchell of Cupar, United Kingdom Lillo Rubino of Islip Terrace, New York If you appreciate the content we put out and want to make sure it keeps on coming your way then become a Patron too! There are benefits and there is no risk. Thanks to the following bikers for supporting us via a flat donation: Marco Zan Es Zuidweg of Rijpwetering, Netherlands Joseph Malecki Daniel Pierce HELP SUPPORT US! JOIN THE BIKER REVOLUTION! #BikerRevolution #LawAbidingBiker #Bikaholics #RyanUrlacher

tired innovative tail riders motorcycle michael nielsen duchenne muscular dystrophy dmd hoka hey
Drive Radio
Connected but Exposed: What Every Homeowner Needs to Know. (1-23-26)

Drive Radio

Play Episode Listen Later Jan 24, 2026 57:37


Cybersecurity, Smart Homes, and the Hidden Risks You Don't See Coming Guest host Pastor Bill Anderson fills in on https://Ready-Radio.com with co-host Dennis Brewster and guest Michael Nielsen of https://DataVoiceOptions.com to explore the real-world challenges of digital security. In today's world of smart homes, connected cameras, mobile devices, and always-on Wi-Fi, are you protected—or quietly exposed? This episode explains how cybercriminals operate. They use AI phone scams, giftcard schemes, Wi-Fi hijacking, and hidden malware that can remain undetected for months. Thinking 'I'm too small to be a target' puts you at risk. Could your phone bring malware home? Is your guest Wi-Fi, camera system, or smart lock an entry point for attackers? Michael Nielsen explains practical strategies for everyday people—how network gateways work, why default modem setups fall short, and how separating devices into virtual networks can contain threats before they spread, giving listeners practical tools to protect their own networks. Dennis shares real-world examples of cases where cameras didn't prevent crime—but did make prosecution possible, showing how evidence can aid justice. And throughout the hour, the focus stays on realistic, affordable steps listeners can take now, empowering them to act without fear-driven hype. Wondering how vulnerable you are? Tune in for simple, effective tips to reduce your risk.

Thy Strong Word from KFUO Radio
Genesis 46:28-47:12: Shepherds in Goshen

Thy Strong Word from KFUO Radio

Play Episode Listen Later Jun 26, 2025 56:29


Judah goes ahead to prepare the way, and Joseph meets his father in Goshen. The reunion is deeply emotional—Joseph weeps on his father's neck "a good while," and Jacob declares he can now die in peace. Joseph wisely prepares his brothers for their audience with Pharaoh, instructing them to identify as shepherds, knowing this will secure them land in Goshen, separated from Egyptian society. When Pharaoh meets them, he grants them the best of the land and even offers employment for the capable among them. At 130 years old, Jacob blesses Pharaoh—a beautiful picture of God's promise that through Abraham's seed, all nations would be blessed.  The Rev. Dr. Michael Nielsen, pastor of Salem Lutheran Church in Barron, WI, joins the Rev. Dr. Phil Booe to study Genesis 46:28-47:12.  To learn more about Salem, visit stjohnsnp.org. Genesis isn't just the start of the Bible; it's the foundation of everything. Creation, sin, judgment, grace, covenant, and promise all take root in this remarkable book. The stories are ancient, but their truths are eternal. In this new series from Thy Strong Word, Pastor Phil Booe and his guests walk verse by verse through Genesis, exploring how God reveals Himself as Creator, Judge, and Redeemer. From the grandeur of the cosmos to the struggles of ordinary families, Genesis introduces us to a God who speaks, acts, and keeps His promises. So, whether you've read it a hundred times or are just now cracking it open for a serious look, this series will help you see Genesis with fresh eyes—and a deeper faith. Thy Strong Word, hosted by Rev. Dr. Phil Booe, pastor of St. John Lutheran Church of Luverne, MN, reveals the light of our salvation in Christ through study of God's Word, breaking our darkness with His redeeming light. Each weekday, two pastors fix our eyes on Jesus by considering Holy Scripture, verse by verse, in order to be strengthened in the Word and be equipped to faithfully serve in our daily vocations. Submit comments or questions to: thystrongword@kfuo.org.

Into the Bytecode
#52 – Michael Nielsen on being a wise optimist about science and technology

Into the Bytecode

Play Episode Listen Later Mar 27, 2025 76:51


This is my conversation with Michael Nielsen, scientist, author, and research fellow at the Astera Institute.Timestamps:- (00:00:00) intro- (00:01:06) cultivating optimism amid existential risks- (00:07:16) asymmetric leverage- (00:12:09) are "unbiased" models even feasible?- (00:18:44) AI and the scientific method- (00:23:23) unlocking AI's full power through better interfaces- (00:30:33) sponsor: Splits- (00:31:18) AIs, independent agents or intelligent tools?- (00:35:47) autonomous military and weapons- (00:42:14) finding alignment- (00:48:28) aiming for specific moral outcomes with AI?- (00:54:42) freedom/progress vs safety- (00:57:46) provable beneficiary surveillance- (01:04:16) psychological costs- (01:12:40) the ingenuity gapLinks:- Michael Nielsen: https://michaelnielsen.org/- Michael Nielsen on X: https://x.com/michael_nielsen- Michael's essay on being a wise optimist about science and technology: https://michaelnotebook.com/optimism/- Michael's Blog: https://michaelnotebook.com/- The Ingenuity Gap (Tad Homer-Dixon): https://homerdixon.com/books/the-ingenuity-gap/Thank you to our sponsor for making this podcast possible:- Splits: https://splits.orgInto the Bytecode:- Sina Habibian on X: https://twitter.com/sinahab- Sina Habibian on Farcaster: https://warpcast.com/sinahab- Into the Bytecode: https://intothebytecode.comDisclaimer: This podcast is for informational purposes only. It is not financial advice nor a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.

Fluidity
Do AI As Engineering Instead

Fluidity

Play Episode Listen Later Dec 15, 2024 15:47


Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems.   https://betterwithout.ai/AI-as-engineering   This episode has a lot of links! Here they are.   Michael Nielsen's “The role of ‘explanation' in AI”. https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI   Subbarao Kambhampati's “Changing the Nature of AI Research”. https://dl.acm.org/doi/pdf/10.1145/3546954   Chris Olah and his collaborators: “Thread: Circuits”. distill.pub/2020/circuits/ “An Overview of Early Vision in InceptionV1”. distill.pub/2020/circuits/early-vision/   Dai et al., “Knowledge Neurons in Pretrained Transformers”. https://arxiv.org/pdf/2104.08696.pdf   Meng et al.: “Locating and Editing Factual Associations in GPT.” rome.baulab.info “Mass-Editing Memory in a Transformer,” https://arxiv.org/pdf/2210.07229.pdf   François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800   Neel Nanda's “Longlist of Theories of Impact for Interpretability”, https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability   Zachary C. Lipton's “The Mythos of Model Interpretability”. https://arxiv.org/abs/1606.03490   Meng et al., “Locating and Editing Factual Associations in GPT”. https://arxiv.org/pdf/2202.05262.pdf   Belrose et al., “Eliciting Latent Predictions from Transformers with the Tuned Lens”. https://arxiv.org/abs/2303.08112   “Progress measures for grokking via mechanistic interpretability”. https://arxiv.org/abs/2301.05217   Conmy et al., “Towards Automated Circuit Discovery for Mechanistic Interpretability”. https://arxiv.org/abs/2304.14997   Elhage et al., “Softmax Linear Units,” transformer-circuits.pub/2022/solu/index.html   Filan et al., “Clusterability in Neural Networks,” https://arxiv.org/pdf/2103.03386.pdf   Cammarata et al., “Curve circuits,” distill.pub/2020/circuits/curve-circuits/   You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks   If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold   Original music by Kevin MacLeod.   This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

P1 Debat
Kød eller kikærter?

P1 Debat

Play Episode Listen Later Nov 19, 2024 70:54


Får vi fisk i stedet for fedtemøg i havet? Kommer landmænd til at betale CO2-afgifter som andre virksomheder? Skal skatteborgerne betale 43 milliarder kroner for omlægning af landbruget? Hvor skal vores mad komme fra i fremtiden? Den Grønne Trepartsaftale er landet. Bliver Danmark foregangsland? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Jeppe Bruus (S), minister for Grøn Trepart, Michael Nielsen, landmand, svineproducent, Tilsbæk, Hillerød, Louise Køster, Rabarbergården, Vejby, Nordsjælland, Thomas Poulsen, landmand, kvægbrug og planteavl, Mern, Sydsjælland, Kristoffer Hald, landmand, svineproducent, Møborg, Bækmarksbro, Vestjylland, Selma Montgomery, klimaaktivist, Baku, Aserbajdsjan og Stiig Markager, professor i havmiljø. Vært: Gitte Hansen.

co2 eller kommer skal hvor baku den gr hiller nordsj vestjylland michael nielsen sydsj louise k vejby gitte hansen
P1 Debat
Kød eller kikærter?

P1 Debat

Play Episode Listen Later Nov 19, 2024 70:54


Får vi fisk i stedet for fedtemøg i havet? Kommer landmænd til at betale CO2-afgifter som andre virksomheder? Skal skatteborgerne betale 43 milliarder kroner for omlægning af landbruget? Hvor skal vores mad komme fra i fremtiden? Den Grønne Trepartsaftale er landet. Bliver Danmark foregangsland? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Jeppe Bruus (S), minister for Grøn Trepart, Michael Nielsen, landmand, svineproducent, Tilsbæk, Hillerød, Louise Køster, Rabarbergården, Vejby, Nordsjælland, Thomas Poulsen, landmand, kvægbrug og planteavl, Mern, Sydsjælland, Kristoffer Hald, landmand, svineproducent, Møborg, Bækmarksbro, Vestjylland, Selma Montgomery, klimaaktivist, Baku, Aserbajdsjan og Stiig Markager, professor i havmiljø. Vært: Gitte Hansen.

co2 eller kommer skal hvor baku den gr hiller nordsj vestjylland michael nielsen sydsj louise k vejby gitte hansen
P1 Debat
Kød eller kikærter?

P1 Debat

Play Episode Listen Later Nov 19, 2024 70:54


Får vi fisk i stedet for fedtemøg i havet? Kommer landmænd til at betale CO2-afgifter som andre virksomheder? Skal skatteborgerne betale 43 milliarder kroner for omlægning af landbruget? Hvor skal vores mad komme fra i fremtiden? Den Grønne Trepartsaftale er landet. Bliver Danmark foregangsland? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Jeppe Bruus (S), minister for Grøn Trepart, Michael Nielsen, landmand, svineproducent, Tilsbæk, Hillerød, Louise Køster, Rabarbergården, Vejby, Nordsjælland, Thomas Poulsen, landmand, kvægbrug og planteavl, Mern, Sydsjælland, Kristoffer Hald, landmand, svineproducent, Møborg, Bækmarksbro, Vestjylland, Selma Montgomery, klimaaktivist, Baku, Aserbajdsjan og Stiig Markager, professor i havmiljø. Vært: Gitte Hansen.

co2 eller kommer skal hvor baku den gr hiller nordsj vestjylland michael nielsen sydsj louise k vejby gitte hansen
P1 Debat
Kan grise tale?

P1 Debat

Play Episode Listen Later Sep 6, 2024 71:38


Kan dyrene fortælle, om de er bange, stressede eller tilfredse? TV-dokumentaren "Hvis grise kunne tale" - sætter fokus på svineavl og dyrevelfærd. I P1Debat spørger vi: Skal grise ud af stalden, ud på marken for at blive glade og mindre stressede? Og hvad med andre dyr? Hvad ville hesten på ridebanen sige, hvis den fik ordet? Og hvad med hunden, der bor i en lejlighed på 4. sal, og er alene hjemme 8 timer om dagen, hvad ville den vælge? Eller dyrene i Zoologisk have, bliver de glade af at bo i et bur, hvor mennesker kommer og kigger på dem hver dag? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Miki Mistrati, journalist bag TV-dokumentaren "Hvis grise kunne tale", Louise Køster, Rabarbergården/ tidl formand økologisk landbrug, Mickey Gjerris, bio-etiker og forfatter, Mads Frost Bertelsen, direktør Københavns Zoologiske Have, Michael Nielsen, landmand, svineavler Hillerød, Jacob Jensen (V) minister for Fødevarer, Landbrug og Fiskeri og Sofie Graarup Jensen, biologilærer Thy-Mors HF & VUC. Vært: Gitte Hansen.

tv tale eller skal hvis hvad hiller grise landbrug vuc michael nielsen fiskeri zoologisk mickey gjerris louise k gitte hansen
P1 Debat
Kan grise tale?

P1 Debat

Play Episode Listen Later Sep 6, 2024 71:38


Kan dyrene fortælle, om de er bange, stressede eller tilfredse? TV-dokumentaren "Hvis grise kunne tale" - sætter fokus på svineavl og dyrevelfærd. I P1Debat spørger vi: Skal grise ud af stalden, ud på marken for at blive glade og mindre stressede? Og hvad med andre dyr? Hvad ville hesten på ridebanen sige, hvis den fik ordet? Og hvad med hunden, der bor i en lejlighed på 4. sal, og er alene hjemme 8 timer om dagen, hvad ville den vælge? Eller dyrene i Zoologisk have, bliver de glade af at bo i et bur, hvor mennesker kommer og kigger på dem hver dag? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Miki Mistrati, journalist bag TV-dokumentaren "Hvis grise kunne tale", Louise Køster, Rabarbergården/ tidl formand økologisk landbrug, Mickey Gjerris, bio-etiker og forfatter, Mads Frost Bertelsen, direktør Københavns Zoologiske Have, Michael Nielsen, landmand, svineavler Hillerød, Jacob Jensen (V) minister for Fødevarer, Landbrug og Fiskeri og Sofie Graarup Jensen, biologilærer Thy-Mors HF & VUC. Vært: Gitte Hansen.

tv tale eller skal hvis hvad hiller grise landbrug vuc michael nielsen fiskeri zoologisk mickey gjerris louise k gitte hansen
P1 Debat
Kan grise tale?

P1 Debat

Play Episode Listen Later Sep 6, 2024 71:38


Kan dyrene fortælle, om de er bange, stressede eller tilfredse? TV-dokumentaren "Hvis grise kunne tale" - sætter fokus på svineavl og dyrevelfærd. I P1Debat spørger vi: Skal grise ud af stalden, ud på marken for at blive glade og mindre stressede? Og hvad med andre dyr? Hvad ville hesten på ridebanen sige, hvis den fik ordet? Og hvad med hunden, der bor i en lejlighed på 4. sal, og er alene hjemme 8 timer om dagen, hvad ville den vælge? Eller dyrene i Zoologisk have, bliver de glade af at bo i et bur, hvor mennesker kommer og kigger på dem hver dag? Du kan blande dig i debatten ved at ringe ind fra 12:15-13:30 på 7021 1919 eller send en sms til 1212. Medvirkende: Miki Mistrati, journalist bag TV-dokumentaren "Hvis grise kunne tale", Louise Køster, Rabarbergården/ tidl formand økologisk landbrug, Mickey Gjerris, bio-etiker og forfatter, Mads Frost Bertelsen, direktør Københavns Zoologiske Have, Michael Nielsen, landmand, svineavler Hillerød, Jacob Jensen (V) minister for Fødevarer, Landbrug og Fiskeri og Sofie Graarup Jensen, biologilærer Thy-Mors HF & VUC. Vært: Gitte Hansen.

tv tale eller skal hvis hvad hiller grise landbrug vuc michael nielsen fiskeri zoologisk mickey gjerris louise k gitte hansen
The Nonlinear Library
LW - AI #66: Oh to Be Less Online by Zvi

The Nonlinear Library

Play Episode Listen Later Jun 1, 2024 87:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #66: Oh to Be Less Online, published by Zvi on June 1, 2024 on LessWrong. Tomorrow I will fly out to San Francisco, to spend Friday through Monday at the LessOnline conference at Lighthaven in Berkeley. If you are there, by all means say hello. If you are in the Bay generally and want to otherwise meet, especially on Monday, let me know that too and I will see if I have time to make that happen. Even without that hiccup, it continues to be a game of playing catch-up. Progress is being made, but we are definitely not there yet (and everything not AI is being completely ignored for now). Last week I pointed out seven things I was unable to cover, along with a few miscellaneous papers and reports. Out of those seven, I managed to ship on three of them: Ongoing issues at OpenAI, The Schumer Report and Anthropic's interpretability paper. However, OpenAI developments continue. Thanks largely to Helen Toner's podcast, some form of that is going back into the queue. Some other developments, including new media deals and their new safety board, are being covered normally. The post on DeepMind's new scaling policy should be up tomorrow. I also wrote a full post on a fourth, Reports of our Death, but have decided to shelve that post and post a short summary here instead. That means the current 'not yet covered queue' is as follows: 1. DeepMind's new scaling policy. 1. Should be out tomorrow before I leave, or worst case next week. 2. The AI Summit in Seoul. 3. Further retrospective on OpenAI including Helen Toner's podcast. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. You heard of them first. 4. Not Okay, Google. A tiny little problem with the AI Overviews. 5. OK Google, Don't Panic. Swing for the fences. Race for your life. 6. Not Okay, Meta. Your application to opt out of AI data is rejected. What? 7. Not Okay Taking Our Jobs. The question is, with or without replacement? 8. They Took Our Jobs Anyway. It's coming. 9. A New Leaderboard Appears. Scale.ai offers new capability evaluations. 10. Copyright Confrontation. Which OpenAI lawsuit was that again? 11. Deepfaketown and Botpocalypse Soon. Meta fails to make an ordinary effort. 12. Get Involved. Dwarkesh Patel is hiring. 13. Introducing. OpenAI makes media deals with The Atlantic and… Vox? Surprise. 14. In Other AI News. Jan Leike joins Anthropic, Altman signs giving pledge. 15. GPT-5 Alive. They are training it now. A security committee is assembling. 16. Quiet Speculations. Expectations of changes, great and small. 17. Open Versus Closed. Two opposing things cannot dominate the same space. 18. Your Kind of People. Verbal versus math versus otherwise in the AI age. 19. The Quest for Sane Regulation. Lina Khan on the warpath, Yang on the tax path. 20. Lawfare and Liability. How much work can tort law do for us? 21. SB 1047 Unconstitutional, Claims Paper. I believe that the paper is wrong. 22. The Week in Audio. Jeremie & Edouard Harris explain x-risk on Joe Rogan. 23. Rhetorical Innovation. Not everyone believes in GI. I typed what I typed. 24. Abridged Reports of Our Death. A frustrating interaction, virtue of silence. 25. Aligning a Smarter Than Human Intelligence is Difficult. You have to try. 26. People Are Worried About AI Killing Everyone. Yes, it is partly about money. 27. Other People Are Not As Worried About AI Killing Everyone. Assumptions. 28. The Lighter Side. Choose your fighter. Language Models Offer Mundane Utility Which model is the best right now? Michael Nielsen is gradually moving back to Claude Opus, and so am I. GPT-4o is fast and has some nice extra features, so when I figure it is 'smart enough' I will use it, but when I care most about quality and can wait a bit I increasingly go to Opus. Gemini I'm reserving for a few niche purposes, when I nee...

The Nonlinear Library: LessWrong
LW - AI #66: Oh to Be Less Online by Zvi

The Nonlinear Library: LessWrong

Play Episode Listen Later Jun 1, 2024 87:31


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #66: Oh to Be Less Online, published by Zvi on June 1, 2024 on LessWrong. Tomorrow I will fly out to San Francisco, to spend Friday through Monday at the LessOnline conference at Lighthaven in Berkeley. If you are there, by all means say hello. If you are in the Bay generally and want to otherwise meet, especially on Monday, let me know that too and I will see if I have time to make that happen. Even without that hiccup, it continues to be a game of playing catch-up. Progress is being made, but we are definitely not there yet (and everything not AI is being completely ignored for now). Last week I pointed out seven things I was unable to cover, along with a few miscellaneous papers and reports. Out of those seven, I managed to ship on three of them: Ongoing issues at OpenAI, The Schumer Report and Anthropic's interpretability paper. However, OpenAI developments continue. Thanks largely to Helen Toner's podcast, some form of that is going back into the queue. Some other developments, including new media deals and their new safety board, are being covered normally. The post on DeepMind's new scaling policy should be up tomorrow. I also wrote a full post on a fourth, Reports of our Death, but have decided to shelve that post and post a short summary here instead. That means the current 'not yet covered queue' is as follows: 1. DeepMind's new scaling policy. 1. Should be out tomorrow before I leave, or worst case next week. 2. The AI Summit in Seoul. 3. Further retrospective on OpenAI including Helen Toner's podcast. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. You heard of them first. 4. Not Okay, Google. A tiny little problem with the AI Overviews. 5. OK Google, Don't Panic. Swing for the fences. Race for your life. 6. Not Okay, Meta. Your application to opt out of AI data is rejected. What? 7. Not Okay Taking Our Jobs. The question is, with or without replacement? 8. They Took Our Jobs Anyway. It's coming. 9. A New Leaderboard Appears. Scale.ai offers new capability evaluations. 10. Copyright Confrontation. Which OpenAI lawsuit was that again? 11. Deepfaketown and Botpocalypse Soon. Meta fails to make an ordinary effort. 12. Get Involved. Dwarkesh Patel is hiring. 13. Introducing. OpenAI makes media deals with The Atlantic and… Vox? Surprise. 14. In Other AI News. Jan Leike joins Anthropic, Altman signs giving pledge. 15. GPT-5 Alive. They are training it now. A security committee is assembling. 16. Quiet Speculations. Expectations of changes, great and small. 17. Open Versus Closed. Two opposing things cannot dominate the same space. 18. Your Kind of People. Verbal versus math versus otherwise in the AI age. 19. The Quest for Sane Regulation. Lina Khan on the warpath, Yang on the tax path. 20. Lawfare and Liability. How much work can tort law do for us? 21. SB 1047 Unconstitutional, Claims Paper. I believe that the paper is wrong. 22. The Week in Audio. Jeremie & Edouard Harris explain x-risk on Joe Rogan. 23. Rhetorical Innovation. Not everyone believes in GI. I typed what I typed. 24. Abridged Reports of Our Death. A frustrating interaction, virtue of silence. 25. Aligning a Smarter Than Human Intelligence is Difficult. You have to try. 26. People Are Worried About AI Killing Everyone. Yes, it is partly about money. 27. Other People Are Not As Worried About AI Killing Everyone. Assumptions. 28. The Lighter Side. Choose your fighter. Language Models Offer Mundane Utility Which model is the best right now? Michael Nielsen is gradually moving back to Claude Opus, and so am I. GPT-4o is fast and has some nice extra features, so when I figure it is 'smart enough' I will use it, but when I care most about quality and can wait a bit I increasingly go to Opus. Gemini I'm reserving for a few niche purposes, when I nee...

Conversations with Tyler
Michael Nielsen on Collaboration, Quantum Computing, and Civilization's Fragility

Conversations with Tyler

Play Episode Listen Later May 29, 2024 62:10


Take our Listener Survey Michael Nielsen is scientist who helped pioneer quantum computing and the modern open science movement. He's worked at Y Combinator, co-authored on scientific progress with Patrick Collison, and is a prolific writer, reader, commentator, and mentor.  He joined Tyler to discuss why the universe is so beautiful to human eyes (but not ears), how to find good collaborators, the influence of Simone Weil, where Olaf Stapledon's understand of the social word went wrong, potential applications of quantum computing, the (rising) status of linear algebra, what makes for physicists who age well, finding young mentors, why some scientific fields have pre-print platforms and others don't, how so many crummy journals survive, the threat of cheap nukes, the many unknowns of Mars colonization, techniques for paying closer attention, what you learn when visiting the USS Midway, why he changed his mind about Emergent Ventures, why he didn't join OpenAI in 2015, what he'll learn next, and more.  Read a full transcript enhanced with helpful links, or watch the full video. Recorded March 24th, 2024. Other ways to connect Follow us on X and Instagram Follow Tyler on X Follow Michael on X Sign up for our newsletter Join our Discord Email us: cowenconvos@mercatus.gmu.edu Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

The Nonlinear Library
LW - Building intuition with spaced repetition systems by Jacob G-W

The Nonlinear Library

Play Episode Listen Later May 14, 2024 6:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building intuition with spaced repetition systems, published by Jacob G-W on May 14, 2024 on LessWrong. Do you ever go to a lecture, follow it thinking it makes total sense, then look back at your notes later and realize it makes no sense? This used to happen to me, but I've learned how to use spaced repetition to fully avoid this if I want. I'm going to try to convey this method in this post. Much of my understanding of how to create flashcards comes from "Using spaced repetition systems to see through a piece of mathematics" by Michael Nielsen and "How to write good prompts: using spaced repetition to create understanding" by Andy Matuschak, but I think my method falls in between both, in terms of abstraction. Finally, I want to credit Quantum Country for being an amazing example of flashcards created to develop intuition in users. My method is more abstract than Michael Nielsen's approach, since it does not only apply to mathematics, but to any subject. Yet it is less abstract than Andy Matuschak's approach because I specifically use it for 'academic subjects' that require deep intuition of (causal or other) relationships between concepts. Many of Matuschak's principles in his essay apply here (I want to make sure to give him credit), but I'm looking at it through the 'how can we develop deep intuition in an academic subject in the fastest possible time?' lens. Minimize Inferential Distance on Flashcards A method that I like to repeat to myself while making flashcards that I haven't seen in other places is that each flashcard should only have one inferential step on it. I'm using 'inferential step' here to mean a step such as remembering a fact, making a logical deduction, visualizing something, or anything that requires thinking. It's necessary that a flashcard only have a single inferential step on it. Anki trains the mind to do these steps. If you learn all the inferential steps, you will be able to fully re-create any mathematical deduction, historical story, or scientific argument. Knowing (and continually remembering) the full story with spaced repetition builds intuition. I'm going to illustrate this point by sharing some flashcards that I made while trying to understand how Transformers (GPT-2) worked. I made these flashcards while implementing a transformer based on Neel Nanda's tutorials and these two blog posts. Understanding Attention The first step in my method is to learn or read enough so that you have part of the whole loaded into your head. For me, this looked like picking the attention step of a transformer and then reading about it in the two blog posts and watching the section of the video on it. It's really important to learn about something from multiple perspectives. Even when I'm making flashcards from a lecture, I have my web browser open and I'm looking up things that I thought were confusing while making flashcards. My next step is to understand that intuition is fake! Really good resources make you feel like you understand something, but to actually understand something, you need to engage with it. This engagement can take many forms. For technical topics, it usually looks like solving problems or coding, and this is good! I did this for transformers! But I also wanted to not forget it long term, so I used spaced repetition to cement my intuition. Enough talk, here are some flashcards about attention in a transformer. For each flashcard, I'll explain why I made it. Feel free to scroll through. Examples I start with a distillation of the key points of the article. I wanted to make sure that I knew what the attention operation was actually doing, as the blog posts emphasized this. When building intuition, I find it helpful to know "the shape" or constraints about something so that I can build a more accurate mental model. In this case, th...

The Nonlinear Library: LessWrong
LW - Building intuition with spaced repetition systems by Jacob G-W

The Nonlinear Library: LessWrong

Play Episode Listen Later May 14, 2024 6:25


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building intuition with spaced repetition systems, published by Jacob G-W on May 14, 2024 on LessWrong. Do you ever go to a lecture, follow it thinking it makes total sense, then look back at your notes later and realize it makes no sense? This used to happen to me, but I've learned how to use spaced repetition to fully avoid this if I want. I'm going to try to convey this method in this post. Much of my understanding of how to create flashcards comes from "Using spaced repetition systems to see through a piece of mathematics" by Michael Nielsen and "How to write good prompts: using spaced repetition to create understanding" by Andy Matuschak, but I think my method falls in between both, in terms of abstraction. Finally, I want to credit Quantum Country for being an amazing example of flashcards created to develop intuition in users. My method is more abstract than Michael Nielsen's approach, since it does not only apply to mathematics, but to any subject. Yet it is less abstract than Andy Matuschak's approach because I specifically use it for 'academic subjects' that require deep intuition of (causal or other) relationships between concepts. Many of Matuschak's principles in his essay apply here (I want to make sure to give him credit), but I'm looking at it through the 'how can we develop deep intuition in an academic subject in the fastest possible time?' lens. Minimize Inferential Distance on Flashcards A method that I like to repeat to myself while making flashcards that I haven't seen in other places is that each flashcard should only have one inferential step on it. I'm using 'inferential step' here to mean a step such as remembering a fact, making a logical deduction, visualizing something, or anything that requires thinking. It's necessary that a flashcard only have a single inferential step on it. Anki trains the mind to do these steps. If you learn all the inferential steps, you will be able to fully re-create any mathematical deduction, historical story, or scientific argument. Knowing (and continually remembering) the full story with spaced repetition builds intuition. I'm going to illustrate this point by sharing some flashcards that I made while trying to understand how Transformers (GPT-2) worked. I made these flashcards while implementing a transformer based on Neel Nanda's tutorials and these two blog posts. Understanding Attention The first step in my method is to learn or read enough so that you have part of the whole loaded into your head. For me, this looked like picking the attention step of a transformer and then reading about it in the two blog posts and watching the section of the video on it. It's really important to learn about something from multiple perspectives. Even when I'm making flashcards from a lecture, I have my web browser open and I'm looking up things that I thought were confusing while making flashcards. My next step is to understand that intuition is fake! Really good resources make you feel like you understand something, but to actually understand something, you need to engage with it. This engagement can take many forms. For technical topics, it usually looks like solving problems or coding, and this is good! I did this for transformers! But I also wanted to not forget it long term, so I used spaced repetition to cement my intuition. Enough talk, here are some flashcards about attention in a transformer. For each flashcard, I'll explain why I made it. Feel free to scroll through. Examples I start with a distillation of the key points of the article. I wanted to make sure that I knew what the attention operation was actually doing, as the blog posts emphasized this. When building intuition, I find it helpful to know "the shape" or constraints about something so that I can build a more accurate mental model. In this case, th...

The AIAS Game Maker's Notebook
Composer Duo Ninja Tracks talk Forza Motorsport and Music Publishing

The AIAS Game Maker's Notebook

Play Episode Listen Later Feb 5, 2024 90:38


Austin Wintory chats with the composing team collectively known as Ninja Tracks, Kaveh Cohen and Michael Nielsen. Together they discussed how they first met and what drove them to create a composing partnership; how they've worked together across various video games, TV shows, and films; and why the decided to add music publishing to their repertoire. If you enjoyed this episode, please consider leaving us a rating and review. The Game Maker's Notebook is sponsored by Xsolla. To learn more, go to xsolla.pro/AOIAAS.  

The Nonlinear Library
LW - Spaced repetition for teaching two-year olds how to read (Interview) by Chipmonk

The Nonlinear Library

Play Episode Listen Later Nov 27, 2023 8:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spaced repetition for teaching two-year olds how to read (Interview), published by Chipmonk on November 27, 2023 on LessWrong. Update: this post now has another video. This father has been using spaced repetition (Anki) to teach his children how to read several years earlier than average. Michael Nielsen and Gwern[1] tweeted about the interesting case of a reddit user, u/caffeine314 (henceforth dubbed "CoffeePie"), who has been using spaced repetition with his daughter from a very young age. CoffeePie started using Anki with his daughter when she turned 2, and he continued using Anki with his son starting when he was 1 year 9 months. Here's his daughter's progress as recounted in January 2020: My daughter is now about to turn 5 in a few days… She's still going strong -- she uses Anki every single day for English, Hebrew, and Spanish. She's very confident about reading, and moreover, she reads with ... "context". Many kids her age read mechanically, but she reads like a real storyteller, and that comes from her confidence. At the beginning of the school year her teachers said she definitely has the reading ability of fifth grade, and if we're just going by the ability to read and not focus on comprehension of abstract ideas, her reading level may rival an 8th grader. (From Update on my daughter and Anki) For reference, fifth graders are usually 10 or 11yo in the US, and 8th graders are usually 13 or 14yo, so this puts her ~5-9 years ahead of the average child. You can see a video of his daughter reading at 2 years, 2 months later in this post. CoffeePie has made several posts about their experience but I still had questions so I reached out to interview him back in January. Interview Responses have been edited for clarity. What did you learn in going from using Anki on your daughter to your son? How has it gone with your son? It's a hard question, because I got so much right. We were so wildly successful that I "cloned" just about every aspect with my son. A couple of things I can think of: With my daughter, I held back on lowercase letters for a long time because I thought it would confuse her, but when I started to introduce lowercase to her, to my extreme shock, she already knew them, down cold! I think what happened is that she learned them just by looking at books, TV, magazines, storefront signs, menus, etc. So when we started with my son, I started doing lower case letters the very day after we finished capital letters. Another difference is that we did numbers the very next day after lowercase letters. I really, really thought I was pushing too hard; I had no desire to be a "tiger dad", but he took it with extreme grace. I was ready to stop at any moment, but he was fine. Another difference is that our expectations of what the kids were getting out of it had changed, as well. At first, I just really wanted my daughter to get a jump start on reading, but stupid me, I didn't realize there were unintended consequences. A four year old with a 3rd grade reading ability learns about a WHOLE lot more -- it opened up politics for her. She would read our junk mail, and learn who our council member was, who our representative is, the mayor, current events, history, etc. I know it's stupid of me to say, but I underestimated the effect that reading early would have on her breadth of learning. One last thing is math. I mentioned that we started numbers early with my son. But we also started arithmetic. He wasn't reading by 3 the way Hannah was, but he knew all his multiplication tables up to 12 by 12. This year we tackled prime factorization, Fibonacci sequences, decimal and place values, mixed, proper, and improper fractions, light algebra, etc. I was much more aggressive with the math, and again, he handled it with grace. I was ready to stop at any moment. Do you still u...

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey and 3) submit projects for the new AI Engineer Foundation.See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As “stealth” foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group given they have no publicly released models to try out. However, ever since their $20m Series A last year their goal has been to “develop generally capable AI agents with human-like intelligence in order to solve problems in the real world”.From RL to Reasoning LLMsAlong with their Series A, they announced Avalon, “A Benchmark for RL Generalization Using Procedurally Generated Worlds”. Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft to enable fast RL benchmarking and a clear reward with adjustable game difficulty.After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors, climbing, but couldn't go to higher level tasks. A pure RL world also doesn't include a language explanation of the agent reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the “models for reasoning” path:“The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing.”Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans have developed as the scientific method:“We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:* What was the original claim that was made? * What evidence is there for this claim? * Does the evidence support the claim? * Is the claim correct? This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them.“The Full Stack Model LabOne year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a “full stack” approach:* Models. Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.* Tools and Agents. Building internal productivity tools from coding agents for fixing type checking and linting errors, to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces — IDEs for users to program computers in natural language.* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.Kanjun believes we are still in the “bare metal phase” of agent development, and they want to take a holistic approach to building the “operating system for agents”. We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!Timestamps* [00:00:00] Introductions* [00:06:07] The origin story of Imbue* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress* [00:21:43] Lessons learned from developing the Avalon research environment* [00:23:31] The limitations of pure reinforcement learning for general intelligence* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents* [00:37:40] The future potential of an agent-to-agent protocol* [00:39:29] Leveraging approaches like critiquing between models and chain of thought* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive* [01:00:22] Lightning RoundShow Notes* Imbue* Avalon* CARBS (hyperparameter optimizer)* Series B announcement* Kanjun/Imbue's Podcast* MIT Media Lab* Research mentioned:* Momentum Contrast* SimClr* Chelsea Finn - SayCan* Agent Protocol - part of the AI Engineer Foundation* Xerox PARC* Michael Nielsen* Jason Benn* Outset Capital* Scenius - Kevin Kelly* South Park Commons* The Archive* Thursday Nights in AITranscriptAlessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive in into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side. So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you're eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. [00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. [00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So worked in language, SimClear came out, I think MoCo had come out, Momentum Contrast had come out earlier in 2019, SimClear came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text and suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became General Intelligence, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right. The structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. That are things that we lump under the bucket of reasoning and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. Carbs, our hyperparameter optimizer came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics. He's a plasma physicist to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them shaped blob inside imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improve the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agent's project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents then we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually big, maybe 50% plus of the work is figuring out even if you do have models that reason well, like the models are still stochastic. The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? So one way I'll put this is people estimate that Llama2 maybe took about three to $4 million of compute, but probably 20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is close to actually good, as good as human labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft and then OpenAI is like the gym thing and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure that out that you needed to just like build your own thing? Was it kind of like your engineering team was like, Hey, this is too slow. Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? Like RL agents at that time and environments at that time. What we heard from other RL researchers was the like biggest thing keeping holding the field back is lack of benchmarks that let us explore things like planning and curiosity and things like that and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out in a situation where, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? That's a lot of what we had seen is that like these very handcrafted rewards. And so Avalon has like a single reward it's across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non RL specialists, what is a curriculum in your terminology? [00:21:46]Kanjun: So a curriculum in this particular case is basically the environment Avalon lets us generate simpler environments and harder environments for a given tasks. What's interesting is that the simpler environments, what you'd expect is the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. And that's actually an interesting general intuition to have about training these things as like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford was I think working on SACAN at the time where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. [00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it? Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these, like spend more computer inference time, context hack, retrieve augmented generation, et cetera, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you kind of like trying to get more signal out of the channel. We don't like to think about it that way. It's what the default approach is, is like trying to get more signal out of this noising channel. But the issue with agents is as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. Like it's not as bad as self-driving, like in self-driving, you know, you're like hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress and one thing we learned by using these things internally is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far and we need to make the models better. We also need to make the kind of like interface to the user better. And also a lot of the like critiquing. I hope what we can do is help people who are building agents actually like be able to deploy them. I think, you know, that's the gap that we see a lot of today is everyone who's trying to build agents to get to the point where it's robust enough to be deployable. It just, it's like an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Embu is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around like debugging, around abstractions or techniques after the model generation happens. Like after the language model generates the text and like interfaces for the user and the underlying model itself, like models talking to each other, maybe some set of those things kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can like start making an agent, deploy it, it's reliable, like very quickly. And there's a similar analog to software engineering, like in the early days, in the seventies and the sixties, like to program a computer, like you have to go all the way down to the registers and write things and eventually we had assembly. That was like an improvement. But then we wrote programming languages with these higher levels of abstraction and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here where we're like in the like bare metal phase of agent building. And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz and we were like, there's not a lot of reasoning data out there. And can the models really understand? And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, like do you have to do a lot of like manual work or like is there a way to prompt models to extract the reasoning from actions that they [00:29:03]Swyx: see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask like, huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we didn't always have them. And because they're invented, like we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box or like it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is like a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new, like scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun and people are like, man, this model is crap. It's like, what are you talking about? Like the sun revolves around the earth. It's like, how do you see the future? Like if the models are actually good enough, but we don't believe them, it's like, how do we make the two live together? So you're like, you use Inbu as a scientist to do a lot of your research and Inbu tells you, hey, I think this is like a serious path you should go down. And you're like, no, that sounds impossible. Like how is that trust going to be built? And like, what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is as a person, like I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is like, okay, how do you do that? And that's kind of some of our debugging tools, they're not necessarily just for debugging. They're also for like interfacing with and interacting with the model. So like if I go back in this reasoning trace and like change a bunch of things, what's going to happen? Like, what does it conclude instead? So that kind of helps me understand like, what are its assumptions? And, you know, we think of these things as tools. And so it's really about like, as a user, how do I use this tool effectively? I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as like a tool within InView just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities. I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. What do you think about the different approaches people have, kind of like text first, browser first, like multi-on? What do you think the best interface will be? Or like, what is your, you know, thinking today? [00:33:59]Kanjun: In a lot of ways, like chat as an interface, I think Linus, Linus Lee, you had on this. I really like how he put it. Chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines because that's what we understood these like objects to be. Chat, like texting someone is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, like chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally have, think about what are the right representations for that? Like architecturally, like what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like, you know, this stochastic generation of text is like a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is actually by building better abstractions and better tooling, we can actually make these things non-leaky. And now you can build like whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned, this is kind of like the Xerox Spark moment for AI. And we had a lot of stuff come out of Parc, like the, what you see is what you got editors and like MVC and all this stuff. But yeah, but then we didn't have the iPhone at Parc. We didn't have all these like higher things. What do you think it's reasonable to expect in like this era of AI, you know, call it like five years or so? Like what are like the things we'll build today and what are things that maybe we'll see in kind of like the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. Like what we're seeing right now is basically like a continuous wave. Let me zoom a little bit earlier. So people like the Xerox Parc analogy I give, but I think there are many different analogies. Like one is the like analog to digital computer is kind of an example, like another analogy to where we are today. The analog computer Vannevar Bush built in the 1930s, I think, and it's like a system of pulleys and it can only calculate one function. Like it can calculate like an integral. And that was so magical at the time because you actually did need to calculate this integral bunch, but it had a bunch of issues like in analog errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer, like Turing's decidability, Shannon. I think the like whole like relay circuits can be thought of as can be mapped to Boolean operators and a set of other like theoretical breakthroughs, which essentially were abstractions. They were like creating abstractions for these like very like lossy circuits. They were creating abstractions for these like very analog circuits and digital had this nice property of like being error correcting. And so when I talk about like less leaky abstractions, that's what I mean. That's what I'm kind of pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about like, how do we get to computers that as a person, I can actually use well. And the interface actually helps it unlock so much more power. So the sets of things we're working on, like the sets of abstractions and the interfaces, like hopefully that like help us unlock a lot more power in these systems. Like hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out. It's like an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now. Part of why I think it's early is because the issue with agents, it's not quite like the internet where you could like make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think that's, I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent to agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain of thoughting, but any perspectives on kind of meta-GPT, I think it's the name of the paper. I don't know if you care about at the level of individual papers coming out, but I did read that recently and TLDR, it beat GPT-4 and human eval by role-playing software agent development agency, instead of having sort of single shot or single role, you have multiple roles and how having all of them criticize each other as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction of like, okay, can I just plop in this like multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent? One issue with this kind of prompting is that it's still not very reliable. It's like, there's one lens, which is like, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt and like, how do we deal with that messiness? And so maybe you need like cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would say I would contrast your approach with OpenAI as OpenAI tends to just lean on, hey, we played StarCraft or hey, we ran it on the SAT or the, you know, the AP bio test and that did results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the Diamond X or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineer, the people who are new to the scene. But for people like yourselves, you build Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]Kanjun: I mean, Jim, I think is awesome. And I really like the Voyager paper and I think it has a lot of really interesting ideas, which is like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to like learn the things we wanted. And so it's like not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment. But if you're doing our own sandbox to really trying to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things like Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away? Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long running tasks. And the blocker to that future, one of the blockers is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly like 32 kilobytes or something to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really was like happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my fields and a lot of that data is maybe hard to fine tune or on, or maybe hard to like put into pre-training. Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points. And ideally I'd want to store those somehow, or like use those to fine tune myself as a model somehow, or like have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for like user preferences and things like that. Like what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to InView is Adept. You know, a research lab with like some amount of product situation on the horizon, but not just yet, right? Why should people work for InView over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to like build something that enables other people to build agents and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of like talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of like, why us? I think like strong focus on reasoning, which we believe is the biggest blocker, on inspectability, which we believe is really important for user experience and also for the power and capability of these systems. Building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable. And then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people that becomes something that is a platform. Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. So we build a lot of our own evals and every agent builder tells me, basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me because I would love to, like, figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]Swyx: more about it. [00:48:12]Alessio: So one is why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to like go explore things or kind of like focus on your career? How do you think about that from like an agent perspective? Where it's like, should you just stick to the task and try and solve it as in the guardrails as possible? Or like, should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and said, okay, here, like fork this and do a bunch of different things. I try a bunch of different things. Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent to like add a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money? How do you feel your options to be curious versus like goal oriented has changed as you raise more money and kind of like the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A $20 million in late 2021. And our entire philosophy at that time was, and still kind of is, is like, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. Great name, by the way. [00:50:28]Swyx: Thank you. [00:50:29]Kanjun: Is there a story behind that name? [00:50:30]Swyx: Yeah. [00:50:31]Kanjun: Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work. But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models. And for those experiment results to carry to larger ones. [00:50:56]Swyx: Which you also published a scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest. And from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this is might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger. [00

Tone-Talk.com
New Friedman IR-X Deep Dive with special guest Michael Nielsen!

Tone-Talk.com

Play Episode Listen Later Sep 28, 2023 122:44


New Friedman IR-X Deep Dive with special guest Michael Nielsen!Support the show

deep dive friedman michael nielsen
The Nonlinear Library
LW - AI #30: Dalle-3 and GPT-3.5-Instruct-Turbo by Zvi

The Nonlinear Library

Play Episode Listen Later Sep 21, 2023 73:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #30: Dalle-3 and GPT-3.5-Instruct-Turbo, published by Zvi on September 21, 2023 on LessWrong. We are about to see what looks like a substantial leap in image models. OpenAI will be integrating Dalle-3 into ChatGPT, the pictures we've seen look gorgeous and richly detailed, with the ability to generate pictures to much more complex specifications than existing image models. Before, the rule of thumb was you could get one of each magisteria, but good luck getting two things you want from a given magisteria. Now, perhaps, you can, if you are willing to give up on adult content and images of public figures since OpenAI is (quite understandably) no fun. We will find out in a few weeks, as it rolls out to ChatGPT+ users. As usual a bunch of other stuff also happened, including a model danger classification system from Anthropic, OpenAI announcing an outside red teaming squad, a study of AI impact on consultant job performance, some incremental upgrades to Bard including an extension for GMail, new abilities to diagnose medical conditions and some rhetorical innovations. Also don't look now but GPT-3.5-Turbo-Instruct plays Chess at 1800 Elo, and due to its relative lack of destructive RLHF seems to offer relatively strong performance at a very low cost and very high speed, although for most purposes its final quality is still substantially behind GPT-4. Table of Contents Introduction. Table of Contents. Language Models Offer Mundane Utility. GPT-4 boosts consultant productivity. Language Models Don't Offer Mundane Utility. Do we want to boost that? Level Two Bard. Some improvements, I suppose. Still needs a lot of work. Wouldn't You Prefer a Good Game of Chess? An LLM at 1800 Elo. World model. GPT-4 Real This Time. GPT-3.5-Instruct-Turbo proves its practical use, perhaps. Fun With Image Generation. Introducing Dalle-3. Deepfaketown and Botpocalypse Soon. Amazon limits self-publishing to 3 a day. Get Involved. OpenAI hiring for mundane safety, beware the double-edged sword. Introducing. OpenAI red team network, Anthropic responsible scaling policy. In Other AI News. UK government and AI CEO both change their minds. Technical Details. One grok for grammar, another for understanding. Quiet Speculations. Michael Nielsen offers extended thoughts on extinction risk. The Quest for Sane Regulation. Everyone is joining the debate, it seems. The Week in Audio. A lecture about copyright law. Rhetorical Innovation. We keep trying. No One Would Be So Stupid As To. Are we asking you to stop? Aligning a Smarter Than Human Intelligence is Difficult. Asimov's laws? No. I Didn't Do It, No One Saw Me Do It, You Can't Prove Anything. Can you? People Are Worried About AI Killing Everyone. Yet another round of exactly how. Other People Are Not As Worried About AI Killing Everyone. Tony Blair. The Lighter Side. Jesus flip the tables. Language Models Offer Mundane Utility Diagnose eye diseases. This seems like a very safe application even with false positives, humans can verify anything the AI finds. Diagnose foetal growth restrictions early. In theory and technically using graph neural networks, use the resulting 'reading mode' in Android or Chrome to strip out the words from a webpage, in an actually readable size and font, much more accurate than older attempts. Seems you have to turn it on under chrome flags. GPT-4 showing some solid theory of mind in a relatively easy situation. Always notice whether you are finding out it can do X consistently, can do X typically, or can do X once with bespoke prompting. The same with failure to do X. What does it mean that a model would ever say ~X, versus that it does all the time, versus it does every time? Each is different. How to convince people who are unimpressed by code writing that LLMs are not simply parrots? Eliezer asked on Twitter, and said ...

The Nonlinear Library
EA - [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence" by Joel Becker

The Nonlinear Library

Play Episode Listen Later Sep 20, 2023 10:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence", published by Joel Becker on September 20, 2023 on The Effective Altruism Forum. Summary From the piece: Earlier this year I decided to take a few weeks to figure out what I think about the existential risk from Artificial Superintelligence (ASI xrisk). It turned out to be much more difficult than I thought. After several months of reading, thinking, and talking with people, what follows is a discussion of a few observations arising during this exploration, including: Three ASI xrisk persuasion paradoxes, which make it intrinsically difficult to present strong evidence either for or against ASI xrisk. The lack of such compelling evidence is part of the reason there is such strong disagreement about ASI xrisk, with people often (understandably) relying instead on prior beliefs, self-interest, and tribal reasoning to decide their opinions. The alignment dilemma: should someone concerned with xrisk contribute to concrete alignment work, since it's the only way we can hope to build safe systems; or should they refuse to do such work, as contributing to accelerating a bad outcome? Part of a broader discussion of the accelerationist character of much AI alignment work, so capabilities / alignment is a false dichotomy. The doomsday question: are there recipes for ruin -- simple, easily executed, immensely destructive recipes that could end humanity, or wreak catastrophic world-changing damage? What bottlenecks are there on ASI speeding up scientific discovery? And, in particular: is it possible for ASI to discover new levels of emergent phenomena, latent in existing theories? Excerpts Here are the passages I thought were interesting enough to tweet about: "So, what's your probability of doom?" I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades - indeed, we don't need ASI for that, we can likely4 arrange it in other ways (nukes, engineered viruses, .). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me after starting to routinely hear people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard5 - a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why. But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action. With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist. There's an extremely thoughtful essay by Paul Christiano, one of the pioneers of both RLHF and AI safety, where he addresses the question of whether he regrets working ...

English Academic Vocabulary Booster
4628. 175 Academic Words Reference from "Michael Nielsen: Open science now! | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later Sep 9, 2023 158:07


This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/michael_nielsen_open_science_now ■Post on this topic (You can get FREE learning materials!) https://englist.me/175-academic-words-reference-from-michael-nielsen-open-science-now-ted-talk/ ■Youtube Video https://youtu.be/zgZkBL3ZMII (All Words) https://youtu.be/kBOSnangx1c (Advanced Words) https://youtu.be/vV-xe04p0Pc (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)

The Nonlinear Library
EA - [Linkpost] Michael Nielsen remarks on 'Oppenheimer' by Tom Barnes

The Nonlinear Library

Play Episode Listen Later Aug 31, 2023 3:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Michael Nielsen remarks on 'Oppenheimer', published by Tom Barnes on August 31, 2023 on The Effective Altruism Forum. This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. This blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own) I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car". [...] I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all-too-often seem self-serving or self-deceiving. Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants [...] Here are four caricatures: Klaus Fuchs and Ted Hall were two Manhattan Project physicists who took it upon themselves to commit espionage, communicating the secret of the bomb to the Soviet Union. It's difficult to know for sure, but both seem to have been deeply morally engaged and trying to do the right thing, willing to risk their lives; they also made, I strongly believe, a terrible error of judgment. I take it as a warning that caring and courage and imagination are not enough; they can, in fact, lead to very bad outcomes. Robert Wilson, the physicist who recruited Richard Feynman to the project. Wilson had thought deeply about Nazi Germany, and the capabilities of German physics and industry, and made a principled commitment to the project on that basis. He half-heartedly considered leaving when Germany surrendered, but opted to continue until the bombings in Japan. He later regretted that choice; immediately after the Trinity Test he was disconsolate, telling an exuberant Feynman: "It's a terrible thing that we made". Oppenheimer, who I believe was motivated in part by a genuine fear of the Nazis, but also in part by personal ambition and a desire for "success". It's interesting to ponder his statements after the War: while he seems to have genuinely felt a strong need to work on the bomb in the face of the Nazi threat, his comments about continuing to work up to the bombing of Hiroshima and Nagasaki contain many strained self-exculpatory statements about how you have to work on it as a scientist, that the technical problem is too sweet. It smells, to me, of someone looking for self-justification. Joseph Rotblat, the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb. He was threatened by the head of Los Alamos security, and falsely accused of having met with Soviet agents. In leaving he was turning his back on his most important professional peers at a crucial time in his career. Doing so must have required tremendous courage and moral imagination. Part of what makes the choice intriguing is that he himself didn't think it would make any difference to the success of the project. I know I personally find it tempting to think about such choices in abstract systems terms: "I, individually, can't change systems outcomes by refusing to participate ['it's inevitable!'], therefore it's okay to participate". And yet while that view seems reasonable, Rotblat's example shows it is incorrect. His private moral...

The Gear Podcast
Michael Nielsen | Guitarist, Producer, Composer & Gear Addict

The Gear Podcast

Play Episode Listen Later Jul 20, 2023 112:30


Friend of the podcast Michael Nielsen joins us for an in depth chat about his career and gear obsessions.

Running Tales
Michael Nielsen: Ultra runner aiming to inspire future generations by sharing his Runner's Resource

Running Tales

Play Episode Listen Later Feb 14, 2023 36:05


Running coach, personal trainer, podcaster and ultra runner - Michael Nielsen wears a few hats when it comes to his running journey. As a coach and through his Runner's Resource podcast, Michael is eager to share his knowledge of a sport which he has enjoyed a life-long relationship with. On his podcast, Michael has not only talked to a succession of runners with fascinating stories, but also shares his own wide-ranging knowledge giving hints and tips on a variety of subjects, from the benefits of pilates to the different energy systems used to fuel your muscles. We also spoke to Michael about his own running, including taking on 50km trail races and conquering the 50 mile distance…   ---------------------------------- You can listen to The Runner's Resource wherever you get you podcast including Apple: https://podcasts.apple.com/gb/podcast/the-runners-resource/id1623169728 Subscribe to our Substack newsletter at https://runningtales.substack.com If you like this episode please consider donating to help us keep going: https://www.buymeacoffee.com/stepforward

Ben Yeoh Chats
Kanjun Qiu: AI, metascience, institutional knowledge, trauma models, structure of knowledge, creativity and dance

Ben Yeoh Chats

Play Episode Listen Later Jan 17, 2023 99:20


Kanjun is co-founder and CEO of Generally Intelligent, an AI research company. She works on metascience ideas often with Michael Nielsen, a previous podcast guest. She's a VC investor and co-hosts her own podcast for Generally Intelligent. She is part of building the Neighborhood, which is intergenerational campus in a square mile of central San Francisco. Generally Intelligent (as of podcast date ) are looking for great talent looking to work on AI. We get a little nerdy on the podcast but we cover AI thinking, fears on rogue AI, and the breakthroughs of Chat AI. We discuss some of her latest ideas in meta science based on the work she has done with Michael Nielsen (previous podcast here) and what are the important questions we should be looking at. We chat about the challenge of old institutions, the value of dance and creativity and why her friends use “to kanjun” as a verb. We cover her ideas on models of trauma and why EMDR (Eye Movement Desensitization and Reprocessing therapy) and cognitive therapies might work. We discuss why dinosaurs didn't develop more. We chat around “what is meaning” and “what is the structure of knowledge”, what are the strengths and weakness of old institutions; culture vs knowledge vs history and other confusing questions. Kanjun gives her advice on how to think about dance (dance like you are moving through molasses). "Dance is inside of you. It just needs to be unlocked." We play underrated/overrated on: having agency, city planning, death of institutions, innovation agencies, high frequency trading; diversity Kanjun thinks on how capitalism might want to be augmented and what excites Kanjun about AI and complex systems. Kanjun asks me questions and I offer my critique on Effective Altruism. This is quirky long form conversation on a range of fascinating topics. Transcript and video available here: https://www.thendobetter.com/arts/2023/1/17/kanjun-qiu-ai-metascience-institutional-knowledge-trauma-models-podcast

The Vtwin Life
Milepost 85 with @2wheels2survive Michael Nielsen

The Vtwin Life

Play Episode Listen Later Dec 14, 2022 102:31


Michael's big drive is riding for Mission 22 he is an official ambassador for them and is also a retired Army Combat Veteran. Also he is a Mile Monster Rider and a very good friend of mine Want to help support the channel check out my social media pages and follow there as well

The Jim Rutt Show
Currents 075: Michael Nielsen on Metascience

The Jim Rutt Show

Play Episode Listen Later Dec 5, 2022 61:41


Jim talks with Michael Nielsen about the ideas in his and Kanjun Qiu's recent essay, "A Vision of Metascience: An Engine of Improvement for the Social Processes of Science." They discuss the meaning of metascience, a vivid example in Genovese maritime insurance, attracting intellectual dark matter, creation & limitations of the h-index, frozen accidents in our scientific operating system, what allowed the original DARPA to be so productive, funding-by-variance, failure audits, changing the unit of evaluation from papers to software, at-the-bench fellowships, science funders as detectors & predictors, endowed professorships by age 25, eliciting the secret thesis, metascience as an imaginative design practice, bottlenecks to decentralized improvement, the Open Science Collaboration, pre-registered study designs, metascience entrepreneurship, the arXiv preprint server, and much more. Episode Transcript "A Vision of Metascience: An Engine of Improvement for the Social Processes of Science," by Michael Nielsen and Kanjun Qiu Michael Nielsen (website) JRS EP12 - Brian Nosek – Open Science and Reproducibility Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. His main current projects are in metascience, programmable matter, and tools for thought.  He is the recent co-author of a book-long essay, "A Vision of Metascience", outlining the ways in which the institutions of science can become self-improving.  All his work is united by a broader interest in tools that help people think and create, both individually and collectively. He is a research fellow at the Astera Institute in the San Francisco Bay Area.

Effective Altruism Forum Podcast
"Notes on effective altruism" by Michael Nielsen

Effective Altruism Forum Podcast

Play Episode Listen Later Nov 30, 2022 55:56


Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom."Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay.Original article:https://michaelnotebook.com/eanotes/Narrated for the Effective Altruism Forum by TYPE III AUDIO.

original ea thoughtful effective altruism michael nielsen effective altruism ea
Ben Yeoh Chats
Michael Nielsen: metascience, how to improve science, open science, and decentralisation

Ben Yeoh Chats

Play Episode Listen Later Nov 15, 2022 96:54


Michael Nielsen is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on the topic of meta science and how to improve science, in particular, the social processes of science. His latest co-authored work is ‘A Vision of metascience: An engine of improvement for the social processes of Science' co-authored with Kanjun Qiu . His website notebook is here, with further links to his books including on quantum, memory systems, deep learning, open science and the future of matter. I ask: What is the most important question in science or meta science we should be seeking to understand at the moment ? We discuss his vision for what a metascience ecosystem could be; what progress could be and ideas for improving the the culture of science and social processes. We imagine what an alien might think about our social processes and discuss failure audits, high variance funding and whether organisations really fund ‘high risk' projects if not that many fail, and how we might measure this. We discuss how these ideas might not work and be wrong; the difficulty of (the lack of) language for new forming fields; how an interdisciplinary institute might work. The possible importance of serendipity and agglomeration effects; what to do about attracting outsiders, and funding unusual ideas. We touch on the stories of Einstein, Katalin Kariko (mRNA) and Doug Prasher (molecular biologist turned van driver) and what they might tell us. We discuss how metascience can be treated as a research field and also as an entrepreneurial discipline. We discuss how decentralisation may help. How new institutions may help. The challenges funders face in wanting to wait until ideas become clearer. We discuss the opportunity that developing nations such as Indonesia might have. We chat about rationality and critical rationality. Michael gives some insights into how AI art might be used and how we might never master certain languages, like the languages of early computing. We end on some thoughts Michael might give his younger self: The one thing I wish I'd understood much earlier is the extent to which there's kind of an asymmetry in what you see, which is you're always tempted not to make a jump because you see very clearly what you're giving up and you don't see very clearly what it is you're going to gain. So almost all of the interesting opportunities on the other side of that are opaque to you now. You have a very limited kind of a vision into them. You can get around it a little bit by chatting with people who maybe are doing something similar, but it's so much more limited. And yet I know when reasoning about it, I want to treat them like my views of the two are somehow parallel but they're just not. Transcript/Video available here: https://www.thendobetter.com/arts/2022/11/15/michael-nielsen-metascience-how-to-improve-science-open-science-podcast

Tools & Craft
Michael Nielsen

Tools & Craft

Play Episode Listen Later Nov 1, 2022 95:10


Michael Nielsen is a quantum physicist, science writer, computer programming researcher, and modern polymath working on tools to expand human capacity to think and create. He's previously authored pioneering quantum computing books, propelled forward the open science movement, and published research on artificial intelligence. He now researches meta-science at the Astera Institute, while writing about his many interests online. See www.notion.so/blog/michael-nielsen for episode transcript. Hosted by Devon Zuegel Edited by Anson Yu Audio by The Land Films

michael nielsen
Diz Runs Radio: Running, Life, & Everything In Between
1096 Michael Nielsen Is Giving Runners Resources Needed To Succeed

Diz Runs Radio: Running, Life, & Everything In Between

Play Episode Listen Later Oct 31, 2022 66:19


Michael Nielsen has covered a lot of race miles in his day, and now he's helping others do the same thing. One of his biggest coaching pillars: the value of a good warm-up. Check out the show notes for today's episode at http://DizRuns.com/1096. Today's episode of the show is sponsored by: the Little Things course! Check out this FREE course to help you shore up some potential weak links that are slowing your growth as a runner. http://DizRuns.com/littlethings Love the show? Check out the support page for ways you can help keep the Diz Runs Radio going strong! http://dizruns.com/support Become a Patron of the Show! Visit http://Patreon.com/DizRuns to find out how. Get Your Diz Runs Radio Swag! http://dizruns.com/magnet Subscribe to the Diz Runs Radio Find Me on an Apple Device http://dizruns.com/itunes Find Me on an Android http://dizruns.com/stitcher Find Me on SoundCloud http://dizruns.com/soundcloud Please Take the Diz Runs Radio Listener Survey http://dizruns.com/survey Win a Free 16-Week Training Plan Enter at http://dizruns.com/giveaway Join The Tribe If you'd like to stay up to date with everything going on in the Diz Runs world, become a member of the tribe! The tribe gets a weekly email where I share running tips and stories about running and/or things going on in my life. To get the emails, just sign up at http://dizruns.com/join-the-tribe The tribe also has an open group on Facebook, where tribe members can join each other to talk about running, life, and anything in between. Check out the group and join the tribe at https://www.facebook.com/groups/thedizrunstribe/

The Nonlinear Library
EA - Some Carl Sagan quotations by finm

The Nonlinear Library

Play Episode Listen Later Oct 11, 2022 17:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Carl Sagan quotations, published by finm on October 10, 2022 on The Effective Altruism Forum. Carl Sagan (1934–1996) was an astronomer and science communicator. He organised the first physical messages to space (the Pioneer plaque and the Voyager Golden Record), presented the hugely popular TV series Cosmos (1980), and considered humanity's long-term future in Pale Blue Dot (1994). He was also part of the team of researchers who first discovered the possibility of nuclear winter, and so became a leading voice of concern about the use of nuclear weapons. Sagan's words were often prescient and always poetic. In particular, I think he captures many ideas related to longtermism and existential risk as powerfully as anyone writing today. I've tried collecting some quotations that stand out to me from Sagan's work, though I've only read a minority of his published writing. You can find a slightly more comprehensive version here. The website for Toby Ord's book The Precipice contains a list of quotations pertaining to existential risk, which I partially borrowed from here. Michael Nielsen has also written some fantastic 'working notes' on Cosmos. Cosmos: A Personal Voyage (1980) Note that Cosmos was co-written with Ann Druyan. Episode 1 — "The Shores of the Cosmic Ocean" The cosmos is all that is, or ever was, or ever will be. Our contemplations of the Cosmos stir us — there is a tingling in the spine, a catch in the voice, a faint sensation, as if a distant memory, of falling from a great height. We know we are approaching the greatest of mysteries. The size and age of the cosmos are beyond ordinary human understanding. Lost somewhere between immensity and eternity is our tiny planetary home, the Earth. For the first time we have the power to determine the fate of our planet, and ourselves. This is a time of great danger, but our species is young and curious and brave. It shows much promise. In the last few millennia we have made the most astonishing and unexpected discoveries about the cosmos, and our place within it. I believe our future depends powerfully on how we understand this cosmos; in which we float, like a mote of dust, in the morning sky. You can watch this opening scene here. The surface of the Earth is the shore of the cosmic ocean. On this shore, we've learned most of what we know. Recently, we've waded a little way out; maybe ankle-deep: and the water seems inviting. Some part of our being knows this is where we came from; we long to return — and we can, because the Cosmos is also within us: we are made of star stuff. We are the legacy of 15 billion years of cosmic evolution. We have a choice. We can enhance life and come to know the universe that made us, or we can squander our 15 billion year heritage in meaningless self-destruction. What happens in the first second of the next cosmic year depends on what we do — here and how — with our intelligence, and our knowledge of the cosmos. Episode 13 — "Who Speaks for Earth?" [Imagining human extinction] Maybe the reptiles will evolve intelligence once more. Perhaps, one day, there will be civilizations again on Earth. There will be life. There will be intelligence. But there will be no more humans. Not here, not on a billion worlds. [T]he world impoverishes itself by spending a trillion dollars a year on preparations for war. And by employing perhaps half the scientists and high technologists on the planet in military endeavors. How would we explain all this to a dispassionate extraterrestrial observer? What account would we give of our stewardship of the planet Earth? We have heard the rationales offered by the superpowers. We know who speaks for the nations. But who speaks for the human species? It's probably here. [Alexandria] that the word "cosmopolitan" realized its true meaning of a citizen, not just...

TechTopia
Techtopia 250: Hvad er en digital tvilling?

TechTopia

Play Episode Listen Later Sep 5, 2022 35:19


Digitale tvillinger er et af tidens buzzbegreber. Tanken er at skabe software kopier af alt fra vindmøller og robotter til biler og mennesker. Men hvor langt er vi egentlig med at lave disse tvillinger? Tvillingerne huserer især i industrivirksomheder, der kan drage konkurrencefordele af at have digitale tvillinger til rådighed. Desværre halter små og mellemstore danske virksomheder bagefter. Derfor har Automatikmessen i Brøndbyhallen i år fokus på digitale tvillinger. Medvirkende: Michael Nielsen, adm. dir., Beckhoff Automation Peter Gorm Larsen, professor, Center for Digitale Tvillinger, Aarhus Universitet Links: Automatikmessen https://www.automatikmesse.dk/for-besoegende/konferencer Center for Digitale Tvillinger https://digit.au.dk/centre-for-digital-twins

Clearer Thinking with Spencer Greenberg
Critiquing Effective Altruism (with Michael Nielsen and Ajeya Cotra)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Aug 19, 2022 98:17


Read the full transcriptWhat is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing?Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info.Michael Nielsen was on the podcast back in episode 016. You can read more about him there!

Clearer Thinking with Spencer Greenberg
Critiquing Effective Altruism (with Michael Nielsen and Ajeya Cotra)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Aug 19, 2022 98:17


Read the full transcript here. What is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing?Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info.Michael Nielsen was on the podcast back in episode 016. You can read more about him there! [Read more]

Clearer Thinking with Spencer Greenberg
Critiquing Effective Altruism (with Michael Nielsen and Ajeya Cotra)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Aug 19, 2022 98:17


What is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing?Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info.Michael Nielsen was on the podcast back in episode 016. You can read more about him there!

The Nonlinear Library
EA - Michael Nielsen's "Notes on effective altruism" by Pablo

The Nonlinear Library

Play Episode Listen Later Jun 3, 2022 9:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Michael Nielsen's "Notes on effective altruism", published by Pablo on June 3, 2022 on The Effective Altruism Forum. Quantum physicist Michael Nielsen has published an impressive critical essay on EA. Summary: Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom. Some passages I highlighted: I have EA friends who donate a large fraction of their income to charitable causes. In some cases it's all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That's a simple sentence, but an extraordinary one, so I'll repeat it: they've directly helped save many lives. As extraordinary as my friend's generosity was, there is something further still going on here. Kravinsky's act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut, inventing new forms of moral experience. Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn't realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they "know" is a good cause, but in fact doing harm. I'm cautiously enthusiastic about EA's moral pioneering. But it is potentially a minefield, something to also be cautious about. when EA judo is practiced too much, it's worth looking for more fundamental problems. The basic form of EA judo is: "Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good." This is perhaps true in some God's-eye, omniscient, in-principle philosopher's sense. But EA community and organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren't enough to ensure effective decisions about effectiveness. And the reason many people are bothered by EA is not that they think it's a bad idea to "do good better". But rather that they doubt the ability of EA institutions and community to live up to the aspirations. These critiques can come from many directions. From people interested in identity politics I've heard: "Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter." From libertarian...

Design Much
137 - How to Stay Creative with Michael Nielsen

Design Much

Play Episode Listen Later Apr 12, 2022


Andy sits down with Michael Nielsen to chat about how you can stay creative in all your endeavors

stay creative michael nielsen
Wealth Watchers Podcast
9. Generating Passive Income on YouTube w/ Michael Nielsen

Wealth Watchers Podcast

Play Episode Listen Later Apr 16, 2021 20:53


Michael Nielsen went from cooped up to near financial freedom in less than one year by putting his inner-creative to work. His YouTube channel, Outdoor Therapy, is generating him a six-figure income and seven-figures is within his reach. He attributes most of his success to his mindset and tenacity.You can find Michael's work on YouTube by searching for Outdoor Therapy.Support the show (https://www.buymeacoffee.com/wealthwatchers)

The Vtwin Life
Michael Nielsen the man behind 2Wheels 2Survive

The Vtwin Life

Play Episode Listen Later Dec 16, 2020 60:24


2Wheels 2Survive was started to help support our Veterans by a Veteran. Michael a combat veteran himself with tours in Iraq and Afghanistan knows first hand the struggles and mental fights many of our veterans go through when they come home. He is always there for anyone that needs someone to talk with or just ride with. He also wants our men and women to not be afraid to say you might need help. So This coming year he is partnering with Mission 22 and taking a 30 day motorcycle trip to 22 national parks. With meet and greets along way the way . Meeting more our veterans and also discussing the importance of this mission, Mission 22 and 2Wheels 2Survive. To find out more info you check head over to https://www.2wheels2survive.com/ and don't forget his follow his adventures on Facebook and Instagram @2wheels2survive --- Support this podcast: https://anchor.fm/thevtwinlife/support

Clearer Thinking with Spencer Greenberg
Scientific Progress and the Replication Crisis (with Geoff Anders)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Dec 9, 2020 90:33


NOTE: The beginning of this conversation touches on some of the same themes that were discussed in the recent episode with Michael Nielsen. After that, though, this conversation heads off in other directions.Is scientific progress speeding up or slowing down? How can we understand and explain the replication crisis in the social sciences? In the context of research, does speed have a quality all its own in the same way that quantity has a quality all its own? What are Geoff and Spencer doing in the social science field that's significantly different from what others are doing?Geoff Anders is the founder of Leverage Research, a non-profit research institute that studies the history of science to learn how a better understanding of early stage science can inform scientific efforts today. Geoff is also the co-founder of Paradigm, a training and coaching organization that uses knowledge of learning, thinking, and motivation to help people think better and better pursue their missions. Geoff has a PhD in Philosophy from Rutgers University. You can learn more about Geoff via his website and can follow him on Twitter at @geoffanders.

Clearer Thinking with Spencer Greenberg
Scientific Progress and the Replication Crisis (with Geoff Anders)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Dec 9, 2020 90:33


NOTE: The beginning of this conversation touches on some of the same themes that were discussed in the recent episode with Michael Nielsen. After that, though, this conversation heads off in other directions.Is scientific progress speeding up or slowing down? How can we understand and explain the replication crisis in the social sciences? In the context of research, does speed have a quality all its own in the same way that quantity has a quality all its own? What are Geoff and Spencer doing in the social science field that's significantly different from what others are doing?Geoff Anders is the founder of Leverage Research, a non-profit research institute that studies the history of science to learn how a better understanding of early stage science can inform scientific efforts today. Geoff is also the co-founder of Paradigm, a training and coaching organization that uses knowledge of learning, thinking, and motivation to help people think better and better pursue their missions. Geoff has a PhD in Philosophy from Rutgers University. You can learn more about Geoff via his website and can follow him on Twitter at @geoffanders.[Read more]

Clearer Thinking with Spencer Greenberg
Scientific Progress and Political Feedback Loops (with Michael Nielsen)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Nov 24, 2020 82:34


Is scientific progress speeding up or slowing down? What are the best strategies for funding research? What is "para-academia," and what are the pros and cons of being a para-academic researcher? What are the feedback loops in politics that cause politicians and their constituents to react to each other?Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. He also has a strong side interest in artificial intelligence. All are part of a broader interest in developing tools that help people think and create, both individually and collectively. His most recent book is Quantum Country, an introduction to quantum computing. Find out more at his website, michaelnielsen.org, or follow him on Twitter at @michael_nielsen.

Clearer Thinking with Spencer Greenberg
Scientific Progress and Political Feedback Loops (with Michael Nielsen)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Nov 24, 2020 82:34


Is scientific progress speeding up or slowing down? What are the best strategies for funding research? What is "para-academia," and what are the pros and cons of being a para-academic researcher? What are the feedback loops in politics that cause politicians and their constituents to react to each other?Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. He also has a strong side interest in artificial intelligence. All are part of a broader interest in developing tools that help people think and create, both individually and collectively. His most recent book is Quantum Country, an introduction to quantum computing. Find out more at his website, michaelnielsen.org, or follow him on Twitter at @michael_nielsen.[Read more]