This is my conversation with Michael Nielsen, scientist, author, and research fellow at the Astera Institute.

Timestamps:
- (00:00:00) intro
- (00:01:06) cultivating optimism amid existential risks
- (00:07:16) asymmetric leverage
- (00:12:09) are "unbiased" models even feasible?
- (00:18:44) AI and the scientific method
- (00:23:23) unlocking AI's full power through better interfaces
- (00:30:33) sponsor: Splits
- (00:31:18) AIs, independent agents or intelligent tools?
- (00:35:47) autonomous military and weapons
- (00:42:14) finding alignment
- (00:48:28) aiming for specific moral outcomes with AI?
- (00:54:42) freedom/progress vs safety
- (00:57:46) provable beneficiary surveillance
- (01:04:16) psychological costs
- (01:12:40) the ingenuity gap

Links:
- Michael Nielsen: https://michaelnielsen.org/
- Michael Nielsen on X: https://x.com/michael_nielsen
- Michael's essay on being a wise optimist about science and technology: https://michaelnotebook.com/optimism/
- Michael's Blog: https://michaelnotebook.com/
- The Ingenuity Gap (Tad Homer-Dixon): https://homerdixon.com/books/the-ingenuity-gap/

Thank you to our sponsor for making this podcast possible:
- Splits: https://splits.org

Into the Bytecode:
- Sina Habibian on X: https://twitter.com/sinahab
- Sina Habibian on Farcaster: https://warpcast.com/sinahab
- Into the Bytecode: https://intothebytecode.com

Disclaimer: This podcast is for informational purposes only. It is not financial advice nor a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

This episode has a lot of links! Here they are.
- Michael Nielsen's "The role of 'explanation' in AI": https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
- Subbarao Kambhampati's "Changing the Nature of AI Research": https://dl.acm.org/doi/pdf/10.1145/3546954
- Chris Olah and his collaborators, "Thread: Circuits": distill.pub/2020/circuits/
- "An Overview of Early Vision in InceptionV1": distill.pub/2020/circuits/early-vision/
- Dai et al., "Knowledge Neurons in Pretrained Transformers": https://arxiv.org/pdf/2104.08696.pdf
- Meng et al., "Locating and Editing Factual Associations in GPT": rome.baulab.info
- "Mass-Editing Memory in a Transformer": https://arxiv.org/pdf/2210.07229.pdf
- François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
- Neel Nanda's "Longlist of Theories of Impact for Interpretability": https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
- Zachary C. Lipton's "The Mythos of Model Interpretability": https://arxiv.org/abs/1606.03490
- Meng et al., "Locating and Editing Factual Associations in GPT": https://arxiv.org/pdf/2202.05262.pdf
- Belrose et al., "Eliciting Latent Predictions from Transformers with the Tuned Lens": https://arxiv.org/abs/2303.08112
- "Progress measures for grokking via mechanistic interpretability": https://arxiv.org/abs/2301.05217
- Conmy et al., "Towards Automated Circuit Discovery for Mechanistic Interpretability": https://arxiv.org/abs/2304.14997
- Elhage et al., "Softmax Linear Units": transformer-circuits.pub/2022/solu/index.html
- Filan et al., "Clusterability in Neural Networks": https://arxiv.org/pdf/2103.03386.pdf
- Cammarata et al., "Curve circuits": distill.pub/2020/circuits/curve-circuits/

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Will we get fish instead of sludge in the sea? Will farmers pay CO2 taxes like other businesses? Should taxpayers pay 43 billion kroner for the restructuring of agriculture? Where will our food come from in the future? The Green Tripartite Agreement (Den Grønne Trepartsaftale) has landed. Will Denmark become a pioneer country? You can join the debate by calling in from 12:15-13:30 on 7021 1919 or by texting 1212. Guests: Jeppe Bruus (S), Minister for the Green Tripartite; Michael Nielsen, farmer and pig producer, Tilsbæk, Hillerød; Louise Køster, Rabarbergården, Vejby, North Zealand; Thomas Poulsen, farmer, cattle and crop farming, Mern, South Zealand; Kristoffer Hald, farmer and pig producer, Møborg, Bækmarksbro, West Jutland; Selma Montgomery, climate activist, Baku, Azerbaijan; and Stiig Markager, professor of marine environment. Host: Gitte Hansen.
Can animals tell us whether they are frightened, stressed, or content? The TV documentary "Hvis grise kunne tale" ("If Pigs Could Talk") puts the spotlight on pig farming and animal welfare. In P1Debat we ask: Should pigs leave the barn and go out into the field to become happy and less stressed? And what about other animals? What would the horse in the riding arena say if it were given the floor? And what about the dog that lives in a fourth-floor apartment and is home alone eight hours a day: what would it choose? Or the animals in the zoo: do they enjoy living in an enclosure where people come and look at them every day? You can join the debate by calling in from 12:15-13:30 on 7021 1919 or by texting 1212. Guests: Miki Mistrati, journalist behind the TV documentary "Hvis grise kunne tale"; Louise Køster, Rabarbergården, former chair of organic farming; Mickey Gjerris, bioethicist and author; Mads Frost Bertelsen, director of Copenhagen Zoo (Københavns Zoologiske Have); Michael Nielsen, farmer and pig breeder, Hillerød; Jacob Jensen (V), Minister for Food, Agriculture and Fisheries; and Sofie Graarup Jensen, biology teacher, Thy-Mors HF & VUC. Host: Gitte Hansen.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #66: Oh to Be Less Online, published by Zvi on June 1, 2024 on LessWrong.

Tomorrow I will fly out to San Francisco, to spend Friday through Monday at the LessOnline conference at Lighthaven in Berkeley. If you are there, by all means say hello. If you are in the Bay generally and want to otherwise meet, especially on Monday, let me know that too and I will see if I have time to make that happen.

Even without that hiccup, it continues to be a game of playing catch-up. Progress is being made, but we are definitely not there yet (and everything not AI is being completely ignored for now). Last week I pointed out seven things I was unable to cover, along with a few miscellaneous papers and reports. Out of those seven, I managed to ship on three of them: Ongoing issues at OpenAI, The Schumer Report and Anthropic's interpretability paper. However, OpenAI developments continue. Thanks largely to Helen Toner's podcast, some form of that is going back into the queue. Some other developments, including new media deals and their new safety board, are being covered normally. The post on DeepMind's new scaling policy should be up tomorrow. I also wrote a full post on a fourth, Reports of our Death, but have decided to shelve that post and post a short summary here instead.

That means the current 'not yet covered queue' is as follows:
1. DeepMind's new scaling policy.
   1. Should be out tomorrow before I leave, or worst case next week.
2. The AI Summit in Seoul.
3. Further retrospective on OpenAI including Helen Toner's podcast.

Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. You heard of them first.
4. Not Okay, Google. A tiny little problem with the AI Overviews.
5. OK Google, Don't Panic. Swing for the fences. Race for your life.
6. Not Okay, Meta. Your application to opt out of AI data is rejected. What?
7. Not Okay Taking Our Jobs. The question is, with or without replacement?
8. They Took Our Jobs Anyway. It's coming.
9. A New Leaderboard Appears. Scale.ai offers new capability evaluations.
10. Copyright Confrontation. Which OpenAI lawsuit was that again?
11. Deepfaketown and Botpocalypse Soon. Meta fails to make an ordinary effort.
12. Get Involved. Dwarkesh Patel is hiring.
13. Introducing. OpenAI makes media deals with The Atlantic and… Vox? Surprise.
14. In Other AI News. Jan Leike joins Anthropic, Altman signs giving pledge.
15. GPT-5 Alive. They are training it now. A security committee is assembling.
16. Quiet Speculations. Expectations of changes, great and small.
17. Open Versus Closed. Two opposing things cannot dominate the same space.
18. Your Kind of People. Verbal versus math versus otherwise in the AI age.
19. The Quest for Sane Regulation. Lina Khan on the warpath, Yang on the tax path.
20. Lawfare and Liability. How much work can tort law do for us?
21. SB 1047 Unconstitutional, Claims Paper. I believe that the paper is wrong.
22. The Week in Audio. Jeremie & Edouard Harris explain x-risk on Joe Rogan.
23. Rhetorical Innovation. Not everyone believes in GI. I typed what I typed.
24. Abridged Reports of Our Death. A frustrating interaction, virtue of silence.
25. Aligning a Smarter Than Human Intelligence is Difficult. You have to try.
26. People Are Worried About AI Killing Everyone. Yes, it is partly about money.
27. Other People Are Not As Worried About AI Killing Everyone. Assumptions.
28. The Lighter Side. Choose your fighter.

Language Models Offer Mundane Utility

Which model is the best right now? Michael Nielsen is gradually moving back to Claude Opus, and so am I. GPT-4o is fast and has some nice extra features, so when I figure it is 'smart enough' I will use it, but when I care most about quality and can wait a bit I increasingly go to Opus.
Gemini I'm reserving for a few niche purposes, when I nee...
Take our Listener Survey

Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. He's worked at Y Combinator, co-authored work on scientific progress with Patrick Collison, and is a prolific writer, reader, commentator, and mentor. He joined Tyler to discuss why the universe is so beautiful to human eyes (but not ears), how to find good collaborators, the influence of Simone Weil, where Olaf Stapledon's understanding of the social world went wrong, potential applications of quantum computing, the (rising) status of linear algebra, what makes for physicists who age well, finding young mentors, why some scientific fields have pre-print platforms and others don't, how so many crummy journals survive, the threat of cheap nukes, the many unknowns of Mars colonization, techniques for paying closer attention, what you learn when visiting the USS Midway, why he changed his mind about Emergent Ventures, why he didn't join OpenAI in 2015, what he'll learn next, and more.

Read a full transcript enhanced with helpful links, or watch the full video. Recorded March 24th, 2024.

Other ways to connect
- Follow us on X and Instagram
- Follow Tyler on X
- Follow Michael on X
- Sign up for our newsletter
- Join our Discord
- Email us: cowenconvos@mercatus.gmu.edu
- Learn more about Conversations with Tyler and other Mercatus Center podcasts here.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building intuition with spaced repetition systems, published by Jacob G-W on May 14, 2024 on LessWrong.

Do you ever go to a lecture, follow it thinking it makes total sense, then look back at your notes later and realize it makes no sense? This used to happen to me, but I've learned how to use spaced repetition to fully avoid this if I want. I'm going to try to convey this method in this post. Much of my understanding of how to create flashcards comes from "Using spaced repetition systems to see through a piece of mathematics" by Michael Nielsen and "How to write good prompts: using spaced repetition to create understanding" by Andy Matuschak, but I think my method falls in between both, in terms of abstraction. Finally, I want to credit Quantum Country for being an amazing example of flashcards created to develop intuition in users. My method is more abstract than Michael Nielsen's approach, since it does not only apply to mathematics, but to any subject. Yet it is less abstract than Andy Matuschak's approach because I specifically use it for 'academic subjects' that require deep intuition of (causal or other) relationships between concepts. Many of Matuschak's principles in his essay apply here (I want to make sure to give him credit), but I'm looking at it through the 'how can we develop deep intuition in an academic subject in the fastest possible time?' lens.

Minimize Inferential Distance on Flashcards

A method that I like to repeat to myself while making flashcards that I haven't seen in other places is that each flashcard should only have one inferential step on it. I'm using 'inferential step' here to mean a step such as remembering a fact, making a logical deduction, visualizing something, or anything that requires thinking. It's necessary that a flashcard only have a single inferential step on it. Anki trains the mind to do these steps. If you learn all the inferential steps, you will be able to fully re-create any mathematical deduction, historical story, or scientific argument. Knowing (and continually remembering) the full story with spaced repetition builds intuition. I'm going to illustrate this point by sharing some flashcards that I made while trying to understand how Transformers (GPT-2) worked. I made these flashcards while implementing a transformer based on Neel Nanda's tutorials and these two blog posts.

Understanding Attention

The first step in my method is to learn or read enough so that you have part of the whole loaded into your head. For me, this looked like picking the attention step of a transformer and then reading about it in the two blog posts and watching the section of the video on it. It's really important to learn about something from multiple perspectives. Even when I'm making flashcards from a lecture, I have my web browser open and I'm looking up things that I thought were confusing while making flashcards. My next step is to understand that intuition is fake! Really good resources make you feel like you understand something, but to actually understand something, you need to engage with it. This engagement can take many forms. For technical topics, it usually looks like solving problems or coding, and this is good! I did this for transformers! But I also wanted to not forget it long term, so I used spaced repetition to cement my intuition. Enough talk, here are some flashcards about attention in a transformer. For each flashcard, I'll explain why I made it. Feel free to scroll through.

Examples

I start with a distillation of the key points of the article. I wanted to make sure that I knew what the attention operation was actually doing, as the blog posts emphasized this. When building intuition, I find it helpful to know "the shape" or constraints about something so that I can build a more accurate mental model.
In this case, th...
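For readers who want the same "shape" intuition about attention that the post describes, the operation can be sketched generically. This is a minimal single-head scaled dot-product self-attention in numpy; it is an illustration of the standard formulation, not the code from Neel Nanda's tutorials or the blog posts the author followed, and the weight matrices here are random placeholders:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention with a causal mask."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v        # project each token to query/key/value
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # similarity of every query to every key
    mask = np.triu(np.ones_like(scores), k=1)  # 1s above the diagonal = future tokens
    scores = np.where(mask == 1, -1e9, scores) # forbid attending to the future
    weights = softmax(scores, axis=-1)         # each row is a mixture over past tokens
    return weights @ v                         # weighted average of value vectors

rng = np.random.default_rng(0)
seq, d_model, d_head = 4, 8, 8
x = rng.normal(size=(seq, d_model))            # 4 token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(x, W_q, W_k, W_v)
print(out.shape)  # prints (4, 8): one output vector per input token
```

Note the causal mask: the first token can only attend to itself, so its output is exactly its own value vector, which is the kind of constraint that makes a good one-step flashcard.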
Austin Wintory chats with the composing team collectively known as Ninja Tracks, Kaveh Cohen and Michael Nielsen. Together they discussed how they first met and what drove them to create a composing partnership; how they've worked together across various video games, TV shows, and films; and why they decided to add music publishing to their repertoire. If you enjoyed this episode, please consider leaving us a rating and review. The Game Maker's Notebook is sponsored by Xsolla. To learn more, go to xsolla.pro/AOIAAS.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spaced repetition for teaching two-year olds how to read (Interview), published by Chipmonk on November 27, 2023 on LessWrong.

Update: this post now has another video. This father has been using spaced repetition (Anki) to teach his children how to read several years earlier than average. Michael Nielsen and Gwern[1] tweeted about the interesting case of a reddit user, u/caffeine314 (henceforth dubbed "CoffeePie"), who has been using spaced repetition with his daughter from a very young age. CoffeePie started using Anki with his daughter when she turned 2, and he continued using Anki with his son starting when he was 1 year 9 months. Here's his daughter's progress as recounted in January 2020:

My daughter is now about to turn 5 in a few days… She's still going strong -- she uses Anki every single day for English, Hebrew, and Spanish. She's very confident about reading, and moreover, she reads with ... "context". Many kids her age read mechanically, but she reads like a real storyteller, and that comes from her confidence. At the beginning of the school year her teachers said she definitely has the reading ability of fifth grade, and if we're just going by the ability to read and not focus on comprehension of abstract ideas, her reading level may rival an 8th grader. (From Update on my daughter and Anki)

For reference, fifth graders are usually 10 or 11yo in the US, and 8th graders are usually 13 or 14yo, so this puts her ~5-9 years ahead of the average child. You can see a video of his daughter reading at 2 years, 2 months later in this post. CoffeePie has made several posts about their experience but I still had questions, so I reached out to interview him back in January.

Interview

Responses have been edited for clarity.

What did you learn in going from using Anki on your daughter to your son? How has it gone with your son?

It's a hard question, because I got so much right. We were so wildly successful that I "cloned" just about every aspect with my son. A couple of things I can think of: With my daughter, I held back on lowercase letters for a long time because I thought it would confuse her, but when I started to introduce lowercase to her, to my extreme shock, she already knew them, down cold! I think what happened is that she learned them just by looking at books, TV, magazines, storefront signs, menus, etc. So when we started with my son, I started doing lowercase letters the very day after we finished capital letters. Another difference is that we did numbers the very next day after lowercase letters. I really, really thought I was pushing too hard; I had no desire to be a "tiger dad", but he took it with extreme grace. I was ready to stop at any moment, but he was fine. Another difference is that our expectations of what the kids were getting out of it had changed, as well. At first, I just really wanted my daughter to get a jump start on reading, but stupid me, I didn't realize there were unintended consequences. A four year old with a 3rd grade reading ability learns about a WHOLE lot more -- it opened up politics for her. She would read our junk mail, and learn who our council member was, who our representative is, the mayor, current events, history, etc. I know it's stupid of me to say, but I underestimated the effect that reading early would have on her breadth of learning.

One last thing is math. I mentioned that we started numbers early with my son. But we also started arithmetic. He wasn't reading by 3 the way Hannah was, but he knew all his multiplication tables up to 12 by 12. This year we tackled prime factorization, Fibonacci sequences, decimal and place values, mixed, proper, and improper fractions, light algebra, etc. I was much more aggressive with the math, and again, he handled it with grace. I was ready to stop at any moment.
Do you still u...
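The "every single day" rhythm described above comes from how spaced repetition schedulers work: intervals between reviews stretch out after each successful recall. As a rough illustration, here is a minimal update rule in the style of SuperMemo's SM-2 algorithm, which Anki's scheduler is loosely descended from; this is a simplification for intuition, not Anki's actual code, and the function name and state layout are invented for this sketch:

```python
def sm2_update(interval_days, ease, repetition, quality):
    """One SM-2-style review step. quality is 0-5; >= 3 counts as a successful recall."""
    if quality < 3:                      # lapse: the card starts over tomorrow
        return 1, ease, 0
    if repetition == 0:
        interval_days = 1                # first success: see it again in a day
    elif repetition == 1:
        interval_days = 6                # second success: in about a week
    else:
        interval_days = round(interval_days * ease)  # then intervals grow geometrically
    # The ease factor drifts with answer quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days, ease, repetition + 1

# A card answered perfectly three times in a row spaces out quickly:
state = (0, 2.5, 0)                      # (interval, ease, successful repetitions)
for q in (5, 5, 5):
    state = sm2_update(state[0], state[1], state[2], q)
print(state[0])  # prints 16: roughly two weeks until the next review
```

This is why daily sessions stay short even as the deck grows: most mature cards are not due on any given day.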
A new report shows that being a foreign tradesman in Denmark is more dangerous, harder, and worse paid than being a Danish one. But how bad is it really? And what can we do about it? Pigs in crisis: Denmark's largest slaughterhouse group reports a crisis. It cannot keep up with the prices pig producers get abroad, but is this a passing problem or something more fundamental that is wrong? Guests: Laust Høgedahl, associate professor and labor-market researcher at Aalborg University; Mette Møller Nielsen, head of occupational safety in construction, DI; Palle Bisgaard, deputy chairman and negotiation secretary in Byggegruppen, 3F; Michael Nielsen, pig producer, Tilsbæk; Jakob Vesterlund Olsen, senior advisor at the Department of Food and Resource Economics. Host: Mette Simonsen.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spaced repetition for teaching two-year-olds how to read (Interview), published by Chipmonk on November 27, 2023 on LessWrong. Update: this post now has another video. This father has been using spaced repetition (Anki) to teach his children how to read several years earlier than average. Michael Nielsen and Gwern[1] tweeted about the interesting case of a reddit user, u/caffeine314 (henceforth dubbed "CoffeePie"), who has been using spaced repetition with his daughter from a very young age. CoffeePie started using Anki with his daughter when she turned 2, and he continued using Anki with his son starting when he was 1 year 9 months. Here's his daughter's progress as recounted in January 2020: My daughter is now about to turn 5 in a few days… She's still going strong -- she uses Anki every single day for English, Hebrew, and Spanish. She's very confident about reading, and moreover, she reads with ... "context". Many kids her age read mechanically, but she reads like a real storyteller, and that comes from her confidence. At the beginning of the school year her teachers said she definitely has the reading ability of fifth grade, and if we're just going by the ability to read and not focus on comprehension of abstract ideas, her reading level may rival an 8th grader. (From Update on my daughter and Anki) For reference, fifth graders are usually 10 or 11yo in the US, and 8th graders are usually 13 or 14yo, so this puts her ~5-9 years ahead of the average child. You can see a video of his daughter reading at 2 years, 2 months in this post. CoffeePie has made several posts about their experience, but I still had questions, so I reached out to interview him back in January.

Interview

Responses have been edited for clarity.
What did you learn in going from using Anki on your daughter to your son? How has it gone with your son? It's a hard question, because I got so much right. We were so wildly successful that I "cloned" just about every aspect with my son. A couple of things I can think of: With my daughter, I held back on lowercase letters for a long time because I thought it would confuse her, but when I started to introduce lowercase to her, to my extreme shock, she already knew them, down cold! I think what happened is that she learned them just by looking at books, TV, magazines, storefront signs, menus, etc. So when we started with my son, I started doing lower case letters the very day after we finished capital letters. Another difference is that we did numbers the very next day after lowercase letters. I really, really thought I was pushing too hard; I had no desire to be a "tiger dad", but he took it with extreme grace. I was ready to stop at any moment, but he was fine. Another difference is that our expectations of what the kids were getting out of it had changed, as well. At first, I just really wanted my daughter to get a jump start on reading, but stupid me, I didn't realize there were unintended consequences. A four year old with a 3rd grade reading ability learns about a WHOLE lot more -- it opened up politics for her. She would read our junk mail, and learn who our council member was, who our representative is, the mayor, current events, history, etc. I know it's stupid of me to say, but I underestimated the effect that reading early would have on her breadth of learning. One last thing is math. I mentioned that we started numbers early with my son. But we also started arithmetic. He wasn't reading by 3 the way Hannah was, but he knew all his multiplication tables up to 12 by 12. This year we tackled prime factorization, Fibonacci sequences, decimal and place values, mixed, proper, and improper fractions, light algebra, etc. 
I was much more aggressive with the math, and again, he handled it with grace. I was ready to stop at any moment. Do you still u...
We dive into a record we ought to have played about 20 times on this program: one of the last things the great Brazilian master João Donato released, Sintetizamor, which he actually made with his son Donatinho. It is an album of lightly sparkling Brazilian pop, lush boogie, and club-friendly funk and disco. Beyond that, a guy named Michael Nielsen stops by and sets a new standard for how tender it can get on this program with his track Bluebird of Heaven, and last but not least we dig into the Nigerian hippie band Ofege, who play West African funk and rock in the very finest way.
Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1,000 people for the first comprehensive State of AI Engineering survey, and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.

Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As "stealth" foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group, given they have no publicly released models to try out. However, ever since their $20m Series A last year, their goal has been to "develop generally capable AI agents with human-like intelligence in order to solve problems in the real world".

From RL to Reasoning LLMs

Along with their Series A, they announced Avalon, "A Benchmark for RL Generalization Using Procedurally Generated Worlds". Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft, which enables fast RL benchmarking with a clear reward and adjustable game difficulty. After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors and climbing, but couldn't move on to higher level tasks. A pure RL world also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the "models for reasoning" path:

"The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning.
So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing."

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans made with the development of the scientific method:

"We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ, all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them."

The Full Stack Model Lab

One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a "full stack" approach:* Models.
Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.* Tools and Agents. Building internal productivity tools from coding agents for fixing type checking and linting errors, to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces — IDEs for users to program computers in natural language.* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the "bare metal phase" of agent development, and they want to take a holistic approach to building the "operating system for agents". We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps

* [00:00:00] Introductions* [00:06:07] The origin story of Imbue* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress* [00:21:43] Lessons learned from developing the Avalon research environment* [00:23:31] The limitations of pure reinforcement learning for general intelligence* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents* [00:31:36] Interface design for collaborating with,
rather than just communicating with, AI agents* [00:37:40] The future potential of an agent-to-agent protocol* [00:39:29] Leveraging approaches like critiquing between models and chain of thought* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive* [01:00:22] Lightning Round

Show Notes

* Imbue* Avalon* CARBS (hyperparameter optimizer)* Series B announcement* Kanjun/Imbue's Podcast* MIT Media Lab* Research mentioned:* Momentum Contrast* SimCLR* Chelsea Finn - SayCan* Agent Protocol - part of the AI Engineer Foundation* Xerox PARC* Michael Nielsen* Jason Benn* Outset Capital* Scenius - Kevin Kelly* South Park Commons* The Archive* Thursday Nights in AI

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side. So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you were eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn?
That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. 
And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. [00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. 
On the market side, recruiting is a market that is endogenously high churn, meaning people start hiring, then we fill the role for them, and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes, because if you lose 30% of that revenue year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah.
[00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. 
I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So it worked in language, SimCLR came out, I think MoCo, Momentum Contrast, had come out earlier in 2019, SimCLR came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text, and we suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning?
So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became Generally Intelligent, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe, the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we don't just write it, put it down, and we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right, the structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk, certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. Those are things that we lump under the bucket of reasoning, and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training.
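Kanjun's point about only asking the user questions in higher risk situations can be sketched as a simple expected-cost check. This is a hypothetical illustration with made-up names and thresholds, not Imbue's actual system:

```python
# Hypothetical sketch: interrupt the user only when the stakes and the
# model's uncertainty together outweigh the cost of asking a question.

def should_ask_user(confidence: float, cost_of_error: float,
                    cost_of_interruption: float = 1.0) -> bool:
    """Ask the user when the expected cost of acting on a wrong guess
    exceeds the cost of interrupting them."""
    expected_error_cost = (1.0 - confidence) * cost_of_error
    return expected_error_cost > cost_of_interruption

# Low stakes, high confidence: proceed without asking.
assert not should_ask_user(confidence=0.95, cost_of_error=2.0)

# Booking a flight: high cost of error, moderate confidence: ask first.
assert should_ask_user(confidence=0.7, cost_of_error=50.0)
```

In a real agent, producing the confidence estimate is the hard part; the sketch only shows the shape of the decision.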
And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. 
CARBS, our hyperparameter optimizer, came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics (he's a plasma physicist) to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them-shaped blob inside Imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improves the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agents project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents that we then actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually big, maybe 50% plus of the work is figuring out even if you do have models that reason well, like the models are still stochastic.
The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? 
So one way I'll put this is people estimate that Llama 2 maybe took about $3 to $4 million of compute, but probably $20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is actually close to as good as human-labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit?
So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things, using, you know, Minecraft, and then OpenAI has like the Gym thing and all these things. And I think in one of your other podcasts, you mentioned Minecraft is way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure out that you needed to just build your own thing? Was it kind of like your engineering team was like, hey, this is too slow? Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is, how do you get an agent that is able to do many different tasks? With RL agents and environments at that time, what we heard from other RL researchers was that the biggest thing holding the field back is a lack of benchmarks that let us explore things like planning and curiosity and things like that, and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? That's a lot of what we had seen: these very handcrafted rewards. And so Avalon has like a single reward across all tasks. And it also allowed us to create a curriculum, so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is, with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non-RL specialists, what is a curriculum in your terminology?
[00:21:46]Kanjun: So a curriculum in this particular case is basically that the environment Avalon lets us generate simpler environments and harder environments for a given task. What's interesting is that in the simpler environments, as you'd expect, the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. And that's actually an interesting general intuition to have about training these things: what kind of signal are they getting? And how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle, for planning and reasoning. These agents were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors, like very complicated ones where multiple switches and a lever open the door, but they couldn't do any higher-level things. And they couldn't necessarily do those lower-level things consistently. And as a user, I do not want to interact with a pure reinforcement learning, end-to-end RL agent. As a user, I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford, who I think was working on SayCan at the time, where it's basically an experiment where they have robots trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent?
You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. 
[00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it? Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models, and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these things: spend more compute at inference time, context hacks, retrieval-augmented generation, et cetera, et cetera, et cetera. The pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually, I don't want to use this analogy. It's actually a really bad analogy, but you're kind of trying to get more signal out of the channel. We don't like to think about it that way. It's what the default approach is: trying to get more signal out of this noisy channel. But the issue with agents is, as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. It's not as bad as self-driving; in self-driving, you know, you're hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress, and one thing we learned by using these things internally, is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time, and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far, and we need to make the models better. We also need to make the kind of interface to the user better. And also a lot of the critiquing. I hope what we can do is help people who are building agents actually be able to deploy them.
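The "pile of hacks" Kanjun describes (self-critique, retries, spending more compute at inference time) can be sketched as a simple sample-and-critique loop. This is only an illustration of the pattern, not Imbue's actual system; `generate` and `critique` below are hypothetical stand-ins for real model calls.

```python
# A minimal sketch of a reliability loop: sample, self-critique, retry.
# `generate` and `critique` are hypothetical stand-ins for model calls.

def generate(prompt: str, attempt: int) -> str:
    # Stand-in for a language model call; varies by attempt to mimic sampling.
    return f"draft-{attempt} for: {prompt}"

def critique(draft: str) -> bool:
    # Stand-in for a model- or test-based check of the draft.
    return draft.startswith("draft-2")  # pretend only the third sample passes

def reliable_generate(prompt: str, max_attempts: int = 5) -> str:
    """Trade extra inference-time compute (and latency) for reliability."""
    for attempt in range(max_attempts):
        draft = generate(prompt, attempt)
        if critique(draft):
            return draft
    raise RuntimeError("no draft passed its own critique")

print(reliable_generate("fix the button color"))
```

The point of the sketch is the shape of the loop, not the stand-in functions: each retry spends more inference compute to push up the fraction of outputs that pass a check.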
I think, you know, that's the gap that we see a lot of today: everyone who's trying to build agents, to get to the point where it's robust enough to be deployable, it just takes an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Imbue is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case: we've built a lot of tools for ourselves internally around debugging, around abstractions or techniques after the model generation happens, like after the language model generates the text, and interfaces for the user, and the underlying model itself, like models talking to each other, maybe some set of those things kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. What we want to do is get to a point where we can start making an agent, deploy it, and it's reliable, very quickly. And there's a similar analog to software engineering: in the early days, in the sixties and seventies, to program a computer, you had to go all the way down to the registers and write things, and eventually we had assembly. That was an improvement. But then we wrote programming languages with these higher levels of abstraction, and that allowed a lot more people to do this, and much faster. And the software created is much less expensive. And I think it's basically a similar route here, where we're in the like bare metal phase of agent building. And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz, and we were like, there's not a lot of reasoning data out there. And can the models really understand?
And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, do you have to do a lot of manual work, or is there a way to prompt models to extract the reasoning from actions that they see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is, as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3,000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask: huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is a reasoning strategy that was developed in like the 1600s, you know, with the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ, all the time, lots of heuristics that help us be better at reasoning. And we didn't always have them. And because they're invented, we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is, and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box, or it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new, like scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun, and people are like, man, this model is crap. It's like, what are you talking about?
Like the sun revolves around the earth. It's like, how do you see the future? If the models are actually good enough, but we don't believe them, how do we make the two live together? So you're like, you use Imbue as a scientist to do a lot of your research, and Imbue tells you, hey, I think this is a serious path you should go down. And you're like, no, that sounds impossible. How is that trust going to be built? And what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is, as a person, I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is, okay, how do you do that? And that's kind of some of our debugging tools; they're not necessarily just for debugging. They're also for interfacing with and interacting with the model. So like if I go back in this reasoning trace and change a bunch of things, what's going to happen? What does it conclude instead? So that kind of helps me understand, what are its assumptions? And, you know, we think of these things as tools. And so it's really about, as a user, how do I use this tool effectively? I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as a tool within Imbue, just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities.
I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. 
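The idea above, that code is a way for models to act and that agents are like a natural-language programming language, can be illustrated with a toy sketch: a request in English is turned into code, which is then executed in a restricted namespace. `nl_to_code` is a hypothetical stand-in for a code model, and the canned reply is purely illustrative.

```python
# A toy sketch of "code as a way for models to act": translate a
# natural-language request into Python, then execute it in a
# restricted namespace so the generated code can only touch its inputs.

def nl_to_code(request: str) -> str:
    # Stand-in for a code model call; a real system would generate this.
    if "uppercase" in request:
        return "result = {k.upper(): v for k, v in files.items()}"
    return "result = files"

def act(request: str, files: dict) -> dict:
    code = nl_to_code(request)
    namespace = {"files": dict(files)}        # work on a copy of the inputs
    exec(code, {"__builtins__": {}}, namespace)  # no builtins available
    return namespace["result"]

print(act("rename these files to uppercase", {"a.txt": 1}))
```

Stripping `__builtins__` is only a gesture at sandboxing here; a real agent runtime would need genuine isolation before executing model-generated code.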
What do you think about the different approaches people have, kind of like text first, browser first, like MultiOn? What do you think the best interface will be? Or what is your thinking today? [00:33:59]Kanjun: In a lot of ways, like chat as an interface, I think Linus, Linus Lee, who you had on, said this, and I really like how he put it: chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines, because that's what we understood these objects to be. Chat, like texting someone, is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally think about, what are the right representations for that? Like architecturally, what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like, you know, this stochastic generation of text is a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is that actually, by building better abstractions and better tooling, we can make these things non-leaky. And now you can build whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned this is kind of like the Xerox PARC moment for AI. And we had a lot of stuff come out of PARC, like the what-you-see-is-what-you-get editors and like MVC and all this stuff. But yeah, we didn't have the iPhone at PARC.
We didn't have all these like higher things. What do you think it's reasonable to expect in this era of AI, you know, call it five years or so? What are the things we'll build today, and what are the things that maybe we'll see in kind of the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. What we're seeing right now is basically a continuous wave. Let me zoom a little bit earlier. So people like the Xerox PARC analogy I give, but I think there are many different analogies. One is the analog-to-digital-computer transition; that's another analogy for where we are today. The analog computer Vannevar Bush built in the 1930s, I think, was a system of pulleys, and it could only calculate one function, like an integral. And that was so magical at the time, because you actually did need to calculate this integral a bunch. But it had a bunch of issues; in analog, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer: Turing's decidability, Shannon's insight that relay circuits can be mapped to Boolean operators, and a set of other theoretical breakthroughs, which essentially were abstractions. They were creating abstractions for these very lossy, very analog circuits, and digital had this nice property of being error-correcting. And so when I talk about less leaky abstractions, that's what I mean. That's what I'm pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about, how do we get to computers that, as a person, I can actually use well? And the interface actually helps it unlock so much more power.
So the sets of things we're working on, the sets of abstractions and the interfaces, hopefully those help us unlock a lot more power in these systems. Hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out: an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know if it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now. Part of why I think it's early is because the issue with agents, it's not quite like the internet, where you could make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent-to-agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain-of-thoughting, but any perspectives on MetaGPT? I think that's the name of the paper. I don't know if you care about the level of individual papers coming out, but I did read that recently, and TLDR, it beat GPT-4 on HumanEval by role-playing a software development agency: instead of having sort of a single shot or a single role, you have multiple roles, and you have all of them criticize each other, as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction: okay, can I just plop in this multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent?
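The multi-role critiquing pattern mentioned here (as in MetaGPT) can be sketched as a loop where several role-conditioned reviewers each object to a draft, and the draft is revised until no role objects. The role functions and `revise` below are hypothetical stand-ins for role-prompted model calls, not the paper's actual implementation.

```python
# A minimal sketch of multi-role critiquing: role "agents" review one
# draft and it is revised until every role is satisfied. Each function
# is a hypothetical stand-in for a role-prompted model call.

def engineer(draft):
    return None if "tests" in draft else "add tests"

def reviewer(draft):
    return None if "docs" in draft else "add docs"

ROLES = [engineer, reviewer]

def revise(draft, feedback):
    # Stand-in for a revision step: append what the critique asked for.
    return draft + " " + feedback.split()[-1]  # "add tests" -> append "tests"

def multi_role_loop(draft, max_rounds=5):
    for _ in range(max_rounds):
        feedback = [f for role in ROLES if (f := role(draft))]
        if not feedback:        # every role signed off
            return draft
        for f in feedback:
            draft = revise(draft, f)
    return draft
```

The design choice the pattern illustrates is that each role only has to check one narrow property, which is easier to prompt reliably than one monolithic "write it perfectly" instruction.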
One issue with this kind of prompting is that it's still not very reliable. There's one lens, which is, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt, and how do we deal with that messiness? And so maybe you need cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would contrast your approach with OpenAI, as OpenAI tends to just lean on, hey, we played StarCraft, or hey, we ran it on the SAT or the, you know, the AP bio test, and here are the results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for, like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the diamond pickaxe or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineering world, the people who are new to the scene.
But for people like yourselves, you built Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]Kanjun: I mean, Jim Fan, I think, is awesome. And I really like the Voyager paper, and I think it has a lot of really interesting ideas, like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to learn the things we wanted. And so it was not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment, our own sandbox, to really try to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things, Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long-running tasks, if they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away?
Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think, when you talked about what's going to happen in the first wave and then in the second wave, what we'll see is we'll get relatively simplistic agents pretty soon, and they will get more and more complex. And there's a future wave in which they are able to do these really difficult, really long-running tasks. And one of the blockers to that future is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be, we need this amount of memory, which is like, I don't remember exactly, like 32 kilobytes or something, to store programs. And that will allow us to write software. He didn't say it this way, because he didn't have these terms, but that only really happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have really good long-running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think, with the pace of the field, we'll probably come up with all sorts of interesting things. Like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my field, and a lot of that data is maybe hard to fine-tune on, or maybe hard to put into pre-training. A lot of that data, I don't have a lot of repeats of the data that I'm seeing. You know, if I'm a scientist, I've accumulated so many little data points.
And ideally I'd want to store those somehow, or use those to fine-tune myself as a model somehow, or have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for user preferences and things like that. Like, what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to Imbue is Adept. You know, a research lab with some amount of product situation on the horizon, but not just yet, right? Why should people work for Imbue over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is, I believe in our approach. The type of thing that we're doing is we're trying to build something that enables other people to build agents, and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of, why us? I think: strong focus on reasoning, which we believe is the biggest blocker; on inspectability, which we believe is really important for user experience and also for the power and capability of these systems; building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable; and then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people, that becomes something that is a platform.
Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. 
So we build a lot of our own evals, and every agent builder tells me basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me, because I would love to figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we had Sharif from Lexica on the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high-level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]Swyx: more about it. [00:48:12]Alessio: So one is, why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to go explore things, or kind of focus on your career? How do you think about that from an agent perspective? Where it's like, should you just stick to the task and try to solve it within the guardrails as much as possible? Or should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents, actually, is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and say, okay, here, fork this and try a bunch of different things.
Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent to, like, at a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money? How do you feel your options to be curious versus like goal oriented has changed as you raise more money and kind of like the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A $20 million in late 2021. And our entire philosophy at that time was, and still kind of is, is like, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. Great name, by the way. [00:50:28]Swyx: Thank you. [00:50:29]Kanjun: Is there a story behind that name? [00:50:30]Swyx: Yeah. [00:50:31]Kanjun: Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work. 
But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models and have those experiment results carry over to larger ones. [00:50:56]Swyx: And you also published scaling laws, which is great. I think the scaling laws papers from OpenAI and from Google were the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger. [00
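The agent-forking idea from this conversation — snapshot an agent partway through a task, then branch copies steered by different prompts — can be sketched in a few lines. Everything here (the `Agent` class, the `fork` helper, the prompt strings) is a hypothetical illustration, not Generally Intelligent's actual API:

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent: a steering prompt plus the context accumulated so far."""
    system_prompt: str
    history: list = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Stand-in for a real model call; just records the step.
        self.history.append(observation)
        return f"[{self.system_prompt}] responding to: {observation}"

def fork(agent: Agent, new_prompt: str) -> Agent:
    """Copy the agent's full context, but steer the copy differently."""
    clone = deepcopy(agent)
    clone.system_prompt = new_prompt
    return clone

# Run one agent partway through a task...
base = Agent(system_prompt="goal-oriented")
base.act("step 1: read the ticket")

# ...then fork it and explore both modes from the same point.
curious = fork(base, "curiosity-driven")
a = base.act("step 2")     # stays on task
b = curious.act("step 2")  # same accumulated context, different steering
```

The key point is that, unlike a person, both branches inherit the full context up to the fork and can then diverge.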
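The CARBS workflow described above — run cheap experiments on small models, then let scaling laws carry the results to larger ones — can be illustrated with a toy power-law fit. The numbers are synthetic and this is a generic sketch of the scaling-law idea, not the actual CARBS optimizer:

```python
import math

def fit_power_law(sizes, losses):
    """Least-squares fit of loss ~= a * size**(-b) in log-log space."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # loss = a * size**(-b)

# Synthetic results from three cheap small-model runs (made-up numbers).
sizes = [1e6, 1e7, 1e8]    # parameter counts
losses = [4.0, 3.2, 2.56]  # validation loss at each size

a, b = fit_power_law(sizes, losses)
predicted = a * 1e9 ** (-b)  # extrapolated loss for a 10x larger model
```

In practice one would fit such curves per hyperparameter configuration and pick the settings whose extrapolation looks best per unit of compute, which is where the "cost aware" part comes in.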
New Friedman IR-X Deep Dive with special guest Michael Nielsen!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #30: Dalle-3 and GPT-3.5-Instruct-Turbo, published by Zvi on September 21, 2023 on LessWrong. We are about to see what looks like a substantial leap in image models. OpenAI will be integrating Dalle-3 into ChatGPT, the pictures we've seen look gorgeous and richly detailed, with the ability to generate pictures to much more complex specifications than existing image models. Before, the rule of thumb was you could get one of each magisteria, but good luck getting two things you want from a given magisteria. Now, perhaps, you can, if you are willing to give up on adult content and images of public figures since OpenAI is (quite understandably) no fun. We will find out in a few weeks, as it rolls out to ChatGPT+ users. As usual a bunch of other stuff also happened, including a model danger classification system from Anthropic, OpenAI announcing an outside red teaming squad, a study of AI impact on consultant job performance, some incremental upgrades to Bard including an extension for GMail, new abilities to diagnose medical conditions and some rhetorical innovations. Also don't look now but GPT-3.5-Turbo-Instruct plays Chess at 1800 Elo, and due to its relative lack of destructive RLHF seems to offer relatively strong performance at a very low cost and very high speed, although for most purposes its final quality is still substantially behind GPT-4. Table of Contents Introduction. Table of Contents. Language Models Offer Mundane Utility. GPT-4 boosts consultant productivity. Language Models Don't Offer Mundane Utility. Do we want to boost that? Level Two Bard. Some improvements, I suppose. Still needs a lot of work. Wouldn't You Prefer a Good Game of Chess? An LLM at 1800 Elo. World model. GPT-4 Real This Time. GPT-3.5-Instruct-Turbo proves its practical use, perhaps. Fun With Image Generation. 
Introducing Dalle-3. Deepfaketown and Botpocalypse Soon. Amazon limits self-publishing to 3 a day. Get Involved. OpenAI hiring for mundane safety, beware the double-edged sword. Introducing. OpenAI red team network, Anthropic responsible scaling policy. In Other AI News. UK government and AI CEO both change their minds. Technical Details. One grok for grammar, another for understanding. Quiet Speculations. Michael Nielsen offers extended thoughts on extinction risk. The Quest for Sane Regulation. Everyone is joining the debate, it seems. The Week in Audio. A lecture about copyright law. Rhetorical Innovation. We keep trying. No One Would Be So Stupid As To. Are we asking you to stop? Aligning a Smarter Than Human Intelligence is Difficult. Asimov's laws? No. I Didn't Do It, No One Saw Me Do It, You Can't Prove Anything. Can you? People Are Worried About AI Killing Everyone. Yet another round of exactly how. Other People Are Not As Worried About AI Killing Everyone. Tony Blair. The Lighter Side. Jesus, flip the tables. Language Models Offer Mundane Utility Diagnose eye diseases. This seems like a very safe application even with false positives; humans can verify anything the AI finds. Diagnose foetal growth restrictions early, in theory and technically, using graph neural networks. Use the 'reading mode' in Android or Chrome to strip out the words from a webpage, in an actually readable size and font, much more accurate than older attempts. Seems you have to turn it on under Chrome flags. GPT-4 showing some solid theory of mind in a relatively easy situation. Always notice whether you are finding out it can do X consistently, can do X typically, or can do X once with bespoke prompting. The same with failure to do X. What does it mean that a model would ever say ~X, versus that it does all the time, versus it does every time? Each is different. How to convince people who are unimpressed by code writing that LLMs are not simply parrots? 
Eliezer asked on Twitter, and said ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence", published by Joel Becker on September 20, 2023 on The Effective Altruism Forum. Summary From the piece: Earlier this year I decided to take a few weeks to figure out what I think about the existential risk from Artificial Superintelligence (ASI xrisk). It turned out to be much more difficult than I thought. After several months of reading, thinking, and talking with people, what follows is a discussion of a few observations arising during this exploration, including: Three ASI xrisk persuasion paradoxes, which make it intrinsically difficult to present strong evidence either for or against ASI xrisk. The lack of such compelling evidence is part of the reason there is such strong disagreement about ASI xrisk, with people often (understandably) relying instead on prior beliefs, self-interest, and tribal reasoning to decide their opinions. The alignment dilemma: should someone concerned with xrisk contribute to concrete alignment work, since it's the only way we can hope to build safe systems; or should they refuse to do such work, as contributing to accelerating a bad outcome? Part of a broader discussion of the accelerationist character of much AI alignment work, so capabilities / alignment is a false dichotomy. The doomsday question: are there recipes for ruin -- simple, easily executed, immensely destructive recipes that could end humanity, or wreak catastrophic world-changing damage? What bottlenecks are there on ASI speeding up scientific discovery? And, in particular: is it possible for ASI to discover new levels of emergent phenomena, latent in existing theories? Excerpts Here are the passages I thought were interesting enough to tweet about: "So, what's your probability of doom?" 
I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades - indeed, we don't need ASI for that, we can likely arrange it in other ways (nukes, engineered viruses, …). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me after starting to routinely hear people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard - a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why. But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action. With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist. 
There's an extremely thoughtful essay by Paul Christiano, one of the pioneers of both RLHF and AI safety, where he addresses the question of whether he regrets working ...
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/michael_nielsen_open_science_now ■Post on this topic (You can get FREE learning materials!) https://englist.me/175-academic-words-reference-from-michael-nielsen-open-science-now-ted-talk/ ■Youtube Video https://youtu.be/zgZkBL3ZMII (All Words) https://youtu.be/kBOSnangx1c (Advanced Words) https://youtu.be/vV-xe04p0Pc (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Michael Nielsen remarks on 'Oppenheimer', published by Tom Barnes on August 31, 2023 on The Effective Altruism Forum. This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. This blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own) I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car". [...] I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all-too-often seem self-serving or self-deceiving. Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants [...] Here are four caricatures: Klaus Fuchs and Ted Hall were two Manhattan Project physicists who took it upon themselves to commit espionage, communicating the secret of the bomb to the Soviet Union. It's difficult to know for sure, but both seem to have been deeply morally engaged and trying to do the right thing, willing to risk their lives; they also made, I strongly believe, a terrible error of judgment. 
I take it as a warning that caring and courage and imagination are not enough; they can, in fact, lead to very bad outcomes. Robert Wilson, the physicist who recruited Richard Feynman to the project. Wilson had thought deeply about Nazi Germany, and the capabilities of German physics and industry, and made a principled commitment to the project on that basis. He half-heartedly considered leaving when Germany surrendered, but opted to continue until the bombings in Japan. He later regretted that choice; immediately after the Trinity Test he was disconsolate, telling an exuberant Feynman: "It's a terrible thing that we made". Oppenheimer, who I believe was motivated in part by a genuine fear of the Nazis, but also in part by personal ambition and a desire for "success". It's interesting to ponder his statements after the War: while he seems to have genuinely felt a strong need to work on the bomb in the face of the Nazi threat, his comments about continuing to work up to the bombing of Hiroshima and Nagasaki contain many strained self-exculpatory statements about how you have to work on it as a scientist, that the technical problem is too sweet. It smells, to me, of someone looking for self-justification. Joseph Rotblat, the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb. He was threatened by the head of Los Alamos security, and falsely accused of having met with Soviet agents. In leaving he was turning his back on his most important professional peers at a crucial time in his career. Doing so must have required tremendous courage and moral imagination. Part of what makes the choice intriguing is that he himself didn't think it would make any difference to the success of the project. 
I know I personally find it tempting to think about such choices in abstract systems terms: "I, individually, can't change systems outcomes by refusing to participate ['it's inevitable!'], therefore it's okay to participate". And yet while that view seems reasonable, Rotblat's example shows it is incorrect. His private moral...
Friend of the podcast Michael Nielsen joins us for an in depth chat about his career and gear obsessions.
Running coach, personal trainer, podcaster and ultra runner - Michael Nielsen wears a few hats when it comes to his running journey. As a coach and through his Runner's Resource podcast, Michael is eager to share his knowledge of a sport with which he has enjoyed a life-long relationship. On his podcast, Michael has not only talked to a succession of runners with fascinating stories, but also shares his own wide-ranging knowledge, giving hints and tips on a variety of subjects, from the benefits of pilates to the different energy systems used to fuel your muscles. We also spoke to Michael about his own running, including taking on 50km trail races and conquering the 50 mile distance… ---------------------------------- You can listen to The Runner's Resource wherever you get your podcasts, including Apple: https://podcasts.apple.com/gb/podcast/the-runners-resource/id1623169728 Subscribe to our Substack newsletter at https://runningtales.substack.com If you like this episode, please consider donating to help us keep going: https://www.buymeacoffee.com/stepforward
Kanjun is co-founder and CEO of Generally Intelligent, an AI research company. She works on metascience ideas, often with Michael Nielsen, a previous podcast guest. She's a VC investor and co-hosts her own podcast for Generally Intelligent. She is part of building the Neighborhood, an intergenerational campus in a square mile of central San Francisco. Generally Intelligent (as of the podcast date) are looking for great talent to work on AI. We get a little nerdy on the podcast, but we cover AI thinking, fears of rogue AI, and the breakthroughs of chat AI. We discuss some of her latest ideas in metascience based on the work she has done with Michael Nielsen (previous podcast here) and what are the important questions we should be looking at. We chat about the challenge of old institutions, the value of dance and creativity, and why her friends use "to kanjun" as a verb. We cover her ideas on models of trauma and why EMDR (Eye Movement Desensitization and Reprocessing therapy) and cognitive therapies might work. We discuss why dinosaurs didn't develop more. We chat around "what is meaning" and "what is the structure of knowledge"; the strengths and weaknesses of old institutions; culture vs knowledge vs history; and other confusing questions. Kanjun gives her advice on how to think about dance (dance like you are moving through molasses). "Dance is inside of you. It just needs to be unlocked." We play underrated/overrated on: having agency, city planning, death of institutions, innovation agencies, high frequency trading, and diversity. Kanjun thinks on how capitalism might want to be augmented and what excites her about AI and complex systems. Kanjun asks me questions and I offer my critique of Effective Altruism. This is a quirky long-form conversation on a range of fascinating topics. Transcript and video available here: https://www.thendobetter.com/arts/2023/1/17/kanjun-qiu-ai-metascience-institutional-knowledge-trauma-models-podcast
Michael's big drive is riding for Mission 22; he is an official ambassador for them and is also a retired Army Combat Veteran. He is also a Mile Monster Rider and a very good friend of mine. Want to help support the channel? Check out my social media pages and follow there as well.
Jim talks with Michael Nielsen about the ideas in his and Kanjun Qiu's recent essay, "A Vision of Metascience: An Engine of Improvement for the Social Processes of Science." They discuss the meaning of metascience, a vivid example in Genovese maritime insurance, attracting intellectual dark matter, creation & limitations of the h-index, frozen accidents in our scientific operating system, what allowed the original DARPA to be so productive, funding-by-variance, failure audits, changing the unit of evaluation from papers to software, at-the-bench fellowships, science funders as detectors & predictors, endowed professorships by age 25, eliciting the secret thesis, metascience as an imaginative design practice, bottlenecks to decentralized improvement, the Open Science Collaboration, pre-registered study designs, metascience entrepreneurship, the arXiv preprint server, and much more. Episode Transcript "A Vision of Metascience: An Engine of Improvement for the Social Processes of Science," by Michael Nielsen and Kanjun Qiu Michael Nielsen (website) JRS EP12 - Brian Nosek – Open Science and Reproducibility Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. His main current projects are in metascience, programmable matter, and tools for thought. He is the recent co-author of a book-long essay, "A Vision of Metascience", outlining the ways in which the institutions of science can become self-improving. All his work is united by a broader interest in tools that help people think and create, both individually and collectively. He is a research fellow at the Astera Institute in the San Francisco Bay Area.
Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom. "Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay. Original article: https://michaelnotebook.com/eanotes/ Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Michael Nielsen is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on the topic of metascience and how to improve science, in particular the social processes of science. His latest co-authored work, written with Kanjun Qiu, is ‘A Vision of Metascience: An Engine of Improvement for the Social Processes of Science'. His website notebook is here, with further links to his books including on quantum, memory systems, deep learning, open science and the future of matter. I ask: what is the most important question in science or metascience we should be seeking to understand at the moment? We discuss his vision for what a metascience ecosystem could be; what progress could be; and ideas for improving the culture of science and its social processes. We imagine what an alien might think about our social processes and discuss failure audits, high-variance funding, and whether organisations really fund ‘high risk' projects if not that many fail, and how we might measure this. We discuss how these ideas might not work and be wrong; the difficulty of (the lack of) language for newly forming fields; and how an interdisciplinary institute might work. The possible importance of serendipity and agglomeration effects; what to do about attracting outsiders, and funding unusual ideas. We touch on the stories of Einstein, Katalin Kariko (mRNA) and Doug Prasher (molecular biologist turned van driver) and what they might tell us. We discuss how metascience can be treated as a research field and also as an entrepreneurial discipline. We discuss how decentralisation may help. How new institutions may help. The challenges funders face in wanting to wait until ideas become clearer. We discuss the opportunity that developing nations such as Indonesia might have. We chat about rationality and critical rationality. 
Michael gives some insights into how AI art might be used and how we might never master certain languages, like the languages of early computing. We end on some thoughts Michael might give his younger self: The one thing I wish I'd understood much earlier is the extent to which there's kind of an asymmetry in what you see, which is you're always tempted not to make a jump because you see very clearly what you're giving up and you don't see very clearly what it is you're going to gain. So almost all of the interesting opportunities on the other side of that are opaque to you now. You have a very limited kind of a vision into them. You can get around it a little bit by chatting with people who maybe are doing something similar, but it's so much more limited. And yet I know when reasoning about it, I want to treat them like my views of the two are somehow parallel but they're just not. Transcript/Video available here: https://www.thendobetter.com/arts/2022/11/15/michael-nielsen-metascience-how-to-improve-science-open-science-podcast
Michael Nielsen is a quantum physicist, science writer, computer programming researcher, and modern polymath working on tools to expand human capacity to think and create. He's previously authored pioneering quantum computing books, propelled forward the open science movement, and published research on artificial intelligence. He now researches meta-science at the Astera Institute, while writing about his many interests online. See www.notion.so/blog/michael-nielsen for episode transcript. Hosted by Devon Zuegel Edited by Anson Yu Audio by The Land Films
Michael Nielsen has covered a lot of race miles in his day, and now he's helping others do the same thing. One of his biggest coaching pillars: the value of a good warm-up. Check out the show notes for today's episode at http://DizRuns.com/1096. Today's episode of the show is sponsored by: the Little Things course! Check out this FREE course to help you shore up some potential weak links that are slowing your growth as a runner. http://DizRuns.com/littlethings Love the show? Check out the support page for ways you can help keep the Diz Runs Radio going strong! http://dizruns.com/support Become a Patron of the Show! Visit http://Patreon.com/DizRuns to find out how. Get Your Diz Runs Radio Swag! http://dizruns.com/magnet Subscribe to the Diz Runs Radio Find Me on an Apple Device http://dizruns.com/itunes Find Me on an Android http://dizruns.com/stitcher Find Me on SoundCloud http://dizruns.com/soundcloud Please Take the Diz Runs Radio Listener Survey http://dizruns.com/survey Win a Free 16-Week Training Plan Enter at http://dizruns.com/giveaway Join The Tribe If you'd like to stay up to date with everything going on in the Diz Runs world, become a member of the tribe! The tribe gets a weekly email where I share running tips and stories about running and/or things going on in my life. To get the emails, just sign up at http://dizruns.com/join-the-tribe The tribe also has an open group on Facebook, where tribe members can join each other to talk about running, life, and anything in between. Check out the group and join the tribe at https://www.facebook.com/groups/thedizrunstribe/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Carl Sagan quotations, published by finm on October 10, 2022 on The Effective Altruism Forum. Carl Sagan (1934–1996) was an astronomer and science communicator. He organised the first physical messages to space (the Pioneer plaque and the Voyager Golden Record), presented the hugely popular TV series Cosmos (1980), and considered humanity's long-term future in Pale Blue Dot (1994). He was also part of the team of researchers who first discovered the possibility of nuclear winter, and so became a leading voice of concern about the use of nuclear weapons. Sagan's words were often prescient and always poetic. In particular, I think he captures many ideas related to longtermism and existential risk as powerfully as anyone writing today. I've tried collecting some quotations that stand out to me from Sagan's work, though I've only read a minority of his published writing. You can find a slightly more comprehensive version here. The website for Toby Ord's book The Precipice contains a list of quotations pertaining to existential risk, which I partially borrowed from here. Michael Nielsen has also written some fantastic 'working notes' on Cosmos. Cosmos: A Personal Voyage (1980) Note that Cosmos was co-written with Ann Druyan. Episode 1 — "The Shores of the Cosmic Ocean" The cosmos is all that is, or ever was, or ever will be. Our contemplations of the Cosmos stir us — there is a tingling in the spine, a catch in the voice, a faint sensation, as if a distant memory, of falling from a great height. We know we are approaching the greatest of mysteries. The size and age of the cosmos are beyond ordinary human understanding. Lost somewhere between immensity and eternity is our tiny planetary home, the Earth. For the first time we have the power to determine the fate of our planet, and ourselves. 
This is a time of great danger, but our species is young and curious and brave. It shows much promise. In the last few millennia we have made the most astonishing and unexpected discoveries about the cosmos, and our place within it. I believe our future depends powerfully on how we understand this cosmos, in which we float, like a mote of dust, in the morning sky. You can watch this opening scene here. The surface of the Earth is the shore of the cosmic ocean. On this shore, we've learned most of what we know. Recently, we've waded a little way out, maybe ankle-deep, and the water seems inviting. Some part of our being knows this is where we came from; we long to return — and we can, because the Cosmos is also within us: we are made of star stuff. We are the legacy of 15 billion years of cosmic evolution. We have a choice. We can enhance life and come to know the universe that made us, or we can squander our 15 billion year heritage in meaningless self-destruction. What happens in the first second of the next cosmic year depends on what we do — here and now — with our intelligence, and our knowledge of the cosmos. Episode 13 — "Who Speaks for Earth?" [Imagining human extinction] Maybe the reptiles will evolve intelligence once more. Perhaps, one day, there will be civilizations again on Earth. There will be life. There will be intelligence. But there will be no more humans. Not here, not on a billion worlds. [T]he world impoverishes itself by spending a trillion dollars a year on preparations for war. And by employing perhaps half the scientists and high technologists on the planet in military endeavors. How would we explain all this to a dispassionate extraterrestrial observer? What account would we give of our stewardship of the planet Earth? We have heard the rationales offered by the superpowers. We know who speaks for the nations. But who speaks for the human species? It's probably here.
[Alexandria] that the word "cosmopolitan" realized its true meaning of a citizen, not just...
Digital twins are one of the buzzwords of our time. The idea is to create software copies of everything from wind turbines and robots to cars and people. But how far along are we, really, in building these twins? The twins are found especially in industrial companies, which can gain competitive advantages from having digital twins at their disposal. Unfortunately, small and medium-sized Danish companies are lagging behind. That is why this year's Automatik trade fair in Brøndbyhallen focuses on digital twins. Featuring: Michael Nielsen, CEO, Beckhoff Automation; Peter Gorm Larsen, professor, Centre for Digital Twins, Aarhus University. Links: Automatikmessen https://www.automatikmesse.dk/for-besoegende/konferencer Center for Digitale Tvillinger https://digit.au.dk/centre-for-digital-twins
Read the full transcript. What is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing? Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info. Michael Nielsen was on the podcast back in episode 016. You can read more about him there!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Michael Nielsen's "Notes on effective altruism", published by Pablo on June 3, 2022 on The Effective Altruism Forum. Quantum physicist Michael Nielsen has published an impressive critical essay on EA. Summary: Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom. Some passages I highlighted: I have EA friends who donate a large fraction of their income to charitable causes. In some cases it's all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That's a simple sentence, but an extraordinary one, so I'll repeat it: they've directly helped save many lives. As extraordinary as my friend's generosity was, there is something further still going on here. Kravinsky's act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. 
It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut, inventing new forms of moral experience. Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn't realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they "know" is a good cause, but in fact doing harm. I'm cautiously enthusiastic about EA's moral pioneering. But it is potentially a minefield, something to also be cautious about. when EA judo is practiced too much, it's worth looking for more fundamental problems. The basic form of EA judo is: "Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good." This is perhaps true in some God's-eye, omniscient, in-principle philosopher's sense. But EA community and organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren't enough to ensure effective decisions about effectiveness. And the reason many people are bothered by EA is not that they think it's a bad idea to "do good better". But rather that they doubt the ability of EA institutions and community to live up to the aspirations. These critiques can come from many directions. From people interested in identity politics I've heard: "Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter." 
From libertarian...
A basket of indicators all seem to document a similar trend: even as the number of scientists and publications rises substantially, we do not appear to be seeing a concomitant rise in new discoveries that supplant older ones. Science is getting harder. This podcast is an audio read-through of the (initial draft of the) post "Science is getting harder," published on New Things Under the Sun. Articles mentioned:
Bloom, Nicholas, Charles I. Jones, John Van Reenen, and Michael Webb. 2020. "Are Ideas Getting Harder to Find?" American Economic Review 110(4): 1104-1144. https://doi.org/10.1257/aer.20180338
Wang, Dashun, and Albert-László Barabási. 2021. The Science of Science. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108610834
Li, Jichao, Yian Yin, Santo Fortunato, and Dashun Wang. 2019. "A dataset of publication records for Nobel Laureates." Scientific Data 6: 33. https://doi.org/10.1038/s41597-019-0033-6
Collison, Patrick, and Michael Nielsen. 2018. "Science Is Getting Less Bang for Its Buck." The Atlantic.
Chu, Johan S.G., and James A. Evans. 2021. "Slowed canonical progress in large fields of science." PNAS 118(41): e2021636118. https://doi.org/10.1073/pnas.2021636118
Milojević, Staša. 2015. "Quantifying the cognitive extent of science." Journal of Informetrics 9(4): 962-973. https://doi.org/10.1016/j.joi.2015.10.005
Carayol, Nicolas, Agenor Lahatte, and Oscar Llopis. 2019. "The Right Job and the Job Right: Novelty, Impact and Journal Stratification in Science." SSRN working paper. http://dx.doi.org/10.2139/ssrn.3347326
Larivière, Vincent, Éric Archambault, and Yves Gingras. 2007. "Long-term patterns in the aging of the scientific literature, 1900–2004." Proceedings of ISSI 2007, ed. Daniel Torres-Salinas and Henk F. Moed. https://www.issi-society.org/publications/issi-conference-proceedings/proceedings-of-issi-2007/
Cui, Haochuan, Lingfei Wu, and James A. Evans. 2022. "Aging scientists and slowed advance." arXiv 2202.04044. https://doi.org/10.48550/arXiv.2202.04044
Marx, Matt, and Aaron Fuegi. "Reliance on Science: Worldwide Front-Page Patent Citations to Scientific Articles." https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3331686
Andy sits down with Michael Nielsen to chat about how you can stay creative in all your endeavors.
This time we dive into the role of the DPO together with Michael Nielsen from DPO Danmark. Michael has long experience with IT governance, information security, and IT management, as well as compliance, advisory, and maturity work. He has served as external DPO for a wide range of companies. We will talk about: 1. When you must have a DPO, and when it is merely a good idea; 2. Should it be an internal or an external resource?; 3. The DPO's tasks; 4. The interplay between the DPO and the rest of the GDPR team; 5. The DPO's relationship with management, and how that balance is handled in practice; 6. How to succeed in the role of DPO. Michael Nielsen is a partner and DPO at DPO Danmark. He has long experience with IT management, IT security, and IT governance, and is a trained DPO. Michael has helped many companies through the process of complying with the EU data protection regulation, with a focus on "making the complicated easy" for companies. He also serves as DPO for several organizations, small and large. Privacy League Danmark is a podcast from Wired Relations about GDPR and information security. Learn more about Wired Relations at www.wiredrelations.com
Andy and Patrick are joined by friend of the show Michael Nielsen to discuss whether designers have the chops to become PMs.
TalentLab presents leisure-activity podcasts from across the country. This hour features an interview with Michael Nielsen Søberg.
TalentLab presents leisure-activity podcasts from across the country. This episode's podcast is Kryptopia.
"I read The Mundanity of Excellence by Daniel Chambliss. I came across this essay through a tweet from Michael Nielsen, who mentioned Freyja's excellent “Becoming a magician” essay (which will be part of the Archive in the future – I blogged about it briefly). Jason Crawford replied to Michael, pointing to today's essay. I have since read and reread it and cannot stop thinking about it."
Michael Nielsen was a criminal and had a bad relationship with one particular police officer. That officer ended up shooting him in the chest in 2009, killing him. Host: Per Lysholt. Production: Nicholas Durup Thomsen & Sophie Lier. Mix: Tobias Ingemann. Editor: Mads Petter Kühnel. Write to detaljen@dr.dk. Produced for DR by Filt Cph.
In this week's CatPick Fridays episode Rich and I check out the new Friedman BE Mini, Fender FSR Bullet and Classic Vibe guitars, upcoming Epiphone Adam Jones signature, the bands we want to see live, talk about my Gibson Les Paul Standard 2012 I was forced to sell, answer some of your questions & comments and check out a hilarious Weekend Watch recommendation from Gear Gods. CatPick Fridays can be found both on YouTube and Apple Podcasts and Spotify. Get 25% off of Get Songs Done songwriting course by using the code ”PODCAST” at the checkout: https://www.catpickstudios.com/get-songs-done-offer Rich. Words. Music: https://www.youtube.com/channel/UCbNnXraM9oHWWBACNS07DMw Friedman BE Mini Head https://friedmanamplification.com/heads/be-mini-head Michael Nielsen video on Friedman BE Mini https://youtu.be/0dCZwVpAvZY Squier Guitars https://www.musicradar.com/news/squier-launches-fsr-bullet-competition-mustang-and-hss-stratocaster-plus-a-refreshed-classic-vibe-60s-custom-tele-in-two-fruity-finishes Adam Jones Epiphone Les Paul https://www.musicradar.com/news/adam-jones-teases-an-epiphone-edition-of-his-signature-1979-gibson-les-paul-custom Gibson Les Paul Standard 2012 https://www.dropbox.com/s/cjj92j1hni4u3bi/16.jpg?dl=0 P&W Comment as a screenshot https://www.dropbox.com/s/lcml7qlohgge24z/Screenshot%202021-05-04%20at%2021.50.05.png?dl=0 Gear Gods Songwriting Entries https://www.youtube.com/watch?v=fu3EOJVL-5I 00:00 Intro 05:42 Friedman BE Mini 15:16 Fender Squier FSR Bullet Guitars 21:13 Adam Jones Signature Epiphone Les Paul 27:35 My Recent Livestream 30:38 The Gigs We Want To see 47:55 Random Positive Thing 52:08 Gibson Les Paul Standard 2012 1:00:56 Questions & Comments 1:16:12 Trey's Songwriting Contest
Rig Doctor Podcast: Tone Tips, Pedalboard Tricks, & Easy DIY Hacks
Michael Nielsen | Pedals That Sound Like Vintage Rack Gear SUPPORT The Rig Doctor Podcast RIG DOCTOR - https://www.therigdr.com PAYPAL - https://paypal.me/rigdoctor STORE - https://vertexeffects.com/store FOLLOW The Rig Doctor YOUTUBE - https://youtube.com/vertexeffectsinc INSTAGRAM - https://instagram.com/vertexeffects FACEBOOK - https://facebook.com/vertexeffectsinc WEBSITE - https://vertexeffects.com Rig Doctor Approved MATERIALS https://vertexeffects.com/rig-dr-recommended-pedalboard-materials CONTACT The Rig Doctor info@therigdr.com
Michael Nielsen went from cooped up to near financial freedom in less than one year by putting his inner creative to work. His YouTube channel, Outdoor Therapy, is generating a six-figure income, and seven figures is within his reach. He attributes most of his success to his mindset and tenacity. You can find Michael's work on YouTube by searching for Outdoor Therapy. Support the show (https://www.buymeacoffee.com/wealthwatchers)
Timestamps: (1:55) Alba shared her background growing up interested in studying physics and pivoting into quantum mechanics. (3:33) Alba went over her Bachelor's in Fundamental Physics at the University of Barcelona. (4:54) Alba continued her education with an M.S. degree that specialized in Particle Physics and Gravitation. (6:40) Alba started her Ph.D. in Physics in 2015 and discussed her first publication, "Operational Approach to Bell Inequalities: Application to Qutrits." (9:48) Alba also spent time as a visiting scholar at the University of Oxford and the University of Madrid during her Ph.D. (11:25) Alba explained her second paper, which explores the connection between maximal entanglement and the fundamental symmetries of high-energy physics. (13:27) Alba dissected her next work, "Multipartite Entanglement in Spin Chains and the Hyperdeterminant." (18:56) Alba shared the origin of Quantic, a quantum computation joint effort between the University of Barcelona and the Barcelona Supercomputing Center. (22:27) Alba unpacked her article "Quantum Computation: Playing the Quantum Symphony," drawing a metaphor between quantum computing and a musical symphony. (27:47) Alba discussed the motivation and contribution of her paper "Exact Ising Model Simulation on a Quantum Computer." (32:51) Alba recalled creating a tutorial that won the Teach Me QISKit challenge from IBM back in 2018. (35:01) Alba elaborated on her paper "Quantum Circuits for the Maximally Entangled States," which designs a series of quantum circuits that generate absolutely maximally entangled states to benchmark a quantum computer. (38:54) Alba dissected key ideas in her paper "Data Re-Uploading for a Universal Quantum Classifier." (43:51) Alba explained how she leveled up her knowledge of classical neural networks. (47:40) Alba shared her experience as a Postdoctoral Fellow at The Matter Lab at the University of Toronto, working on quantum machine learning and variational quantum algorithms (check out the Quantum Research Seminars Toronto that she has been organizing). (52:18) Alba explained her work on the Meta-Variational Quantum Eigensolver, an algorithm capable of learning the ground-state energy profile of a parametrized Hamiltonian. (59:23) Alba went over Tequila, a development package for quantum algorithms in Python that her group created. (01:04:49) Alba presented a quantum calling for new algorithms, applications, architectures, the quantum-classical interface, and more. (01:08:59) Alba has been active in education and public outreach aimed at encouraging scientific vocations among young minds, especially in Catalonia. (01:12:07) Closing segment. Her contact info: website, Twitter, LinkedIn, Google Scholar, GitHub. Her recommended resources: Ewin Tang (Ph.D. student in Theoretical Computer Science at the University of Washington); Alán Aspuru-Guzik (Professor of Chemistry and Computer Science at the University of Toronto, Alba's current supervisor); José Ignacio Latorre (Professor of Theoretical Physics at the University of Barcelona, Alba's former supervisor); Quantum Computation and Quantum Information (Michael Nielsen and Isaac Chuang); Quantum Field Theory and the Standard Model (Matthew Schwartz); The Structure of Scientific Revolutions (Thomas Kuhn); Against Method (Paul Feyerabend); Quantum Computing Since Democritus (Scott Aaronson).
Ep. 93 - Rack Gear Show with Michael Nielsen and Michael Toren!
2Wheels 2Survive was started by a veteran to help support our veterans. Michael, a combat veteran himself with tours in Iraq and Afghanistan, knows firsthand the struggles and mental fights many of our veterans go through when they come home. He is always there for anyone who needs someone to talk with or just ride with. He also wants our men and women to not be afraid to say they might need help. So this coming year he is partnering with Mission 22 and taking a 30-day motorcycle trip to 22 national parks, with meet-and-greets along the way, meeting more of our veterans and discussing the importance of this mission, Mission 22, and 2Wheels 2Survive. To find out more, head over to https://www.2wheels2survive.com/ and don't forget to follow his adventures on Facebook and Instagram @2wheels2survive --- Support this podcast: https://anchor.fm/thevtwinlife/support
NOTE: The beginning of this conversation touches on some of the same themes that were discussed in the recent episode with Michael Nielsen. After that, though, this conversation heads off in other directions.Is scientific progress speeding up or slowing down? How can we understand and explain the replication crisis in the social sciences? In the context of research, does speed have a quality all its own in the same way that quantity has a quality all its own? What are Geoff and Spencer doing in the social science field that's significantly different from what others are doing?Geoff Anders is the founder of Leverage Research, a non-profit research institute that studies the history of science to learn how a better understanding of early stage science can inform scientific efforts today. Geoff is also the co-founder of Paradigm, a training and coaching organization that uses knowledge of learning, thinking, and motivation to help people think better and better pursue their missions. Geoff has a PhD in Philosophy from Rutgers University. You can learn more about Geoff via his website and can follow him on Twitter at @geoffanders.
Is scientific progress speeding up or slowing down? What are the best strategies for funding research? What is "para-academia," and what are the pros and cons of being a para-academic researcher? What are the feedback loops in politics that cause politicians and their constituents to react to each other? Michael Nielsen is a scientist who helped pioneer quantum computing and the modern open science movement. He also has a strong side interest in artificial intelligence. All are part of a broader interest in developing tools that help people think and create, both individually and collectively. His most recent book is Quantum Country, an introduction to quantum computing. Find out more at his website, michaelnielsen.org, or follow him on Twitter at @michael_nielsen.
Michael Nielsen's well-kept hedgerows make room for birds, roe deer, and other wildlife around his fields, and at the same time his hedgerows give a larger yield in the field. Hear Michael Nielsen talk about his love of nature and his nature initiatives in the company of Bent Rasmussen, a biologist and organic farming consultant at Økologisk Landsforening. The episode with Michael Nielsen is the first installment in the series "Fire økologer taler om bæredygtighed" (Four organic farmers talk about sustainability), a podcast series produced for the project "Bedste praksis er bæredygtig praksis" (Best practice is sustainable practice). The project is supported by Promilleafgiftsfonden for landbrug.
Ep. 83 - Dedicated to Edward Van Halen w/ Pete Thorn, David Black and Michael Nielsen
Andy and Patrick sit down beside the campfire to tell scary stories when PM legend Michael Nielsen jumps out of the bushes.
A conversation with Adam Marblestone about his new project: Focused Research Organizations. Focused Research Organizations (FROs) are a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in the white paper that Adam released with Sam Rodriques. Links: FRO Whitepaper, Adam on Twitter, Adam's Website.

Transcript

In this conversation, I talked to Adam Marblestone about Focused Research Organizations. What are Focused Research Organizations, you may ask? It's a good question, because as of this recording they don't exist yet. They are a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in the white paper that Adam released recently with Sam Rodriques; I'll put it in the show notes. Just a housekeeping note: we say "FRO" a lot, and that's simply the abbreviation for Focused Research Organization.

Just to start off, in case listeners have committed the grave error of not yet reading the white paper, can you explain what an FRO is?

Sure. "FRO" stands for Focused Research Organization. The idea is fundamentally very simple, and maybe we'll get into why it sounds so trivial and yet isn't trivial in our current system of research structures. An FRO is simply a special-purpose organization that pursues a defined problem over a finite period of time, irrespective of any financial gain (unlike a startup), and separate from any existing academic structure, national lab, or anything like that. It's just a special-purpose organization to solve a research and development problem.

Got it. You go into much more depth in the paper, so I encourage everybody to go read it. I'm also really interested in the backstory that led to this initiative.
Yeah, there's a long story, I think, for each of us, and I'd be curious about your backstory of how you got involved in thinking about this as well. But I can tell you my personal experience. I had been spending a number of years working on neuroscience and technologies related to neuroscience. The brain is a particularly hard technology problem in a number of ways, and I think I ran up against our existing research structures. In addition to the limits of my own abilities, I think I ran up against some structural issues in dealing with the brain. So basically, one thing we want to do is make a map of the brain, and to do that in a scalable, high-speed way.

What does it mean to have a map of the brain? What would I see if I was looking at this map?

Well, take the example of a mouse brain. There are a few things you want to know. You want to know how the individual neurons are connected to each other, often through synapses, but also through other types of connections called gap junctions. There are many different kinds of synapses, and many different kinds of neurons. There's also the incredibly multi-scale nature of the problem: a neuron's axon, the wire it sends out, can shrink down to a hundred nanometers in thickness or less, but it can also extend a centimeter or more, and the neurons that go down your spinal cord can be a meter long. So it's incredibly multi-scale. Irrespective of other problems like brain-computer interfacing or real-time communication, it poses really severe technological challenges just to make the neurons visible and distinguishable.
And you have to do it in a way where you can use microscopy to image at high speed while still preserving all of the information you need, like which molecules are where and which neuron we're even looking at right now. There are a few ways to approach that technologically. The more mature technology is electron microscopy, where basically you look at just the membranes of the neurons: at any given pixel, in grayscale, is there a membrane present here or not? Then you have to stitch together images across a very large volume. Because you can only see which pixels have membrane, you have to image at very fine resolution to be able to stitch it together later into a 3D reconstruction, and you're potentially missing some information about where the molecules are. Then there are other, less mature technologies that use optical microscopes together with approaches like DNA-based or protein-based barcoding to label the neurons. But no matter how you do it, this is not a problem that I think can be addressed by a small group of students and postdocs working in an academic lab; we can go a little bit into why. They can certainly make big contributions, but ultimately, if we're talking about something like mapping a mouse brain, it's not going to be single-investigator science.

Why not?

Well, it depends on how you think about it. One way to think about it: even if you're "just" scaling up the existing technologies, which in itself entails a lot of challenges, there's a lot of work that isn't academically novel.
It's things like improving the reliability with which you can cut the brain into tiny slices, or making sure the slices can be loaded onto the microscope in an automated, fast way. Those are engineering and process-optimization problems. That's one issue.

And why couldn't that be done in the lab? Isn't that what grad students are for? Pipetting things, doing graduate work?

That's not why they're ultimately there. Although I was a grad student and did a lot of pipetting too. But ultimately, grad students are there to distinguish themselves as scientists, publish their own papers, and really build a unique academic brand for their work.

Got it.

So there's a mismatch: there are lower-hanging-fruit problems that generate that kind of academic brand but don't necessarily fit into the systems-engineering problem of putting together a connectome-mapping system. There's also the fact that grad students in neuroscience may not be professional-grade engineers who, for example, know how to deal with the data handling or computation involved, where you would need to pay people much higher salaries to do industrial-grade data piping and many other aspects. But the fundamental thing that I realized, and that Sam Rodriques, my coauthor on this white paper, also realized, particularly from working on problems as hard as connectomics and as multifaceted as a system-building problem, is this:
I think the key is that there are certain classes of problems that are hard to address in academia because they're system-building problems, in the sense that you need five or six different activities happening simultaneously, and if any one of them doesn't follow through completely, you don't have something novel and exciting unless all the pieces come together. I don't have anything that exciting on my own as a paper unless you, and also three other people, separately do very expert-level work that is itself not academically that interesting. Now, having the connectome is academically interesting, to say the least. But not only my incentives, but everybody else's incentives, are to spend, say, 60% of their time doing academically novel things for their thesis and only 40% on building the connectome system. Then the probability of the whole thing fitting together drops, and everyone can perceive that. So the incentives don't align well for what you would think of as team science, team engineering, or systems engineering.

I think everybody knows that I'm actually very much in favor of this, so I'm going to play devil's advocate to tease out what I think are important things to think about. One counterargument would be: what about projects like CERN? That's a government-led effort that requires a lot of systems engineering, where there's probably a lot of work that is not academically interesting, and yet it happens. So there are clearly proofs of concept. Why don't we just have more things like CERN for the brain?
And I think this gets very much into why we want to talk about a category of focused research organizations at a certain scale, which we can get into. CERN is in many ways a great example of this kind of team science and team engineering, and it's incredible. There are others, like LIGO, the big observatories, or the Human Genome Project. These are great examples. The problem is simply that these are multibillion-dollar initiatives that take decades of sustained government involvement to make happen. Once they get going, once that flywheel starts spinning, you have it; that is non-academic research. The physics and astronomy communities also have more of a track record and pipeline overall, perhaps because it's easier in the physical sciences than in emerging areas like biology or next-generation fabrication, where there's less of a grounded set of principles. For CERN, everybody in physics can basically agree you need to reach a certain energy scale: none of the theoretical physicists working on high-energy systems can experimentally validate what they're doing without a particle accelerator of a certain level, and none of the astronomers can do deep-space astronomy without a space telescope. So you can agree, community-wide, that this is worth doing, and a lot of incredible innovation happens in those projects. With focused research organizations, we're thinking about medium science, as opposed to small science, which is one or a few academic labs working together, or big science; the Human Genome Project, for example, was $3 billion.
It was scoped to be about $1 per base pair. I don't know what it actually came out to, but the human genome has 3 billion base pairs, so that was a good number. FROs are supposed to be medium-scale, maybe similar in size to a DARPA project: somewhere between, say, $25 million and $100 or $150 million over a finite period of time. The idea is also that they can be catalytic. There's a goal you can deliver over some time period; it doesn't have to be five years, it could be seven, but there's a definable goal over a definable period, and it's catalytic. In some ways the better analogy to the genome project is what happened after it, where the cost of genome sequencing was brought down roughly a million-fold through new technologies, as George Church likes to say: inventing new technologies and bringing them to a level of readiness where they can then be used catalytically. CERN, by contrast, is a big experiment that really has to keep going, and it's also a research facility. There are also permanent institutes, which are certainly a model that can do team science, and many of the best results in the brain-mapping space, many of the largest-scale connectomes in particular, have come either from Janelia or from the Allen Institute for Brain Science, both of which are permanent, non-academic or semi-academic institutes. But that's a different thing, in the sense that it takes a lot of activation energy to create an institute, and the institute then becomes a permanent career path rather than focusing solely on the shortest path to some innovation. The issue is the permanence.
So the flip side of the permanence is: how are you going to convince people to do this temporary thing? Someone asked on Twitter about this: if it's being run by the government, these people are probably going to get government salaries. So you're getting a government salary without the one upside of a government job, which is the security. What is the incentive for people to come do this?

I think it depends on whether it's government or philanthropic. Philanthropic FROs are definitely an option, and in many ways more flexible, because the government has to contract in a certain way and compete out contracts in a certain way; it can't just pick the exact set of people to do something, for example. The government side has a huge opportunity, in the sense that this is a very good match for a number of things the government really would care about, and the government has the money and resources to do this, but the philanthropic route is also one we should consider. In any case, there are questions about who will do FROs and why. The basic answer is that it's not a matter of cushiness or career certainty. These are for problems that are not doable any other way; in many ways, that's the definition. You're only going to do this if it's the only way to do it, and if it's incredibly important. It really is a medium-scale moonshot, and you would have to be extremely passionate about it. That said, there are reasons why one might want to do it, both in terms of initiating one and in terms of being part of one. One is simply that you can do science
that is for a fundamental purpose, purely driven by your passion to solve a problem, and yet can have a number of the affordances of industry, such as industry-competitive salaries, potentially. With the government we'd have to ask what's possible, but in a philanthropic setting you could do it. Another aspect that a lot of scientists find frustrating about the academic system is precisely that they have to spend so much work differentiating themselves, doing something completely separate from what their friends are doing, in order to pay the bills. If you don't eventually go and get your own tenure-track job and so on, the career paths available in academia are much fewer, and often not well compensated. And I've seen a number of groups of people, in critical-mass labs or environments, who work together as a group of three or four despite the incentive to differentiate, and they would like to stay that way, but they can't stay that way forever. So it's also an opportunity, if you have a group of people that wants to solve a problem, to create something a little like a SEAL team. I'm not a generally militaristic person, but when I was a kid I was obsessed with the Navy SEALs. A tight-knit, special-forces-style team that works together on one project is something a lot of scientists and engineers want; the problem is just that they don't have a structure in which to do it. Then finally, although in many cases this is essentially built into the structure, FROs make sense.
We can talk about them as nonprofit organizations. These are the kinds of projects where you get a relatively small team together to basically create a new industry, and if you're in the right place at the right time, then after an FRO is over you would be in the ideal place to start the next startup, in an area where it previously wasn't possible to do startups because the horizons for a venture investment would have been too long.

That's actually a great transition to something I'm still not certain about, which is what happens after an FRO, because you said it's an explicitly temporary organization. How do you make sure it achieves its goal? You can see so many projects that sound really great, that go in and possibly could do good work, and then somehow it all just diffuses. Have you thought about how to make sure the work lives on?

Well, this is a tricky thing, as we've discussed in a number of settings. I'd like to throw that question back to you after I answer it, because I think you have interesting thoughts about it too. But in short: the FRO is entirely goal-focused. There's no expectation that it would continue by default, simply because it's a great group of people or because it's been doing interesting work. It is designed to fulfill a certain goal, and it should be designed from the beginning to have a transition plan. It could be a nonprofit organization where it's explicitly intended that at the end, assuming success, one or more startups could be created.
One or more datasets could be released, and then a much less expensive and less intensive nonprofit structure could be there to host the data and provide it to the world. Or the government could use it as a prototyping phase for something that could then become a larger project, or be incorporated into a larger moonshot. So you explicitly want a goal with a finite time horizon, and then an explicit, upfront deployment or transition plan that is central to it, much more so than any publication. At the same time, there is the pitfall that when you have a milestone-driven, goal-focused organization, the funder may try to micromanage it and say: actually, not only do I care about you meeting this goal, I also really care that by month six you've got exactly this instrument with this throughput, and I'm not going to let you buy this other piece of equipment unless you show me that. That's a problem we sometimes see with externalized research models, like DARPA-style ARPA models, which try to achieve coordination and goal-directedness among otherwise somewhat uncoordinated entities, like contractors and universities working on programs, but then achieve that coordination by managing the process. With an FRO, I think it will be closer to a Series A investment in a startup: you report back to your investors, and at some level they care about the process, and maybe they're on your board, but ultimately the CEO decides how to spend the money, with great flexibility in how to get to the goal.
Figuring out how to avoid micromanagement seems like it's going to be really tricky, because once you get to that amount of money... I'll give you the cruxy thing: there's a huge amount of trust that needs to happen here. What I constantly wonder about is whether there's a fundamental tension between the fact that, especially with government money, we really do want it to be transparent and well spent, and the fact that in order to do these knowledge-frontier projects, sometimes you need to do things that are a little weird, or that seem like a waste of money at the time if you're not intimately connected. So there's this tension between accountability and doing the things that need to get done.

I agree with that, and FROs are going to have to navigate it. I think it relates to themes you've touched on and that we've discussed, which have to do with the changing overall research landscape: in what situations can that trust actually occur? At Bell Labs, I think there was a lot of trust throughout the system. As you have more externalized research, conflicting incentives, and so on, it's hard to obtain that trust. Startups, of course, can align it financially to a large degree. There are things we want to avoid. One of the reasons these need to be scoped as deliverables-driven, road-mapped, systematic projects over finite periods of time is to avoid individual personalities, interests, and conflicting politics fragmenting the resource into a million pieces.
I think this is a problem you see a lot with billion-dollar-scale projects, major international and national initiatives. If you say, "I want this to solve neuroscience, and here's $10 billion," everybody has a different opinion about what "solve neuroscience" means, and there are lots of conflicting personalities and leadership struggles. So for an FRO there needs to be an initial phase with an objective process of technology roadmapping, where people transparently understand: what are the competing technologies? What are the approaches? What are the risks? You understand all of that, and you also closely understand the people involved. Importantly, the people doing that roadmapping and catalyzing the initial formation of the FRO need a somewhat objective perspective; it's not just "fund my lab." You want to have vision, but you need to subject it to a relatively objective process, which is hard, because you also don't want a committee-driven consensus process. You want it to be active in a systematic-analysis sense, but not in an "everyone agrees and likes it emotionally" sense. That's a hard thing, but you need to establish that trust upfront with the funder, and that's a hard process to run as a large government program. I think DARPA does it pretty well with their program managers: a program manager comes in and pitches DARPA on the idea of the program, with a lot of analysis behind it, but then once the program is going, that program manager has tremendous discretion and trust in how they actually run the project.
So I think you need something like a program-manager-driven process to initiate the FRO and figure out whether there is appropriate leadership, whether the goals make sense, and whether the deliverables are reasonable.

At least the way it's presented in the paper, it feels a little bit chicken-and-egg. With DARPA, DARPA is a permanent organization that brings in program managers, and those program managers then go start programs. With FROs, it seems like you need someone spearheading one, but it will be very hard to get someone qualified to spearhead it before you have funding, and you need someone spearheading it in order to get that funding. How are you thinking about cracking that?

That's the motivation for my behavior over the next year or two: I'm trying to go out and search for them. A little of it is my own creativity, but a lot of it is going out and talking to people, trying to understand what the best ideas would be and who the networks of human beings behind those ideas are, and trying to make a kind of prioritized set of FROs. This kind of thing would have to be done again, to some degree, if there were a larger umbrella program that someone else wanted to run. But I'm trying both to get a set of exemplary and representative ideas and people together, and to help those people get funding. I think there can be a staged process. I agree that in the absence of a funder showing really strong interest, getting people to commit to be involved is difficult, because it's a big change to people's normal progression through life. But just as with startups, to the extent that you can identify someone who
spiritually just really wants to do this and will do almost anything to do it, the founder type, and also teams that want to behave like that, that's obviously powerful. So are ideas with a kind of inevitability, where based on scientific roadmapping it just has to happen: there's no way for neuroscience to progress unless we get better connectomics, and I think we can go through many other fields where, because of the structures we've had available and the sheer difficulty of the problems, FROs are arguably needed to make progress on things people really care about. So I think you can get engagement at the level of discussion and starting to nucleate people, but there is a bit of a chicken-and-egg problem, in the sense that it's not so much "here's an FRO, would you please fund me" as "we need to figure out where there might be FROs to be had, and then who is interested in funding and supporting those problems."

So to recap the process as I understand it: you're going out and really trying to identify possible people and possible ideas; then you go to funders and get some tentative interest, like, okay, which of these things might you be interested in if I could take it further; then you circle back to the people who might be interested and say, I have a funder who's potentially interested, can we refine the idea? And you'll drive that loop, hopefully, to getting an FRO funded.

That's right. And there's further chicken-and-egg to be solved, in the sense that when you go to funders and say you have an idea for an FRO, we also need to explain what an FRO is, right?
It has to be done in a way that both engages people in creating these futuristic models, which many people want to do, and has some specificity about what we're looking for and what we think is possible. And the same on the side of scientists, engineers, and entrepreneurs all over the world, who certainly have the ideas, but most of those ideas have been optimized to fit the needs of existing structures. So we are trying to broker between those, and then start prototyping a few. The immediate thing is to make what Tom Kalil has referred to as a "Sears catalog" of moonshots, so we're trying to make a catalog of moonshots that fit the FRO category.

That sounds like the perfect name for this podcast, by the way: cataloging moonshots. You're kind of cataloging moonshots and ways to get moonshots.

Absolutely.

And another thing I've seen: we're recording in October 2020, and there's this perception that capital is really cheap. There are a lot of venture capitalists, and they're pretty aggressive about funding, and one could argue that if something really is inevitable and really is going to start a new industry, then that is exactly where venture capital should come in. And I do see this a lot: people have this thing they really want to see exist, they come out of the lab, and they start a company; that's extremely common. So what would you say to someone you see doing this who you think should maybe do an FRO instead?
Yeah, that's a great question. It's a complicated question, and obviously VC-backed innovation is one of, if not the, key [00:33:00] things driving technology right now. So I'm in no way saying that FROs are somehow superior to startups in any generalized way. Things that can be startups and are good as startups should be startups, and if you have an idea that could be a good startup, generally speaking you should go do it. But there are a few considerations. You can divide it into categories: cases where VCs know it's not a good idea for a startup and therefore won't talk to you; cases where VCs don't know whether or not it's good for a startup; and cases where there's a way you could do it as a startup, but it would involve some compromise that is actually better not to make, even for the long-term economic prospects of an area. That can happen when you have something that's basically meant to be a kind of platform technology, or where you [00:34:00] need to develop a tool or platform in order to explore a whole very wide space of potential applications. Maybe you have something like a new method of microscopy, or a new way to measure proteins in the cell, and you could target it to one very particular, if you want, product-market-fit application where you'd make the most money and get the most traction soonest. Sometimes people call this the Tesla Roadster equivalent: you want to get as quickly as you can to the Tesla Roadster.
And generally, that kind of model, where you take people who have science to offer and ask, "what's the closest, fastest path to a Tesla Roadster that lets you get revenue, start being financially sustainable, and start building a team [00:35:00] to go further?", generally that's really good, and we need more scientists to learn how to do that and be supported in doing it. But sometimes you have things that really are meant to be either generalized platforms or public goods, public data or knowledge to underlie an entire field. And if you try to take the shortest path to the Roadster, you end up not producing that platform. You end up producing something specialized to compete in that lowest-hanging-fruit regime, and in doing so you forgo the more general, larger thing. Alan Kay has a set of quotes that Bret Victor has linked on his website, and I think Alan Kay actually meant something very different when he said this, but he refers to the dynamics of the trillions rather than the billions. And this is something where, and we can talk about this more, I'd be curious about your thoughts, but take the transistor. You could try to do the transistor as a startup, and maybe at the time the best application for transistors would have been [00:36:00] radios. Actually, I don't think it was radios; I think it was guiding rockets. So you could have had a transistors-for-rockets company and then tried to branch out into becoming Intel. But really, given the structures we had then, the transistor was allowed to be more of a broadly explored platform, and that progressed in a way where we got the trillions version.
And I worry sometimes that even some startups that have been funded at a seed-round stage, and that claim they want to develop a general platform, are going to struggle a little later, when investors see that they would need to spend way more money to build that thing than the natural shortest path to a Roadster; in other words, the Roadster is somehow illusory.

Yeah. This [00:37:00] is a regime I'm really interested in, and just on the transistor example, I've looked at it. The history is that it was developed at Bell Labs, and in order to prevent AT&T from being broken up, Bell Labs had to liberally license a bunch of their innovations, including the transistor. William Shockley went off and started Shockley Semiconductor, the traitorous eight then left and started Fairchild, and then Intel. I believe that's roughly the right history. But the really interesting thing is to ask: one, what would have happened if Bell Labs had kept an exclusive license to the transistor, and two, what would have happened if they had exclusively licensed it to Shockley Semiconductor? I would argue that in both of those situations you don't [00:38:00] end up with the world we have today, because Bell Labs probably goes down a path where it's not part of the core product, so they do some vaguely interesting things with it but are never incentivized to invent the planar process or anything interesting. And at the same time, Shockley is more akin to doing a startup, right? So what if they had exclusive license to it?
And what I would argue is that that also would have killed it, because they had notoriously bad management. The only reason the traitorous eight could go and start Fairchild was that it was [00:39:00] an open license. So this is a very long way of asking: if FROs are going to have a huge impact, it seems like they should default to really being open about what they create, from IP to data. But at the same time, that raises an incentive problem, where people who think they're working on something incredibly valuable should want to do a startup instead. And similarly, even if that weren't an issue, funders would want to privatize as much of the output of an FRO as possible, which may be necessary in order to get the funding to make it happen. So how are you thinking about that tension? That was very long-winded.

Yeah. [00:40:00] Well, there's a lot there to loop back to. So, this idea we've talked a bit about of default openness: things that can be open for maximum impact should be open. There are some exceptions, and it also has to do partly with how you're scoping the problem. Rather than having an FRO that develops drugs, say, because drugs really need to be patentable in order to get through clinical trials, and we're talking about much more money than the FRO funding, you'd scope the FRO to the initial discovery of a target or something. To actually bring that to humans, you need the ability to grant exclusive IP to the downstream investors and pharma companies that would get involved. So there are some things that need to be patented in order to have their impact.
But in general, you want FRO problems to steer themselves to things where they can be maximally open; maybe you [00:41:00] provide a system that can underlie the discovery of whole new classes of drugs, but you're not focused on the drugs themselves. Now, that being said, if I invest in an FRO and I've enabled this thing, it would kind of make sense that maybe three of the fifteen people in the FRO then go and start a company afterwards that capitalizes on this, actually develops those drugs, or takes it to the next stage. And it would really make sense, if I had funded the FRO, for those people to give me a sort of right of first refusal, a good deal on investing in that startup, for example. So I think there are indirect, network-based, or potentially even legally structured ways to incentivize the investors. It's admittedly a weaker financial incentive than [00:42:00] full capture of something. But then, and this gets back to the previous discussion of the trillions rather than the billions: if you have something with maybe ten different applications in ten different fields, say a better way to measure proteins, with which we can do things in oncology, in Alzheimer's, in diagnostics and pandemic surveillance, in so many fields, then it would be hard even to design one startup that could capture all of that value, just as it would have been hard to design a sort of Transistors Incorporated. Given that, there's a lot of reason to do an FRO and then explore the space of applications, to use it as a means to explore a full space in which you'll then get [00:43:00] ten startups. So if I'm the investor, I might like to be involved in all ten of the new industries, and the way to do that is to create a platform with which I can explore. But then I have a longer time horizon, because I have to first build the thing, then explore the application space, and only then do I get to invest in specific verticals.

Right. I think there are two tricky questions I wonder about with that. One: you mentioned fifteen people in an FRO, and three of them go off to start a startup. What about the other twelve? I assume they might be a little frustrated if that happens, because they did help generate that value. It gets into questions of kicking back value generated by research in general.

Yeah, though it could be all fifteen people. We saw something [00:44:00] similar with OpenAI, for example, converting into a for-profit, or at least a big arm of it being the for-profit, and keeping all the people. So you can imagine just blanket converting. But I think it's in the nature of these things that they're supposed to open up such wide spaces that there's enough for everyone; no one person, no one startup, would completely capture it. And I think that's true for connectomics too, for example. Say you had really high-throughput connectomics, just to keep going on this example. It's a good example, and whether it's exactly the first FRO or not depends on the details.
That's a separate issue, but with connectomics there are potentially applications for AI, for how neural circuits work, for fundamental questions of brain architecture and intelligence, although there's a range of uncertainty about exactly what that will be; it's hard to [00:45:00] know until you see the data. There are also potentially applications for something like drug screening, where you could apply a bunch of different CRISPR or drug perturbations to a brain, look at what each one does to the synapses in a brain-region-specific way, and have ultra-high-throughput connectomics-based drug screening. Neither of those is a startup you can start until you have connectomics working. So maybe three people would start an AI company, and those would be the very risk-tolerant ones; three would start a CRISPR drug company; and three would just do fundamental neuroscience with it, take those capabilities back into the university system, and start using them there.

Yeah. And the other thing, related to creating value with it: there's a little discomfort that even I have [00:46:00] with, say, philanthropic or government funding going to fund a thing that proceeds to make a couple of people very wealthy. There are very much arguments on both sides, right? It will generate a lot of good for the world, and so on. But what would you say to a very wealthy philanthropist who asks, "am I just giving away money so that these people can start a company?"

Yeah, it's a complicated thing, right?
How many more rich people did the Rockefeller Foundation end up generating by investing in the basics of molecular biology? I think what the government, or any funder, actually wants is widely distributed benefit, and everything that should be an FRO should have widely distributed benefits. It shouldn't just [00:47:00] be a startup that enriches one person; it should be something that contributes very broadly to economic growth and to our understanding of the universe. But it's almost inevitable that if you create a new industry, there are going to be some rich, successful people in that industry, and they're probably going to be some of the people who were involved early, thought about it the longest, and waited for the right time to enter.

That's a really good point. Then the question would be: what sniff test do you use to decide whether something will have broadly distributed benefits? Connectomics seems fairly clear-cut, and generating a massive data set that you then open up feels very [00:48:00] clear-cut. But we've talked before about how FROs could scale up a process or build a proof of concept of a technology, and there it seems less clear-cut how you can be sure the benefits will be broadly distributed if they succeed.

Yeah. There are a few different frames on it, but one is that FROs could develop technologies that really reduce the cost of having some downstream set of capabilities. Just to give you an example, right?
Suppose we had much lower-cost gene therapies available. Sometimes when drug prices are high, it's basically recouping very large R&D costs, with competition and profit involved. There was also the Martin Shkreli situation; I don't [00:49:00] remember the details, but there was an instance in which a financially controlling entity arbitrarily bumped the price of a particular drug way up, and he was regarded as an evil person for it, and maybe that's right. But anyway, there are some places in the biomedical system where you can genuinely reduce costs for everyone. It's not simply that I make this drug and capture a bunch of value on it; there's genuine possibility to reduce costs. If I could reduce the cost of actually manufacturing the viruses used for gene therapy, that's a process innovation that could drop the cost of gene therapy by orders of magnitude. If you could figure out what's going on in the aging process, and what the real levers are, single biological interventions that would prevent multiple age-related diseases, that [00:50:00] would massively drop costs. In some ways that might even be threatening to some of the pharma companies that work on specific age-related diseases, because you'd have something that replaces their products. But these are broad productivity improvements, and I think economists and people very broadly agree that with science and technology innovations, for the most part, although they can sometimes be used in ways that benefit only a very small number of people, generally speaking there's a lot you can do whose benefits are extremely broadly shared.

Yeah. I do actually agree with that; I'm just trying to represent as much skepticism as possible.

Definitely, I know you agree with that.

Another thing I have no idea about, and am really interested in: as you're creating this [00:51:00] moonshot catalog, how do you tell the difference between people who have really big ideas and are hardcore legit, but maybe a little bit crazy, and people who are just crackpots?

Yeah, well, I don't claim to be able to do it in every field. There's a reason I'm not trying to do a quantum gravity FRO: partly because I think that's better matched to just individual, totally open-ended funding of fun, brilliant people over a 30- or 40-year period to do whatever they want, rather than directed research, and also because there's a class of problem that requires a sort of Einsteinian breakthrough, and FROs are not perfect for that. In terms of finding people, my preliminary feeling is that there's a lot of pent-up need for this, and there's a [00:52:00] question of prioritizing which are the most important, but there's a huge number of process innovations or system-building innovations needed across many, many fields. And you don't necessarily need things that even sound that crazy. There are some that just make sense, that are very simple.
"Here in our lab we have this measurement technology, but we only have the throughput of one cell every few weeks; if we could build the system, we could get a throughput of a hundred thousand cells a month." There are some that are pretty obvious, or where there's an obvious inefficiency in how things are structured. For example, every company and lab that's modeling fusion reactors, and within the reactor each individual component, the neutrons in the wall versus the plasma in the core, those are basically modeled with different codes, many of which are many [00:53:00] decades old. So there's an obvious opportunity to make something like CAD software for fusion. It's not actually crazy; it's really basic stuff. In some cases there are ones where we'll need more roadmapping, more bringing people together to workshop the idea, to have people more expert than me critique each other and see what's really going on in the field. And I also rely on a lot of outside experts: if someone comes with an idea for energy, I'm talking to people like former ARPA-E program managers who know more of the questions. So we can do a certain amount of due diligence on ideas. And then there are some that are really far out. We both have an interest in atomically precise manufacturing, and that's one where I think we don't know the path forward. So that's maybe a pre-FRO: something where you [00:54:00] need a roadmapping approach, but it's maybe not quite ready to immediately become an FRO.
Yeah, you hit on a really interesting point, which is that when we think of moonshots, it's generally this big exciting thing, but perhaps some of the most valuable ones will actually sound incredibly boring, while the things they unlock will be extremely exciting.

Yeah, I think that's true, and you have to distinguish kinds of boring. There's some decoupling between exactly how much innovation is required, how important something is, and how much brute force is required. In general, our system might underweight the importance of brute force and somewhat overweight the importance of creative individual breakthrough thinking. At the same time, there are problems where I think we are bottlenecked by thinking about how to really do something: not just a connectome of a brain, but how do you actually do an activity map of an entire brain? You need to get a bunch of physicists together to figure that out; there's a level of thinking required that is [00:55:00] very non-obvious. Similarly, for truly next-gen fabrication you really need the technology-roadmapping approach, and that's a little different from an FRO. And in some cases, as we've discussed, there's a continuum between DARPA-type programs, which start within existing systems and try to catalyze the emergence of ideas and discoveries, and FROs, which are a bit more cut-and-dried, in some cases even boring, but very important.

How do we prevent FROs from becoming a political football? You see this all the time, where a senator will say, "I'll sponsor this bill as long as we mandate that 50% of the work has to happen in my particular state or [00:56:00] district."
And I imagine that would be counterproductive to the goals of an FRO. Do you have any sense of how to get around that?

It's probably much easier in a philanthropic setting than in government. Although overall I'm optimistic that if the goals are made very clear, that the primary goal is disruptive, multiplicative improvements in scientific fields, and it's managed well, then it doesn't become about individual egos, academic politics, congressional districts, or all sorts of other things. There's a certain amount of complexity. But the other thing is, I think there are really amazing things to be done in all sorts of places, by all sorts of people who are not necessarily identified as the biggest egos or located in the largest cities, although certainly there are hubs that [00:57:00] matter.

Cool. I think those are all the questions I have. Is there anything you want to talk about that we haven't touched on?

That's a good question. How does this fit into the things you're thinking about, in terms of your overall analysis of the research system? What does this leave unsolved, even if we can get some big philanthropic and government donors?

So there are two things I see it not covering. The first, which you've touched on, is that there are some problems that still don't fit into academia but are not quite at the point where they're ready to be an FRO. They need the mindset of the FRO without [00:58:00] the cut-and-driedness that gives you the confidence to plunk down $50 million. So we need a sustainable way of getting projects to the point of being FRO-type projects, and as you know, I'm spending a lot of time on that. The other thing I've realized is that when we have these "research is broken" discussions, we're actually talking about two really separate phenomena. What we've been talking about with FROs really sits in the valley of death, helping bridge it. But there's also what I would call the "Einstein wouldn't get funding" problem, which is, as you alluded to, that some of the [00:59:00] problems with research we talk about are about the conformity and specialization of really idea-based, exploratory, completely uncertain research. That's also really important. What we don't do is separate those two things out and say: these both fall under the category of research, but they are in fact extremely different processes requiring very different solutions.

Yeah. Actually, since you mentioned that, and since we're here together on the podcast, I agree with that and I have some things to say about it as well. I think FROs indeed only address, or are designed to address, this issue of system building: problems that have a catalytic nature and sit at a particular pre-commercial stage. So in some ways, [01:00:00] even though I'm so excited about FROs and how much they can unlock, it's because I think this is one of two or three categories that have been underemphasized by current systems, or that current systems have struggled with. There are these others. So, supporting the next Einstein: people who may be cognitively, socially, or in any number of other ways just different and weird, not good at writing grants, not good at competing, maybe not even good at graduating undergrad or running a lab, but who are brilliant. Because the number of scientists has proliferated, the system is very competitive, and there's a lot of need to filter people based on credentials. So there are people who don't fit perfectly with credentials, or with the monoculture of who is able to get NSF grants, go through the university system, and [01:01:00] get the PhD. Alexey Guzey has a nice blog post, oriented toward biomedicine, saying basically that in order to get through the system you need to do 10 or 15 things simultaneously well, and also be lucky. Maybe we should be looking for some people who are only able to do three of those things, but are orders of magnitude better at them than others. Then there are people who have done well at all those things but still don't have the funding or the sustained ability to pursue their own individual ideas over decades, even if they do get tenure, because the grant system is based on peer review and filters out really new ideas. There's also the broader issue Michael Nielsen has talked about, which is that too much funding is centralized in a single organizational model; the NIH grant in particular is hegemonic as a structure and as a peer-review mechanism. And then I think we need more [01:02:00] DARPA-type stuff; we probably need more ARPA-like agencies for other problems, even though I've said that FROs can solve some problems DARPA will struggle with.
Likewise, DARPA will solve problems that FROs may struggle with, particularly when there's widely distributed expertise across the world that you need to bring together in some transient, interesting way, for work that's a bit more discovery-oriented than FROs and less deliverable-oriented or team-oriented. And then there are even bigger things we need: we need to be able to create a Bell Labs for energy, something even bigger than an FRO. So I think the thing you're getting at, which is simple but underdone, is actually analyzing what the activity is and how best to support it.

Yep. Instead of just saying, [01:03:00] "there's some research, let's give some money to the research and magical things will happen," actually asking: how does this work, and what can we do for each specific situation?

Yes. As you've identified, on the one hand there's the tendency to micromanage research and say it has to be done with this equipment, on this timescale, entirely subject to milestones. On the other hand there's "research is this magical thing, we have no idea how it works, so just let scientists peer review each other, give as much money to it as we can, and see what happens." I think neither of those is a good design philosophy.

Yeah. And it involves people, and it's uncomfortable, actually thinking and learning about how [01:04:00] the system works, and then understanding how it could be different. It's a system; Kevin Esvelt has said it well: in some ways it's been designed, but really our scientific systems are something that has to a large degree evolved. No one designed them. They're not designed to be optimal; they're an emergent property of many different people's incentives. If we try to apply more design thinking, I think that can be good, as long as we're not overconfident in saying there's one model for everyone.

Yeah. I think the trick to fixing emergent systems is to do little experiments, poking at them, and that's very much what I see in getting FROs going. You're not saying, "we should dismantle the NSF and have it all be FROs." It's "let's do a couple of these and see what happens."

That's right. I think it's inherently a small perturbation, and [01:05:00] DARPA, by the way, is similar: you wouldn't need DARPA if everything else were already efficient. Given that things are not perfectly efficient, DARPA has this niche that it fills. Similarly, FROs can only exist if you also have a huge university system and companies; otherwise they don't make sense. It's a perturbation, but I think it's a perturbation that unlocks a pretty big stream of pent-up pressure behind it when you open it up.

Excellent. Well, I think that's actually a great place to close. I guess the last question would be: if people are interested in FROs, especially funding or running one, what's the best way for them to reach you?

They can talk to me, or they can talk to you. My email is prominently listed on my website, and Twitter is great. And yeah, I'm really interested in [01:06:00] people who have a kind of specificity about what they want: "here's what I would do, very specifically." But I'm also interested in talking to people who see problems with the current systems, want to do something, and want to learn about other highly specific FRO ideas that others might have, and how to enable those.
Gianluca and Jared have survived 2020 (so far) and are back for Season 3 of Bit of a Tangent. In this episode they bring you 7 new habits and techniques that can be used to iteratively upgrade yourself — even in lockdown. Forget everything else that's going on in the world, and take a deep dive into personal optimisation. Or, as they'd put it, prepare to geek out on organisation hacks, bootstrapped learning, and motivation pumps. -------- Shownotes: -------- Jared on Twitter: www.twitter.com/jnearestn Gianluca on Twitter: www.twitter.com/QVagabond Bit of a Tangent on Twitter (www.twitter.com/podtangent) and Instagram (instagram.com/podtangent/) Last episode: https://www.podtangent.com/e/026-drink-and-be-rational/ Matt D'Avella's video on Checklists: https://www.youtube.com/watch?v=8n2vL2I__WY Alex Vermeer's Tangibles: https://alexvermeer.com/tangibles/ Roam research: https://roamresearch.com/ Put iPhone in grayscale: https://www.youtube.com/watch?v=JNuziJOl61o FitNotes Android app: https://play.google.com/store/apps/details?id=com.github.jamesgay.fitnotes Matt D'Avella's 30-day challenges: https://www.youtube.com/playlist?list=PLXKuahfdkl6zkBULJhEMNy_RnErOYXwJk How Jerry Seinfeld writes jokes: https://www.youtube.com/watch?v=itWxXyCfW5s Anki (flashcards tool): https://apps.ankiweb.net/ Michael Nielsen's essay on learning with Anki: http://augmentingcognition.com/ltm.html POLAR bookshelf software: https://getpolarized.io/ How to wrap your headphones up: https://youtu.be/3_FueKBoRO0?t=171
Rune-Christoffer Dragsdahl caught wasps and saw how they suffered in captivity. It marked the little boy well into adulthood, leaving him with the wish never to harm animals again. At pig farmer Michael's farm in Slangerup, piglets are born to be eaten, but the farmer still believes the animals have a good life. This episode of Kødkrigen is about animal ethics and whether we have the right to eat animals. Producers: Michael Ørtz Christiansen and Morten Olsen. Sound: Niels Malte Lundsgaard. Editors: Hanne Budtz-Jørgensen and Karen Albertsen. Featuring: Rune-Christoffer Dragsdahl, secretary general of Dansk Vegetarisk Forening (the Danish Vegetarian Society). Umut Sakarya, chef and owner of Guldkroen in Nørrebro. Michael Nielsen, pig farmer and member of Det Dyreetiske Råd (the Danish Council on Animal Ethics). Bengt Holst, scientific director of Copenhagen Zoo. Produced by Munck Studios København. (First broadcast 10 June 2019).
A talk program from 2018 with Michael Nielsen. Michael is a well-known figure at Palmestranden in Frederikshavn, and he tells us what his work involves and how this opportunity to manage the beach came about. Michael is also a former alcoholic and wants to share his story with us. He has now been sober for over 20 years. He is also…
Michael talks about his work in computer vision for field use in agriculture and recycling. He started out in computer vision in the agriculture space doing machine vision and 3D reconstruction of plants. He then moved to the Danish Technological Institute when they expanded their work on machine vision for field use in agriculture. Michael worked with a fusion of sensors like stereo vision, thermography, radar, lidar and high frame rate cameras, merging multiple images for high dynamic range. All this to be able to navigate the tricky situation in a farm field where you need to navigate close to or even in what is grown. Multi-baseline cameras were also used to provide range detection over a wide range of distances. We also learn about how he expanded his work into sorting recycling, a very challenging problem. Here the sensor fusion gives him RGB as well as depth and temperature. Adding a powerful studio flash to the setup allowed him to heat the material being sorted, making it possible to determine the material, depending on how it absorbs the heat from the flash. Michael is also working on adding cameras capable of seeing above the human range of vision to make it easy to specify which materials to pick. We also hear about the problems faced when using time-of-flight and sheet-of-light cameras. He then shares some good results using stereo vision, especially combined with blue-light random dot projectors. This podcast is part of the Wevolver network. Wevolver is a platform & community providing engineers informative content to help them innovate. Learn more at Wevolver.com. Promote your company in our podcast? If you are interested in sponsoring the podcast, you can contact us at richard@wevolver.com
Support these videos: http://pgbovine.net/support.htm
http://pgbovine.net/PG-Podcast-Hour-25.htm
- [Principles of Effective Research](http://michaelnielsen.org/blog/principles-of-effective-research/) by Michael Nielsen
- [Philip's initial reaction vlog](https://www.youtube.com/watch?v=ttTEpgGwsts)
- [Extreme thinking](http://michaelnielsen.org/blog/archive/tough-learning/tough-learning-final.html) by Michael Nielsen
- [You and Your Research](https://www.cs.virginia.edu/~robins/YouAndYourResearch.html) by Richard Hamming
Recorded: 2020-01-26
Jared and Gianluca try something new on this episode! We read passages from Robert Pirsig's wonderful novel Zen and the Art of Motorcycle Maintenance, reacting and discussing as we go! Along the way, we explored the limits of conceptual understanding (a.k.a. Shut up and taste the wine!), how the words we use to describe reality also end up defining it, limiting it or expanding it, and why cliches are so easy to dismiss and when they shouldn't be (hint: your gran was right - there's nothing a good night's sleep won't solve). We also discuss what it means to truly understand something, and how our intuitive sense of what is excellent can guide us to cook great food, write beautiful code, and be delightful people! As a bonus, we drop some hints about an exciting upcoming episode, and at the end we each share the advice we've heard that has the highest impact with the fewest words! Listener feedback can be recorded here: https://www.speakpipe.com/podtangent ---------- Shownotes: Zen and the Art of Motorcycle Maintenance by Robert Pirsig: https://www.goodreads.com/book/show/629.Zen_and_the_Art_of_Motorcycle_Maintenance The Stranger by Albert Camus: https://www.goodreads.com/book/show/49552.The_Stranger Atlas Shrugged by Ayn Rand: https://www.goodreads.com/book/show/662.Atlas_Shrugged Fountainhead by Ayn Rand: https://www.goodreads.com/book/show/2122.The_Fountainhead The Philosopher's Toolkit by Julian Baggini: https://www.goodreads.com/book/show/192414.The_Philosophers_Toolkit Fermat's Enigma by Simon Singh: https://www.goodreads.com/book/show/38412.Fermat_s_Enigma The Cook & The Chef - Elon Musk's Secret Sauce by Tim Urban: https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html How To Win Friends & Influence People by Dale Carnegie: https://www.goodreads.com/book/show/4865.How_to_Win_Friends_and_Influence_People Michael Nielsen's personal blog: http://michaelnielsen.org/ Venture Stories podcast with Michael Nielsen: 
https://podcasts.apple.com/us/podcast/what-michael-nielsen-thinks-about-basically-everything/id1316769266?i=1000436484320 Tyler Cowen on The high-return activity of raising others' aspirations: https://marginalrevolution.com/marginalrevolution/2018/10/high-return-activity-raising-others-aspirations.html
We've all heard about the importance of learning. We've all heard about the importance of learning how to learn. Well, on this episode Gianluca and Jared dive into both of these topics. They discuss the philosophies of learning they've encountered on their own journeys, and share several key tricks that they've found most helpful over the years! Along the way they discovered a new way to think about how to keep your knowledge up to date in an ever changing world! Listener feedback can be recorded here: https://www.speakpipe.com/podtangent ---------- Shownotes: This essay by Michael Nielsen is what spurred me to say we might need a follow up. It's definitely worth a read: http://augmentingcognition.com/ltm.html Cultural evolution primer by Scott Alexander: https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/ Tim Urban's Elon Musk blog posts: https://waitbutwhy.com/2017/03/elon-musk-post-series.html - the last post in the series changed Jared's life https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html Poor Charlie's Almanack: https://www.goodreads.com/book/show/944652.Poor_Charlie_s_Almanack Shane Parrish on Chauffeur knowledge: https://fs.blog/2015/09/two-types-of-knowledge/ The Sequences by Eliezer Yudkowsky: https://www.lesswrong.com/rationality We've included some essays from the Sequences relevant to today's conversation below: Taboo Your Words: https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words Cached Thoughts: https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/2MD3NMLBPCqPfnfre Replace The Symbol with The Substance: https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb/p/GKfPL6LQFgB49FEnv Truly Part of You: https://www.lesswrong.com/posts/fg9fXrHpeaDD6pEPL/truly-part-of-you Living By Your Own Strength: https://www.lesswrong.com/posts/dKGfNvjGjq4rqffyF/living-by-your-own-strength Learning How to Learn course: https://www.coursera.org/learn/learning-how-to-learn/ Anki: https://apps.ankiweb.net/ 
r/medicalschoolanki decks: https://www.reddit.com/r/medicalschoolanki/ Jared used https://www.brosencephalon.com/flashcards/ in his earlier years of medschool Testing effect: https://www.wikiwand.com/en/Testing_effect Deliberate practice: https://www.wikiwand.com/en/Practice_(learning_method) Desirable difficulty: https://www.wikiwand.com/en/Desirable_difficulty Method of Loci: https://www.wikiwand.com/en/Method_of_loci Expecting Short Inferential Distances by Eliezer Yudkowsky: https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter: https://www.goodreads.com/book/show/24113.G_del_Escher_Bach Conversations with Tyler podcast: https://conversationswithtyler.com/
INTRODUCTION / FRIEND OF THE WEEK / LAST THING YOU WATCHED / QUESTIONS FROM INSTA / FINANCES / THINGS I HATE / TOPIC OF THE WEEK / DRINKING LOG
Ep. 47 - Michael Nielsen on Tone-Talk! Big Hairy Guitars! Ninja Tracks! More!
Michael Nielsen (@michael_nielsen), research fellow at Y Combinator Research, joins Erik for a wide-ranging discussion about a variety of topics, including:* Why the top names in the S&P 500 change over time, but the top names in global university rankings don’t — and how to fix that.* How Michael thinks about the role of risk in science, and why he'd like to see more risk-taking.* Memory, including how to improve yours and why professional athletes seem to have such good ones.* The “compliment deficit” in the world and how to fix it.* The silver lining of the Bay Area housing problem.* The reproducibility problem in social science.* Why he’s a fan of chaos.…and much more.Thanks for listening — if you like what you hear, please review us on your favorite podcast platform. Check us out on the web at villageglobal.vc or get in touch with us on Twitter @villageglobal.Venture Stories is brought to you by Village Global, is hosted by co-founder and partner, Erik Torenberg and is produced by Brett Bolkowy.
INTRODUCTION / FRIEND OF THE WEEK / JÅNNI'S STAND-UP JOKES / LAST THING YOU WATCHED / BROUGHT ALONG FROM THE INTERNET / SPENT MONEY ON / PROBLEMS TO BRING UP / THINGS I HATE
Andy and Dave discuss Rodney Brooks' predictions on AI from early 2018, and his (on-going) review of those predictions. The European Commission releases a report on AI and Ethics, a framework for "Trustworthy AI." DARPA announces the Knowledge-directed AI Reasoning over Schemas (KAIROS) program, aimed at understanding "complex events." The Standardized Project Gutenberg Corpus attempts to provide researchers broader data across the project's complete data holdings. And MORS announces a special meeting on AI and Autonomy at JHU/APL in February. In research, Andy and Dave discuss work from Keio University, which shows that slime mold can approximate solutions to NP-hard problems in linear time (and differently from other known approximations). Researchers in Spain, the UK, and the Netherlands demonstrate that kilobots (small 3 cm robots) with basic communication rule-sets will self-organize. Research from UCLA and Stanford creates an AI system that mimics how humans visualize and identify objects by feeding the system many pieces of an object, called "viewlets." NVIDIA shows off its latest GAN that can generate fictional human faces that are essentially indistinguishable from real ones; further, they structure their generator to provide more control over various properties of the latent space (such as pose, hair, face shape, etc). Other research attempts to judge a paper on how good it looks. And in the "click-bait" of the week, Andy and Dave discuss an article from TechCrunch, which misrepresented bona fide (and dated) AI research from Google and Stanford. Two surveys provide overviews on different topics: one on safety and trustworthiness of deep neural networks, and the other on mini-UAV-based remote sensing. A report from CIFAR summarizes national and regional AI strategies (minus the US and Russia). 
In books of the week, Miguel Hernán and James Robins are working on a Causal Inference Book, and Michael Nielsen has provided a book on Neural Networks and Deep Learning. CW3 Jesse R. Crifasi provides a fictional peek into a combat scenario involving AI. And Samim Winiger has started a mini documentary series, "LIFE," on the intersection of humans and machines.
INTRODUCTION / FRIEND OF THE WEEK / FINANCE NEWS / BROUGHT ALONG FROM THE INTERNET / SHAMELESS YOUTUBE CHANNEL / PROBLEMS TO BRING UP / LAST THING YOU WATCHED / DUMB STORIES / DRINKING LOG
SINCE LAST TIME / FRIEND OF THE WEEK / OLD BLOG POSTS / DUMB STORIES / LAST THING YOU WATCHED / PROBLEMS TO BRING UP / DRINKING LOG
Thanks for joining us on this episode of the CAMcast! We're joined by Michael Nielsen (aka Pumpkyn), and Rick Conk of RallySport Direct! Between trips to tangent town, the guys talk about the latest news! Daniel Ricciardo is going to Renault A whole grip of Tesla news Trump's Tariffs could kill off a Buick model IMSA is splitting the Prototypes up NASA Utah 6 Hour Enduro review Support our friends by following them on social media! Rick on Instagram Pumpkyn on Instagram RallySport Direct on Instagram, Facebook, and YouTube Support our sponsor Steady Broke! Head to their site, and use the code CAMAUTO15 to get 15% off your entire order! Thank you for joining us on this episode! You can find us on Apple Podcasts, Google Play, Spotify, and wherever else you find your podcasts. Please subscribe, rate, and review us! Support the providers of this podcast's theme song, Mathusaworm. Find us on social media, subscribe to the CAMcast podcast, and subscribe to our YouTube channel! Twitter Instagram Facebook The CAMcast on Apple Podcasts The CAMcast on Google Play The CAMcast on Spotify YouTube CAMautoSwag *Article, Photos, Videos, and Audio clips are copyright of CAMautoMag.Com and their respective owners.
Listen Here: iTunes | Overcast | PlayerFM Keep up with the North Star Podcast. My guest today is Michael Nielsen, a scientist, writer and computer programmer who works as a research fellow at Y Combinator Research. Michael has written on various topics, from quantum teleportation to geometric complexity to the future of science. Michael is the most original thinker I have discovered in a long time when it comes to artificial intelligence, augmenting human intelligence, reinventing explanation and using new media to enable new ways of thinking. Michael has pushed my mind towards new and unexpected places. This conversation gets a little wonky at times, but as you know, the best conversations are difficult. They are challenging because they venture into new, unexplored territory and that's exactly what we did here today. Michael and I explored the history of tools and jump back to the invention of language, the defining feature of human collaboration and communication. We explore the future of data visualization and talk about the history of the spreadsheet as a tool for human thought. “Before writing and mathematics, you have the invention of language which is the most significant event in some ways. That’s probably the defining feature of the human species as compared to other species.” LINKS Find Michael Online Michael’s Website Michael’s Twitter Michael’s Free Ebook: Neural Networks and Deep Learning Reinventing Discovery: The New Era of Networked Science Quantum Computation and Quantum Information Mentioned In the Show 2:12 Michael’s Essay Extreme Thinking 21:48 Photoshop 21:49 Microsoft Word 24:02 The David Bowie Exhibit 28:08 Google AI’s Deep Dream Images 29:26 Alpha Go 30:26 Brian Eno’s Infamous Airport Music 33:41 Listen to Speed of Life by Dirty South Books Mentioned 46:06 Zen and The Art of Motorcycle Maintenance by Robert M. 
Pirsig 54:12 Cat’s Cradle by Kurt Vonnegut People Mentioned 13:27 Rembrandt van Rijn’s Artwork 15:01 Monet’s Gallery 15:02 Pierre-Auguste Renoir’s Impressionist Art 15:05 Picasso’s Paintings 15:18 Paul Cezanne’s Post-Impressionist Art 25:40 David Brooks’s NYT Column 35:19 Franco of Cologne 56:58 Alan Kay’s TED Talk on the future of education 57:04 Doug Engelbart 58:35 Karl Schroeder 01:02:06 Elon Musk’s Mars-bound company, SpaceX 01:04:25 Alex Tabarrok Show Topics 4:01 Michael’s North Star, which drives the direction of his research 5:32 Michael talks about how he sets his long-term goals and how he’s propelled by ideas he’s excited to see in the world. 7:13 The invention of language. Michael discusses human biology and how it’s easier to learn a language than writing or mathematics. 9:28 Michael talks about humanity’s ability to bootstrap itself. Examples include maps, planes, and photography 17:33 Limitations in media due to consolidation and the small number of communication platforms available to us 18:30 How self-driving cars and smartphones highlight the strange intersection where artificial intelligence meets human interaction and the possibilities that exist as technology improves 21:45 Why does Photoshop improve your editing skills, while Microsoft Word doesn’t improve your writing skills? 27:07 Michael’s opinion on how Artificial Intelligence can help people be more creative “Really good AI systems are going to depend upon building and currently depend on building very good models of different parts of the world, to the extent that we can then build tools to actually look in and see what those models are telling us about the world.” 30:22 The intersection of algorithms and creativity. Are algorithms the musicians of the future? 36:51 The emerging ability to create interactive visual representations of spreadsheets that are used in media, internally in companies, elections and more. 
“I’m interested in the shift from having media be predominantly static to dynamic, which the New York Times is a perfect example of. They can tell stories on newyorktimes.com that they can’t tell in the newspaper that gets delivered to your doorstep.” 45:42 The strategies Michael uses to successfully trail blaze uncharted territory and how they emulate building a sculpture 53:30 Michael’s learning and information consumption process, inspired by the idea that you are what you pretend to be 56:44 The foundation of Michael’s worldview. The people and ideas that have shaped and inspired Michael. 01:02:26 Michael’s hypothesis for the 21st century project involving blockchain and cryptocurrencies and their ability to make implementing marketplaces easier than ever before “The key point is that some of these cryptocurrencies actually, potentially, make it very easy to implement marketplaces. It’s plausible to me that the 21st century [project] turns out to be about [marketplaces]. It’s about inventing new types of markets, which really means inventing new types of collective action.” Host David Perell and Guest Michael Nielsen TRANSCRIPT Hello and welcome to the North Star. I'm your host, David Perell, the founder of North Star Media, and this is the North Star podcast. This show is a deep dive into the stories, habits, ideas, strategies, and rituals that guide fulfilled people and create enormous success for them, and while the guests are diverse, they share profound similarities. They're guided by purpose, live with intense joy, learn passionately, and see the world with a unique lens. With each episode, we get to jump into their minds, soak up their hard-earned wisdom and apply it to our lives. My guest today is Michael Nielsen, a scientist, writer, and computer programmer, who works as a research fellow at Y Combinator Research. 
Michael's written on various topics, from quantum teleportation to geometric complexity to the future of science, and Michael is the most original thinker I've discovered in a long time. From artificial intelligence to augmenting human intelligence, reinventing explanation, and using new media to enable new ways of thinking, Michael has pushed my mind towards new and unexpected places. Now, this conversation gets a little wonky at times, but as you know, the best conversations are difficult. They're challenging because they venture into new, unexplored territory, and that's exactly what we did here today. Michael and I explored the history of tools as an extension of human thought, and we jump back to the invention of language, the defining feature of human collaboration and communication. We explore the future of data visualization and talk about the history of the spreadsheet as a tool for human thought. Here's my conversation with Michael Nielsen. DAVID: Michael Nielsen, welcome to the North Star Podcast. MICHAEL: Thank you, David. DAVID: So tell me a little bit about yourself and what you do. MICHAEL: So day to day, I'm a researcher at Y Combinator Research. I'm basically a reformed theoretical physicist. My original background is doing quantum computing work. And then I've moved around a bit over the years. I've worked on open science, I've worked on artificial intelligence, and most of my current work is around tools for thought. DAVID: So you wrote an essay which I really enjoyed called Extreme Thinking. And in it, you said that one of the single most important principles of learning is having a strong sense of purpose and a strong sense of meaning. So let's begin there. What is that for you? MICHAEL: Okay. You've done your background research. I haven't thought about that essay in years. God knows how long ago I wrote it. Having a strong sense of purpose. What did I actually mean? Let me kind of reboot my own thinking. 
It's kind of a banal point of view: how much you want something really matters. There's this lovely interview with the physicist Richard Feynman, where he's asked about the Indian mathematical prodigy Ramanujan. A movie was made about Ramanujan's mathematical prowess a couple of years ago. He was kind of this great genius. And Feynman was asked what made Ramanujan so good. And the interviewer was expecting him to say something about how bright this guy was or whatever. And Feynman said instead that it was desire. It was just that love of mathematics was at the heart of it. He couldn't stop thinking about it, and he was willing to do, in many ways, the hard things. It's very difficult to do the hard things that actually block you unless you have such a strong desire that you're willing to go through those things. I think you see that in all people who get really good at something, whether it be just a skill like playing the violin, or something which is much more complicated. DAVID: So what is it for you? What is that, I hate to just throw it out here, that North Star, so to speak, that drives you in your research? MICHAEL: Research is funny. You go through these sort of down periods in which you don't necessarily have something driving you on. That used to really bother me early in my career. There was sort of a need to always be moving. But now I think that it's actually important to allow yourself to do that. That's actually how you find the problems which really get you excited. If you don't sort of take those pauses, then you're not gonna find something that's really worth working on. I haven't actually answered your question. I know I've jumped to that other point because that's one thing that really matters to me, and it was something that was hard to learn. DAVID: So one thing that I've been thinking a lot about recently is you sort of see it in companies. 
You see it in countries like Singapore, companies like Amazon, and then something like the Long Now Foundation with the 10,000-year clock. And I'm wondering, for you, in terms of learning, there's always sort of a tension between short-term learning and long-term learning. Short-term learning so often is maybe trying to learn something that feels a little bit richer. So for me, that's reading, whereas for a long-term learning project there are things I'd like to learn, like Python, and some other things like that. And I'm wondering, do you set long-term learning goals for yourself, or how would you think about that trade-off? MICHAEL: I try to set long-term learning goals for myself, in many ways against my better judgment. It's funny, you're very disconnected from yourself a year from now, or five years from now, or ten years from now. I can't remember whether it was Eisenhower or Bonaparte or somebody like that who said that plans are overrated, but planning is invaluable. And I think that's true, and it's the right sort of attitude to take towards these long-term learning goals. Sure, it's a great idea to decide that you're going to learn Python. Actually, I wouldn't say it's a great idea in itself. It would be a great idea to learn Python if you had some project that you desperately wanted to do that required you to learn Python; then it's worth doing, otherwise stay away from Python. I certainly favor coupling learning stuff to projects that you're excited to actually see in the world. But then you may give stuff up; you don't become a master of Python, and instead you spend, whatever, a hundred hours or so learning about it for this project that takes you a few hundred hours. And if you want to do a successor project which involves more of it, great, you'll become better. And if you don't, well, you move on to something else. DAVID: Right. 
Well, now I want to dive into the thing that I'm most excited to talk to you about today, and that's tools that extend human thought. And so let's start with the history of that. We'll go back to the history of tools. There's a great Walter Ong quote about how there are no new thoughts without new technologies. And maybe we can start there, with the invention of writing and the invention of mathematics, and then work through that to where you see the future of human thought going with new technologies. MICHAEL: Actually, before writing and mathematics, you have the invention of language, which is almost certainly the most significant single event, in some ways, in the history of the planet. That's probably the defining feature of the human species as compared to other species. I say invention, but it's not even really invention. There's certainly a lot of evidence to suggest that language is, in some important sense, built into our biology. Not the details of language, but there's a sort of language acquisition device: it seems like every human is set up to receive language. The actual details depend on the culture we grow up in. Obviously, you don't grow up speaking French if you were born in San Francisco, unless you were in a French-speaking household. There's some very interesting process of evolution going on there, where you have something which is fundamentally a technology in some sense: language is a human invention, something that's constructed and culturally carried. There are all these connections between different words, almost a graph of connections between the words if you like, all sorts of interesting associations. So in that sense it's a technology, something that's been constructed, but it's also something which has, over time, been built into our biology. Now, if you look at later technologies of thought, things like, say, mathematics, those are much, much later. 
There hasn't been the same sort of period of time, so those don't seem to be built into our biology in quite the same way. Actually, there are some hints that we have some intrinsic sense of number, and there are some interesting experiments that suggest that we were built to do certain rudimentary kinds of mathematical reasoning. But there's no, you know, section of the brain which specializes from birth in solving quadratic equations, much less doing algebraic geometry or whatever, you know, super advanced. So it becomes this cultural thing over the last few thousand years, this kind of amazing process whereby we've started to bootstrap ourselves. Think about something like, say, the invention of maps, which really has changed the way people relate to the environment. Initially they were very rudimentary things, and people just kept having new ideas for making maps more and more powerful as tools for thought. I can give you an example, a very simple thing. If you've ever been to, say, the Underground in London, or most other subway systems around the world. It was actually the Underground where this first happened. If you look at the map of the Underground, it's a very complicated map, but you can get pretty good at reasoning about how to get from one place to another. And if you look at maps prior to, I think it was 1936, in fact, the maps were much more complicated. And the reason was that mapmakers up to that point had the idea that where the stations were shown on the map had to correspond exactly to the geography of London. And then somebody involved in producing the Underground map had just a brilliant insight: that actually people don't care. 
They care about the connections between the stations, and they want to know about the lines, and they want some rough idea of the geography, but they're quite happy for it to be very rough indeed. And he was able to dramatically simplify that map by simply doing away with any notion of exact geography. DAVID: Well, it's funny, because I noticed the exact same thing in New York, and so often you have insights when you see two things coming together. So I was on the subway coming home one day and I was looking at the map, and I always thought that Manhattan was way smaller than Brooklyn, but on the subway map, Manhattan is actually the same size as Brooklyn. And Manhattan, where the majority of the subway action is, takes up a disproportionate share of the New York City subway map. And then I went home to go read The Power Broker, which is a book about Robert Moses building the highways, and it had a to-scale map. And what I saw was that Brooklyn was way, way bigger than Manhattan, and that, from predominantly looking at subway maps, my topological, geographical understanding of New York was flawed. I think that's exactly your point. MICHAEL: It's interesting when you think about what's going on there. What it is, is some person or a small group of people thinking very hard about how to represent their understanding of the city, and then building a technological tool of thought that then saves millions, or in the case of the New York subway or the London Underground, hundreds of millions or billions of people, mostly just seconds, sometimes probably minutes. Those maps would otherwise be substantially more complicated, every single day. So it's only a small difference, and it's just one invention, right? But our culture has of course accumulated thousands or millions of these inventions. 
DAVID: One of my other favorite ones from being a kid: I would always go on airplanes and look at the route map, and it would always show the airplanes flying over the North Pole, but in two-dimensional space that was never clear to me. I remember being with my dad one night, we bought a globe and we stretched a rubber band across it to see why it was actually shorter to fly over the North Pole, say if you're going from New York to India. That was one of the first times in my life that, though I didn't realize it at the time, I understood exactly what I think you're trying to get at. How about photography? Because that's another one that I think is really striking and vivid, from the horse to slow motion to time lapses. MICHAEL: Photography I think is interesting in this vein in two separate ways. One is what it did to painting. Painters had been getting more and more interested in being more and more realistic, and honestly, by the beginning of the 19th century, I think painting was pretty boring. If you go back to, say, the 16th and 17th centuries, you have people who were already just astoundingly good at depicting things in a realistic fashion. To my mind, Rembrandt is probably still the best portrait painter, in some sense, ever to live. DAVID: And is that because he was the best at painting something that looked real? MICHAEL: I think he did something better than that. He did this very clever thing. You will see a photograph or a picture of somebody and you'll say, oh, that really looks like them. And I think most of the time our minds construct this kind of composite image that we think of as what David looks like, or what our mother looks like, or whatever. But moment to moment, they mostly don't look like that. Their face is a little bit more drawn, or the skin color is a little bit different.
And my guess, my theory of Rembrandt, is that he may have actually been very, very good at figuring out what that composite image was and capturing it. This is purely hypothetical, I have no real reason to believe it, but I think it's why I respond so strongly to his paintings. DAVID: And then what happened? After Rembrandt, what changed? MICHAEL: Like I said, you keep going for another 200 years or so, and people just keep getting more and more realistic in some sense. You have all the great landscape painters, and then you have this catastrophe where photography comes along, and all of a sudden, being able to paint in a more and more realistic fashion doesn't seem like such a hot thing to be doing anymore. For some painters, I think this was a bit of a disaster. But out of it you see the start of the modern wave, through people like Monet and Renoir. And then I think Picasso, for me anyway, was really the pivotal figure in realizing what art could become: the invention of completely new ways of seeing. Inspired by Cézanne and others, he starts to play in really interesting ways with the construction of figures, showing things from multiple angles in one painting, from different points of view. He plays with hundreds of ideas along these lines through all of his painting: how we see, and what we see, and how we actually construct reality in our heads from the images we see. And he did so much of that that other artists, and I'm not an artist or a sophisticated art theory person, realized this was an extraordinarily interesting thing to be doing. Much of the most interesting modern art is really a descendant of that understanding.
A really interesting thing to be doing, rather than becoming more and more realistic, is finding more and more interesting ways of seeing and of representing the world. DAVID: I think the quote is attributed to Marshall McLuhan, though I have heard that Winston Churchill said it: first we shape our tools, and then our tools shape us. That seems to be the foundation of a lot of what you're saying. MICHAEL: Yeah, that's absolutely right. On the other side, to your original question about photography, photographers gradually started to realize that they could shape how they saw nature. Ansel Adams and people like that, you know, just what an eye. He understood his tools so well that he's not just capturing what you see, he's constructing stuff in really, really interesting ways. DAVID: And how about moving forward, in terms of your work, thinking about where we are now and about the future of technology? For example, one thing that frustrates me a bit as a podcast host: we just had this conversation about art, and it's a limit of the audio medium that we can't show the paintings of Rembrandt and Cézanne we just alluded to. So, jumping off of that, as you think about where we are now in terms of media and where we're going, what are some of the challenges you see, and the issues you're grappling with? MICHAEL: One thing for sure, which I think inhibits a lot of exploration: we're trapped in a relatively small number of platforms. The web is this amazing thing, as are our phones, iOS and whatnot, but they're also pretty limited, and that bothers me a little bit. When you narrow down to just a few platforms which have captured almost all of the attention, that's quite limiting. People also tend not to make their own hardware; they don't do those kinds of things.
If that were to change, I think that would certainly be exciting. Something that I think is very, very interesting over the next few years: artificial intelligence has gotten to the point now where we can do a pretty good job of understanding what's actually going on inside a room, if we set up sufficient cameras. If you think about something like self-driving cars, essentially what they're doing is building up a complete model of the environment, and if that model is not pretty darned good, then you can't do self-driving cars. You need to know where the pedestrians are and where the signs are and whether there's an obstruction, all these kinds of things. And that technology, when brought into the whole of the rest of the world, means that you're pretty good at parsing out what's inside the room: oh, there's a chair over there, there's a dog moving in that direction, there's a person, there's a baby. You start understanding all those actions, and ideally all the gestures people are making as well. So we're in this very strange state right at the moment, where the way we talk to computers is through these tiny little rectangles. We take them in through our eyes, we tap away with our fingers, and the whole of the rest of our body and our existence is completely uncoupled from that. We've effectively reduced ourselves to our fingers and our eyes. We couple to the machine only through, whatever, a couple of hundred square inches of screen, or less if you're on a phone, and everything else in the environment is gone. But we're actually at a point where we're nearly able to understand all of that well enough that other modes of interaction will become possible. I don't think we're quite there yet, but we're pretty close.
And you start to think about something like, well, one of my favorite sports is tennis. You think about what a tennis player can do with their body, or what a dancer can do with their body. It's just extraordinary. And all of that mode of being human, all the understanding we can build up in our bodies, is completely shut out from the computing experience at the moment. I think over the next five to ten years that will start to reenter, and in the decades hence, it will just seem strange that it was ever shut out. DAVID: So help me understand this. When you say start to reenter, do you mean that we'll be able to control computers with other parts of our bodies, or that we'll be spending less time typing on keyboards? Help me flesh this out. MICHAEL: I just mean that at the moment, as you speak to me, David, you are waving your arms around in all sorts of interesting ways, and there is no computer system which is aware of it. What your computer system is aware of is that you're doing this recording. That's it. And even that it doesn't understand in any significant way. Once you've gained the ability to understand the environment, lots of interesting things become possible. The obvious example, which everybody immediately understands, is that self-driving cars become possible. There's this enormous capacity. But I think it's reasonably likely that much more than that will become possible over the next 10 to 20 years, as your computer system becomes completely aware of your environment, or as aware as you're willing to allow it to be. DAVID: You made a really interesting analogy in one of your essays about the difference between Photoshop and Microsoft Word. That was fascinating to me because I know both programs pretty well. But knowing Microsoft Word doesn't necessarily mean that I'm a better writer. It doesn't mean that at all.
But knowing Photoshop well probably makes me pretty good at image manipulation. I'm sure there's more there, but could you walk me through your thought process as you were thinking that through? I think that's really interesting. MICHAEL: It's really about a difference in the type of tools which are built into the program. Photoshop, I should say, I don't know that well; I know Word pretty well, and I've certainly spent a lot more time in it than I have ever spent in Photoshop. But in Photoshop you have these very interesting tools built in which condense an enormous amount of understanding: ideas like layers, or ideas like different brushes. There's just a tremendous amount of understanding built in there. When I watch friends who are really good with these kinds of programs, what they can do with layers is just amazing. They understand all these clever screening techniques. It seems like such a simple idea, and yet they're able to do astonishing things with just three or four apparently very simple operations. So in that sense, there are some very deep ideas about image manipulation which have been built directly into Photoshop. By contrast, there are not really very many deep ideas about writing built into Microsoft Word. If you talk to writers about how they go about their actual craft and ask, well, what heuristics do you use to write stories and whatnot, most of the ideas they use don't correspond directly to any set of tools inside Word. Probably the one exception is ideas like outlining. There are some outlining tools built into Word, and that's maybe an example where Word does help the writer a little bit, but I don't think to nearly the same extent as Photoshop seems to.
DAVID: I went to an awesome exhibit on David Bowie, and one of the things that Bowie did when he was writing songs was use this word manipulator which would just throw him 20 or 30 words. The point wasn't that he would use those words; the point was that by getting words, his mind would go to different places. So often, in my experience and clearly in his, when you're trying to create something, it helps to just have raw material thrown at you, rather than the perennial, oh my goodness, I'm looking at a white screen with this clicking thing that is just terrifying. Word doesn't help you in that way. MICHAEL: An example of something which does operate a little bit in that way was a Ph.D. thesis somebody wrote at MIT, on what was called the Remembrance Agent. It was a plugin, essentially, for a text editor. It would look at what you were currently writing, and it would search through your hard disk for documents that seemed like they might be relevant, and prompt you: what you're writing seems like it might be related to this, or this, or this. To be perfectly honest, it didn't actually work all that well, I think mostly because the underlying machine learning algorithms it used weren't very clever. It's defunct now as far as I know; I tried to get it running on my machine a year or two ago and couldn't. But it was still an interesting thing to do. It had exactly this same kind of Bowie-like effect: even when the suggestions weren't terribly relevant, and you couldn't understand why on earth you were being shown them, they still jogged your mind in an interesting way. DAVID: Yeah, I get a lot of help out of that. Here's another example. So, David Brooks, you know, the columnist for the New York Times.
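The Remembrance Agent described above could be sketched roughly as follows. This is a minimal illustration, not the thesis's actual method (the real system had its own indexing engine): it scores stored notes against the text being written using simple bag-of-words cosine similarity, and surfaces the closest matches.

```python
# Minimal sketch of a Remembrance-Agent-style suggester (illustrative only;
# the real MIT system used its own indexing, not this exact scoring).
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(current_text, notes, top_k=3):
    """Return the stored notes most similar to what is being written now."""
    query = Counter(tokenize(current_text))
    scored = [(cosine(query, Counter(tokenize(n))), n) for n in notes]
    scored.sort(reverse=True)
    return [n for score, n in scored[:top_k] if score > 0]

notes = [
    "Maps of the London Underground simplified geography for clarity",
    "Recipe for sourdough bread with a long cold fermentation",
    "Photography changed painting by making realism less interesting",
]
print(suggest("I am writing about how photography affected realist painting",
              notes, top_k=1))
```

The interesting design point is the one Michael raises: even a crude scorer like this occasionally surfaces something unexpected, and the prompt itself can be valuable independent of its relevance.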
When he writes, he gets all of his notes and puts them on the floor, and he literally crawls around and tries to piece the notes together. So he's not even writing, he's just organizing ideas, and it must really help him, as it helps me, to have raw material and organize it all in the same place. MICHAEL: There's a great British humorist, P.G. Wodehouse, who supposedly wrote on, I think it was, three-by-five-inch cards. He'd write a paragraph on each one, and he had a system in his office, not complicated at all, but it must have looked amazing, where he would pin the cards to the wall, and as the quality of each paragraph rose, he would move it up the wall. I think the idea was that there was a line, and every paragraph in the book had to get above that line, and at that point it was ready to go. DAVID: I've been thinking a lot about how, in normal media, we put AI on one side and art on the other. But I think many of the really interesting things will emerge out of the collaboration between the two. You've written a bit about art and AI, so how can artificial intelligence help people be more creative in this way? MICHAEL: I think we still don't know the answer to that question, unfortunately. The hoped-for answer, the answer that might turn out to be true, is that real AI systems are going to build up very good models of different parts of the world, maybe better than any human has of those parts of the world. It might be the case, I don't know, that something like the Google Translate system already knows some facts about translation that would be pretty difficult to track down in any individual human mind. I'm just speculating here.
But if you can start to interrogate that understanding, it becomes a really useful prosthetic for human beings. You may have seen, well, I guess the classics are the DeepDream images that came out of Google Brain a couple of years ago. Basically, you take ordinary images and you run them backwards through a neural net, somehow; you're seeing something about how the neural net sees that image, and you get these very beautiful images as a result. There's something strange going on there, something revealing about your own way of seeing the world. And at the same time, it's based on structure which the neural net has discovered inside these images, structure which is not ordinarily directly accessible to you. It's showing you that structure. So I think the right way to think about this is that really good AI systems depend, and will continue to depend, on building very good models of different parts of the world, and to the extent that we can build tools to look in and see what those models are telling us about the world, we can learn interesting new things which are useful to us. The conventional way, certainly the science fiction way, to think about AI is that we're going to give it commands and it's going to do stuff: go and shut the door, or whatever, and there certainly will be a certain amount of that. Or, with AlphaGo, what is the best move to take now. But actually, in some sense, with something like AlphaGo, it's probably more interesting to be able to look into it and see what its understanding of the board position is than to ask for the best move. A colleague showed me a prototype program, a very simple kind of thing, that would help train beginners.
I think it was Go. It worked by essentially colorizing different parts of the board according to whether, in its estimation, they were good or bad points to play. If you're a sophisticated player, it probably wasn't terribly helpful, but if you're a beginner, there's an interesting kind of conditioning going on there. At least potentially, it lets you start to see; you get immediate feedback. And all that's happening is that you're seeing a little bit into one of these machine learning algorithms, and that's maybe helping you see the world in a slightly different way. DAVID: As I was preparing for this podcast, I saw you've linked a lot to Brian Eno and his work, so I spent a lot of time reading Brian Eno, and I'm super happy I went down those rabbit holes. One of the things he said was really interesting. He's one of the fathers of ambient music, and he said that in a lot of art, and especially music, you can create an algorithm whose output, to the listener, might even sound better than what a human would produce. He said two things that were interesting. The first is that you create an algorithm, and then a bunch of different musical forms can flower out of that algorithm. He also said that often the art that algorithms create is more appealing to the listener, but it takes some time to get there, and had the creator just followed their intuition, they probably would never have gotten there. MICHAEL: It certainly seems like it might be true. That's the whole interesting thing with that kind of computer-generated music: I think the creators of it often don't know where they're going to end up. To be honest, I think my favorite music is all still by human composers. But I do enjoy performances by people who live code. There's something really spectacular about that.
So there are people who will set up a computer, hook it up to speakers, and hook a text editor up to a projector, and using, usually, a modified form of the programming language Lisp, or one of a few different systems, they will write a program which produces music, onstage, in real time. It starts out sounding terrible, of course, and that lasts for about 20 seconds; by about 30 or 40 seconds in, it's already approaching complex, interesting music. Even if you don't really have a clue what they're doing as they program, there's still something really hypnotic and interesting about watching them go through this process of creating music before your eyes and before your ears. It's a really interesting creative experience, and sometimes quite beautiful. I suspect that if I just heard one of those pieces separately, it probably wouldn't do so much for me, but having it done in real time, seeing the process of creation, really changes the experience and makes it very, very interesting. And sometimes it's just beautiful. That's the good moment, right? When clearly the person doing it has something beautiful happen, you feel something beautiful happen, and everybody else around you feels that something beautiful and spontaneous has just happened. That's quite a remarkable experience. Something really interesting is happening with the computer; it's not something that was anticipated by the creator, it arose out of an interaction between them and their machine, and it is actually beautiful. DAVID: Absolutely. In a similar vein, there's a song called Speed of Life by Dirty South. I really like electronic music, and what he does is construct a symphony, one layer at a time.
It's about an eight-and-a-half-minute song, and he just goes layer after layer after layer. What's really cool about listening to it is that you appreciate a depth in the piece that you would never be able to appreciate otherwise, and also by being able to listen to it over and over again. Before we had recording, you would only hear a given piece of music live, one time. So there are new forms bursting out now because we listen to songs so often. MICHAEL: It's interesting to think that there's a history to that as well. Modern systems for writing down music: if you go back much more than a thousand years, we didn't really have them. There's a multi-thousand-year history of notated music, but a lot of the early technology was lost, and it wasn't until, I think, the eighth or ninth century that people started to do it again. And we didn't get all the way to modern sheet music overnight; there were a whole lot of different inventions. For instance, the early representations didn't show absolute pitch, and they didn't show the duration of the note. Those were ideas that had to be invented. In, I think it was, 1026, somebody introduced the idea of showing a scale, so you could have absolute pitch. And then a century or two after that, Franco of Cologne had the idea of representing duration. These sound like tiny little things, but then you start to think about what they mean for the ability to compose music. It means that you can start to compose pieces for many, many different instruments, so you start to get the ability to have orchestral music. Before that, basically the best you could hope to do was to instruct small groups of players, get them to practice together, and so on.
So maybe you could do something for a relatively small number of people, but it was very hard to do something for an 80-piece orchestra. All of a sudden, that kind of amazing orchestral music becomes possible. And we're in version 2.0 of that now, where of course you can lay a thousand tracks on top of one another if you want. You get ideas like micropolyphony, these pieces where you look at the score and it's just incredible, there are 10,000 notes in 10 seconds. DAVID: To your point, I was at a tea house in Berkeley on Monday, right by UC Berkeley's campus, and the people next to me were debating the musical notes they were looking at without listening to the music. It was evident that they both had such a clear ability to hear music without actually listening to it that they could write the notes together and have this discussion, and as somebody who doesn't know so much about music, I found it really impressive. MICHAEL: That sounds like a very interesting conversation. DAVID: I think it was. So, one thing that I'm interested in, that I sort of have this dream of: I have a lot of friends in New York who do data visualization, and two things run in parallel for me. I have this vision of, remember in the Harry Potter books, the newspaper that comes alive and becomes a rich, dynamic medium? I compare that with some immersive world that you can walk through, where you can touch and move data around, and I think there are some cool opportunities there. So, what do you see in terms of the future of being able to visualize numbers and the way that things change? MICHAEL: I think it's a really complicated question; it actually needs to be broken down. One thing, for example, I think is among the most interesting things you can do with computers.
Lots of people never really get much experience playing with models, and yet it's possible to do this now. Basically, you can start to build very simple models. The example a lot of people do get now, that they didn't used to get, is spreadsheets. You can create a spreadsheet that is a simple model of your company, or some organization, or a country, or whatever. And the interesting thing about the spreadsheet is really that you can play with it; it's reactive in this interesting way. Anybody who spends much time with spreadsheets starts to build up hypotheses: oh, what would happen if I changed this number over here? How would it affect my bottom line? How would it affect the GDP of the country? How would it affect this or that? And as you use it, you start to make your model more complicated. If you're modeling some kind of factory, maybe you start to ask, well, what would be the effect if a carbon tax were introduced? So you introduce a new column into the spreadsheet, or maybe several extra columns, and you start to ask questions: what would the structure of the carbon tax be? All these sorts of what-if questions. And you start, very incrementally, to build up models. This experience, which so many people now take for granted, was not an experience that almost anybody in the world had a few decades ago. Spreadsheets date from about 1979 or so; this was an experience that was extremely rare before then, and it has since become relatively common. But it hasn't made its way out into mass media. The great majority of people don't have, as part of their everyday lives, this experience of just exploring models.
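The carbon-tax what-if above can be sketched in code. All the numbers here are invented for illustration; the point is only the spreadsheet-like move of adding one extra "column" (a tax term) to a simple model and then asking what-if questions against it.

```python
# A spreadsheet-style what-if model of a hypothetical factory.
# Every figure below is made up for illustration.
def yearly_profit(units, price, unit_cost,
                  tons_co2_per_unit=0.0, carbon_tax_per_ton=0.0):
    """Profit = revenue - production costs - carbon tax (the 'extra column')."""
    revenue = units * price
    costs = units * unit_cost
    tax = units * tons_co2_per_unit * carbon_tax_per_ton
    return revenue - costs - tax

# Baseline: no carbon tax.
base = yearly_profit(units=10_000, price=50.0, unit_cost=30.0)

# What if a $40/ton carbon tax were introduced?
taxed = yearly_profit(units=10_000, price=50.0, unit_cost=30.0,
                      tons_co2_per_unit=0.2, carbon_tax_per_ton=40.0)

print(base, taxed)  # 200000.0 120000.0
```

Changing any single argument and re-running is the code equivalent of editing one cell and watching the sheet react, which is exactly the incremental hypothesis-building described above.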
And I think one of the most interesting things which the New York Times in particular, and to some extent some of the other newsrooms, have done is to start, in a small way, building these models into the news-reading experience. In particular, the data visualization team at the New York Times, people like Amanda Cox and others, have done this really interesting thing where you start to get some of these models. You might have seen, for example, in the last few elections, they built this very interesting model where you can make choices about how different states will vote: if such-and-such a state votes for Trump, what are Hillary's chances of winning the election? They have this amazing interactive visualization where you can go through the key swing states: what happens if Pennsylvania votes for so-and-so, what happens if Florida does? That's an example where they've pulled an enormous amount of polling information into a model, and then you can play with it to build up some sort of understanding. It's a very simple example, and I certainly think we're not there yet. We don't actually have a shared understanding; there's very little shared language, even, around these models. Think about something like a map. A map is an incredibly sophisticated object, which, however, we start learning from a very young age, so we're actually really good at parsing maps. If somebody shows us a map, we know how to engage with it, how to interpret it, how to use it. Somebody who came from another planet would need to learn all those things. How do you represent a road? How do you represent a shop on a map? Why do we know that up is north? That's a convention. All those things need to be learned, and we learned them when we were small.
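The swing-state interactive described above can be sketched as a tiny Monte Carlo simulation. Every number here (electoral votes, win probabilities, even the state list) is invented for illustration; fixing a state's outcome is the code version of clicking "Pennsylvania votes for so-and-so" and watching the headline probability update.

```python
# Toy what-if election model: Monte Carlo over per-state win probabilities.
# All figures are invented and do not sum to a real electoral map.
import random

STATES = {  # state: (electoral votes, P(candidate A wins)), illustrative only
    "PA": (20, 0.50), "FL": (29, 0.45), "OH": (18, 0.50),
    "SafeA": (225, 1.0), "SafeB": (225, 0.0),
}
TO_WIN = 270

def win_probability(fixed=None, trials=20_000, seed=0):
    """P(candidate A reaches 270 EVs), optionally conditioning some states."""
    fixed = fixed or {}
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ev = 0
        for state, (votes, p) in STATES.items():
            a_wins = fixed[state] if state in fixed else rng.random() < p
            ev += votes if a_wins else 0
        wins += ev >= TO_WIN
    return wins / trials

print(win_probability())                    # unconditional chance for A
print(win_probability(fixed={"PA": True}))  # what if A carries Pennsylvania?
```

The `fixed` argument is the whole trick: each click in the interactive just re-answers the same question conditioned on one more state, which is why the visualization feels like playing with a model rather than reading a forecast.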
With these kinds of things which the Times and other media outlets are trying to do, we lack all of that collective knowledge, and so they're having to start from scratch. I think that over a couple of generations they'll start to evolve a lot of conventions, and people will start to take them for granted. In a lot of contexts, you won't just be given a narrative, you won't just be told how some columnist thinks the world is. Instead, you'll expect to be given some kind of model which you can play with. You'll start to ask questions and run your own hypotheses, in much the same way as somebody who runs a business might set up a spreadsheet to model their business and ask interesting questions. It's not perfect, and the map is not the territory, as they say, but it is nonetheless a different way of engaging, rather than just having some expert tell you, oh, the world is this way. DAVID: I'm interested in the shift from having media be predominantly static to dynamic, of which the New York Times is a perfect example: they can tell stories on NYTimes.com that they can't tell in the newspaper that gets delivered to your doorstep. What's really cool about the spreadsheets you're talking about is that when I use Excel, I can go from numbers to different graphs of the exact same data set, and some ways of visualizing that data totally click for me, and sometimes nothing happens. MICHAEL: Sure. And we're still in the early days of that, too. There's so much about literacy there, and I think so much of literacy is really about opportunity. People have been complaining essentially forever that the kids of today are not literate enough. But of course, once you actually provide people with the opportunity, and a good reason to want to do something, they can become very literate very quickly.
Going back to the rise of social media 10 or 15 years ago, Facebook around, whatever, 2006 or 2007, Twitter a little bit later, and then all the other platforms which have come along since: they reward being a good writer. So all of a sudden, a whole lot of people who wouldn't necessarily have become good writers are significantly more likely to become good writers. It depends on the platform; certainly Facebook is a relatively visual medium. But I think Twitter and text messaging probably are actually good for writing. Certainly, you're rewarded for being able to condense an awful lot into a small space. People complain that it's not good English, whatever that is, but I'm more interested in whether something is virtuosic English than in whether or not it's grammatically correct, and people are astonishingly good at that. The same thing needs to start to happen with these kinds of models and with data visualizations. At the moment, you have this priestly caste that makes a few of them, and that's an interesting thing to be able to do, but it's not really part of the everyday experience of most people. It's an interesting question whether that's going to change: will it remain the province of some small group of people, or will it become something that people just expect to be able to do? Spreadsheets are super interesting in that regard, because there the change actually did happen. If you had talked to somebody in 1960 and said that by 2018, tens of millions of people around the world would be building sophisticated mathematical models just as part of their everyday lives, it would have seemed absolutely ludicrous. But that kind of modeling literacy has become relatively common. I don't know whether we'll get to 8 billion people, though. I think we probably will.
DAVID: When I was in high school, I went to what I like to say is the weirdest school in the weirdest city in America: the weirdest high school in San Francisco. Rather than teaching us math, they had us get in groups of three or four and discover everything on our own. We had these things called problem sets, about one a week, and the teacher would come around and help us every now and then. But the goal was really to get three or four people to think through every single problem. They called it discovery-based learning, which you've also talked about. So my question is this: we're used to learning when the map is clear, when it's obvious what to do and you can follow a set path, but you do the opposite. The map is unclear and you're trailblazing, charting new territory. What strategies do you have to sense where to move? MICHAEL: There's a precursor question, which is how you maintain your morale. In the Robert Pirsig book Zen and the Art of Motorcycle Maintenance, he proposes a university subject, Gumptionology 101. Gumption is almost the most important quality that we have: the ability to keep going when things don't seem very good. Mostly that's about having ways of being playful and ways of not running out of ideas. Some of it is about a very interesting tension between being ambitious in what you'd like to achieve and being very willing to celebrate the tiniest, tiniest successes. Certainly a lot of creative people I know really struggle with that. They might be very good at celebrating tiny successes but not have significant ambitions, or they might be extremely ambitious, but because they're so ambitious, if an idea doesn't look Nobel Prize worthy, they're not particularly interested in it.
You know, they struggle with just goofing around, and they often feel pretty bad, because of course most days you're not at your best; you don't actually have the greatest idea. So there's an interesting tension to manage there. There are really two different types of work. One is where you have a pretty good goal and you know what success looks like. But you may also be doing something that's more like problem discovery, where you don't even know where you're going. Take composing a piece of music. Well, I'm not a composer, but my understanding from friends who are is that they don't necessarily start out with a very clear idea of where they're going. Some composers do, but for a lot of them it's a process of discovery. A publisher once told me, somebody who has published a lot of well-known books, that she described one of her authors as writing for discovery. He didn't know what his book was going to be about; he had a bunch of vague ideas, and the whole point of writing the book was to figure out what it was that he wanted to say, what problem he was really interested in. So he'd start with some very vague ideas, and they'd gradually get refined. And it was very interesting. I really like his books, and it was interesting to see that. They looked like they'd been very carefully planned, as if he really knew what he was doing, and she told me that no, he'd come in and chat with her and be like, well, I'm sort of interested over here. He'd have phrases and fragments of ideas, but he didn't actually have a clear plan, and then he'd go through this process of several years of gradually figuring out what it was that he wanted to say. And often the most significant themes wouldn't emerge until relatively late in that whole process.
I asked another quite well-known writer, whom I bumped into while he was reporting a story for a major magazine. He'd been reporting for two weeks at that point, just out interviewing people. I said, how's it going? And he said, oh yeah, pretty good. I said, what's your story about? He said, I don't know yet, which I thought was very interesting. He had a subject, he was following a person around, but he didn't actually know what his story was. DAVID: The analogy I have in my head as you're talking about this is sculpture. You start with a big block of granite, and slowly but surely you're carving the stone, trying to come up with a form. But so often it's the little details at the end, so far removed from that piece of stone at the very beginning, that make a sculpture exceptional. MICHAEL: Indeed. And you wonder what's going on. I haven't done sculpture, but I've done a lot of writing, and writing often feels like that. Sometimes I know what I want to say; those are the easy pieces to write. But more often it's writing for discovery, and there you need to be very happy celebrating tiny improvements. Just fixing a word needs to be an event you actually enjoy; if not, the process will be an absolute nightmare. But then there's this instinct where you realize, oh, that's a phrase that, A, I should really refine, and B, might actually be the key to making this whole thing work. That seems to be a very instinctive kind of process, something that, if you write enough, you start to get some sense of what actually works for you. The recognition is really hard. It's very tempting to discount yourself, to not notice when you have a good phrase or something like that, and contrariwise, sometimes to hang onto your darlings too long.
You have the idea of what you think it's about, and it's actually wrong. DAVID: Why do you write, and why do you choose the medium of writing to think through things? I know you choose other ones as well. MICHAEL: Writing has this beautiful quality that you can improve your thoughts. That's really helpful. A friend of mine who makes very popular YouTube videos about mathematics has said to me that he doesn't really feel like people are learning much mathematics from them. Instead, it's almost a form of advertising: they get some sense of what it is, they know that it's very beautiful, they get excited. All those things are very important and matter a lot to him, but he believes that only a tiny, tiny number of people are actually understanding much detail at all. There's apparently a small group who do; they have a way of processing video that lets them understand. DAVID: Also, I think with something like math, and I've been trying to learn economics online, with something like math or economics that's complex and difficult, you have to go back and re-watch and re-watch. But there's a human tendency to want to keep watching more and more, and it's hard to learn that way. You actually have to watch things again. MICHAEL: Absolutely. I have a friend who, when he listens to podcasts, if he doesn't understand something, rewinds it 30 seconds. But most people just don't have that discipline; of course you want to keep going. So I think the written word is a little bit easier for most people who want that kind of detailed understanding. It's more random access to start with. It's easier to skip around, and to concentrate and say, well, I didn't really get that sentence, I'm going to think about it a little bit more; or, I can see what's going to happen in those two or three paragraphs, I'll just very quickly skip through them.
It's more built for that kind of detailed understanding, so you're getting two very different experiences. In the case of video, very often what you're getting is principally an emotional experience with some bits and pieces of understanding tacked on. With the written word, a lot of that emotion is stripped out, which can make it much harder to motivate yourself; you need that emotional connection to the material. But it is, I think, a great deal easier to understand the details. So there's a real choice to be made. There's also the fact that people just seem to respond better to videos: if you want a large audience, you're probably better off making YouTube videos than publishing essays. DAVID: My last question, as somebody who admires your pace and speed of learning: what's been really fun about preparing for this podcast and coming across your work is that I really do feel like I've accessed a new perspective on the world, which is really cool, and I get most excited when I come across thinkers who don't think like anyone I've come across before. So I'm asking you, first, how do you think about your learning process and what you consume, and second, who have been the people and the ideas that have really formed the foundation of your thought? MICHAEL: There's a Kurt Vonnegut quote, from his book Mother Night, I believe: we are what we pretend to be, so we must be careful about what we pretend to be. And I think something closely analogous is true: we become what we pay attention to, so we should be careful what we pay attention to. That means being fairly careful how you curate your information diet. There are a lot of mistakes I've made. Paying attention to angry people is not very good. I think ideas like the filter bubble, for example, are actually bad ideas.
And for the most part, it sounds virtuous to say, oh, I'm going to pay attention to people who disagree with me politically. Well, okay, there's a certain amount of truth to that. It's probably a good idea to pay attention to the very best arguments from the very best exponents of the other political views. So sure, seek those people out. But you don't need to seek out the random person who has a different political view from you, and that's how most people actually interpret that kind of injunction: they're not looking for the very best alternate points of view. So that's something you need to be careful about. There are a lot of things like that I enjoy. For example, one person who's interesting to look at on Twitter (he's no longer active, but he's still following people) is Marc Andreessen. I think he follows something like 18,000 people, and it's really interesting just to look through the list of people he follows, because it's all over the map. Much of it I wouldn't find interesting at all, but you'll find the strangest corners: people in remote villages in India, people doing really interesting things in South Africa. Okay, he's a venture capitalist, but so many of them are not connected to venture capital at all; they're just doing interesting things all over the world. I wouldn't advocate doing the same thing; you need to cultivate your own tastes and your own interests. But there's something very interesting about that sort of catholicity of interests and curiosity about the world, which I think is probably very good for almost anybody to cultivate. I haven't really answered your question. DAVID: I do want to ask who the people, the ideas, or the areas of the world are that have really shaped and inspired your thinking, and I'm asking selfishly, because I want to go down those rabbit holes. MICHAEL: Alright.
A couple of people: Alan Kay and Doug Engelbart, who are two of the people who really developed the idea of what a computer might be. In the 1950s and '60s, people mostly thought computers were machines for solving mathematical problems: predicting next week's weather, computing artillery tables, these kinds of things. And they understood that computers could actually be devices which humans would use for themselves to solve their own problems, almost personal prosthetics for the mind. They'd be new media we could use to think with. A lot of their best ideas are still out there as a kind of vision for the future, and if you look particularly at some of Alan Kay's talks, there are still a lot of interesting ideas there. DAVID: Like "point of view is worth 80 IQ points." That's still true. MICHAEL: Or "the best way to predict the future is to invent it." He's got a real gift for coming up with pithy little phrases, but they're also quite deep ideas. They're not two-year projects or five-year projects; they're thousand-year projects for an entire civilization, and we're just getting started on them. I think that's true, actually. Maybe that's an interesting question in general: what are the thousand-year projects? A friend of mine, Karl Schroeder, who's a science fiction writer, has this term, The Project, which he uses to organize some of his thinking about science-fictional civilizations. The Project is whatever a civilization is currently doing, which possibly no member of the civilization is even aware of. So you might ask the question, what was The Project for our planet in the 20th century? I think one plausible answer might be that it was eliminating infectious diseases.
You think about things like polio and smallpox: so many of these diseases were huge things at the start of the 20th century and had become much, much smaller by the end of it. Obviously AIDS is a terrible disease, but by historical comparison with something like the Spanish flu, which may have killed on the order of fifty to a hundred million people, it's actually relatively small. Maybe that was The Project for human civilization in the 20th century. I think it's interesting to think about those kinds of questions, and about who the people most connected to them are. So I'd certainly say Doug Engelbart and Alan Kay. DAVID: Talk about Doug Engelbart; I know nothing about him. MICHAEL: Engelbart is the person who, I think more than anybody, invented modern computing. He did this famous demo in 1968, often called the mother of all demos, in front of an audience of about a thousand people, I believe. It's been quite a while since I've watched it. It demonstrates a windowing system and what looks like a modern word processor, but it's not just a word processor: they're actually hooked up remotely to a person in another location, and they're collaborating in real time. And it's the first public showing, I believe, of the mouse and of all these different ideas. You look at other images of computers at the time, and they're these giant machines with tapes; and here's this vision that looks a lot more like Microsoft Windows than anything else, with things like real-time collaboration between people in different locations, which we really didn't have at scale until relatively recently. And he lays out a huge fraction of these ideas in a paper he wrote in 1962. But that paper is another one of these huge things: he's asking questions that you don't answer over two years or five years; you answer them over a thousand years.
I think the title of that paper is "Augmenting Human Intellect." So he's certainly somebody else I think is a very interesting thinker. There's something really interesting about the ability to ask an enormous question, but then to have other questions at every scale, so you know what to do in the next 10 minutes that will move you a little bit toward answering it.
Michael Nielsen and Kaveh Cohen - Forza Motorsport 7
Kaveh Cohen and Michael Nielsen are composers and producers of music for film, television, video games, and motion picture advertising. They have collaborated for over 10 years and, between them, have scored successful game franchises such as League of Legends and Splinter Cell; for television, the History Channel's epic 12-part series "America: The Story of Us" and Marvel's hit animated series "Wolverine and the X-Men". They've also produced music used in movie trailers for "Rogue One," "Creed," "The Hunger Games," "Mad Max: Fury Road," "True Detective," "Game of Thrones," and "Doctor Strange." One of their most recent projects has been for one of the most successful motorsport franchises in the world, FORZA MOTORSPORT 7. The racing video game was developed by Turn 10 Studios and published by Microsoft Studios, and is the tenth installment in the Forza series. It was released on Microsoft Windows and Xbox One on October 3, 2017.
The game has earned high marks from around the gaming community, including a rating of 8/10 from GameSpot, a 9.2/10 from IGN, and an 86 from Metacritic. In this episode, Michael Nielsen and Kaveh Cohen talk about their musical journey from the symphonic scope of Forza Motorsport 6 to the more "vintage rock" tone of Forza Motorsport 7.

ANNOTATED TRACKS
02:13 - Track - FM6 Theme - "Forza Motorsport" (FM6)
05:08 - Track - Love of the Sport (FM6)
05:54 - Track - From Flag to Flag (FM6)
07:10 - Track - Big Bang (FM7)
09:03 - Track - Dialed In (FM7)
11:03 - Track - Hanky Pank (FM7)
12:14 - Track - Lock It Up (FM7)

OTHER TRACKS
00:00 - Track - From Flag to Flag (FM6)
01:12 - Track - Big Bang (FM7)
18:43 - Track - FM6 Theme - "Forza Motorsport" (FM6)

SOUNDTRACK
The original soundtrack for FORZA MOTORSPORT 7 is slated for release in December 2017 by Sumthing Music Works. The soundtrack for FORZA MOTORSPORT 6 is available digitally at Amazon.com and iTunes, and is available to stream on Spotify.

MORE ABOUT THE COMPOSERS
You can hear more music and find out more about the composers at MICHAEL NIELSEN's and KAVEH COHEN's official sites, http://michaelnielsen.com and kavehcohen.com. You can follow KAVEH COHEN on Twitter at twitter.com/kavehcohen and MICHAEL NIELSEN at twitter.com/audiomichael.

ABOUT THE ANNOTATOR
Produced by Christopher Coleman (@ccoleman). You can find more episodes at THEANNOTATOR.NET or subscribe via iTunes, Stitcher Radio, or wherever you find quality podcasts.

FOLLOW US
Twitter: @audioannotator
Facebook: @TheAnnotator
Email: theannotatorpodcast@gmail.com

SUBSCRIBE
iTunes
Stitcher Radio
Google Play Podcasts
RSS Feed
Kaveh Cohen & Michael Nielsen are two composers who are paving a path by bringing new approaches and unique sounds to their projects. The composing duo talk about their journey to becoming film composers, how they met, and why they decided to form a partnership. We discuss how the two of them complement each other's talents and the benefits of working with a composing partner. We speak briefly about Ninja Tracks, the production music company the two formed together. It's a whole different approach to the "production music library" model, and they are truly succeeding, with their music in huge campaigns such as The Martian and Hardcore Henry. We dive into their score for Forza Motorsport 6 and all the challenges that come with scoring a racing game, and a franchise that is on its sixth entry. This is truly a fascinating and fun chat for anyone wanting to learn more about working relationships and how to bring emotional weight to the challenging genre of racing games.

Interview Conducted By: Kaya Savas
Special Thanks: Kaveh Cohen & Michael Nielsen, Beth Krakower, The Krakower Group
Today, for my Summer Break Flashback series, Ninja Tracks composer and guitarist Michael Nielsen is my guest! Michael is a great guy, and I guarantee you know his music: Forza 6/7 and Splinter Cell for Xbox, independent film scores, and just about every major motion picture trailer you've heard in the last ten years. He's also got a terrifically entertaining YouTube channel called "Big Hairy Guitars", where he reviews all sorts of guitars, pedals, and gear, so make sure to check it out. Enjoy! See acast.com/privacy for privacy and opt-out information.
Let your views be heard about current LDS policies and teachings on LGBT persons and issues! Two social psychologists, Michael Nielsen and David Wulff, have launched a survey in the hope of learning more about the feelings and understandings of LGBT Latter-day Saints and related issues among church members from all across the spectrum of belief and activity. The survey offers chances in various places for respondents to type longer answers to open-ended questions, which makes it difficult to predict exactly how long it will take to complete; the current estimate is 30 to 40 minutes. Here is a link to the survey's landing page, where you can learn more about Michael and David, the privacy of your data, and more. Be part of this potentially important qualitative as well as quantitative survey. Only through means like this can we fully understand how Latter-day Saints connect various parts of their Mormonism with different ideas and experiences. Take the survey--and then share it with friends and family members, especially those who may see things differently than you do! http://bit.ly/2eZ6bnh
Michael Nielsen describes the extent and importance of arginine methylation in the human proteome.
Composer/producer Michael Nielsen is my guest this week. He's a founding partner of Ninja Tracks, the premier creators of film trailer and video game music. I guarantee you've heard a lot of their stuff if you like going to the movies. Or maybe you've seen his YouTube or Facebook channel, Big Hairy Guitars? Michael is a sensitive and very nice fellow, and I'm happy to call him and his partners Kaveh and Collin friends. I hope you enjoy our chat, and thanks for tuning in! See acast.com/privacy for privacy and opt-out information.
A long-awaited survey of LDS attitudes toward gender relationships and women’s ordination has begun to yield intriguing snapshots of just where we are within Mormonism on these issues--with continued analysis yet to come. In this episode, survey team members Nancy Ross, Michael Nielsen, and Stephen Merino join Jana Riess and Mormon Matters host Dan Wotherspoon for a discussion of the survey--its origins, goals, methods--and key preliminary findings. For those interested in seeing more forward movement within Mormonism regarding gender and greater representation of women in leadership councils, and perhaps even ordination, what are reasons for hope? What does the survey suggest (or the panelists see) as issues and structures and attitudes that need much greater attention before this strong movement can happen?
Episode 58 - 2013 Year in Preview (Video Games)
Sophia Tong, Editor-in-Chief at GamesRadar and host of SoundRadar, joins Christopher, Marius, Richard, and Edmund to take a look at what video game scores the year 2013 has in store for us. They discuss some of the many upcoming indie games on the horizon as well as the big AAA games, including those that have released unseasonably early. Everyone shares "What They Have Been Listening To," as well as sidetracking on the latest Humble Bundle release and the first preview of M83 and Joseph Trapanese's score for Oblivion.

Episode Highlights
00:00 Fifty FPS Forest (Scoreman Retro-Remix)
00:30 Welcome and Intros
08:11 WHYBLT, Marius? Sim City, Jack the Giant Slayer
16:22 WHYBLT, Edmund? Warm Bodies
17:49 WHYBLT, Richard? Shadow of the Colossus, Deus Ex: Human Revolution (again)
20:22 WHYBLT, Sophia? Ni No Kuni, Tomb Raider, Sim City
26:27 WHYBLT, Christopher? Company of Heroes (Frederik Wiedmann)
30:11 SIDETRACK: The New Humble Bundle
34:38 2013 Year in Preview: Indie Game Scores
46:29 2013 Year in Preview: AAA Game Scores
96:18 Will 2013 be better than 2012?
Music Selections
00:09 "Fifty FPS Forest" (Fastfall: Dustforce) by Lifeformed
07:44 "Building the Foundation" (SimCity) by Chris Tilton
09:37 "Jack and Isabelle" (Jack the Giant Slayer) by John Ottman
19:37 "Icarus" (Deus Ex: Human Revolution) by Michael McCann
20:35 "Morning of Beginning" (Ni No Kuni) by Joe Hisaishi
22:27 "Infiltrating the Bunker" (Tomb Raider) by Jason Graves
45:35 "Dune Storm" (Catacomb Snatch) by C418
47:31 "Virtual Reality (High)" (Metal Gear Rising: Revengeance) by Jamie Christopherson
50:00 "200 Years Ago On An Icy Planet" (Dead Space 3) by James Hannigan
52:15 "Main Theme" (Aliens: Colonial Marines) by Kevin Riepl
59:38 "Restless" (Gears of War 3) by Steve Jablonsky
62:25 "Arrival" (Battle: Los Angeles) by Brian Tyler
64:44 "BioShock Main Theme" (BioShock) by Garry Schyman
73:39 "Suite from Star Trek" (Star Trek) by Michael Giacchino
81:20 "Conviction Main Theme" (Splinter Cell: Conviction) by Michael Nielsen and Kaveh Cohen
82:55 "Main Theme" (Assassin's Creed III) by Lorne Balfe
86:16 "Besieged Village" (Castlevania: Lords of Shadow) by Oscar Araujo
93:37 "Arrival" (Halo 3) by Marty O'Donnell and Michael Salvatori
100:30 "Retreat" (Aliens: Colonial Marines) by Kevin Riepl

Additional Notes:
Visit GamesRadar
Subscribe to SoundRadar
Follow Sophia Tong (@sophiatong)
Download the Episode
Subscribe and More Info
Discuss this episode in the Muse community
Follow @MuseAppHQ on Twitter

Show notes

00:00:00 - Speaker 1: There just really doesn’t seem to be an effective, concrete practice for taking day-to-day insights and accumulating them, rolling them up into a snowball of novel ideas.

00:00:16 - Speaker 2: Hello and welcome to Meta Muse. Muse is software for your iPad that helps you with ideation and problem solving. This podcast isn’t about the Muse product; it’s about Muse the company and the small team behind it. My name is Adam Wiggins. I’m here today with my colleague Mark McGranaghan and a guest, Andy Matuschak. Hello, thanks for joining us today, Andy. I think you’re about as close as there is to a rock star in the tools-for-thought space.

00:00:39 - Speaker 1: That’s a really distressing statement.

00:00:42 - Speaker 2: Yeah, we’ll talk more about why this space is so small a little later on, but for those listening that might not know you, maybe you can briefly give us your background.

00:00:51 - Speaker 1: Sure. I have kind of a meandering background. It begins in technology. When I was a kid, I was constantly developing video game engines and these kinds of tools for creative people. With a couple of roommates, I worked on the first native Mac OS X graphics app and did that for a bunch of years, and made some open-source software for developers. I was always really into tools for others. Then I went off to Caltech and got introduced to serious science, and got my very pragmatic engineer’s perspective salted with all that.
But unlike all of my peers, who went off to get a PhD, I went off to Apple and got a different kind of education. It felt like a graduate program, studying at the heels of all of these people with jeweler’s loupes that they were using to look at individual pixels of devices. There my work became much less about just programming and much more about the intersection between technology and design. I got myself involved in all these projects where the through line was that they were about what was central to dynamic media, as opposed to just pictures on screens: things like interactive gestures, the 3D parallax effect, crazy page curls, and all this stuff.

00:02:07 - Speaker 2: We’ve talked before about the way that Apple’s environment maybe has less of that distinction between design and engineering, or that there were a lot of people who sat right at the intersection of those two things, and it was part of what allowed them to do, and continues to allow them to do, really innovative things on interface. Maybe you’re a person who sits in that place as well, right?

00:02:26 - Speaker 1: Sure, yeah. It’s interesting because from an org chart perspective there are really heavy boundaries between engineering and design, and I was on the engineering side of the house; I sat with the engineers. But for several years I would spend much of my day sitting in the human interface lab, next to a designer, and we were just tossing prototypes back and forth all day. It became this kind of mind-meld thing where those people could tweak values in the prototypes I built, and I would end up tweaking design elements as I was building prototypes, and the titles just fell away.
But over time, I began to feel that these experiments we were doing with the dynamic medium deserved to be applied to things which had more meaning, more impact in the world. So I got really interested in education research and started writing about that. The folks at Khan Academy reached out and asked whether I’d like to do that kind of work with them. So I joined Khan Academy and took along one of my Apple colleagues, May-Li Khoe, who is a wonderful designer, and together we started an R&D lab at Khan Academy where we explored all kinds of novel educational environments, from that perspective of trying to look at what the dynamic medium alone can do, trying to make these active learning environments. I did that for about 5 years, and then I started getting a little disillusioned with institutional education. I started getting really interested in the kind of knowledge work that people like you and me do every day, where you’re reading, writing, creating new things, pursuing novel ideas every day, and wondering how we could augment some of that. So now I have this kind of independent research practice where I’m pursuing oddball questions like: what comes after the book? Can we make something that does the job of a book, but better? It’s just been a delightful experience.

00:04:12 - Speaker 2: All of your writing is delightful, and I certainly recommend everyone read as much of it as they care to.
But the one I’ll link to, because I think it particularly illustrates the place where your thinking and our team’s overlap, is the transformative tools for thought article, which both describes your current work around learning and spaced repetition, which you can tell us about, and also the meta question of how we develop these kinds of tools in the first place.

00:04:43 - Speaker 1: Yeah, that was a project with my wonderful colleague Michael Nielsen, who’s also been investigating the space we might label tools for thought. People have defined this in different ways, and the term stretches back some decades, but I like to think of it as tools or environments which expand what people can think and do. A great example of this is writing. Another great example is numerals. There’s a tendency to think about computer implementations of these things, and of course there are instances which are very interesting, but I find it very powerful to reach back to these cultural...

00:05:16 - Speaker 2: ...ancestral tools for thought. Absolutely. Another great example of that (I think Bret Victor has a piece about this) is the chart: charting numbers on an x-y axis, a line graph, that sort of thing we take for granted nowadays, where it’s easy to crank one out in a spreadsheet. But that was an invention that happened not all that long ago, a couple hundred years back or so. And the existence of this new tool (or, as I think you argue in that piece, medium; "medium for thought" might even be more accurate) basically allows you to have new ideas or see the world in a different way. So the tools shape the kinds of thoughts you’re able to have and the kinds of works you’re able to create.
00:05:58 - Speaker 1: That's right. If all you have is Roman numerals, then it's very difficult to multiply. But if you have Arabic numerals, it becomes quite easy by comparison. So in the what-comes-after-the-book space, one of the things that my colleague Michael and I have been exploring is just this observation that most people seem to forget almost everything that they read. Sometimes that's fine. For many books, the thing that really matters is the way the book changes the way you view the world; that really is the impact that matters. But for other books, for instance if you're trying to learn about quantum computation or some advanced technical topic, it really is a problem that you forget most of what you read, because these topics build on each other as the book continues. You end up starting to read a book in English, say, and then halfway through the chapter you start to see a word of Spanish, and by the end of the chapter there are whole sentences of Spanish, and then the whole second chapter is in Spanish. Say you don't know Spanish: you read this book and you're like, well, I thought I was reading an English book, but no, it's actually written in this other language that you have to learn. Just as you would have to learn vocabulary if you were trying to speak a foreign language, you need to learn the vocabulary, both conceptual and declarative, of this domain you're seeking to enter. And so the experiment has been: well, can we make that easier? A project that that paper describes is this textbook called Quantum Country, which tries to make it effortless for readers to remember what they read.
It sounds like kind of a crazy thing, but it takes advantage of a fairly well understood idea from cognitive science about how it is that we form memories. It's reasonably well understood: there's sort of a closed set of things that you need to do in order to form a memory reliably. It's just that logistically it's kind of onerous to do those things; it requires a lot of coordination and management, and so most people don't do it, or find it difficult. But it's pretty easy to have a computerized system assist with these things. So basically, as you're reading this book, every 10 minutes or so of reading there's this really quick interaction. Say you just read about the definition of a qubit. After a few minutes of reading, there would be this little prompt interface: hey, so how many dimensions does a qubit have? And you try to remember: OK, it's two-dimensional. So you think to yourself, 2, and then you reveal the answer, and it's like, oh yes, it was 2, and so you say, cool, I remembered that. And then we say: OK, so a qubit is really a two-dimensional what? How do we think about representing this? And say you don't remember that it's this linear algebra concept. OK, it's a vector space. That's fine. You reveal the answer, you didn't remember it, so you mark it: I didn't remember that detail. And this is already doing something for you, because it's signaling: hey, maybe you weren't reading quite closely enough. And just from seeing the answer you missed, as you read the next section, if that topic comes up, maybe you're more likely to remember, because you were just corrected and saw the correct answer.
But somewhat more importantly, 10 or 15 minutes later, when you're looking at the next set of prompts and you see the new things from that section, the prompt about the two-dimensional vector space that you failed to remember will appear there too. So you'll get another chance. And then once you remember it there, the idea is that a few days later we will send you an email: hey, let's remember these things about quantum computing that you were working on, let's work towards long-term memory. You'll open up the review session linked in the email and do this interaction again, just a couple of seconds per question, about 10 minutes to go through the material. And that review 5 days later will reinforce your memory of the material about as well as the 10-minutes-later prompts did; not exactly, but roughly, you get the idea. And if you remember things after 5 days, then maybe you will next practice them after 2 weeks, then after a month, after 2 months, after 4 months. It initially seems like this onerous thing: oh, I'm going to be working on these memory flashcards for this thing I'm learning. But because human memory is stabilized in this kind of exponential fashion, where successive exposures can be further and further apart, it only takes a few exposures before a particular idea can be remembered durably for many, many months at a time. 00:10:11 - Speaker 2: And these are the spaced repetition systems you're talking about, which I had some exposure to through Anki, which is this little, I don't know, it's definitely a tool for thought, but it is very nichey, I would say, and more than a little clunky to use. You have to be really motivated to do it.
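The expanding review schedule described here, reinforce after minutes, then roughly 5 days, 2 weeks, a month, and so on, can be sketched in a few lines. The interval ladder and the doubling rule below are illustrative assumptions for this sketch, not Quantum Country's actual scheduling parameters:

```python
# A minimal sketch of an expanding spaced-repetition schedule.
# The 5-day base and the doubling rule are assumptions, not the
# real Quantum Country algorithm.
from datetime import timedelta

def next_interval(current: timedelta, remembered: bool) -> timedelta:
    """Double the gap on success; fall back to the shortest gap on a lapse."""
    first = timedelta(days=5)
    if not remembered:
        return first        # forgotten: start climbing the ladder again
    if current < first:
        return first        # in-text prompts graduate to the 5-day email review
    return current * 2      # 5 days -> 10 -> 20 -> ... toward months apart

# A few successful reviews push an idea out to months between exposures.
gap = timedelta(minutes=10)
for _ in range(5):
    gap = next_interval(gap, remembered=True)
print(gap.days)  # 80: five successful reviews reach an 80-day gap
```

The point of the exponential shape is exactly what's said above: only a handful of exposures are needed before an idea can rest for months at a time.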
So you can use a tool like this to increase your retention or understanding of something you're reading: a science paper, a book, something you want to get a deep grasp of. But you have to really work hard at it, right? The tools very much require taping it all together yourself, in a way that demands a pretty big commitment and investment. And one of the things I think is really interesting about the work you're doing is whether you can take that and build it in a way that's fun, relatively low effort by comparison, maybe even sleekly designed, and just more enjoyable overall. 00:11:06 - Speaker 1: Yeah, one thing that characterizes a lot of the opportunity in this space is that there are many exciting ideas, explored by technologists or by academics, which are promising at some foundational level: the underlying mechanic of Anki is fundamentally the same as the underlying mechanic of Quantum Country, if you look at it from a certain angle. But there's this core design piece missing that's keeping the idea from really having the transformative impact it could have. By that I don't mean the fact that Anki is hideous. I mean, it is, and it will turn off basically everybody who looks at it for that reason. But there are deeper issues. To your point, it's really hard to write good prompts. Both in the sense that people start by being bad at it, so they'll write prompts that don't work very well and that are boring and onerous to review, and they mostly won't realize that that's what's happening; they'll just think that's what this is. And also in the sense that even if you do know how to write prompts well, it's quite taxing. It takes a lot of effort. It's a context switch from the experience of reading. And it's valuable insofar as reflecting on material that you're studying, synthesizing it, distilling it, and turning it into a question actually does
go quite a long way toward reinforcing your understanding of the material. But maybe you're only going to do that for the most important things in your life. And it's pretty interesting to wonder: OK, maybe you do that for the top 10% of the stuff you ever read, but what if it were really easy and low effort to remember the top 70% of the things you read? You could save that special effort for the stuff that really, really matters. That's kind of what Quantum Country is pursuing. One of the main things it's asking is: can we make this something that basically everybody who's reading it, and is serious about the topic, can take advantage of and really see the benefit of? 00:12:49 - Speaker 3: I think this thread also reflects one of the challenges in developing new tools for thought, which is that you actually need a lot of different skill sets. It's not just a matter of engineering or computer programming: you need engineering, product, design, writing, marketing, community, often at least all of those things. And I see a lot of people approach the domain as basically pure engineers, and they tend to bounce off, or the products don't stick, because they're missing a lot of those aspects. 00:13:15 - Speaker 1: That's right. And I'll add one more, actually, that's kind of Michael's and my hobby horse here, which is that you probably also need some kind of domain expertise. Many of the projects in this domain, even if they do have the design skills and the technical skills as well as some of the other peripheral skills, will be doing things like trying to make a tool to do math better, but no one on the team is a serious mathematician. And so they'll make something that seems really cool and makes for a really good product presentation, but no mathematician is really going to use it to do serious work.
Maybe it works from an educational perspective, but it's fundamentally limited; it's like a toy in some fundamental fashion. So to that list I would add: you need some kind of deep domain expertise too. For a product like Muse, maybe that expertise is somewhat diffuse. For anybody working on a product, the domain expertise that's relevant might be, you know, the visual design of a product, or doing the conception stages of a product. 00:14:11 - Speaker 2: Well, our domain is thinking. So luckily we have a domain expert on that, and that's Mark, right? 00:14:15 - Speaker 3: Yeah, I feel like a sort of secret that we had with the lab, and now have with Muse, is this understanding of the creative process and thinking. A lot of it actually comes from the study of how this stuff happened historically. You mentioned reaching back in history and learning from that; that's something we've done a lot of. 00:14:29 - Speaker 1: Yeah, it's fantastic. I think it's just really attractive to build tools. It's built into my DNA; I grew up that way, and it's actually a liability for me. My tendency when I see an opportunity or a problem space is: oh wow, I'm going to make a tool to help with that. And that's a useful tendency, a cool tendency. But often I'm not really solving a burning problem, or I'm solving an abstract problem that isn't connected to something concrete and intrinsically meaningful, something that actually is about doing the work. The analog with Muse would be if maybe I'd done one serious creative process about a concrete thing, and then I say: wow, I'm really interested in the creative process. I'm going to devote the rest of my days to building tools for the creative process, which I never really use to do any subsequent serious creative work.
I'm doing it in order to make the tool, because I'm fascinated by tools. That's a tendency I have to actively combat. 00:15:24 - Speaker 2: The other thing that comes with it, if you come into building a tool with the domain knowledge, is that over time you get focused on building the tool, and maybe you come to know the domain less well. There's quite a parallel for me personally between Heroku and Muse, in that both serve some kind of creative process. Heroku's is web development, which is one kind of creativity, one act of creation; with Muse, it's thinking and reading and making decisions. In both cases there is a process where a thoughtful professional sits down, starts in one place, and ends with a solution or a result or an output. Studying and understanding that process is fun for me to introspect on for myself, but then there's the ethnographic research aspect of going out and talking to people. In the lab, and in the build-up to Muse, we talked to hundreds of creative professionals about their process, which was always an interesting thing, because of course it's this very private and intimate thing, and I would say 98% of the time people are vaguely embarrassed, because they feel like it should be better. It's like: oh, my notes are really messy, or, you know, don't look at my office. They have some idealized version of what it should look like. The reality, I think, is that the creative process is messy, and that was something we fed into Muse: sort of embracing that a little bit. 00:16:46 - Speaker 1: I think it's critical that you all not only experience that ethnographically but also personally, that you have this deep personal experience of that process. Otherwise I fear it's too detached.
The insight from the last year that I'm most excited about is this nugget in the middle of the paper you referenced, Adam. I call it the parable of the Hindu-Arabic numerals. I hope you don't mind if I recap it here, because it seems to bear on this. It's the observation that if you are the Roman royal accountant and you're struggling through these tables of numbers, finding it onerous and taxing and error-prone, imagine if there were a Roman IDEO and you could go to them and say: hey, please help me with my accounting process, please redesign this. IDEO's process is pretty amazing in a lot of ways. They've helped make a lot of really powerful products, and they have this really interesting process where they go and embed: they will sit with the accounting departments and interview extensively, as you talked about interviewing people about their creative process, and really try to internalize it. They'll do all this synthesis and diagramming and come up with words to describe what people are doing, and it's all great. But I think there's just no way that Hindu-Arabic numerals would be the result of that process if what you're starting with is Roman numerals, because the transition requires the deep insights of a mathematician and also the deep insights of a designer. Just for instance, place value: the notion that if I have a 6 and it appears in the rightmost spot, it's a ones digit, but if it appears in the second-rightmost spot, that 6 really means 60, and yet you can still perform the same fundamental operations on it, with addition and so on. It still works the same, but it has this alternate interpretation of being 60, because it's in the tens place. That is a profound mathematical insight that depends on deep intuition about the laws of commutativity and distributivity.
It's not something that somebody just doing some ethnographic research in the field is going to come up with; yet simultaneously, it's also not something that most mathematicians are going to come up with. And so it's a great example of how you really have to have those people on the same team. 00:18:57 - Speaker 2: That is a great example of the domain knowledge, and I wonder if it connects to something. I feel like I see the trend that people with design as a skill set are more often drawn to what I would call consumer or end-user things. They're more interested in working on social media: let me get a job at Instagram or Facebook or something like that. And I wonder if that's because then they only need to be an expert in the design domain. If they're working on something for an end user that's not really a specific domain, you don't need that knowledge; the things you need to understand about the problem space of Instagram are not deep, specialized professional knowledge. It's just being a person with a smartphone who likes to take photos and post them on the internet. 00:19:40 - Speaker 1: They can certainly be a lot more successful in that way. People are sometimes surprised that Apple doesn't really engage in anything that looks like design research, and here I use that phrase to mean the ethnography you're describing: user interviews, the walls full of sticky notes where you're trying to describe user behavior and summarize user quotes. The Apple designers don't really do that. But they're primarily designing products that solve problems in their own lives. I use email, so let me make this email client a little nicer; they can do that. But I think as soon as you leave that domain, things start getting hard. Take Apple iBooks: there aren't a lot of really serious readers on the design team.
I think that's part of why Apple iBooks is not good. Or the various attempts at social music platforms: that's something that requires a set of ideas that have been pursued by various products, a kind of landscape review, and understanding people's social interactions really deeply, and that's also not part of the process. The Instagram designers, I think, are doing something the Apple designers aren't: they're talking to users a lot about how they feel when they're interacting socially, and that's a piece that has always been missing from Apple's process. But to your point, the goal of taking and sharing photos isn't foreign to them; that's something they already do themselves. 00:20:52 - Speaker 2: Well, we're already pretty far into it here, but I feel like I should stick to our format, which is introducing the topic. Maybe I'll do that here. Andy, you suggested this one, which is environments for idea development, particularly idea development over time. I thought it might be interesting to compare what that phrase brings to mind for each of us. 00:21:12 - Speaker 1: Sure. One of the hobby horses I've been thinking about recently: I've been reading the literature on deliberate practice. Ericsson is maybe the most prominent individual there, and there's this extensive research on the practices of dancers, musicians, and athletes, who have these very formal and intense preparation and practice structures that stretch from youth into eminence. A touring international pianist is still working on these fundamental skills and activities. And I think it's fascinating that, by contrast, knowledge workers really don't seem to take their fundamental skills all that seriously, insofar as improving them in a deliberate, daily, ongoing way. 00:21:51 - Speaker 2: Yeah, I'd be curious to even just enumerate what we think are some of the foundational or core skills for a knowledge worker.
00:21:59 - Speaker 1: I was about to try to do that, because I think it actually connects to this phrase. I'm sure that y'all could add some more, but I think reading effectively is one of them; writing and communicating effectively is another. But taking an inkling and developing it over time effectively seems like another really important part of creative work. And that's what made me suggest the topic: if I speak to people and ask them, hey, so this kind of interesting notion comes out of a conversation, and you think it might be worth pursuing, then what? People's answers are not good, you know. People do come up with things; they manage to develop ideas in spite of this. But it's clear that this is very haphazard, and it doesn't feel haphazard in a good way. People will say things like: well, maybe I write it down in my notebook. It's like: well, and then what? Well, maybe later I'll flip back through and see it. No, no you won't. Or, you can schedule time, you can put aside time to think about that idea, and maybe if it's a really important idea you'll do that. But you won't for, you know, something cool that comes out of a conversation that seems like it might connect to something later. There just doesn't seem to be an effective, concrete practice for taking day-to-day insights and accumulating them, rolling them up into a snowball of novel ideas over time, except insofar as they kind of happen to accumulate in your awareness. 00:23:17 - Speaker 2: Yeah, that makes sense, and it obviously connects very well to the Muse story. For me, because of this product that I've been using in the process of our team developing it, Muse now represents the place I go to do my deepest thinking.
There's almost, not quite a ritual, but let's say that when I go to make a Muse board for something I feel I need to do a deep dive on, I know I'm really getting into it. It signals that to myself. Sometimes it's an idea I'm excited to explore, exactly what you described: the team is having a conversation, something serendipitously comes up, I think I should really dig in on that, I think there's something there, and I put it in my notes to do so. That can be a fun, exciting opening of a new door, opening a fun Pandora's box kind of thing. But it can also be the other way around: something important to research or understand deeply that maybe is a problem in my personal life, like a government paperwork thing, or something like that. And I just know: OK, I'm going to really get into it. This is not shrugging it off. This is not quickly jotting down a couple of notes in my notebook and moving on. By creating this board, I'm kind of mentally making myself a commitment to follow this rabbit hole as deep as it goes, until I feel like I have my head around the problem or I've solved it. Which is sort of an interesting mental effect that the product seems to have on me. 00:24:36 - Speaker 1: It's really interesting. Can I ask the "and then what?" Something comes up in a team meeting, and so you add it to the Muse board. What's the and-then-what? How does that idea grow? 00:24:45 - Speaker 2: Yeah. Well, importantly, I wouldn't add it straight to Muse from the meeting. I would put it into my kind of inbox, GTD style. It's the same list where I put down, you know, we're out of milk, get more; it's just little notes.
Another way I'll think of it sometimes in team meetings is realizing we need an internal memo to pull together diverse thoughts on the topic, really articulating what the problem is and trying to lay it all out, not just for my own thinking but so we can all be on the same literal page about something. Particularly, maybe, a long-ongoing problem where there are people who weren't on the team before and don't have some of the past context, and you want to put it all together. So the "then what" for me is deciding I want to devote a chunk of time to this: maybe it's 20 minutes, maybe it's an hour, maybe it's more, to really dig in, to face whatever this is head-on and see where it leads me. Maybe it's something like an idea for a new product feature, for example, which again tends to be more on the fun side of things, and then there's this whole process around assembling prior art, getting together some ideas, sketching some things, and all that kind of stuff. The output varies. Sometimes there's a clear insight: oh, we should do X. It's a decision, basically, and then I will go and take action on it. But other times it's realizing: wow, this is a much deeper hole than I thought, and it needs more thought, or it needs more whatever. And then maybe, if it's a team activity, I want to bring it back to the team and say: I thought I could think about this briefly, have a solution, and then do it, but actually it's a lot deeper than that. What do we want to do? So I think it's just understanding, or not quite enlightenment, but getting to this new place of understanding about whatever the thing is, and then that in turn implies a next action.
00:26:38 - Speaker 1: One of the questions I've been exploring in this space is what to do when it's not really possible to make a lot of progress in one session. Talking with people about their practices, one common approach I hear relates to what I just heard you articulate: something reaches a threshold of interestingness or apparent importance, and at that point you're going to carve out some time and sit down and really think about the thing. That's cool, and sometimes that is enough. But I notice that for a lot of the most interesting ideas I explore, one session often doesn't yield all that much. In fact, often it doesn't feel like that session produced a significant increment at all. You're just kind of manipulating the terms of the equation, so to speak, getting a better handle on it. And so one element that I notice often seems to be lacking from people's processes, because it's hard to orchestrate, is marination. It seems like sometimes what ideas need is just consistently returning to them over time and asking: what do I have that's new to say about this difficult question? OK, I can say a few sentences about it that seem kind of new, it's interesting, but it's still not really something yet. So I'm going to leave this for 2 weeks, and I'm going to come back: what do I have that's new to say about this? And maybe if you do that 6 times, something starts to emerge. That seems really difficult to orchestrate. 00:27:58 - Speaker 2: It makes me think of a great article called Solitude and Leadership, which is basically describing how you need to disconnect from the opinions and influence of others in order to have original thoughts. One way the author talks about it is that in that first session, like you described, everything that you've come up with and written
down is really, in a way, just the thoughts of others that you're echoing back. And that's fine; that's a starting place. But to truly get to something original or new or potentially breakthrough, you need to push past it. He claims that he can sense when he's crossed from the more mundane thinking into the more, let's say, visionary, for lack of a better word, or just original thinking, when the thoughts start to not just be an echo of what he's read or seen or heard someplace else. And that always requires multiple sessions. 00:28:49 - Speaker 3: I think this also points to the idea that you can't always expect to sit down in a series of sessions and, one step after another, produce an idea all in the forefront of your mind. When we think about thinking and ideas and tools for thought, we have this very conscious perception of it: I'm sitting down, I'm going to come up with something that's better than Roman numerals, and at the end of the session I'll have, you know, Arabic numerals. I think that's just not how it usually works. Sometimes you can get away with that, but often it's more of, like you said, marinating on stuff. It's becoming fodder for your mind, and then in the background you're having an unconscious process of ideas connecting, forming, inspiration, and when you come into a later session you might be better prepared to have a new idea. So I think, like you said, it's really important to find ways for the tool to support that marination: chewing, ruminating, going over, rearranging, without the expectation that you're going to be explicitly building up your new idea. 00:29:39 - Speaker 1: It's really easy for tools to accidentally build walls against that. One of my favorite novel reading tools is LiquidText, a totally fascinating set of interactions for manipulating PDFs, excerpts, things like that.
One very interesting design decision is that, by default, documents are kind of a workspace: you extract excerpts into this canvas and you can manipulate them, but documents are separate from each other in that sense. So you can have a set of insights about a document, but if you're going to have inter-document insights, that'll depend on your memory. Now, there's a fix for that, which is that you can create multi-document workspaces. You can say: well, this is my thinking about this problem, bring several PDFs into it, and make your notes and your excerpts and whatever. That's cool, because then you can have insights between them. But it still requires this intentionality of saying: cool, I'm going to bring that PDF into this workspace, and then the notes and excerpts live there. And if you're working on several interesting questions and ideas at once, it's not at all clear that you're going to set up the interactions between those workspaces that might be necessary. 00:30:40 - Speaker 2: Yeah, LiquidText is great, but coming back to the environments for idea development, creating room for serendipity without just total chaos is maybe a subtle and tricky thing. 00:30:53 - Speaker 3: I've thought about ways, by the way, to do this not subtly. One notion I have for an experiment is the idea collider. You have something like your notes or your wiki pages, and every morning it just gives you two random pages and says: write a third page which is the synthesis of these two things. Oh cool. I'd love for someone to do that experiment. Have you tried it? No, no, it's kind of an open request for research. So if anyone listening wants to develop it, let us know. That's great. 00:31:14 - Speaker 1: It connects to a set of ideas that I've been exploring for the last year or so. I'll share it; maybe that'll generate some more.
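The idea collider described just above could be prototyped in a few lines. This is a hypothetical sketch: the note store is just a dict keyed by title, where a real version would draw from your actual wiki or notes app:

```python
# A sketch of the "idea collider" experiment: each morning, draw two
# random notes and prompt yourself to write a third note synthesizing them.
# The note store and titles here are illustrative, not a real system.
import random

def idea_collider(notes: dict, rng: random.Random) -> tuple:
    """Pick two distinct note titles to collide this morning."""
    a, b = rng.sample(sorted(notes), 2)  # sort for a deterministic population
    return a, b

notes = {
    "Spaced repetition": "Memory stabilizes under expanding-interval review.",
    "Roman numerals": "Notation constrains which operations feel easy.",
    "Marination": "Ideas grow across many short sessions, not one sitting.",
}
pair = idea_collider(notes, random.Random(0))
print(f"This morning: write a third note synthesizing {pair[0]} + {pair[1]}")
```

Seeding the random generator by date would give everyone on a shared wiki the same daily collision, one of many variations the "open request for research" leaves open.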
I've been doing this kind of strange note-taking practice that really came out of trying to solve this problem of: how can I make marination effective? How can I make a process where I can do something every morning and cause there to be increments in my understanding of some ideas or some problems I'm trying to solve? So I have something that's kind of like a personal wiki. The technology is not really important; it's the practice that's important. The practice is that I try to write notes that are densely linked to each other, where each note is about a particular atomic idea. Sometimes the note is a question, like: what are the most important design considerations when writing prompts for the mnemonic medium, like Quantum Country? And sometimes the children of that note are declarative statements, like: spaced repetition memory prompts should focus on one idea. Then that note will accumulate, not just in one session but over many sessions, all of the things that I have to say about that. Sometimes I'll learn that the title was wrong: oh, actually they shouldn't always focus on one idea, because sometimes it's really good for these memory prompts to synthesize multiple ideas. These things evolve over time; a term that some have used is gardening. I call these evergreen notes, because they're trying not to be fleeting notes, like notes from a meeting that you'll never really return to, but rather notes that you water and which grow over time. And just to get back to your idea, Mark: one of the practices I've found most rewarding here is this notion of a writing inbox, where when something seems interesting or juicy, I have a place for it to go. I start my writing most mornings by looking at that writing inbox and treating
those as a set of provocations or prompts and asking, which of these things do I feel like writing about this morning? In this way, ideas which seem promising, even if there’s already a lot written about them, I can throw back in the inbox, and then they’ll appear for consideration on upcoming mornings. But I think that inbox gets even more powerful if you start to introduce fancier orchestration methodologies into it. So one possible orchestration methodology is the one that you just mentioned, where maybe the inbox this morning contains these pairs of notes, so it’s going to kind of combinatorially walk my tree here. But another thing that seems pretty interesting and that I’ve been playing with is this idea that, say I had an interesting idea in a conversation with someone. I don’t really know what to do with it yet. It still feels promising; I don’t want to lose it, but I also don’t really have anything more to say about it right now. So I can kind of snooze it for a while. It can go out of my writing inbox for a while, like the snooze that’s familiar from Gmail, and then it’ll come back in a while. But a modification on the snoozing functionality that I’ve been finding very interesting is the parameterless snooze. Normally you have to say, come back in a week. I think that kind of overhead is unhelpful and is often counterproductive, and it’s better to just say, no, not today. And then, well, if I’ve said that 10 times, then probably this should just go away for a long time. 00:33:57 - Speaker 3: Yeah, does it exponentially back off in reminding you? I think, by the way, that snoozing or moving things out of view is really important. It’s actually a big difference from just having a big pile of to-dos, because there’s a limit to how many things you can have in your head at one time. And often we have new ideas that we want to bring in, but there’s no space.
And the only way to do that is to actually kick stuff out from your working memory, and something like a snooze can help with that. 00:34:21 - Speaker 1: Muse is really interesting in this regard, because the constraint of the screen as a surface encourages users to keep stuff to the quantity which they can see at a reasonable zoom scale on a screen at a particular time. I like that part of the design. 00:34:36 - Speaker 2: Yeah, well, certainly constraints are potentially great for creativity. Post-it notes are one that I reliably come back to, both in my own work but also as this kind of very workhorse tool-for-thought, analog-world thing, and part of it is you just can only fit so much. You can also use index cards for this as well, yeah, maybe with an index card and a Sharpie and that sort of limited amount that can be on each card. Of course, you can have any number of cards. So yeah, obviously with Muse you’ve got the expanding boards and you’ve got the sort of 3D nesting, but certainly I feel a desire to make what’s on the screen at the time kind of fit together as a collection of things that feed each other, and when I start to have a section of the board that starts to feel like a rabbit trail, then I want to make a sub-board, and so it feels like you’re going deeper down the rabbit hole or something like that. 00:35:30 - Speaker 1: One of the things I wanted to ask you about, and it’s kind of how Muse relates to this note-writing practice I’ve been doing, is the practices of refactoring or revision, polishing, gardening. Something that’s been very useful in my practice is having ways to think about writing at different levels of fidelity. So I’ll have a place where daily notes go that are quite fleeting, and scraps will start there. And when something is titleable, there’s some atomic unit that I can point to and say, OK, that’s the thing.
Now it can get its own note and it can be linked to from places. But almost, it’s almost like the goal over time is for these things to cohere and accrete into larger elements. So a note that’s a single claim is not that useful; it’s kind of dross. But eventually some number of notes that make a claim will become like a theory, or a noun phrase, a coinage or something. And that larger note that contains references to all these constituents feels like an increment that’s meaningful. And so the pressure in the system to, over time, refine and refactor, to create ever higher-order abstractions, is very helpful in my writing practice, and I’m curious how you think about that. 00:36:38 - Speaker 3: I would say that Muse supports that, but doesn’t require it. So you can certainly use Muse as a persistent corpus that you’re accumulating over time, building up to these pristine and complete notes that are basically publishable. But you can also use it in complement with other tools. So maybe you’re doing it in your head, maybe you’re writing stuff out in Notion, maybe you’re using an authoring tool like Final Cut Pro. It’s more flexible and multi-purpose, maybe. It’s very important; it was a very explicit design decision that boards and cards in general do not require titles. I think that one of the kind of original sins of file systems is that in order for something to exist, it has to have a name, but a lot of things just aren’t named yet. 00:37:11 - Speaker 2: That was one of our design goals with Heroku, that you’d be able to put an application online without giving it a name. 00:37:17 - Speaker 1: Oh, that’s great. I didn’t know that. 00:37:19 - Speaker 2: That’s wonderful. In the original implementation, apps by default were untitled plus some long number. 00:37:25 - Speaker 1: They have cute names. 00:37:26 - Speaker 2: I recall.
This was, I think, one of the really lovely pieces of work my partner there, James Lindenbaum, did, which is what we now call haiku names, which I think have been fairly widely adopted: taking an adjective and a noun that were carefully selected so that they go together and convey a certain vibe that connected to our brand or whatever, plus we eventually had to add some numbers on the end just because there were enough of them. Um, but the idea is something that looks nice. It doesn’t look unfinished, it doesn’t look like "untitled", but it also doesn’t require you to figure out: wait, do I want to call this my wiki, or is it the team wiki, or is it Team Wiki 2? Because it’s an idea I wanna pursue, an unfinished thing, and I don’t quite know what it’s gonna be yet. I have this hunch that I’m exploring, and then, yeah, you get all hung up on the name. And for sure I see the file system world of things having kind of that same problem. Names are important, we know that, we sense that, and so if you have to give it a name to even get started on whatever it is you’re creating, that can be a bit of a hold-up. Now, it might be nice to title something or label it later. Muse has labels for that reason; obviously you can rename your Heroku app; there are lots of other examples of that. But being able to just start with, it doesn’t have a name, and eventually the act of naming it is you sort of upgrading it from random tidbit, random idea, random tidbit of knowledge that may not amount to anything, to: OK, this is a thing now. 00:38:47 - Speaker 1: Yeah, I really like this word upgrade. It accesses a design direction or a design space that I’m curious about, with this taxonomy of notes, taxonomy of creative work. Taxonomy is too rigid a word; it’s obviously much more fluid than that. Almost the ceremony of giving something a name, giving it
a coinage. And the object feels more complete when it has a name, almost like it wants to have a name. It’s OK with not having a name, but it’s in a happier state when it has a name. 00:39:19 - Speaker 3: This is a feeling that resonates very strongly with me. When I’m doing a project, a huge milestone is when I come up with a good name. And I don’t know why; it just feels so much more real when that happens. 00:39:29 - Speaker 1: In designing tools for thought in general, I think this is a powerful practice: to avoid the tyranny of formality by saying, OK, there are 6 types of notes, there’s the fleeting note, there’s the claim note. Like, that’s terrible, screw that. But you can still have an opinion about process. People ask me, what software do you use for your note-taking? And it’s totally the wrong question. What matters is the methodology. But having the methodology in mind, I can’t readily communicate it or install it into others' minds except by having them read thousands of words of notes. And one of the things that tools for thought can do is to encourage a particular methodology, not by imposing formal structure, but by implying certain kinds of structure, by making, for instance, objects on a canvas feel somehow more complete when they have a title. You’re not imposing the necessity of a title, but you’re suggesting that one’s work should perhaps culminate in a title. 00:40:24 - Speaker 2: My creative process is always heavily oriented around finding patterns, which is why it’s important for me to have a lot of, I guess, raw material and input. You can call it data, but it might be something like user interviews, or it might be looking at some other products in the space that I want to compete with or improve upon, or something like that.
Um, it might be a series of bug reports, and I’m trying to get to the root of what this is in some kind of complex system. In order to do that, you know, it’s been very difficult to track down, but if I could somehow look at all of it together and extract out what the pattern is here, that’s the place where insights come from for me, I glean. That’s not necessarily the case for everyone, but for me it is this process, and if I can somehow get everything together, get all the relevant stuff in one place, that’s half the battle. 00:41:12 - Speaker 3: Uh, one last idea on tools for thought before we transition into the meta. The mnemonic medium can be thought of as a way to optimally position you to remember things. There’s this point where, as a learner, you’re best positioned to recall vocabulary phrases: just as you’re about to forget, basically, you get prompted again, and as that happens more and more, those intervals become longer, and with a system like spaced repetition, you get this software-based support to help you remember things. I’m curious if you think that technique can be applied to skills. This is an idea that I’m really intrigued by, because yes, there are a lot of interesting things that are facts and figures, but there are also a lot of things that are skills and abilities, and I wonder if we could apply the same technique to learning how to play chess or how to use a video editing program or something like that. 00:41:57 - Speaker 1: I do think that’s possible. I’ve spent a few years experimenting with it now, and so has my colleague Michael, and it begins with this observation that it’s possible to use spaced repetition memory systems for more than just recall. So the typical way to use them is like, OK, what’s this term, what’s this definition? And that’s cool. I mean, that’s useful. But you can also use them for, for instance, applying an idea.
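The expanding-interval idea described above (prompt the learner just before they would forget, with the gap growing after each success) can be sketched as a minimal scheduler. This is a toy in the spirit of Leitner-style systems; real mnemonic-medium or SM-2 schedulers use more elaborate rules, and the interval progression here is an assumption:

```python
import datetime

INTERVALS_DAYS = [1, 3, 7, 14, 30, 60]  # assumed expanding schedule

def next_review(level: int, remembered: bool,
                today: datetime.date) -> tuple[int, datetime.date]:
    """Advance a prompt one level on success; reset to the start on failure."""
    if remembered:
        level = min(level + 1, len(INTERVALS_DAYS) - 1)  # cap at the top level
    else:
        level = 0  # forgetting sends the prompt back to frequent review
    return level, today + datetime.timedelta(days=INTERVALS_DAYS[level])
```

The same loop works whether the "prompt" is a recall card or one of the micro-tasks discussed next; only the grading step differs.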
And in fact, in Quantum Country, in the final chapter, we have these questions that look a little bit more like lightweight exercises from a textbook or something like that, that share the property of the recall prompts that you can do them in your head; they’re quite rapid, they’re semi-fungible, they’re lightweight, but they’re things like: what would the output of this circuit be? And these are different from the recall prompts in that they’re not the same every time you see them. So you’re actively not trying to remember the answer, but you’re trying to go through the work of producing the answer. You can also write conceptual prompts. Concepts distinguish themselves from declarative knowledge by focusing on how things relate to each other, on systems and structures. You can ask questions like, for instance, when I was studying the history of philosophy: contrast positivism and existentialism. Now we’re making a connection. But in terms of developing a skill, maybe you want to learn to think in a deontological fashion or something. So you can also write a prompt that says: take a decision that you made this morning, and it could be as simple as deciding not to exercise when you normally would have, and justify it or condemn it from a deontological perspective. And so this is a task. So zooming out, I think spaced repetition becomes most powerful when we think about the items not as flashcards but as micro-tasks, and what the system is doing is batching
the transaction costs which would normally be associated with orchestrating all of these tiny micro-tasks that you could use to practice a skill or develop a worldview or self-author in some way, and putting them together so you can say: I’m going to do 10 minutes of my self-betterment session, very broadly construed, and that’s going to involve remembering certain chess moves and also practicing this line of forcing moves in chess and also reflecting on logical positivism in a certain way. And so on and so forth. 00:44:06 - Speaker 3: Yeah, that’s really interesting, and I’m wondering if you can extend it even further. So I think one element of spaced repetition is it’s kind of helping you with the mechanics: OK, you commit to spending 10 minutes a day on this problem, and we’re going to use the software system to make that really productive. You’re gonna see a lot of cards, for example. But I think another element is basically identifying what you need to get better at. In the case of memory, it’s pretty straightforward: the questions that you answered incorrectly last time, or something like that, are the ones that you need to see now. But in the case of chess, for example, it might be that your endgame is weak, or you don’t know how to handle attacking knights or something, and that is potentially much harder to identify programmatically. But it seems like it’s also within reach. And so I’m curious about systems that both help you mechanically but also, in kind of the same system, identify your weaknesses and where you can improve. 00:44:48 - Speaker 1: There’s a lengthy history of people trying to solve that particular problem, going back, I think, now almost 5 decades. For me, the most promising kind of subfield or sub-approach is called intelligent tutoring systems. There are a few systems in the wild that have been commercially successful. The most notable is called ALEKS.
It’s an algebra tutor which has some fairly clever mechanics for identifying your weak points and then focusing practice time on those. I would say that none of these systems has been wildly successful, and the field as a whole has not been wildly successful. I don’t fully understand why. I’d like to spend some time studying that, because it seems like a somewhat obvious progression once you get into the spaced repetition space of trying to schedule stuff more efficiently, or construct cards more effectively, perhaps dynamically. I have read some papers about people in the field’s theories about why it hasn’t worked very well. They center on things like the non-regularity of topics. So an intelligent tutoring system on algebra will often share very little in common in its implementation with an intelligent tutoring system on geometry. They can share some kind of fundamental modeling-the-learner primitive type stuff, but the representation of the ontology is, first off, very difficult to construct, and second off, very difficult to systematize and encode in a consistent way across fields. My personal hunch, and again, I haven’t read deeply into this, but my hunch is that part of why these systems have not been more effective in practice is that they’re universally incredibly dreary. They have this intense feeling of being in a Skinner box, like you’re a rat in a wheel. You are being fed these morsels of problem, and you swallow, and then, OK, here’s another morsel, do this one next. And I think it may be possible to recuperate the underlying conceptual ideas without the interaction framework that they all employ. 00:46:39 - Speaker 3: Yeah, very interesting. I’ll check out that literature. 00:46:40 - Speaker 2: So if we come to the meta side of how tools for thought get developed.
We all have some familiarity with the human-computer interaction academic field and have dabbled in that in various ways, even if none of us are career academics. Then Andy, you ran a corporate R&D lab, which is sort of one commercial approach to tackling innovation. Mark and I were part of an independent research lab, which was an experiment in that, and then all of us in various ways have been part of either classic Silicon Valley startups or bigger innovative companies like Apple. And despite all of these, I feel like we still don’t have the level of attention, funding, and just people who are passionate about, yeah, computers and more broadly information tools that can help us be smarter, more thoughtful, make better decisions, be self-actualized, all of that bicycle-for-the-mind stuff. I’m still trying to figure out why that is. What’s the gap there? 00:47:41 - Speaker 1: This is an ongoing mystery and a topic for discovery and discussion, because in my mind, the win condition for my work is not creating a particular tool for thought that’s really powerful, but causing this to be a field. I view it as not a field right now. It’s kind of this proto-field: some people doing stuff. We don’t have the Maxwell’s equations. We don’t have a powerful practice. But it kind of wants to be a field. I would really like it to be a field. 00:48:04 - Speaker 2: And in order to get there, no one graduates from design school and says, I’m going to go into tools for thought. 00:48:08 - Speaker 1: Well, I mean, some people have that intention, but they mostly don’t, and they mostly can’t. 00:48:11 - Speaker 2: Yeah, can’t is a really good point. We got a lot of emails at Ink & Switch with people saying, hey, I’m about to graduate from this design school, or I’m working in a startup over here, how can I get into this field? And I kind of said, well, what field? I didn’t have anything like an answer for them.
00:48:27 - Speaker 1: I don’t think there is a good answer. Almost everybody who’s been successful (it’s difficult actually to say that anybody’s been terribly successful recently in this space, but anybody who’s had even moderate success) has something weird going on. They’re independently wealthy, or they have some cash cow that they’re milking in order to let them do this essentially economically unproductive activity, or they have a whole bunch of connections that they’re using. I have been helped in my thinking on this recently by reading Nadia Eghbal’s new book Working in Public, which analyzes the economics of open source production, and there are some connections between the challenges of trying to provision tools-for-thought work and the challenges of trying to provision work on open source. They both seem, from an outside view, to be kind of economically unproductive activities. Nadia’s insight that really helped me, and that seems to have some analogs in tools for thought, is that it makes sense to separate the way that we think about the economic model of consumption of open source from the economic model of production of open source. So when one consumes open source software, that is a non-excludable resource: the code is just, you know, available online; you can’t readily charge tolls for it. It’s also non-rivalrous: you downloading the code doesn’t really make it more costly for me to download the code. There’s very near zero marginal cost. The analog in tools for thought is: once I publish that paper on the great idea I had in Muse, this is a non-excludable resource out there, and it’s also mostly non-rivalrous; you know, the 100th person consuming that paper and consuming those ideas doesn’t really cost anything different from the first person. But the production looks pretty different. It’s a small coterie of people.
It’s perhaps excludable, and there are some rivalrous elements. In open source, for instance, Nadia characterizes it as being about attention. The scarce resource for the open source maintainer is their attention. They’re being bombarded by these requests, and well-meaning people trying to contribute code, and so on and so forth, and it’s very draining, and this actually makes the resource rivalrous, because the 1,000th contributor to the repository doesn’t cost zero additionally relative to the 100th contributor. And so one way to think about this that she suggests for open source, which I think applies a bit for tools for thought and relates to the strategy that I’m pursuing now, is that we should think about funding production rather than funding consumption. Normally with media goods, we think about funding consumption. Like, you go to the store and you buy the shrink-wrapped package of software, and so you’re buying a good, you’re buying an artifact. And when we think about commercializing or monetizing software, likewise, we think about the good or the artifact, or perhaps the services associated with it in the modern world, like I’m going to sell support services if I’m Red Hat or something, or modern models might sell cloud services. But a different way to think about all this is to think about the verb instead of the noun: funding the process of production rather than funding the output of the production. This is more common in the arts, somewhat more familiar in the arts. Like, if there’s a musician you really like, your contribution from buying their albums or whatever is probably not earning them very much money, but increasingly it’s a popular thing to be part of their fan club or sponsor them or something like this. And when you do that, when you sponsor the musician
00:00:00 - Speaker 1: Often when you ask an expert who’s accumulated a large amount of experiential data around a problem area, they’re fabricating an answer. They actually have way more information than they could possibly convert into a verbal symbolic language, and the inability to articulate something doesn’t mean that there isn’t knowledge there, right? Taste is real and experience is real, and you can have a lot of knowledge that can be extremely difficult to articulate. 00:00:31 - Speaker 2: Hello and welcome to Meta Muse. Muse is a tool for deep work on iPad and Mac, but this podcast isn’t about Muse the product. It’s about the small team and the big ideas behind it. I’m Adam Wiggins here with my colleague Mark McGranaghan. Hey Adam. And joined today by our guest Conor White-Sullivan of Roam Research. 00:00:49 - Speaker 1: Thanks for having me on. 00:00:49 - Speaker 2: And Conor, I happen to know you have a dog companion. It’s a husky, right? He is. 00:00:55 - Speaker 1: One thing I like about them is that they’re not bred to be obedient dogs, because you didn’t want somebody who was an inexperienced sled driver to drive the whole team out onto thin ice. So the dogs sort of take a light suggestion, which is one of the reasons they’re particularly hard for first-time owners. 00:01:13 - Speaker 2: I feel like taking a light suggestion is also good training for being a manager of software engineers and designers. 00:01:20 - Speaker 1: Or maybe a parent too, but uh, yes, a parent of a toddler, absolutely. 00:01:23 - Speaker 2: And I think our audience probably knows who you are and knows about Roam, because you’re definitely a notable figure in the tools for thought scene that we consider ourselves part of, but for those that aren’t familiar, maybe you could give us a brief introduction. 00:01:39 - Speaker 1: How would you introduce Roam?
00:01:42 - Speaker 2: I consider you as having created not only the kind of modern phenomenon of tools for thought, which obviously is a concept that extends well back in time (indeed, Mark and I did a whole podcast on it), but in terms of popularizing it in the last few years. It’s really, I think, opened the aperture for a lot of tools, including us and others, to say there’s more to productivity software than, I don’t know, email and note-taking and calendars, and that’s what I think of as the collective kind of tools-for-thought scene. And then the specifics of the product: I think it really is all about the value of linking thoughts together, and bringing things that I think of as being part of, obviously, the internet, part of things that have been in our knowledge tools in different ways over the years, but putting them together into this kind of notes and personal memory and personal thinking space in just a new way that really struck a chord with people, indeed to the point that I think it’s been widely copied now. And I would say you basically invented, or at least pioneered, a whole new category of software, which is quite a special thing to do in one’s career, I would say. 00:02:46 - Speaker 1: The thing that is interesting to me is that part of my frustration in the last few years is that none of the folks who have supposedly copied us have copied the things that I think are actually important, or are even indicative of the direction of why I built Roam or what we’re aiming for. I think of writing as a tool for thinking. We’ve talked about this in past discussions, just one on one. I don’t have a great extended working memory.
Like, I’ve worked with people who are actually geniuses, who are able to visualize complex systems in their head, who are able to, you know, recall any piece of information they need, but I have a hard time just laying out all the steps of a problem and trying to think through all the variables that are there, and just trying to keep my head straight, especially around things like software design, let alone systems design or building a team, or any kind of complex decision. So, Roam, what you see right now as a product, is something that did largely evolve as a sort of cognitive prosthetic for me. It’s largely how I handled my ADHD and trying to learn, as an autodidact, all of these things that I needed to do to be able to build Roam. I’m a self-taught engineer, self-taught designer, self-taught manager, maybe not good at any of these things, but I had to learn how to fundraise, had to learn how to do marketing. Like, I studied none of these things, had no formal training in anything, and I had to figure out how to get good enough at a lot of things at the same time, more or less, or in various sequences. So, I built Roam as a tool for helping me to organize my own learning. I’ve had very severe ADHD for my whole life, and it runs in my family. I think, Mark, I might have heard you say on a podcast, or maybe it was some other colleague of yours that was on, that they were characteristically unemployable or something. Well, I was fortunate for startups to exist, because I don’t think I could have held down anything even remotely resembling a white-collar job for any amount of time if I had not been able to build my own companies, where I couldn’t get fired. So, a lot of Roam was built as a tool for me to be able to just organize my own thinking as I was thinking.
So, I think of it first and foremost as an extension of my working memory, so that I can zoom in, eliminate all the extraneous things, have a clear workspace, but then at any point I can pick up pieces. I can break problems down into smaller chunks and know that I will have the relevant information available the next time I’m able to pick it up, which might be some indefinite point in the future. So, Roam is a tool for writing, but it’s also, and I’ll talk a little bit about this, it’s a little hard to fully explain, especially what we’ve been doing over the last few years, if you don’t know the context of why I started Roam, and what it’s trying to get to, and why I even got interested in software in the first place. But I don’t want to tangent too far yet. So yeah, it’s a different medium for writing and thinking and trying to organize your brain so that you can think thoughts. The way I said it before is like this: there are things you can’t see with the naked eye that you can see with a telescope, and there are things that you can’t hear but, you know, if you’ve got a powerful microphone, you can hear them, and I think that there are thoughts that we can’t think unless we’ve got some sort of cognitive aids. And Bret Victor has talked a lot about this. I know you guys are probably fans of his work. I’d love to chat a little bit about some of those ideas, but I think that a lot of our diagramming tools, mathematical notations, programming languages are all cognitive prosthetics that allow you to think thoughts you couldn’t otherwise think. And Roam is also a programming environment. You can write code and execute it in Roam.
We’re trying to create a whole new kind of medium for expressing your thoughts, first to yourself, but then eventually to be able to create a communication medium that can allow for a different kind of coordination and knowledge transfer, and a new kind of collective action, collective thinking, collective intelligence, and that’s the real thing that has been motivating me for at least the last 15 years. Which kind of leads me into the questions that I want to ask you guys. 00:07:07 - Speaker 2: Well, please do. I have something to say about what you just said. It’s very inspiring, especially because in many ways you’re not talking about the specific features, or exactly how this writing slash thinking slash notes slash memory tool differs from what comes before, but this underlying why. Which is exactly as you said with Bret Victor; I think Andy Matuschak talks about this a bit in his work, talking about, for example, Roman numerals versus Arabic numerals, and how that allowed us to, yeah, essentially do new things, think new thoughts, do new kinds of math. And the computing medium obviously has all this potential to open that up, but to date, even as far into this computer thing as we are, in many ways we are just transliterating. OK, I’ve got a sketchbook; OK, now that I’ve got an iPad, let me make a direct transliteration of what’s on paper. I’ve got a typewriter; let me turn that into a word processor, and so forth. And I would say most notes programs, even pretty sophisticated ones, I don’t know, you take Evernote in its prime 10 years ago, it can obviously do a lot of things that a paper filing system can’t do, but in the end it kind of is just that on a computer.
And it seems very clear to me that there’s so much more potential if we truly embrace the dynamic medium of the computer, and there’s probably 1000 different experiments we need to do, and different people will need different things. To your point about what exactly is the right thinking prosthetic: what works for you probably also works for a lot of other people, but maybe not everyone in the world. Different people need different ones, and that’s why I think it’s so important to experiment and break out of our established categories. But I felt like a few years back you couldn’t get past the, again, productivity software: just kind of notes, email, word processors, spreadsheets. And happily the tools for thought scene that you really helped seed, I think, has opened our minds to, like, OK, let’s do some innovation here. 00:08:57 - Speaker 1: Even the idea that you could have end-user customization where people could actually write code. I mean, we got so much pushback when we let people run arbitrary JavaScript inside, and, I mean, rightly so, because it’s also a multiplayer tool. Obviously there are some security concerns, but my entire thesis is, I want to give people power, right? And I know I’m extremely neuro-atypical. And I know that a lot of systems which worked very well for plenty of other people worked horribly for me, right? Schooling being the sort of most obvious one. So I know the feeling of being put into a box, and the box being extremely constraining, and wanting to do more, and needing permission from people who do not want to give it to you. I hate asking for permission. And so, that’s one of the reasons that first I was like, well, Wild West: you wanna run JavaScript, we will give you the ability to completely break everything in your graph.
Like, if you wanna really mess yourself up and just, like, grab some code that you found off the internet and put it in there, and, like, maybe you’ll lose, you know, all your notes because you’ve got some random code in there, I don’t know. Especially in the early days, when it was still getting a small amount of attention, I had a very different attitude than folks with the security mindset of, well, what if hypothetically somebody might be trying to steal my notes? I’m like, you’ve been using the product for a day and a half. Like, I don’t think this is a hard target yet. But I get ahead of myself. I have been really excited to see that proven out, you know, that people now are doing something that was pretty common in things like text editors, Emacs and Vim, where professional programmers are very used to the idea of being able to modify their tools. And if you work in the trades, like, my recreational activity is doing metalworking, you know, I like welding for fun, right? And one thing I like to do is making my own tools and making jigs. And, like, if you’re doing any kind of carpentry, you know, oh, you don’t have the exact right tool for it. Well, if you’ve got an angle grinder and you’ve got a welder and you’ve got some scrap metal, like, you might be able to jury-rig something up that serves the purpose, build a one-off tool. And we haven’t had those for knowledge workers, except for in the domain of computer programming. And I think for people who do other kinds of work, it’s been very exciting to see so many, like, folks who are doctors, who’ve never written a line of code in their life, and they’re able to learn in a weekend enough to build some functionality into Roam that, like, is not my priority. I don’t care about it. It would never occur to me to make it, but it’s ideologically important to me that they not have to get my permission to make the tool do what they want to do.
So here’s the question I’ve got for you guys, which is, when did you guys start caring about computers at all? And what was it that made you care about them? 00:11:54 - Speaker 2: Yeah, I usually peg myself for 8 years old, and I think it was an Atari with 1K of RAM, since I’m old enough that that was the kind of computing. I think we had one of these in our entire school, the elementary school I was in. I don’t know what drew me to it, maybe this is just a classic young nerd thing that you can’t identify, but I like to at least post hoc rationalize it that I saw the potential for creativity and I immediately wanted in. And all you could really do with computers back then was program them, right? Like, you could use Logo, maybe later BASIC, and you’d get in there, and just in the same way that a kid wants to pick up that piece of paper and the crayons and start drawing scribbles, because it’s this form of expression, I saw the same thing in the computer and just was endlessly fascinated with it. What about you, Mark? 00:12:42 - Speaker 3: Yeah, similar story for me, though I did not have the experience of programming very young. I didn’t do any really substantial programming until I was in college. And the specific impetus for me was, I was studying economics, among other things, and I wanted to do agent-based simulations to test out some economic ideas. And so, OK, I got to teach myself Java. And I remember in retrospect how completely terrible that Java program was, just the incredible amount of copying and pasting. You wouldn’t even believe it. But anyways, at that point I got onto that track that Adam was describing, where it’s an incredibly powerful and accessible medium for creating. I’ve always liked creating things, like I did model airplanes and other stuff like that. But there’s actually a pretty narrow set of things you can do that’s both powerful and accessible.
Maybe you are into welding, but as a 19 year old in rural Maine, it’s kind of tough, right? But you can get a computer and do whatever you want. And you don’t need to ask anyone’s permission, and the sky’s the limit. So it’s pretty cool. 00:13:39 - Speaker 1: I want to touch on both of those. Adam, you’re talking about being a nerd. If you can imagine it, I was such a nerd I didn’t have enough friends to play D&D with. Let’s say that, like, I used to play this single player D&D type book. It was like a choose-your-own-adventure. I don’t know, you guys might actually be too old to remember the R.L. Stine-style Choose Your Own Adventure books; those were, like, really big in the 90s. Oh yeah. There was a game called Quest, and it was individual paragraphs, each with a number, and it was like, oh, if you go down the right hallway, go to 232; if you go down the left one, or if you fight the goblin or whatever, go somewhere else. But I remember playing these games all the way through, and then I actually made them multiplayer. I did have two friends who were nerdy enough to indulge me in this for, like, a couple recesses before they were sick of it. But I played the games all the way through and then continued the rule set for the game and just started writing paragraphs at the end of the book to try to, like, keep the game going, because they were originally supposed to publish, like, 10 of these game books, but only 2 of them got published in the US. So in some ways there’s actually something reminiscent of Roam in that sort of backstory. You’re talking about agent-based simulation for economics, Mark. Here’s the next question I’ve got for you guys. What are the first problems, or bigger problems, that you remember an awareness of, or even caring about? 00:15:00 - Speaker 3: I’m gonna give you sort of a half answer here. So there’s certainly problems if I go back in my memory when I was a very impressionable kid, you know, whatever.
The 3rd grade science teacher says, you know, all the turtles are dying, so everyone, you know, goes home and clips all the six-pack plastic things, and, you know, stuff like that. But something that’s still sticking with me is from when I was working in computers. Originally, coming out of college, I had this very unalloyed excitement about the cloud, cloud services in particular. This is before I was even at Heroku. It’s just so powerful to be able to have a hosted service that does everything for you, and the end game is everything moves to that model. And I would say I still think there’s a lot to that. But it was only with the experience of living in a society that has embraced that model that you realize some of the really tricky downsides of it. It’s something I’m still grappling with as someone who works in computers. So we could do a whole episode about that, but that’s one that I’ve definitely thought about. 00:15:53 - Speaker 1: A follow-up question. You guys know the phrase, it’s attributed to Einstein: you can’t solve your current problems with the thinking that got you into them. Yeah. Do you know the exact quote? 00:16:06 - Speaker 2: We can’t solve our problems by using the same kind of thinking we used when we created them. Yeah. Yes. Yeah, and I do think there’s maybe a positive spin on that. You know, one reading is, we had dumb thinking and that led us to be in a bad situation and we need to be less dumb. But another way to put it might be that in moving yourself or your group or society forward with better thinking, well, that creates new problems, like the cloud version there that Mark mentioned, and now you need to solve those new problems, but on net you’re probably better off than where you were before.
It’s just that the idea that anything is going to bring a panacea, a utopia where all your problems are solved and now we don’t need to have new thinking and new solutions and be aware of the downsides of the world we’ve created, that will basically never happen. 00:16:55 - Speaker 3: I think you can even generalize it and say, even if there’s not progress, there’s change. The world is different. There’s no going back, you know, that’s the way it is. The only way out is through. 00:17:06 - Speaker 1: So I thought the potential hope is networks. I politically became really alive when I read Yochai Benkler’s The Wealth of Networks, and, you know, Clay Shirky on organizations and institutions. My life plan was to sell John Deere tractors in Africa, cause it seemed like the coolest job I could do. I was planning on, like, doing a few years in college and then going and being, like, a heavy equipment salesman in Africa, because I wanted to travel, and that looked like a job that would pay for me to travel to really crazy places. And I thought, you know, excavators and tractors were cool. So I was like, yeah, I’ll probably do that. That was, like, my freshman year plan, and then I was like, oh, but actually, we might be at a period of history that is as important as the printing press. 00:17:50 - Speaker 2: Part of the thesis, I guess, of The Wealth of Networks is that the creation of this network society through ever increasing communication capabilities, the internet being, at least to date, the ultimate manifestation of that, creates a moment of opportunity to have an impact, to change the way the world works. That’s certainly where the startup world sees itself, as this highly dynamic, you know, early stage thing where you have the opportunity to maybe have more impact than you would as an individual. So was it that part of the book that sort of inspired you to think, OK... well, it was a couple of things.
00:18:22 - Speaker 1: It was the idea of non-rival goods. So first, the idea that I’d make something and it costs $0 for there to now be a million versions of it, right? And because the goods are non-rival and they’re post-scarcity, like, they have a different kind of economic pattern to them. That was one aspect of it. And he had a sort of four-part quadrant that he was laying things out in. He was thinking about the state, the firm, the market, and the network. So, a state would be something which handles public goods, like, you know, they’re trying to manage resources that cannot be carved up into small pieces. You couldn’t have property rights on things like clean air, you know, so places where there’s lots of externalities and, like, one person could hurt the commons, but there’s less private incentive for people to maintain or protect the commons. So the state historically has used coercion for the governance of the commons. So the state would be centralized management of a commons, the firm would be centralized management of private resources, the market would be decentralized management of private resources, and the network is decentralized management of public resources. So, like, it allows for the creation of new kinds of commons, particularly information commons. 00:19:43 - Speaker 1: And so here he’s thinking of things like open source, or the way that, like, DNS works where there’s no... I mean, I also was interested in Ray Kurzweil at the time, so I was thinking general, like, techno-utopian post-scarcity stuff. Like, what happens when we can 3D print organs? And actually, we might be on the cusp of technology that allows you to take things from the digital world into the physical world, and this could be potentially somewhat revolutionary in terms of, if I can get any medicine that I need.
By, like, downloading it, and if someone can make an open source version of the medicine that I need, like, that was one aspect of what I was thinking; that was that book. But the other thing, I think, is mostly I just got some hope that, like, hey, there’s, you know, Linux. I also then got disillusioned, but I did a bunch of open source stuff. My undergraduate thesis was on trying to create a way of rethinking local government and actually, like, making a more direct democracy type approach, under the assumption that, you know, people have a ton of tacit knowledge. Like, there’s voices that are not heard that have expertise that is not, like, recognized, and you need culturally relevant solutions. I was coming from an anthropology background, so I was thinking a lot about how, like, the thing that is gonna work in a rural village in Ghana is, like, not gonna work necessarily in Boston, Massachusetts, right? And even the thing that’s gonna work in Southie is not gonna work in Jamaica Plain, maybe. Like, you need to tap into the resources and the culture and, like, the actual lives and local contexts, the lived experience of people who are in a community. 00:21:11 - Speaker 2: You know, I’m a huge fan of being close to the problem. It lets you, like you said, with tacit knowledge, understand it in a way that you otherwise just can’t. But yet as our societies get bigger, and literally this is just a scaling-the-number-of-humans thing, governments are going to naturally get further away from the people, right? The government of the United States 200 years ago, when the population of the United States was a tiny fraction of today’s, was much closer to those people whose problems it’s hopefully trying to solve. 00:21:40 - Speaker 1: Do you guys remember your first ideology? 00:21:44 - Speaker 3: Baby’s first ideology? 00:21:46 - Speaker 2: Baby’s first ideology.
Yeah, I mean, the classic thing you have with, let’s say, university students is they get really into environmentalism or something like that, and it becomes almost a purism, right? 00:21:57 - Speaker 1: Do you remember the first thing you were ideological about? It’s easier to call it an ideology if you’re, like, post it, if you’ve left it in some ways. 00:22:03 - Speaker 2: Yeah, probably open source, actually. 00:22:04 - Speaker 1: Open source. How old were you? 00:22:05 - Speaker 2: And Linux specifically. This is the year of Linux on the desktop, you know, every year is the year of Linux on the desktop. 00:22:11 - Speaker 2: Well, being the kind of lover of open source, believing in what that could bring, and thinking, OK, commercial software’s days are numbered; eventually we’re all going to be running, you know, things that are developed in the commons for the common good. It’s just a matter of time. So yeah, I think that was probably one of my firsts, in the late 90s. 00:22:29 - Speaker 1: I was definitely in that ideological camp until I tried to run an open source project. And then I realized it’s a lot easier if people actually can make a living and do the thing full time, yeah. What about you, Mark, do you remember your baby’s first ideology? 00:22:44 - Speaker 3: Oh, I was gonna give you another half answer, which is, I’ve always been more of an is than an ought person. I associate isms with ought, you know, the world ought to look like this. And there’s something to that. And of course, people having ought notions feeds back into what is. But for me, I keep myself busy just trying to understand what’s actually going on and the dynamics that are unfolding. As you understand things better, you certainly develop notions about how they might be different, or how you might want them to be different, but I try to keep a real close eye on how the world actually is, cause just understanding that is quite hard.
To give you a concrete example, you had talked a little bit about technological determinism. Just understanding the various technologies that have evolved and are evolving, and what that means for us, is incredibly non-obvious. Even something like computers, or even networking: is networking going to be decentralizing? I don’t know, it’s right in the name, shouldn’t it be? Or is it gonna be highly centralizing, as, you know, for example, Samo Burja has argued in one of his pieces that we can link to. It’s not obvious to me. 00:23:43 - Speaker 1: All right, well, I mean, we can take a second on this, because I think the best kind of prophecy is a self-fulfilling one, right? Certain kinds of things, like the most interesting predictions, are self-fulfilling predictions, right? Something where, because you imagine a thing to be possible, and you believe that it’s worth your energy to try to make that thing possible, you can make the thing possible, right? I mean, both of you guys have made startups happen out of nothing. Like, nobody makes anything happen out of nothing, but, like, the early stages are crazy, because it’s not a Ponzi scheme per se, but, like, you need to convince investors that you’re gonna be able to convince engineers that you’re gonna be able to, like, convince customers. Like, it is a crazy balancing act where you have to make a vision into a reality, even just in the assembling of the early team, and the raising of enough capital for people to quit their jobs for long enough to get the proof points to convince more. And yeah, I think there is a faith based element to it. 00:24:38 - Speaker 1: Yeah, I mean, my thinking is there are two ways. You know the idea of a map-territory conflict? 00:24:41 - Speaker 2: Yeah, right. 00:24:42 - Speaker 1: If the map I have in my head doesn’t match the territory, there’s two ways I can change things. I can try to update my map, or I can get a bulldozer and I can try to change the territory, right?
00:24:51 - Speaker 2: Yeah, there is something to that. There’s power in the ideological person, and often the world is changed by young people that have sort of an unrealistic vision, because they aren’t stuck in the status quo and they are willing to take that bulldozer in, but it has to be balanced somehow by pragmatism. 00:25:08 - Speaker 1: My first company was trying to make an online town common. The idea was, for any local issue, and every issue could be made into a local issue, that was sort of the hope, right, that you could accumulate political capital online. If you could confirm that all the people on this little forum thing were actual registered voters in this town, that was the mechanism we had for trying to accumulate political capital, then you could sort of force a more responsive local government and start to decentralize the places where people are most likely to have influence and get some power. At the time, I was hoping, you know, oh, if you gave people that, like, then we could actually have democracy experiments everywhere, you know. But it totally didn’t work. Like, I didn’t even want to use it myself, because I realized I didn’t care that much about, like, you know, the property taxes and, like, the paving of the roads, and, like, what to name the new library. Like, the local politics issues were just so boomer. And I was this little 19 year old, like, libertarian socialist, like, we’re gonna have an internet anarchism revolution, and all of my users for that product were like 60+. I was so glad that I had no power to coerce anyone to do anything, cause I just didn’t understand the world. So I became even more pro startups, because there’s something beautiful where you have to be right about making something people want. You have to both have a vision, but that vision does get tested against reality. Like, will it blend? Like, can you ship it? Like, when you ship it, will anyone care?
Like, if you build it, will they come, right? You kind of have to believe it in order to build it, but reality will test you there, which is one of the reasons I like startups. And it’s also one of the reasons why I’m hopeful for more diversity of political entrepreneurship or things like that. Like, it’s one thing I really do share in common with Balaji, the hope that there will be more micro nations someday in the future, and actual entrepreneurship in bounded communities, or something like that. Utah is a great example: somebody put out a vision, and a bunch of people with the same kind of ideas gathered. Utah is the original network state. 00:27:32 - Speaker 2: Certainly makes me think of charter cities, which is certainly another idea from the kind of libertarian sphere. But yeah, it is that idea that it’s not just about self-governance and getting to choose, but also the let-a-thousand-flowers-bloom part. We have to try a bunch of stuff because, as you said, ideas have to be tested in the real world, and we can sit around and debate them, and indeed people do, but until you can try it at scale, over time, see how it actually impacts people’s lives, do people really want to live in that place, does it change society in some fundamental way? 00:28:03 - Speaker 1: Yeah. So here’s another question I’ve got for you guys, and I’ll give my answer first. I’ve been thinking recently about just beliefs that sort of lodge in your head that end up propagating into all sorts of other things, and you don’t necessarily go back that often to reexamine them. So I’ll give one, which is the idea that, like, creative work can’t be coerced. And I think this is part of why I’ve been so pro voluntaristic type associations and, like, trying to figure out networks for mutual aid and ways for people to help each other, where it is a very opt-in system.
But I think it might also be just directly related to me having a pretty oppositional, like, low-agreeableness personality, where I really don’t like to be coerced in anything. So, like, I just assume that good work can’t happen under real coercion because I won’t work under coercion. Therefore, you know... I don’t know if anything jumps to mind for you guys in terms of, like, little beliefs like that that might color your thinking. 00:29:08 - Speaker 2: The core of critical thinking, I think, is trying to examine the beliefs that are in your mind and how they got embedded, and the reality is it’s rarely: I encountered a new idea, fact-checked it carefully, and then decided to make it part of my worldview. It’s more that you get exposed to something a lot, over and over again, and it just, through osmosis, sinks into how you see the world. And I always find it funny to stumble across little beliefs, even just things like, you know, should you keep this particular food item in the fridge, versus is it OK to, you know, sit on the shelf, shelf-stable. And sometimes it’s just something I picked up when I was a kid from one of my parents or something, and I didn’t realize until I was an adult that actually you can stick that on the shelf. I just never examined it, right? 00:29:51 - Speaker 1: What did you keep in the fridge that you didn’t need to keep in the fridge? 00:29:54 - Speaker 2: I don’t remember what, maybe it was, uh, potatoes, that was one that was like that. It does slow down their, like, budding or something like that, but I don’t know, I think my mom always stored potatoes in the fridge, and then, yeah, I had a roommate that was just like, I’m just going to put them on the shelf, we don’t have room in the fridge. I’m like, wait, you can’t do that, you know, they’ll spoil. But then I’m stopping and thinking, well, wait, how do I know that, or why do I think that? And the answer is, you know, it’s just something I absorbed. 00:30:17 - Speaker 1: What about you, Mark?
Do you have any things you’ve noticed that were like that? It’s the most general question: do you have any unexamined beliefs? Yeah, it’s a terrible question, I apologize. 00:30:28 - Speaker 3: I’m not sure if this is exactly what you were asking, but there are some lenses I keep in my pocket. I’m always putting them up and using them to look at the world. So one lens is the lens of trade-offs, from economics. It’s very easy to speak in absolutes, or to speak in terms of improvement or degradation. The reality is almost always one of trade-offs. Another one that I use all the time, relatedly, is distributed information processing. This kind of is related to your idea of mutual aid associations. The world is so complicated that there’s no way for it to be understood centrally, essentially, especially when you consider that a lot of the things that are important to understand are matters of personal preference. So it’s not physically possible to bring that information into one place, compute on it, and spit out results about what ought to happen. It has to be done in a distributed way. And it’s so easy to fall into the trap of, you know, what if we just brought all the information into one place and figured out what to do? It just can’t be done. And when you remind yourself of that all the time, you come across many cases where you see people trying to do that, to try to extract the information and put it through an explicit machine and turn out an answer. And you have to instead just let it be out there and let the network process the information and decide what should happen. 00:31:41 - Speaker 1: Can you go into more detail? 00:31:42 - Speaker 3: Well, this is the whole key tenet of, like, Austrian economics or Hayekian economics; people can look up those things and read about them. There’s a famous essay, I wanna say, written about the manufacture of a lead pencil.
And something as simple as that, there’s actually no one in the world who knows how to manufacture a lead pencil. Like, it has to involve many different people from around the world, and they all have their own tacit knowledge and understanding of what kind of wood is right, and, you know, they know about the quirks of the machine and, like, how it’s always off by one degree, so you gotta counteract that, right? That’s the sort of thing that seems so simple, but even something as basic as that can’t be known centrally and needs to be distributed out. And that’s not even to mention, like, how many should be produced, at what price, where, with what materials. There’s an incredible amount of complexity that can only be computed on in a distributed way. I just find that a handy idea to go back to often. 00:32:33 - Speaker 1: Can you think of examples besides the market? Like, the first thing that you’re making me think of is, I feel like only in the last few years have I gotten language for thinking about why it makes sense to listen to emotions so much. Like, thinking of emotions almost as, like, Bayes nets, massive information compression systems, where you’re just getting a vibe about, like, oh, this feels off. There’s an essay called “The Limits of Legibility,” it’s like a LessWrong post that I like, but often when you ask somebody, especially an expert, or somebody who’s, like, accumulated a large amount of experiential data around a problem area or around a skill or something like this, like, well, why do you think we should do it this way or that way? They’re fabricating an answer. Like, they actually have way more information than they could possibly convert into a bit stream that is compressed into, you know, verbal symbolic language. And so, you can’t treat the answer that somebody gives you of why as if it’s actually meaningful.
Many people actually treat the why, especially if they want to argue about it, as though that’s the real thing, rather than, like, a tiny symbolic representation of what in that moment they were able to generate, which might not even be the real thing. Right. And the inability to articulate something doesn’t mean that there isn’t knowledge there, right? Like, taste is real and experience is real, and you can have a lot of knowledge that can be extremely difficult to articulate. I found this to be extremely challenging when I was trying to introduce sort of counterintuitive cultural norms into the company, and I was bringing in people who were used to working in, like... all right, so, since I finally found a moment where I can actually make a point: Roam is not a normal company. It’s because I think normal companies are what got us into the situation we’re in, right? I wouldn’t want to work at Google. I wouldn’t want to work at Microsoft. I would have no interest in being there. Like, they’re not building products for people like me, and also, like, they kind of are, but, like, the thing that I’m interested in is trying to figure out a different way of thinking together, in a bunch of different ways. But especially as I was having folks coming in who I was trying to communicate certain practices to, especially practices around how I work with Roam that had just evolved over time, right? I’ve got a whole very different way of using the tool than, you know, your average user, and trying to communicate why I do things a certain way, or why I was even asking somebody to do something a certain way, was very hard to do if there wasn’t trust that there was some intuition there, and that the words that were going to be used as the explanation for why we’re trying a thing were not the actual or only reasons. Like, I could come up with a 100 reasons for why we might try to do the thing. 00:35:26 - Speaker 3: Yeah.
It’s such an important point, and we’ve mentioned it on the podcast many times, but I think it’s worth reiterating that experience and judgment and expertise, they’re incredibly multi-dimensional, you know, millions, billions of dimensions, right? And there’s no way to compact it down, either the model of experience itself, say, or the answer, into some discrete symbols, as you were saying. And furthermore, when you get discrete symbols out, they’re often just back-solved: this huge multi-dimensional model spits out an intuitive answer, but then it’s unsatisfying to convey that, so the brain just, like, finds a way back through symbols it knows to convey something that sort of ends up at the destination, and it sort of plausibly sounds like a quote-unquote argument or quote-unquote reason, but it’s just totally back-solved from how they actually got the answer. Now I’m gonna turn this around, because this is an idea that I’ve embraced in my own thinking, but what does that mean for a tool like Roam, or other tools for thought, which are inevitably collections of discretized pages and links and things like that? How do you reconcile those two worlds? 00:36:29 - Speaker 1: TLDR, my first startup, I started it as an open source project, could not recruit anybody to actually work on it, was somehow able to pitch it at a business plan competition, got 10K, and suddenly could get actually better engineers and the ability for them to work full time. And so I found, with an open source, like, political project, plenty of people who were interested in the idea, but nobody who could actually help me build the thing. A lot of the talent came as soon as I framed it as, oh, well, like, I guess we’ll make it a media startup and maybe we’ll sell the data.
Like, it’s disgusting to think about the idea of selling political data now, but at the time, I was just trying to figure out how do I win this business plan competition so somebody will give me some money. I worked construction before I got into tech; that was my summer job. But it was a terrible business. You know, I got to give, like, some TEDx talks and go to the Aspen Institute, and we did end up getting acquired by AOL, but it was not a good business model. We were selling software to newspapers to get high quality user-generated content on, like, super niche issues that were, like, civically important for the mission of newspapers. It was just a bad, bad business. And I was trying to solve so many problems at once, in terms of how do you build a user interface for collective intelligence? How do you think about the political dynamics of, like, OK, which people are excluded if you’re using real names and you’re using, like, local voter registrations? The problem of political coordination, plus how do you crowdsource from a large body of people the actual best ideas from a broad set of perspectives, so that you don’t have to read every comment? I was trying to solve a bunch of things at once, and I found that I actually had to do some sort of science, and the fact that I couldn’t isolate any variable was, like, just blah. So after that company was bought, I was still interested in how you build a better way for groups of people to, in a weird way, centralize their decentralized knowledge. So maybe the Hayek point is: it’s not even about imperfect execution, this is just a bad idea, I’ve wasted my whole career. Maybe, like, if I had just read Hayek, I’d be like, OK, the collective intelligence stuff isn’t gonna happen. But I wanted to simplify the problem. And so my first thought was, what if I, as a single player, am able to take the best writing that was ever done?
So, like, one problem we had in the first company was: how do you get critical mass for a social network, and then how do you create an ecosystem that actually inspires people to be as articulate as they possibly can be about what their position is, or why you should do the thing? How do you actually get people to give really, really high quality content, and then, from a large mass of users, identify the best content from a diversity of perspectives? Because instead of just having people vote down ideas that are good articulations but that they happen to disagree with, how could you actually get the best ideas from many different perspectives and see this sort of multi-dimensional object around any kind of question? But we were particularly starting with these local political questions. And my thought was, the simplified version of this problem is, well, one, we had only been able to launch in places where we had critical mass for my first company, which was called Localocracy. And so I was like, the tool has to work without critical mass. It has to work as a single player tool. And if I can start with the best writing throughout all of history, and I can be the one who’s aggregating it, and I can figure out how to map these different perspectives together, well, now I’ve just isolated a bunch of variables, because I don’t have to worry about getting the best articulation of an idea. I’ve got the entire corpus of human history. I can just pick out what I think are the best articulations of the idea, and I don’t have to worry about critical mass, because as long as I’m interested in the problem, I can do this, or as long as anyone’s interested in the problem. And so, there was a book called the Syntopicon. They’d spent like 50 million bucks. It was from Encyclopedia Britannica, and it was a great books course of all the best ideas of Western history.
And the first two sections of the book are an index of these ideas, where they sort of summarize each one, and they point to the paragraph number of where the idea is articulated by Descartes, or Hegel, or Marx, or Kant, or Plato, or whatever, right? So I was like, well, what if I can make a digital version of this, and I can make it for a single player, and then I just charge money for that? I don’t have to worry about selling software to newspapers who then run into advertisers, and I don’t have to convince a bunch of other people. If I can just find people who want to organize thoughts, and I sell it to single players, then I can maybe get the iteration cycles that I’m gonna need. I can basically keep this company alive long enough to run through all the iterations to solve this potentially impossible UX problem of how you actually create these high dimensional objects that represent many different perspectives around a single sort of truth. Like, how do you build a truth engine? How do you build a system that actually lets you create this sort of Bayes net so that your beliefs can propagate? 00:41:27 - Speaker 2: Well, I see the breadcrumbs now. You start with kind of collective action, and you’re thinking in terms of governance, but you’re also thinking in terms of networks and how to bring together computing and some of the open source and maybe more freedom oriented ways of organizing ourselves. You tried to do that with software for participation in government, and that was a total bust, but it leads you into... it was so poor. 00:41:52 - Speaker 1: I mean, imagine selling software to government. Now imagine that you have to sell software to government and to newspapers that are going through massive decline at the same time, and the subject matter that is gonna be discussed on there is extremely boring. Are you guys familiar with Michael Nielsen’s Reinventing Discovery?
That was the book I had been looking for forever after my experience with Localocracy, and then while trying to work on this stuff, because I ran a labs group, briefly and poorly, at Huffington Post. After we were bought by AOL, we ended up in the editorial division for HuffPost, and then I was able to spin out my own little labs group for about a year, focusing on collective intelligence, crowdsourcing knowledge, figuring out ways of doing new stuff. And anyways, the book that I found that was one of the better textbooks on thinking about the problem of collective intelligence is that one. And he talks about things like the problem of a conference, where even if you have all the experts in the same place, you’re not necessarily routing the right people to the right conversations. When you make everything synchronous, you have to worry about whether people have had enough coffee or whether they’re distracted by something. Like, you want to allow a certain kind of serendipity to happen more predictably, and remove the constraint that people have to be in the physically right place at the right time, that they have to just happen to bump into each other. When you run into somebody, you don’t know that they know something you need to know in order to solve the problem you’re working on, and you don’t know what the name of their knowledge is, and so on. There are certain kinds of human routing or information routing problems that he lays out pretty well in there. He calls it efficient allocation of expert attention. And so one of the reasons Roam is block-based, even, is just trying to work with that. It’s thinking about thinking in terms of blending programming and writing: you’re not just writing paragraphs, you’re actually trying to think about a kind of data structure of a pattern of thought.
And that’s a lot of what I’ve been trying to create as a medium. You know, if you think about block references, which is something that none of these so-called Roam clones do at all... I don’t know any of them that are actually multiplayer, right? The reason I referenced your offline talk is that we’ve been multiplayer from day one, even though we’ve been a single player tool, right? That was actually some of the architecturally hard stuff to figure out: how do I make this thing work as a collaborative real-time thing with a graph database, and start thinking about the interpersonal dynamics of referencing somebody else’s thought? Or, what are the different ways that you write when you’re writing statements that you expect to be reused by other people? How do you think about version control of a statement, or the way someone might transform a statement or rephrase a statement? These are the kinds of... it’s thinking about language in a different way than paragraphs or pages, because we’re trying to think about how to create an object where you’re not gonna have to read the entire history of a Slack channel when you go into it to get up to speed. You know what the group knows. Actually, if you’ve got a new piece of knowledge that really would have unlocked something the group was talking about six months ago, and the group kind of shelved that whole discussion because they didn’t have that knowledge, how does your knowledge immediately fit in and unlock that? Right? So it’s thinking about a different kind of collaborative thought data structure. And so things like block references and the ability to build a statement up out of other statements, having unique IDs for those... yeah, that’s the kind of work that I think is important about Roam, and something that never gets talked about in any of the YouTube videos from users.
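The block-reference idea can be sketched as a tiny data structure. This is a hypothetical illustration, not Roam's actual schema: the inline `((id))` reference syntax is borrowed from Roam, but the field names and the `resolve` helper here are made up for the sketch.

```javascript
// Hypothetical block store: each block has a unique id, its own text,
// and may reference other blocks inline via ((id)) syntax.
const blocks = {
  b1: { string: "prose does not scale for group problem solving", refs: [] },
  b2: { string: "counterpoint to ((b1)): essays are the best we have", refs: ["b1"] },
};

// Resolve a block's text, transcluding any referenced blocks in place.
// Because statements are built out of other statements, editing b1 once
// updates every statement that references it.
function resolve(id, db) {
  return db[id].string.replace(/\(\(([^)]+)\)\)/g, (_, refId) => resolve(refId, db));
}

// resolve("b2", blocks) inlines b1's text into b2's statement.
```

The unique IDs are what make reuse and version control of a single statement even thinkable: the reference points at the idea, not at a copy of its text.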
I’m not complaining about it, because if people weren’t happy with the things that I thought of as basic, which is the stuff that everyone imitated, right? I needed to get those basic things to even get to the place where I could think about block references and all these other things, which are still rough. They’re for a problem people don’t know they have. Like, nobody’s trying to reinvent prose. I’m trying to reinvent prose. I don’t think prose works for large scale collaborative problem solving. Like, essays do not work. I mean, they can work; they’re the best that we have right now. I saw you shaking your head, Mark, right? And in fact, there’s a whole other thing, which is being able to go from convincing rhetoric, the storytelling where the author is taking you on a journey with them, into the structure. You may want to have both. You want a sequentially ordered narrative that is being presented, but then if you’re trying to analyze the logic of it, or debug the program, you might want a more structured graphical representation for analyzing where the weak points in the narrative are, but... 00:46:41 - Speaker 2: I find it very interesting, you know, your vision for where you want to be longer term, which is really about collective intelligence more than individual intelligence, but you can’t bootstrap into getting everyone to use something at the same time. 00:46:54 - Speaker 1: Also, your past and future selves are totally different people, right? The past is a foreign country. So the other reason that I started with a single player tool is that I didn’t have to convince anybody else to use it to be able to iterate on it. Other authors were other people, and myself across time was other people.
And so it is an easier, though still extremely difficult, problem to solve: organizing your own thoughts over time as your thoughts change, being able to go back and actually reexamine them. These are actually more related than people think. Like, people don’t realize how many selves they have, right? I actually think the idea that you are just one self is kind of off; you have so many different sub-agents running around. One day you think this; if you’re hungry, you think that; if you’re tired, you think that. Like, how do you actually bring together all your internal family systems? Yeah, I don’t know if you guys have ever gotten into that kind of stuff, but you’re many. You are many. Yeah. 00:47:48 - Speaker 2: You contain multitudes, absolutely. Certainly, writing is a technology for not only communicating with others, but also communicating with your past and future self; that’s a powerful piece of it. 00:47:58 - Speaker 1: And present self. I don’t know what I think until I write it, sometimes. 00:48:01 - Speaker 2: So yeah, externalizing the thought, the conversation with the page, you see what’s there, and that becomes a loop that’s different from the kind of thinking inside the brain. 00:48:11 - Speaker 1: Or, to tie back to what Mark was saying, when you were talking about how you got into programming so you could build those multi-agent models for doing economic simulations, that’s the kind of stuff I want people to be able to do in Roam, right? Roam is the database of all their notes, all their thinking. And so if they want to just start playing with stuff, they shouldn’t have to worry about setting up a web server or web page or whatever. It’s like, OK, they write some JavaScript and suddenly they’re embedding a little simulation. And that’s one of the cool things with Clojure: there’s a Clojure interpreter inside Roam, and a JavaScript interpreter inside Roam.
So, hopefully someday, the future Mark is thinking through his economics thoughts with little simulations inside the notes, and they’re part of the scaffolding of his own thinking, and he’s gonna be able to go back and not just read his old thoughts, but play with the simulations that he was writing. 00:49:02 - Speaker 2: That’s super interesting, and definitely the programmability built into the tool, which, again, programmers’ editors have had since forever, but bringing that to something that’s more for other kinds of knowledge workers, obviously power users, but people who are working more in the realm of ideas, not necessarily code. Putting those two things together, I think, was a surprising but important innovation. 00:49:25 - Speaker 1: I’ll say one word here too, which is that someone asked what we are working on, what we have been working on. One thing that is still an open research problem that I’ve seen no one else even thinking about is the idea that there are higher order functions for regular thinking, right? If you do weird San Francisco hippie intentional-relating stuff with groups of people, you get used to these kinds of patterns of questions, right? For instance, I ran a learning cult, because I was trying to do it with WorkFlowy and Excel. I was trying to build a sort of peer to peer research group with, you know, just friends and folks that I met when I was in the Bay Area. I was trying to figure out the minimum thing I would need to build in Roam to build a decentralized research group, right? And so, in order to stress test it, I was like, how close could I get to the ideal social and information structure without building anything, with just off the shelf tools? And I used WorkFlowy, which was a tree-based outliner, and I used Google Sheets. And I would do stuff like, I would ask people, what are the, like, seven best books you’ve read in your life?
OK, for each book, what were the three big ideas? For each of those big ideas, how did that impact you? For each of those ideas, can you find two quotes from the book? Can you go back and do these things? Even just a simple thing like a for-each function, right? Even just being able to separate it out: I want to ask questions, and then I wanna take those answers, and I wanna map new questions onto those answers, and I want to create a data structure out of this. Map, filter, and reduce: we don’t have higher order functions for these basic qualitative kinds of, you know, personal interactions. And so one of the main things that I’ve been working on with Roam is trying to build a programming system for that, maybe for teachers or group facilitators or something like that. For me, this was a very important practice for being an autodidact: I had a methodology for deconstructing books, for managing my attention, and it was always really inconvenient to have to go back and forth between the actual work and "what step am I on right now?" I couldn’t just set up the sequence of events that I wanna do, look at each thing one at a time, and separate out the cognitive scaffolding from the actual thinking. And so I’m trying to build a sort of higher order programming language for creating cognitive scaffolds, for guiding your own thinking, for intentional reflection or investigation, that kind of stuff, because I think there’s a whole domain of programming that is about programming your mind. You are the programmer of your mind, and you can say: let me think about the thinking I want to be doing, and let me create prompts for myself, where I’m the evaluator. So, that is what Roam really is. It’s about building a programming language for human cognition, which could be for the individual or multiplayer.
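The map/filter/reduce analogy can be made concrete with a small sketch. Everything here is illustrative and made up for the example (the function names and prompt templates are not from Roam or WorkFlowy): question templates are ordinary functions, and a for-each maps a follow-up template over earlier answers, producing the tree of prompts the book-deconstruction exercise describes.

```javascript
// Illustrative "higher-order functions for thinking": each follow-up
// question is a function from an earlier answer to a new prompt.
const followUps = {
  ideas: (book) => `What were the 3 big ideas in "${book}"?`,
  impact: (idea) => `How did "${idea}" impact you?`,
  quotes: (idea) => `Can you find two quotes from the book about "${idea}"?`,
};

// Map a follow-up template over a list of answers, pairing each answer
// with the next prompt it generates. This is the "for each" the speaker
// wanted to separate out from the thinking itself.
function forEachAnswer(answers, template) {
  return answers.map((answer) => ({ answer, prompt: template(answer) }));
}

// One person's (abbreviated) answer to "What are the 7 best books...?"
const books = ["Reinventing Discovery"];
const ideaPrompts = forEachAnswer(books, followUps.ideas);
// ideaPrompts[0].prompt now asks for the big ideas in that book;
// mapping followUps.impact over the idea answers would go one level deeper.
```

The point of the sketch is the separation of concerns: the cognitive scaffolding (which question comes next) lives in data and higher-order functions, while the person remains the evaluator who actually answers each prompt.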
00:52:23 - Speaker 2: Well, let’s wrap it there. Thanks everyone for listening. Join us in the Discord to discuss this episode with me, Mark, and our community; the link is in the show notes, and you can follow us on Twitter at @museapphq. Conor, I’m so glad that you’re helping us think big thoughts about how we can be better at collective intelligence, because it’s pretty clear that that’s a place where humanity can get a lot better. And thanks for coming on the show. 00:52:47 - Speaker 1: Thanks so much.