Idea Machines is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see the outputs of innovation systems everywhere but rarely dig into how they work. Idea Machines digs below the su…
Tim Hwang turns the tables and interviews me (Ben) about Speculative Technologies and research management.
Peter van Hardenberg talks about Industrialists vs. Academics, Ink & Switch's evolution over time, the Hollywood Model, internal lab infrastructure, and more! Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab.

References: Ink & Switch (and their many publications) · The Hollywood Model in R&D · Idea Machines episode with Adam Wiggins · Paul Erdős

Transcript: Peter van Hardenberg

[00:01:21] Ben: Today I have the pleasure of speaking with Peter van Hardenberg. Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab. I talked to Adam Wiggins, one of Ink & Switch's founders, way back in episode number four. It's amazing to see the progress they've made as an organization. They've built up an incredible community of fellow travelers and consistently released research reports that gesture at possibilities for computing that are orthogonal to the current hype cycles. Peter frequently destroys my complacency with his ability to step outside the way that research is normally done and ask: how should we be operating, given our constraints and goals? I hope you enjoy my conversation with Peter. Would you break down your distinction between academics and industrialists?

[00:02:08] Peter: Okay. Academics are people whose incentive structure is connected to the institutional rewards of the publishing industry, right? You publish papers and you get tenure, and, like, it's not so cynical or reductive, but fundamentally the time cycles are long, right? You have to finish work according to when the submission deadlines for a conference are. You're working on something now; you might come back to it next quarter, or next year, or in five years, right? Whereas when you're in industry, you're connected to users, you're connected to people at the end of the day who need to touch and hold and use the thing.
And you know, you have to get money from them to keep going. And so you have a very different perspective on, like, time and money and space and what's possible. And the real challenge in terms of connecting these two... you know, I didn't invent the idea of pace layers, right? They operate at different pace layers. Academia is often intergenerational, right? Whereas industry is like: you have to make enough money every quarter to keep the bank account from going below zero, or everybody goes home.

[00:03:17] Ben: Right. Was it Stewart Brand who invented pace layers?

[00:03:22] Peter: I believe it was Stewart Brand. Pace layers, yeah.

[00:03:25] Ben: I'd never put these two together, but I think about impedance mismatches between organizations a lot, and that really clicks with pace layers exactly, right? Where it's like...

[00:03:39] Peter: Yeah, absolutely. And I think in a big way what we're doing at Ink & Switch on some level is trying to provide, like, synchromesh between academia and industry, right? Because the academics are moving on a time scale, and with an ambition, that's hard for industry to match, right? But also academics, I think often in computer science, have a shortage of good understanding about what the real problems people are facing in the world today are. They're not disinterested.

[00:04:07] Ben: Not just computer science.

[00:04:08] Peter: Those communication channels don't exist, cuz they don't speak the same language, they don't use the same terminology, they don't go to the same conferences, they don't read the same publications. Right?

[00:04:18] Ben: Yeah.

[00:04:18] Peter: And so vice versa, you know, we find things in industry that are problems, and then it's like: you go read the papers and talk to some scientists, and, oh dang, we know how to solve this. It's just nobody's built it.

[00:04:31] Ben: Yeah.
[00:04:32] Peter: Or, more accurately, it would be to say there's a pretty good hunch here about something that might work, and maybe we can connect the two ends of this together.

[00:04:42] Ben: Yeah. Often I think of it as: someone has a quote-unquote solved problem, but there are a lot of quote-unquote implementation details, and those implementation details require a year of work.

[00:04:56] Peter: Yeah. A year, or many years, or an entire startup, or a whole career or two? Yeah.

Ben: And speaking of Ink & Switch, I don't know if we've ever talked about... so Ink & Switch has been around for more than half a decade, right?

[00:05:14] Peter: Yeah, seven or eight years now, I think. I could probably get the exact number, but yeah, about that.

[00:05:19] Ben: And I don't have a good idea in my head: over that time, what has changed about Ink & Switch's conception of itself and how you do things? What are some of the biggest things that have changed over that time?

[00:05:35] Peter: So I think a lot of it could be summarized as professionalization. But I'll give a brief history. Ink & Switch began because the original members of the lab wanted to do a startup (that was Adam, James, and Orion), but they recognized that they weren't happy with computing and where computers were, and they knew that they wanted to make something that would be a tool to help people who were solving the world's problems work better. That's kind of a vague one, but, you know, they were like: well, we're not physicists, we're not social scientists. We can't solve climate change or radicalization directly, or the journalism crisis or whatever, but maybe we can build tools, right? We know how to make software tools. Let's build tools for the people who are solving the problems.
Because right now a lot of those systems they rely on are getting, like, steadily worse every day. And I think they still are: the move to the cloud, disempowerment of the individual, surveillance technology, distraction technology. And Tristan Harris is out there now hammering on some of these points. But there's just a lot of things that are slow and fragile and bad and not fun to work with, and that lose your work product, you know?

[00:06:51] Ben: Yeah, software as a service more generally.

[00:06:54] Peter: Yeah. And, like, there's definitely advantages. It's not like, you know, people are rational actors, but something was lost. And so the idea was: well, go do a bit of research, figure out what the shape of the company is, and then just start a company and, you know, get it all solved and move on. And I think the biggest difference, aside from scale and actual knowledge, is just kind of the dawning realization at some point that there won't really be an end state to this problem. This isn't a thing that's transitional, where you kind of come in and you do some research for a bit, and then we figure out the answer and fold up the card table and move on to the next thing. It's like: oh no, this thing's gotta stick around, because these problems aren't gonna go away. And when we get through this round of problems, we already see what the next round are. And that's probably gonna go on for longer than any of us will be working. And so the vision now, at least from my perspective as the current lab director, is much more like: how can I get this thing to a place where it can sustain for 10 years, for 50 years, however long it takes, and become a place that has a culture that can sustain, you know, grow and change as new people come in, but that can sustain operations indefinitely?

[00:08:07] Ben: Yeah. And so, to circle back to the...
The jumping-off point for this, which is: since it began, what have been some of the biggest changes in how you operate, or just the model more generally, or things that you were...

[00:08:30] Peter: Yeah, so the beginning was very informal. Maybe I'll skip over the first little period, where it was just sort of finding our footing. But around the time when I joined, we were just four or five people. And we did one project, all of us together at a time. Someone would write a proposal for what we should do next, and then we would argue about whether it was the right next thing. And eventually we would pick a thing, and then we would go and do that project, and we would bring in some contractors. We called it the Hollywood model (we still call it the Hollywood model) because it was sort of structured like a movie production. To our little core team we'd bring in a couple of specialists, you know, the equivalent of a director of photography or a casting director or whatever; you bring in the people that you need to accomplish the task. Oh, we don't know how to do Bluetooth on the web? Okay, find a Bluetooth person. Oh, there's a bunch of crypto stuff (cryptography stuff, to be clear) on this upcoming project? We'd better find somebody who knows the ins and outs of which cryptography algorithms to use, or how to build stuff in C# for the Windows platform or Surface, or whatever the project was. Over time, you know, we got pretty good at that, and I think one of the biggest changes, sort of after we kind of figured out how to actually do work, was the realization that
writing about the work not only gave us a lot of leverage in terms of our visibility in the community and our ability to attract talent, but also: the more we put into the writing, the more we learned about the research. We would do something and then write a little internal report and then move on. But the process of taking the work that we do and making it legible to the outside world, explaining why we did it and what it means and how it fits into the bigger picture, that is, being very diligent and thorough in documenting all of that, greatly increases our own understanding of what we did. And that was a really pleasant and interesting surprise. I think one of my concerns as lab director is that we got really good at that, and we write all these obscenely long essays that people claim to read, you know, that Hacker News comments on extensively without reading. But I always worry about the orthodoxy of doing the same thing too much, and whether we're falling into patterns, so we're always tinkering with new kinds of project systems, or new ways of working, or new kinds of collaborations. And so yeah, that's ongoing. But the key elements of our system are: we bring together a team that has both longer-term people with domain context about the research and any required specialists who understand interesting or important technical aspects of the work. And then we have a specific set of goals to accomplish, with a very strict time box. And then, when it's done, we write, and we put it down. And I think this avoids a number of the real pitfalls in more open-ended research. It has its own shortcomings, right? But one of the big pitfalls it avoids is the kind of meandering off and losing sight of what you're doing. And you can get great results from that in kind of a general research context.
But we're very much an industrial research context: we're trying to connect real problems to specific directions to solve them. And so the time box kind of creates the fear of death. You're like: well, I don't wanna run outta time and not have anything to show for it. So you really get focused on trying to deliver things. Now, sometimes that's at the cost of, like, the breadth or ambition of a solution to a particular thing, but I think it helps us really keep moving forward.

[00:12:21] Ben: Yeah. And you no longer have everybody in the lab working on the same projects, right?

[00:12:28] Peter: Yeah. So today, at any given time, the population of the lab fluctuates between, like, eight and 15 people, depending on whether we have a bunch of projects in full swing, or, you know, how you count contractors. But at the moment we have three tracks of research that we're doing, and those are local-first software, programmable ink, and malleable software.

[00:12:54] Ben: Nice. So I actually have questions both about the write-ups that you do and the Hollywood model and so on. On the Hollywood model: do you think that the Hollywood model working in an industrial research lab is particular to software, in the sense that, I feel like, in the software industry people change jobs fairly frequently, contracting is really common, contractors are fairly fluid, and...

[00:13:32] Peter: You mean in terms of being able to staff and source people?

[00:13:35] Ben: Yeah. And people take these long sabbaticals, right? It's not uncommon in the software industry for someone to take six months between jobs.

[00:13:45] Peter: I think it's very hard for me to generalize about the properties of other fields, so I want to try and be cautious in my evaluation here.
What I would say is that I think the general principle, of having a smaller core of longer-term people who think and gain a lot of context about a problem, and pairing them up with people who have fresh ideas and relevant expertise, does not require you to have any particular industry structure. Right? There are lots of ways of solving this problem. Go to another research organization and write a paper with someone from an adjacent field, if you're in academia, right? If you're in a company, you can do a partnership, you can hire... you know, I think a lot of fields of science have much longer cycles, right? If you're doing materials science, it takes a long time to build test apparatus and to formulate chemistries. Like...

[00:14:52] Ben: Yeah.

[00:14:52] Peter: You might need someone for several years, right? Like, that's fine. Get a detachment from another part of the company and bring someone in as a secondment. I think the general principle, though, of putting together a mixture of longer- and shorter-term people with the right set of skills... yes, we solve it a particular way in our domain, but I don't think that's unique to software.

[00:15:17] Ben: Would it be overreaching to map that onto professors and postdocs and grad students, where you have the professor, who is the person who's been working on the program for a long time and has all the context, and then you have postdocs and grad students coming through the lab?

[00:15:38] Peter: Again, I need to be thoughtful about how I evaluate fields that I'm less experienced with, but both my parents went through grad school and I've certainly gotten to know a number of academics. My sense of the relationship between professors and their PhD students is that it's much more likely that the PhD students are given sort of a piece of the professor's vision to execute.

[00:16:08] Ben: Yeah.
[00:16:09] Peter: And that is more about scaling the research interests of the professor. And I don't mean this in a negative way, but I think it's quite different...

[00:16:21] Ben: Different.

[00:16:22] Peter: ...than how DARPA works, or how Ink & Switch works with our research tracks, in that it's a bit more prescriptive and it's a bit more of a mentor-mentee kind of relationship.

[00:16:33] Ben: Yeah. More training.

[00:16:35] Peter: Yeah. And you know, that's great. I mean, postdocs are a little different again, but I think that's different than, say, how DARPA works, or other institutional research groups.

[00:16:49] Ben: Yeah. Okay. I wanted to see how far I could stretch the...

[00:16:55] Peter: In academia there are famous stories about Erdős, who would turn up on your doorstep, you know, with a suitcase and a bottle of amphetamines and say "my brain is open," or something to that effect. And then you'd co-author a paper and pay his room and board until you found someone else to send him to. I think that's closer, in the sense that, right, here's this great problem solver with a lot of domain skills, and he would parachute into a place where someone was working on something interesting and help them make a breakthrough with it.

[00:17:25] Ben: Yeah. I think the thing that I want to figure out, longer term, is how to make those short-term collaborations happen. Like, I think there's some Coasean tension, in the sense of Ronald Coase, around organizational boundaries when you have people coming in and doing things in a temporary sense.

[00:17:55] Peter: Yeah, academia is actually pretty good at this, right? With, like, paper co-authors. I mean, again, this is the pace layers thing.
When you have a whole bunch of people organized in an industry and a company around a particular outcome, you tend to have very specific goals and commitments, and you're trying to execute against those, and it's much harder to get that kind of more fluid movement between domains.

[00:18:18] Ben: Yeah, and...

[00:18:21] Peter: That's why I left working in companies, right? Cause, like, I have run engineering processes and built products and teams, and it's like: someone comes to me with a really good idea, and I'm like, oh, it's potentially very interesting, but...

[00:18:33] Ben: But we...

[00:18:34] Peter: We got customers who have outages, who are gonna leave if we don't fix the thing; we've got users falling out of our funnel cause we don't do basic stuff. You just really have a lot of work to do to make the thing go...

[00:18:49] Ben: Yeah.

[00:18:49] Peter: ...as a business. And you know, my experience of research labs within businesses is that they're almost universally unsuccessful. There are exceptions, but I think they're more coincidental than designed.

[00:19:03] Ben: Yeah. And, I think, less and less successful over time, is my observation.

[00:19:11] Peter: Interesting.

[00:19:12] Ben: Yeah, there's a great paper that I will send you called... what is the name? Oh, "The Changing Structure of American Innovation" by Ashish Arora. I actually did a podcast with him because I liked the paper so much. And so, going back to your amazing write-ups: you all have clearly invested quite a chunk of time and resources into some amount of internal infrastructure for making those really good. And I wanted to get a sense of: how do you decide when it's worth investing in internal infrastructure for a lab?

[00:19:58] Peter: Ooh. Ah, that's a fun question. At least at Ink & Switch, it's always been sort of demand-driven.
I wish I could claim to be more strategic about it, but we had all these essays; they were actually all hand-coded HTML at one point. You know, real indie cred there. But it was a real pain when you needed to fix something or change something, cause you had to go and edit all this HTML. So at some point we were doing a smaller project and I built, like, a Hugo templating thing just to do some lab notes, and I faked it. And I guess this is actually maybe a somewhat common thing, which is: you do one in a one-off way, and then, if it's promising, you invest more in it.

[00:20:46] Ben: Yeah.

[00:20:46] Peter: And it ended up being a bigger project to build a full-on... I mean, it's not really a CMS, it's sort of a CMS; it's a templating system that produces static HTML. It's what all our essays come out of. But there's also a lot of work, a big investment, in just design and styling. And frankly, I think that one of the things that sets Ink & Switch apart from other people who do similar work in the space is that we really put a lot of work into the presentation of our work. Going beyond... like, we write very carefully, but we also care a lot about picking good colors, making sure that text hyphenates well, that the screencast has the right dimensions, all that little detail work. And it's expensive in time and money to do, but I think the results speak for themselves. I think it's worth it.

[00:21:47] Ben: Yeah. I mean, if the ultimate goal is to influence what people do and what they think, which I suspect is at least some amount of the goal, then communicating it...

[00:22:00] Peter: It's much easier to change somebody's mind than to build an entire company.

[00:22:05] Ben: Yes. Well...

[00:22:06] Peter: If you wanna maximize... it depends. Well, you don't have to change everybody's mind, right?
Changing an individual person's mind might be impossible. But if you can put the right ideas out there in the right way, to make them legible, then hopefully you'll change somebody's mind, and it will be the right somebody.

[00:22:23] Ben: Yeah. No, that is definitely true. And another thing that I am exceedingly impressed by that Ink & Switch does is your thoughtfulness around how you structure your community and sort of tap into it. Would you be willing to walk me through how you think about that, and how you have the different layers of involvement?

[00:22:53] Peter: Okay. Maybe I'll work from the inside out, cuz that's sort of the history of it. So in the beginning there was just the people who started the lab. And over time they recruited me, and Mark McGranaghan, and some of our other folk to come and sign on for this crazy thing. And we started working with these wonderful contractors off and on. And so the initial group was quite small and quite insular, and we didn't publish anything. And what we found was that just that alone, the act of bringing people in and working with them, started to create the beginning of a community, because people would come into a project with us, they'd infect us with some of their ideas, we'd infect them with some of ours. And so you started to have this little bit of shared context with your past collaborators. And because we have this mix of longer-term people who stick with the lab and other people who come and go, you start to build up this pool of people who you share ideas and language with. And over time we started publishing our work, and we began having what we call workshops, where we just invite people to come and talk about their work at Ink & Switch.
And by "at," I mean: now it's on a Discord; back in the day it was a Skype or a Zoom call or whatever. And the rule back then, in the early days, was: if you want to come to the talk, you have to have given a talk or have worked at the lab. And so it was a very good signal-to-noise ratio in attendance, cuz the only people who would be on the Zoom call would be people who you knew were grappling with those problems for real. No looky-loos, no audience, right? And over time there were just too many really good, interesting people doing the work to fit in all those workshops, and actually scheduling workshops is quite tiring and takes a lot of energy. And so over time we started to expand this community a little further, and now our principle is: if you're doing the work, you're welcome to come to the workshops. And we invite some people to do workshops sometimes. But now we have this sort of small private chat group of really interesting folk. And it's not open to the public, generally, because, again, I don't want to have an audience, right? I want it to be a practitioner's space. And so over time those people have been really influential on us as well. And having that little inner circle (and it's a few hundred people now) of people who, you know... if you have a question to ask about something tricky, there's probably somebody in there who has tried it. But more significantly, the answer will come from somebody who has tried it, not from somebody who will call you an idiot for trying. Right? You avoid all the don't-read-the-comments problems, because... if anybody was like that, I would probably ask them to leave, but we've been fortunate that we haven't had any of that kind of stuff in the community. I will say, though, I think I struggle a lot, because I think it's hard to be both exclusive and inclusive.
Right? It's an exclusive community, deliberately, in the sense that I want it to be a practitioner's space and one where people can be wrong and it's not too performative, like, there's not investors watching, or your user base or whatever.

[00:26:32] Ben: Yeah.

[00:26:32] Peter: But at the same time...

[00:26:33] Ben: Strangers.

[00:26:34] Peter: ...an inclusive space, where we have people who are earlier in their career, or from non-traditional backgrounds, you know, either academically or culturally and so on and so forth. And it takes constant work to be networking out and meeting new people and inviting them into this space. So it's always an area to keep working on. At some point I think we will want to open the aperture further, but yeah, it's a delicate thing to build a community.

[00:27:07] Ben: Yeah, I mean, frankly, the reason I'm asking is because I'm trying to figure out the same things, and you have done it better than basically anybody else that I've seen. This is maybe getting too down into the weeds, but why did you decide that Discord was the right tool for it? The reason that I ask is that I personally hate streaming walls of text, and I find it very hard to seriously discuss ideas in that format.

[00:27:43] Peter: Yeah, I think async... I mean, I'm an old-school, like, mailing-list guy. On some level, I think it's just a pragmatic thing. We use Discord for our internal day-to-day operations, like: hey, did you see the PR? Oh, we gotta call in an hour with so-and-so, whatever. And then we had a bunch of people in that community, and then, you know, we started having the workshops and inviting more people. So we created a space in that same Discord where people didn't have to get pinged when we had a lab call, and we didn't want 'em turning up on the Zoom anyway. And so it wasn't so much a deliberate decision to be that space.
I think there's a huge opportunity to do better, and, frankly, what's there is not as designed or as deliberate as I would like. It's more a consequence of organic growth over time, and of just continuing to do a little bit here and there, than sort of an optimum outcome. There's a lot of opportunity to do better. Like, we should have newsletters; there should be more artifacts of past conversations with better organization. But all of that stuff takes time and energy, and we are a small little research lab. Only so many people, you know.

[00:29:06] Ben: I absolutely hear you on that. I think the tension that I see is that people, I think, like texting, like sort of stream-of-text Slack and Discord type things. And so there's the question of, like, what can you get people to do versus what creates the right conversation environment? And maybe that's just a matter of curation and standard-setting.

[00:29:42] Peter: Yeah, I don't know. We've had our rabbit trails and derailed conversations over the years, but I think, you know, if you had a forum, nobody would go there.

[00:29:51] Ben: Yeah.

[00:29:52] Peter: And you could do a mailing list... I don't know, maybe we could do a mailing list. That would be a nice form, I think. But people have to get something out of a community to put things into it, and if you want to have a forum, or an asynchronous posting place... the thing is, people are already in Discord or Slack.

[00:30:12] Ben: Exactly.

[00:30:13] Peter: For something else, you have to push against the stream. Now, actually, maybe one interesting anecdote: I did experiment for a while with... like, Discord has sort of a forum-post feature they added a while back.

[00:30:25] Ben: Oh.

[00:30:25] Peter: We added it. Nobody used it. So eventually I turned it off again.
Maybe it just needs revisiting, but it surprised me that it wasn't adopted, I guess is what I would say.

[00:30:36] Ben: Yeah. I mean, I think the problem is it takes more work. It's very easy to just dash off a thought.

[00:30:45] Peter: Yeah, but I think if you have the right community, then those thoughts are likely to have been considered, and the people who reply will speak from knowledge.

[00:30:55] Ben: Yeah.

[00:30:56] Peter: And then it's not so bad, right?

[00:30:59] Ben: It's...

[00:30:59] Peter: The problem is with Hacker News, or Reddit, or any of these open communities: the person who's most likely to reply is not the person who's most helpful to reply.

[00:31:11] Ben: Yeah, exactly. That makes a lot of sense. And, sort of switching tracks yet again: remind me how long your projects are. How big is the time box?

[00:31:28] Peter: The implementation phase for a standard Ink & Switch Hollywood project (which I can now call them standard, I think, cuz we've done... ooh, let me look... 25 or so over the years. Let's see, what's my project count number at? I have a little tracker. Yeah, I think it's 25 today. So we've done some non-trivial number of these) is 10 to 12 weeks of implementation. That's sort of the core of the project, and the idea is that when you hit that start date, at the beginning of that, you should have the team assembled, you should know what you're building, you should know why you're building it, and you should know what done looks like. Now, it's research, so inevitably you get two weeks in and then you take a hard left. But we write what's called the brief up front, which is: what is the research question we are trying to answer by funding this work, and how do we think this project will answer it?
Now, your actual implementation might change, or you might discover targets of opportunity along the way. But the idea is that, by having a narrow time box, a team that has a clear understanding of what you're trying to accomplish, and the right set of people on board who already have all the necessary skills, you can execute really hard for that 10 to 12 weeks and get quite far in that time. Now, that's not the whole project, though. There's usually a month or two up front of what we call pre-infusion, kind of coming from the espresso idea that you make better espresso if you take a little time at low pressure first to get ready for the shot. And so (the duration varies here) there's a period before that where we're making technical choices. Are we building this for the web, or is this going on iPad? Are we gonna do this with Rust and WebAssembly, or is this TypeScript? Are we buying Microsoft Surface tablets for this because of, like, the ink behavior? Right? So all those decisions we try and make up front, so when you hit the execution phase, you're ready to go. What kind of designer do we want to include in this project, and who's available? All of that stuff we try and square away before we get to the execution phase.

[00:33:38] Ben: Right.

[00:33:38] Peter: Then at the end of the execution phase, we try to be very strict with, like, last day, pencils down, and we try to also reserve the last week or two for polish and cleanup and sort of getting things together. So it's really two to two and a half, sometimes three months, that is actually the time you have to do the work. And then, after that, essays can take between, like, two months and a year or two to finally produce. But we try to have a good first draft within a month after the end of the project.
And again, this process is probably not optimal, but basically someone on the team winds up being the lead writer, and we should be more deliberate about that. But usually the project lead for a given project ends up being the essay writer. And they write a first draft with input and collaboration from the rest of the group. And then people around [00:34:35] the lab read it and go, this doesn't make any sense at all. Like, what? And, you know, to varying degrees. And then it's sort of okay, right? Once you've got that kind of feedback, then you go back and restructure it and go, oh, I need to explain this part more. You know, oh, these findings don't actually cover the stuff that other people at the lab thought was interesting from the work, or whatever. And then that goes through an increasing, you know, standard of writing, right? You send it out to some more people, and then you send it to a bigger group, and, you know, we send it to people in the field whose input we respect. And then we take their edits and we debate which ones to take. And then eventually it goes in the HTML template. And then there's a long process of hiring an external copy editor and building nice quality figures and re-recording all your crappy screencasts to be really crisp, with nice lighting and good, you know, pacing. And, you know, then finally, at the end of all of that, we publish. [00:35:33] Ben: Nice. And [00:35:35] how did you settle on the 10 to 12 weeks as the right size time box? [00:35:42] Peter: Oh, it's clearly rationally optimal. [00:35:46] Ben: Ah, of course, [00:35:47] Peter: No, I'm kidding. It became a habit. I mean, I can give an intuitive argument, and we've experimented a bit. You know, two weeks is not long enough to really get into anything, [00:36:02] Ben: right. [00:36:02] Peter: and a year is too long.
There's too much opportunity to get lost along the way. You go too long with no real deadline pressure, and it's very easy to kind of wander off into the woods. And bear in mind that the total project duration is really more like six months, right? And so where we kind of landed is also that we often have grad students or, you know, people who are between other contracts or things. It's much easier to get people for three months than for eight months. And just intuitively, if someone came to me with an eight-month project, I'm [00:36:35] almost positive that I would be able to split it into two three-month projects, and we'd be able to find a good break point somewhere in the middle, and then write about that and do another one. And it's sort of a bigger-or-smaller-than-a-breadbox argument, but, you know, a month is too little and six months feels too long. So two to four months feels about right in terms of letting you really get into the meat of a problem. You can try a few different approaches, you can pick your favorite and then spend a bit of time analyzing it and working out the kinks, and then you can write it up. [00:37:17] Ben: Thanks. [00:37:18] Peter: But, you know, there have been things that haven't fit in that, and we're doing some stuff right now that has, you know, we've had, like, a six-month-long pre-infusion going this year already on some ink stuff. So it's not a universal rule, but that's the [00:37:33] Ben: Yeah. No, I [00:37:35] appreciate that intuition [00:37:36] Peter: and I think it also ties into being software again, right? Like, again, if you have to go and weld things and like [00:37:43] Ben: yeah, exactly. [00:37:44] Peter: You know, [00:37:44] Ben: let some bacteria grow. [00:37:46] Peter: or, like, you know, it's very much a domain-specific answer.
[00:37:51] Ben: Yeah. Something that I wish people talked about more was, like, characteristic time scales of different domains. And I think that's, I mean, software is obviously shorter, but it'd be interesting to dig down and be like, okay, what actually is it? So the last question I'd love to ask is: to what extent does everybody in the lab know what everybody else is working on? [00:38:23] Peter: So we use two tools for that. We could do a better job of this. Every Monday the whole lab gets together for half an hour only, [00:38:35] and basically says what they're doing. Like, what are you up to this week? Oh, we're trying to figure out what's going on with that, you know, stylus-shaped problem we were talking about at the last demo. Or, oh, we're in essay-writing mode, we're hoping to get the first draft done this week. Or, you know, just whatever high-level objectives the team has. And then I'll ask the question: well, do you expect to have anything for show and tell on Friday? And every week on Friday we have show and tell, or every other week. And at show and tell, it's whatever you've got that you want input on, or just a deadline for, that you can share. Made some benchmark showing that this code is now a hundred times faster? Great, bring it to show and tell. Got that tricky, you know, user interaction running real smooth? Bring it to show and tell. Built a whole new prototype of a new kind of [00:39:35] note-taking app? Awesome, come and see. And different folks and different projects have taken different approaches to this. What has been most effective, I'm told by a bunch of people, is kind of approaching it like a little mini conference talk. I personally err more on the side of a more casual and informal thing, and those can be good too.
Just from like a personal alignment, getting-things-done perspective. What I've heard from people doing research who want to get useful feedback is that when they go in having sort of rehearsed how to explain what they're doing, then how to show what they've done, and then what kind of feedback they want, not only do they get really good feedback, but also that process of making sure that the demo you're gonna do will actually run smoothly and be legible to the rest of the group [00:40:35] forces you, again, just like the writing, it forces you to think about what you're doing and why you made certain choices, and to think about which ones people are gonna find dubious, and tell them to either ignore that cuz it was a stand-in, or let's talk about that cuz it's interesting. And that little cycle is really good. And people often come every two weeks for that [00:40:59] Ben: Yeah. [00:41:01] Peter: when they're in active mode. And so not always, but two weeks feels about like the right cadence to have something. And sometimes people will come and say, I got nothing this week, let's do it next week. It's fine. And the other thing we do with that time is we alternate with what we call zoom outs, because they're on Zoom and I have no sense of humor, I guess. But they're based on the old "You and Your Research" Hamming talk, where the idea is that, at least for a little while every week, [00:41:35] we all get together and talk about something bigger picture that's not tied to any of our individual projects. Sometimes we read a paper together, sometimes we talk about an interesting project somebody saw, you know, in the world. Sometimes it's skills sharing. Sometimes it's, you know, just like, here's how I make coffee or something, right? Just anything that is bigger picture or out of the day-to-day, philosophical stuff.
We've read Illich and Ursula Franklin. People love it. [00:42:10] Ben: I like that a lot. And one thing that I'm still wondering about is, on sort of a technical level, are there some parts of the lab working on things that other parts of the lab don't get? Like, they know, oh, this person's working on [00:42:35] ink, but they kind of have no idea how ink actually works? Or is it something where everybody in the lab can have a fairly detailed technical discussion with anybody else? [00:42:45] Peter: Oh no. I mean, okay, so there are interesting interdependencies. So some projects will consume the output of past projects or build on past projects. And that's interesting cuz it can create almost like industry-style production dependencies, where one team wants to go be doing some research, the local-first people are trying to work on a project, somebody else is using Automerge and they have bugs, and it's like, oh. But again, this is why we have those Monday sort of conversations, right? But I think the teams are all quite independent. They have their own GitHub repositories. They make their own technology decisions. They use different programming languages. They build on different stacks, right? Like the ink team is often building for iPad, because that's the only place we can compile [00:43:35] ink-rendering code to get low enough latency to get the experiences we want. We've given up on the browser, we can't do it. But the local-first group, for various reasons, has abandoned Electron and all of these runtimes and mostly just builds stuff for the web now, because it actually works and you spend way fewer calories trying to make the damn thing go if you don't have to fight Xcode and all that kind of stuff.
And again, so it really varies, and people choose different things at different times, but no, it's not like we are doing code review for each other or getting into the guts. It's much more high level. Like, you know, why did you make that choice? What is your programming model for this canvas you're working on? How does this thing relate to that thing? Why does that lay out horizontally? It feels hard to parse the way you've shown that, you know, whatever. [00:44:30] Ben: Okay, cool. That makes sense. The reason I ask [00:44:35] is I am just always thinking about how related projects inside of a single organization need to be. Like, is there sort of an optimum amount of relatedness? [00:44:50] Peter: I view them all as aspects of the same thing, and I think that's an important thing we didn't talk about. The goal of Ink & Switch is to give rise to a new kind of computing that is more user-centric, that's more productive, that's more creative in, like, a very raw sense. We want people to be able to think better thoughts, to produce better ideas, to make better art, and computers can help them with that in ways that they currently aren't. [00:45:21] Ben: Yeah. [00:45:25] Peter: Whether you're working on ink, or local-first software, or malleable software, or media canvases, or whatever domain you are working in, it [00:45:35] is the same thing. It is an ingredient, it is an aspect, it is a dimension of one problem. And so in some sense, all of this adds together to make something. Whether it's one thing or a hundred things, whether it takes five years or 50 years, you know, we're all going to the same place together, but on many different paths and at different speeds and with different confidence, right? And so in the small, these things can be totally unrelated, but in the large, they all are part of one mission.
And so when you say, how do you bring these things under one roof, when should they be under different roofs? It's like, well, when someone comes to me with a project idea, I ask, do we need this to get to where we're going? [00:46:23] Ben: Yeah, [00:46:24] Peter: And if we don't need it, then we probably don't have time to work on it because there's so much to do. And you know, there's a certain openness to experimentation and, [00:46:35] and uncertainty there. But that, that's the rubric that I use as the lab director is this, is this on the critical path of the revolution?
A conversation with Tim Hwang about historical simulations, the interaction of policy and science, analogies between research ecosystems and the economy, and so much more. Topics Historical Simulations Macroscience Macro-metrics for science Long science The interaction between science and policy Creative destruction in research “Regulation” for scientific markets Indicators for the health of a field or science as a whole “Metabolism of Science” Science rotation programs Clock speeds of Regulation vs Clock Speeds of Technology References Macroscience Substack Ada Palmer's Papal Simulation Think Tank Tycoon Universal Paperclips (Paperclip maximizer html game) Pitt Rivers Museum Transcript [00:02:02] Ben: Wait, so tell me more about the historical LARP that you're doing. Oh, [00:02:07] Tim: yeah. So this comes from like something I've been thinking about for a really long time, which is You know in high school, I did model UN and model Congress, and you know, I really I actually, this is still on my to do list is to like look into the back history of like what it was in American history, where we're like, this is going to become an extracurricular, we're going to model the UN, like it has all the vibe of like, after World War II, the UN is a new thing, we got to teach kids about international institutions. Anyways, like, it started as a joke where I was telling my [00:02:35] friend, like, we should have, like, model administrative agency. You know, you should, like, kids should do, like, model EPA. Like, we're gonna do a rulemaking. Kids need to submit. And, like, you know, there'll be Chevron deference and you can challenge the rule. And, like, to do that whole thing. Anyways, it kind of led me down this idea that, like, our, our notion of simulation, particularly for institutions, is, like, Interestingly narrow, right? 
And particularly when it comes to historical simulation, where like, well we have civil war reenactors, they're kind of like a weird dying breed, but they're there, right? But we don't have like other types of historical reenactments, but like, it might be really valuable and interesting to create communities around that. And so like I was saying before we started recording, is I really want to do one that's a simulation of the Cuban Missile Crisis. But like a serious, like you would like a historical reenactment, right? Yeah. Yeah. It's like everybody would really know their characters. You know, if you're McNamara, you really know what your motivations are and your background. And literally a dream would be a weekend simulation where you have three teams. One would be the Kennedy administration. The other would be, you know, Khrushchev [00:03:35] and the Presidium. And the final one would be the, the Cuban government. Yeah. And to really just blow by blow, simulate that entire thing. You know, the players would attempt to not blow up the world, would be the idea. [00:03:46] Ben: I guess that's actually the thing to poke, in contrast to Civil War reenactment. Sure, like you know how [00:03:51] Tim: that's gonna end. Right, [00:03:52] Ben: and it, I think it, that's the difference maybe between, in my head, a simulation and a reenactment, where I could imagine a simulation going [00:04:01] Tim: differently. Sure, right. [00:04:03] Ben: Right, and, and maybe like, is the goal to make sure the same thing happened that did happen, or is the goal to like, act? faithfully to [00:04:14] Tim: the character as possible. Yeah, I think that's right, and I think both are interesting and valuable, right? 
But I think one of the things I'm really interested in is, you know, I want to simulate all the characters, but I think one of the most interesting things reading, like, the historical record is just operating under deep uncertainty about what's even going on, right? Like, for a period of time, the American [00:04:35] government is not even sure what's going on in Cuba, and, like, you know, this whole question of, like, well, do we preemptively bomb Cuba? We don't even know if the warheads on the island are active. And I think I would want to create similar uncertainty, because I think that's where the strategic vision comes in, right? That you have the full pressure of, like, maybe there's bombs on the island, maybe there's not even bombs on the island, right? And kind of creating that dynamic. And so I think simulation is where there's a lot, but I think even reenactment for some of these things is sort of interesting. Like, we talk a lot about, like, oh, the Cuban Missile Crisis. Or, like, the other joke I had was, we should do the Manhattan Project, but the Manhattan Project as, like, historical reenactment, right? And it's kind of like, you know, we have these very off-the-cuff or kind of stereotyped visions of how these historical events occur. And they're very stylized. Yeah, exactly, right. And so the benefit of a reenactment that is really in detail Yeah. is, like, oh yeah, there's this one weird moment, you know, that ends up being really revealing. And so even if [00:05:35] you can't change the outcome, I think there's also a lot of value in just doing the exercise. Yeah. Yeah. [00:05:40] Ben: The thought of: in order to drive towards this outcome that I know actually happened, I wouldn't, as the character, have needed to do X. That's like a weird, nuanced, unintuitive thing, [00:05:50] Tim: right?
Right, and there's something I think about even building into the game, right, which is at the very beginning the Russian team can make the decision on whether or not they've even actually deployed weapons into Cuba at all. Yeah. Right, and so, like, I love that kind of outcome, right. And I think that's great because, like, a lot of this happens on the background of, like, we know the history. Yeah. Right? And so I think, like, having the US team put under some pressure of uncertainty. Yeah. About like, oh yeah, they could have made the decision at the very beginning of this game that this is all a bluff, doesn't mean anything. Like, it's potentially really interesting and powerful, so. [00:06:22] Ben: One precedent I know for this, completely different historical era, but there's a historian, Ada Palmer, who runs [00:06:30] Tim: a simulation of a papal election in her class every year. That's so good. [00:06:35] And [00:06:36] Ben: it's, there, you know, like, it is not a simulation. [00:06:40] Tim: Or, [00:06:41] Ben: sorry, excuse me, it is not a reenactment, in the sense that the outcome is indeterminate. [00:06:47] Tim: Like, the students [00:06:48] Ben: can determine the outcome. But... what tends to happen is, like, structural factors emerge, in the sense that there's always a war. Huh. The question is, who's on which sides of the war? Right, right. And what do the outcomes of the war actually entail? That's right. Who [00:07:05] Tim: dies? Yeah, yeah. And I [00:07:07] Ben: find that it sort of gets at the heart of the great [00:07:12] Tim: man theory versus the structural forces theory. That's right. Yeah. Like, how much can these structural forces actually be changed? Yeah.
And I think that's one of the most interesting parts of the design that I'm thinking about right now, is kind of like, what are the things that you want to randomize to impose different types of structural factors that could have been in that event? Right? Yeah. So one of the really big parts of the debate at ExComm in the [00:07:35] early phases of the Cuban Missile Crisis is, you know, McNamara, who, right, he runs the Department of Defense at the time. His point is basically like, look, whether you have bombs in Cuba or you have bombs in Russia, the situation has not changed from a military standpoint. You can fire an ICBM; it has exactly the same implications for the U.S. And that's basically his argument in the opening phases of the Cuban Missile Crisis. Yeah. Which is actually pretty interesting, right? Because that's true. But, like, Kennedy can't just go to the American people and say, well, we've already had missiles pointed at us; some more missiles off, you know, the coast of Florida is not going to make a difference. Yeah. And so that deep politics, and particularly the politics of the Kennedy administration being seen as weak on communism, Yeah. is, like, a huge pressure on all the activity that's going on. And so it's almost kind of interesting thinking about the Cuban Missile Crisis not as, like, you know, us about to blow up the world because of a truly strategic situation, but more because the local politics make it so difficult to create, like, you know, situations where both sides can back down [00:08:35] successfully. Basically. Yeah. [00:08:36] Ben: The one other thing that my mind goes to, actually to your point about model UN in schools, Huh, right, is: okay, what if you use this as a pilot, and then you get people to do these [00:08:49] Tim: simulations at [00:08:50] Ben: scale. Huh. And that's actually how we start doing historical counterfactuals. Huh.
Where you look at, okay, you know, a thousand schools all did a simulation of the Cuban Missile Crisis, and in those, you know, 700 of them blew [00:09:05] Tim: up the world. Right, right. [00:09:07] Ben: And I think that's the closest [00:09:10] Tim: thing you can get to, like, running the tape again. Yeah. I think that's right. And yeah, so I think it's a really underused medium in a lot of ways. And I think particularly, like, pedagogically, it's interesting that it seems to me that there was a moment in American pedagogical history where, like, this was a good way of teaching kids about different types of institutions. But it [00:09:35] hasn't really matured since that point, right? Of course, we live in all sorts of interesting institutions now, and under all sorts of different systems that we might really want to simulate. Yeah. And so, yeah, there's this whole idea that there's lots of things you could teach if we kind of opened up this way of thinking about educating about institutions. Right? So [00:09:54] Ben: that is so cool. Yeah, I'm going to completely, [00:09:59] Tim: Change. Sure. Of course. [00:10:01] Ben: So I guess, and the answer could be no, but is there a connection between this and your sort of newly launched Macroscience [00:10:10] Tim: project? There is and there isn't. Yeah, you know, I think, like, the whole bet of Macroscience, which is this project that I'm doing as part of my IFP fellowship, Yeah. is really the notion that, like, okay, we have all these sort of interesting results that have come out of metascience, that kind of give us the beginnings of a shape of, like, okay, this is how science might work and how we might get progress to happen. And, you know, we've got [00:10:35] like a bunch of really compelling hypotheses. Yeah.
And I guess my bit has been like, I kind of look at that and I squint and I'm like, we're actually kind of in the early days of, like, macro econ, but for science, right? Which is like, okay, well, now we have some sense of the dynamics of how the science thing works. What are the levers that we can start pushing and pulling, and what are the dials we could be turning up and turning down? And, you know, I think there is this kind of transition that happens in macro econ, which is, like, we have these interesting results and hypotheses, but there's almost another generation of work that needs to happen into being like, oh, you know, we're gonna have this thing called the interest rate. Yeah. And then we have all these ways of manipulating the money supply, and this is a good way of managing the economy. Yeah, right. And I think that's what I'm chasing after with this Substack, but hopefully the idea is to build it up into a more coherent framework of ideas about, like, how do we make science policy work in a way that's better than just, like, more science now quicker, please? Yeah, right, which is I think where [00:11:35] we're very much at at the moment. Yeah. And in particular, I'm really interested in the idea of chasing after science almost as, like, a dynamic system, right? Which is that, like, the policy levers that you have, you would want to, you know, tune up and tune down, strategically, at certain times, right? Just like the way we think about managing the economy, right? Where you're like, you don't want the economy to overheat. You don't want it to be moving too slow either, right? Like, I am interested in kind of, like, those types of dynamics that need to be managed in science writ large. And so that's kind of the intuition of the project. [00:12:04] Ben: Cool.
I guess, like, looking at macro econ, how did we even decide, [00:12:14] Tim: how did we even decide that the things that we're measuring are the right things to measure? Right? Like, [00:12:21] Ben: isn't it kind of a historical contingency that, you know, we care about GDP [00:12:27] Tim: and the interest rate? Yeah. I think that's right. I mean, in some ways it's a normative triumph, [00:12:35] right, I think is the argument. And, you know, a lot of people, you hear this argument, and it'll be like, and all econ is made up. But, like, I don't actually think that that's the direction I'm moving in. It's true, a lot of the things that we selected are arguably arbitrary. Yeah. Right, like we said, okay, we really value GDP because it's a very imperfect but rough measure of the economy, right? Yeah. Or, like, oh, we focus on, you know, the money supply, right? And I think there's kind of two interesting things that come out of that. One of them is, like, there's this normative question of, okay, what are the building blocks that we think can really shift the financial economy writ large, right, of which money supply makes sense, right? But then the other one, which I think is so interesting, is, like, there's a need to actually build all these institutions that actually give you the lever to pull in the first place, right? Like, without a Federal Reserve, it becomes really hard to do monetary policy. Right. Right? Like, without a notion of, like, fiscal policy, it's really hard to do, like, Keynesian demand-side stuff. Right. Right? And so, like, I think there's another project, which is a [00:13:35] political project, to say... okay, can we do better than just grants? Like, can we think about this in a more holistic way than simply we give money to the researchers to work on certain types of problems?
And so this kind of leads to some of the stuff that I think we've talked about in the past, which is, like, you know, I'm obsessed right now with, like, can we influence the time horizon of scientific institutions? Like, imagine for a moment we had a dial where we're like: on average, scientists are going to be thinking about a research agenda which is 10 years from now versus next quarter. Right. Like, and I think there's benefits and deficits to both of those settings. Yeah. But man, do I hope that we have a government system that allows us to kind of dial that up and dial that down as we need it. Right. Yeah. [00:14:16] Ben: Perhaps, I guess, a question of, like, where the analogy holds and breaks down that I wonder about is: when you're talking about the interest rate for the economy, it kind of makes sense to say [00:14:35] what is the time horizon that we want financial institutions to be thinking on. That's roughly what the interest rate is for. But, and maybe this is, like, I'm too, [00:14:49] Tim: like, I'm too close to the macro, [00:14:51] Ben: but thinking about the fact that you really want people doing science on, like, a whole spectrum of timescales. And, like, this is an ill-phrased question, [00:15:06] Tim: but, like, I'm just trying to wrap my mind around it. Are you saying, basically, like, do uniform metrics make sense? Yeah, exactly. For [00:15:12] Ben: like, timescale. I guess maybe it's just an aggregate thing. [00:15:16] Tim: Is that? That's right. Yeah, I think that's a good critique. And I think, like, again, there's definitely ways of taking the metaphor too far. Yeah. But one of the things I would say back to that is: it's fine to imagine that we might not necessarily have an interest rate for all of science, right?
So, like, you could imagine saying, [00:15:35] okay, for grants above a certain size, we want to incentivize certain types of activity. For grants below a certain size, we want different types of activity. Right. Another way of slicing it is, for this class of institutions, we want them to be thinking on these timescales versus those timescales. Yeah. The final one I've been thinking about is, another way of slicing it is, let's abstract away institutions and just think about: what is the flow of all the experiments that are occurring in a society? Yeah. And are there ways of manipulating, like, the relative timescales there, right? And that's almost like a supply-based way of looking at it, which is... all science is doing is producing experiments, which is, like, true macro, right? Like, it's almost offensively simplistic. And then I'm just saying, like, okay, well, then, yeah, what are the tools that we have to actually influence that? Yeah, and I think there's lots of things you could think of. Yeah, in my mind. Yeah, absolutely. What are some that you're thinking of? Yeah, so I think, like, the two that I've been playing around with right now, one of them is, like, the idea of changing the flow of grants into the system. So, one of the things I wrote about in Macroscience just the past week was to think [00:16:35] about, like, sort of what I call long science, right? And so the notion here is that, like, if you look across the scientific economy, there's kind of this rough correlation between size of grant and length of grant. Right, where basically what it means is that long science is synonymous with big science, right? You're gonna do a big, ambitious project. Cool.
You need lots and lots and lots of money. Yeah. And so my piece just briefly kind of argues: but we have these sort of interesting examples, like, you know, like the Framingham Heart Study, which are basically low expense, taking place over a long period of time, and you're like, we don't really have a whole lot of grants like that. Yeah. Right? And so the idea is, like, could we encourage that? Like, imagine if we could just increase the flow of those types of grants. That means we could incentivize more experiments that take place at low cost over the long term. Yeah. Right? Like, you know, and this kind of gets to this sort of interesting question, which is, like, okay, so what's the GDP here? Right? Like, or is that a good way of cracking some of the critical problems that we need to crack right now? Right? Yeah. And this is kind of where the normative part gets into [00:17:35] it, which is like, okay. So, you know, one way of looking at this is the national interest, right? We say, okay, well, we really want to win on AI. We really want to win on, like, bioengineering, right? Are there problems in that space where, like, really long term, really low cost is actually the kind of activity we want to be encouraging? The answer might be no, but I think, like, it's useful for us to have that color in our palette of things that we could be doing Yeah. in, like, shaping the dynamics of science. Yeah. Yeah. [00:18:01] Ben: I mean, one of the things that I feel like is missing from the metascience discussion Mm-Hmm. is even just, what are those colors? Mm-Hmm. Like, what are the different, almost, parameters of [00:18:16] Tim: of research. Yeah. Right, right, right.
And one of the things I've been thinking about, which I may write about at some point, is a view that's going to piss people off in some ways, because where it ultimately goes is the idea that the scientist, or [00:18:35] science, is a system that's subject to the government, or to a policymaker, or a strategist. Which it obviously is, right? But I think we've worked very hard to believe that the scientific market is its own independent thing, and that touching or messing with it is not a thing you should do. But we already are messing with it; that's kind of my point. I've been reading a lot about Keynes, and it's interesting how much this mirrors Great Depression-era economic thinking, where you're basically saying: the market takes care of itself, don't intervene; in fact, intervening is the worst possible thing you could do, because you're only going to make things worse. And look, there are definitely examples of command-economy science that don't work. But most mature people who work in economics would say there's some room for at least guiding the system, and that keeping it in balance is [00:19:35] a thing that should be attempted. That's the argument I'm making here. [00:19:41] Ben: I mean, that's the meta-meta thing, right? Even what level of intervention: what are the ways in which you can usefully intervene, and what are the things that are foolish and kind of create the... [00:20:01] Tim: Command economy. That's right. Yeah, exactly.
And I think the way through is maybe in the way I'm talking about, which is: you can imagine lots of bad things happening when you attempt to pick winners. The policymaker, whoever we want to imagine that as, the NSF or NIH or whatever, sitting in their government bureaucracy: are they well positioned to make a choice about who's going to be the right solution to a problem? Maybe yes, maybe no; I think we can have a debate about that. But there's a totally reasonable position that they're not in it, so they're not well positioned to make that call. [00:20:35] But are they well positioned to say, if we gave them a dial, 'we want researchers to be thinking about this time horizon versus that time horizon'? That's a control they actually may be well positioned to inform on, even as outsiders. And some of this, like the piece I'm working on right now, which will probably come out Tuesday or Wednesday, is about encouraging creative destruction. I'm really intrigued by the idea that academic fields can get so big that they impede progress. This is effectively intellectual antitrust: the role of the scientific regulator is to say, these fields have gotten so big that they are actively reducing our ability to have good dynamism in the marketplace of ideas, and in this case we will announce new grant policies that attempt to break them up. That's pretty spicy for a funder to do, but maybe it's actually part of their role, and maybe we should normalize that [00:21:35] being part of their role.
[00:21:37] Ben: I'm imagining a world where this sort of macroscience is as divisive as [00:21:47] macroeconomics. Because you have your hardcore free-market people: zero government intervention, no antitrust, abolish the Fed, all of that. And I look forward to the day when there are people doing the same thing for research. [00:22:06] Tim: Yeah, that's right. I think part of a lot of metascience stuff is this interesting tension: politically, a lot of the people in the space are pro-free-market; they're liberals in the little-l sense. At the same time, [00:22:35] it's true that laissez-faire science has failed, because we have all these examples of progress slowing down. So there is actually this interesting tension, which is: to what degree are we okay with intervening in science to get better outcomes? [00:22:43] Ben: Well, I might put on my hat and say, maybe this is me saying truly laissez-faire science has never been tried. That may be kind of my position. And I would argue that since 1945 we haven't had laissez-faire [00:23:03] science. [00:23:04] Tim: Oh, interesting. [00:23:04] Ben: Right. And so, in the same way that a very hard job for macroeconomics is to say 'do we need more or less intervention, and what is the case there?', I think it's the same thing here.
You know, a large amount of science funding does come from the government, and the government is opinionated about what sorts of things [00:23:30] it funds. And you can go really deep into that. [00:23:35] Tim: Yeah, that's actually interesting. That flips it: you're basically saying the current state of science is right now over-regulated? [00:23:44] Ben: Or badly regulated. That is the argument I would make, very concretely: it's badly regulated. And I might almost argue that it's both over- and under-regulated, in the sense that (this is my whole theory) we need some pockets where it's much less regulated, and then some pockets where you really say: no, you don't get to tune this to whatever your project or program is; you're going to be working with [00:24:19] these people to do this thing. Tim: Yeah. And I think there actually are interesting analogies in the economic-regulation, economic-governance world, where the notion is that markets generally work well; it's a great tool; let it run. [00:24:35] But there are certain failure states that actually require outside intervention. And what's interesting to think about in a macroscientific context, if you will, is: what are those failure states for science? You could imagine a policy rule where the policymaker says, we don't intervene until we see the following signals emerging in a field or in a region. That's the trigger: we're now in recession mode; there have been enough quarters of this problem of more papers but fewer results.
Now we have to take action. Ben: That's cool. That would be very interesting. Tim: And I think that's good, because, again, this is why it's a really exciting time: metascience has produced these really interesting results, and now we're in the mode of asking, okay, on that policymaker dashboard, what's the meter we're checking to say: are we doing well or poorly? Is this going well or poorly? That becomes the next question in making this practicable [00:25:35] for actual policy. [00:25:38] Ben: One of my frustrations with metascience is that I think it's under-theorized, in the sense that people are generally doing these studies where they look at whatever data they can get, as opposed to asking: what data should we be looking at? What should we be looking for? I would really like it to be flipped: here's what, at least ideally, we would want to measure; maybe that's imperfect, and then we find proxies for it; as opposed to just saying, well, here's what we can measure, so it becomes the proxy. [00:26:17] Tim: That's right, exactly. And part of this is also widening the Overton window, which I think the metascience community has done a good job of: widening the Overton window of what funders, and various existing incumbent actors, are willing to [00:26:35] do. Because one way of getting that data is to run interesting experiments in this space.
One of the things I'm really obsessed with right now is: imagine if you could change the overhead rate that universities charge, on a national basis. What does that do to the flow of money through science? That's one dial that's actually on the shelf: we really do have the ability to influence it if we wanted to. Is that something we should be running experiments against, and seeing what the results are? [00:27:00] Ben: Another would be earmarking: how much money is actually earmarked [00:27:05] for different things. Tim: That's right; how easy it is to move money around. I actually heard a wild story yesterday. There's apparently a very wealthy donor who convinced the state of Washington's legislature to earmark money for the UW CS department; it's written into law that there's a flow of money that goes directly to the CS department. Ben: I don't think CS departments need more money. Tim: I [00:27:35] know, I know, but it's a really interesting outcome: a very clear case of a direct subsidy not just to a particular topic, but to a particular department, which is an interesting experiment. I don't know what's been happening there. Ben: A natural experiment, [00:27:50] totally. Has anybody written down (I assume the answer is no, but it would be very interesting) a list of all the things you [00:28:00] could possibly want to pay attention to? Speaking of CS, it would be very interesting to see what fraction of the people who get PhDs in an area stay in that area,
like, going back to the [00:28:15] health of a field or something. Tim: Yeah, I think that's right, and those types of indicators are interesting. And then, in the spirit of it being a dynamic system: a few years back I read this great biography by Sebastian Mallaby called The Man Who Knew, a biography of Alan Greenspan. If you ever want to read 800 pages about [00:28:35] Alan Greenspan, that's the book for you. It's very good. One of the most interesting parts is that there's a battle when Greenspan becomes head of the Fed, where he's extremely old school: what he wants to do is literally look at reams of data from the steel industry, because that's where he got his start. And he's basically at war with a bunch of career people at the Fed who rely much more on statistical models for predicting the economy. What's really interesting is that for a period of time Greenspan actually has the edge, because he realizes really early on that there are changes in the metabolism of the economy that mean that raising or lowering the interest rate has very different effects than it did 20 years before. And that's something I'm really interested in for science: when we say 'science,' people often imagine this amorphous blob, but the metabolism is changing all the [00:29:35] time. What we mean by science now is very different from what we meant even 10 to 20 years ago. Ben: Yes. Tim: And it also means that all of our tactics need to keep up with that change, right?
So, to your question about whether anyone has compiled this list of science health indicators (the health of science is maybe the right way of thinking about it): those indicators may mean very different things at different points in time. So part of it is trying to understand, what is the state of this economy of science that we're talking about? [00:30:07] Ben: You're kind of preaching to the choir, in the sense that I'm always frustrated with the level of nuance from many people who are discussing, quote, 'science' and research, because they very often have not actually gone in and been part of the system. And I'm open to the fact that [00:30:35] you don't need to have been a professional researcher to have an opinion or come up with ideas about it. But at the same time... do you think about that tension at all? Tim: Yeah, and I think it's actually incredibly valuable. I think of The Death and Life of Great American Cities, right? There are a lot of interesting things about that book,
But one of the most interesting is the notion that you had a whole cabal of urban planners with a very specific vision about how to get cities to work, and it just turns out that if you're living in SoHo at a particular time, and you walk along the street and take a look at what's going on, there are always actually super valuable things to know that are only available because you're at that ultra-ultra-micro level. And I think there's real potential value in that here. One of the things I would love to be able to set up in the community of metascience, or whatever you want to call it, [00:31:35] is the idea that you could afford to do very short tours of duty, where you're literally just spending a day in a lab, and to have a bunch of people go through that. I think that would be really, really helpful, so thinking about what the rotation program for that looks like would be cool. You should do a six-month stint at the NSF just to see what it looks like. Partly I'm selfish, I would want that, but I also think it would allow the community to think about this in a much more applied way. [00:32:08] Ben: I think that's the meta-question there for everything, right? What am I trying to say... it's possible both to be too in the weeds and also too high-level. What is the right amount, and who should [00:32:31] be talking to whom? Tim: That's right.
Yeah, and it's like what you were saying earlier: the [00:32:35] success of macroscience will be whether or not it's as controversial as macroeconomics. I actually hope that's the case: people saying, 'this is all wrong, you're approaching it from too high, too abstract a level.' I think the other benefit of doing this, beyond the level of insight, is that one of the projects I have is that we need to be defeating the love of metascience aesthetics, as opposed to actual metascience. A lot of people in metascience love science; that's why they're excited to talk not about any specific science but about science in general. But that intuition also leads us to have very romantic ideas of what science is, how science should look, and what kinds of science we want. The mission is progress; the mission isn't science. So I think we have to be a lot more functional. And again, the benefit of these types of rotations, where you're just in a lab for a month, is that you get a lot more of a sense of: oh, okay, this is what it [00:33:35] looks like. [00:33:39] Ben: I'd like to do the same thing for manufacturing. And I want everybody to be rotating, in the sense of: have the scientists go and be in a manufacturing lab, [00:33:47] Tim: Yeah. [00:33:48] Ben: and be told: look, you need to be thinking about getting this thing to work in this giant flow pipe instead of a [00:33:54] test tube. Tim: That's right. Yeah.
[00:33:57] Ben: Unfortunately, the problem is that if everybody was rotating through all the [00:34:03] things they need to rotate through, we'd never get anything done. [00:34:06] And that's kind of the problem. [00:34:08] Tim: Well, to bring it all the way back: you started this question on macroscience in the context of transitioning away from all this weird Cuban Missile Crisis simulation stuff. One way of thinking about this is: okay, if we can't literally send you into a lab, what are good simulations to give people good intuitions about the dynamics in the space? That's potentially quite interesting. Normalize the weekend-long simulation. I love the idea of, [00:34:35] say, getting to reenact the publication of a prominent scientific paper. It's kind of a funny idea. [00:34:44] Ben: Or even trying to get research funded, right? You have this idea, you want... [00:34:55] Tim: Yeah, this is actually a project; I've been talking to Zach Graves about it. I really want to do one, a game we're calling Think Tank Tycoon, which would be a strategy board game that simulates what it's like to run a research center. But to broaden that idea somewhat, it's interesting to think about 'model NSF,' where you're in the hot seat and you get to decide how to do granting. [00:35:22] Ben: You give a grant to a stupid thing,
and some congressperson's going to come banging [00:35:26] on your door. Tim: Yeah, simulating those dynamics might actually be really, really helpful, at the very least, even if it's not a one-for-one simulation of the real world, just to build some [00:35:35] common intuitions about the pressures operating here. [00:35:38] Ben: I think the bigger point is that simulations are maybe underrated [00:35:42] as a teaching tool. Tim: I think so, yeah. Do you remember the paperclip-maximizer HTML game? [00:35:48] Ben: I'm kind of obsessed with it, because somehow the human brain, really quickly, with just some numbers on the screen that you can change, and some back-end dynamic system that updates based on those numbers, starts to really get an intuition for the system dynamics. So I want to see more plain-HTML, basically spreadsheet-backend [00:36:20] games. Tim: Right, the most lo-fi possible. I think that's helpful, particularly in a world where you're thinking, let's simulate these weird new grant structures we might try out. We've got a bunch [00:36:35] of hypotheses, and it's really expensive and difficult to get experiments done. Does a simulation with a couple of well-informed people give us at least some inclination of where it might go, or what the unintended consequences are? [00:36:51] Ben: Are there disciplines besides the military that use simulations [00:36:56] successfully? Tim: Not really.
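[The kind of "spreadsheet-backend" game described above can be sketched in a few lines: a handful of state variables, one dial for the player, and a back-end update rule applied each tick. This Python sketch is purely illustrative; the variable names, dynamics, and coefficients are all invented, not anything from the conversation or from a real model.]

```python
# A minimal sketch of a lo-fi dynamic-system game: a few state variables,
# one player-controlled dial, and an update rule applied every tick.
# All names and coefficients here are invented for illustration.

def tick(state, grant_rate):
    """Advance the toy science economy by one step.

    state: dict with 'labs', 'papers', and 'funding'
    grant_rate: the player's dial, i.e. the fraction of available
        funding disbursed as grants this tick
    """
    disbursed = state["funding"] * grant_rate
    state["funding"] -= disbursed
    state["labs"] += disbursed / 10.0           # every 10 units funds a lab
    state["papers"] += state["labs"] * 0.5      # labs produce papers
    state["funding"] += state["papers"] * 0.01  # papers attract new funding
    return state

state = {"labs": 1.0, "papers": 0.0, "funding": 100.0}
for _ in range(20):
    state = tick(state, grant_rate=0.1)
```

[The intuition-building described here would come from turning the `grant_rate` dial and watching the trajectories respond, not from the realism of the coefficients.]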
And what's kind of interesting is that I think it had a vogue that has since dissipated. The notion of a game as the way you build understanding of a strategic situation has kind of disappeared. A lot of it was driven by RAND, which actually had a huge influence, not just on the military: there are a bunch of corporate games that were invented in the same period, where you determine how much your steel production is, that were used to teach MBAs. But yeah, it's been relatively limited. [00:37:38] Ben: So, other things, just to shift gears: another thing we haven't really talked about, but that actually plays into all of this, is thinking about better [00:37:50] ways of regulating technology. I know you've done a lot of thinking about that, and maybe this is another thing to simulate. [00:38:00] Tim: Yeah, a model OSTP. [00:38:04] Ben: But it's maybe a thing where, and this is actually a prime example where the particulars really matter, you can't just regulate quote-unquote 'technology.' There are some technologies you want to regulate very closely and tightly, and others you want to regulate very [00:38:21] loosely. Tim: Yeah, I think that's right. And I think it's tied to the macroscientific project, if you will: we often have a notion of science regulation as, [00:38:35] literally, the government coming in and saying, here are the constraints we want to put on the system. And there are obviously lots of different ways of doing that.
And there are lots of contexts in which that's appropriate. But for a lot of the technologies we confront right now, the change is so rapid that the obvious question, no matter which emerging technology we're talking about, becomes: how does your clock speed of regulation keep up with the clock speed of the technology? And the answer is frequently: it doesn't. You run into these absurd situations where a rule is already out of date by the time it goes into force, everybody creates some notional compliance with it, and, in terms of improving safety outcomes, for instance, it hasn't actually improved safety outcomes. And I could make an argument that the problem is becoming more difficult with time. If you really believe that the pace of technological change is faster than it used to be, then it's possible there was a point at which government, [00:39:35] or a body like Congress, could actually keep pace with technology successfully, to make sure it conformed with societal interests. [00:39:46] Ben: Do you think that was actually ever the case, or is it just that we didn't [00:39:50] have as many regulations? Tim: I would say it was twofold. Let's just talk about Congress; it's really hard to talk about government as a whole. I think Congress was both better advised and a more efficient institution, which means it moved faster than it does today. Simultaneously, for a couple of reasons we can speculate on, science, or in the very least technology,
moved slower than it does today. So what has actually happened is that both dynamics have caused problems: the organs of government are moving slower at the same time as science is moving faster. And I think we've passed some inflection [00:40:35] point where it now seems really hard to craft, take the AI case, a sensible framework that would apply to LLMs. I was doing a little recap of recent interpretability research, and I took a step back and realized, oh, all these papers are from May 2023, and these are all big results, all a big deal. It's very, very fast. So that's what I would say to that. I don't know, do you feel differently? Do you feel like Congress has never been able to keep up? [00:41:04] Ben: Well, I wonder. I'm perhaps an outlier, in that I'm skeptical of the claim that the pace of technological change overall has sped up significantly. The pace of software change, certainly; maybe software as a fraction of technology has sped up. And maybe this is where regulation needs to [00:41:35] go into particulars: tuning the regulation to the characteristic timescale of whatever [00:41:40] technology we're talking about. [00:41:42] But outside of software, if anything, I feel like the pace of technological change [00:41:52] has slowed down. [00:41:55] This is me putting on my [00:41:57] stagnationist bias. Tim: And, given the argument I just made, would you say that means it should actually be easier than ever to regulate technology?
The targets are moving slower, right? [00:42:12] Ben: Yeah. Or is the technology moving slower because of the regulation? [00:42:14] Tim: I guess there are compounding variables. [00:42:16] Ben: The easiest base case of regulating technology is saying: no, you can't have [00:42:20] any. Tim: Right, it can't change. That's easy to regulate. Ben: That's very easy to regulate. Tim: I buy that. [00:42:27] Ben: Whether it's easy to regulate well is the question: what do we want to lock in, and what don't we [00:42:31] want to lock in? Tim: Yeah, I think that's right. And I guess [00:42:35] what that moves me toward is: some people will conclude the argument I'm making by saying, 'so regulations are obsolete,' or 'so we shouldn't regulate, let the companies take care of it.' That's not the conclusion I come to. It just means we need better ways of regulating these systems, which basically requires government to think about moving to different parts of the chain than it might have touched in the past. For example, Caleb and I over at IFP just submitted a response to an RFI from DARPA. In part, they were asking how DARPA should play a role in dealing with ethical considerations around emerging technologies. But the deeper point we were making in our submission was simply that maybe science has changed in a way where DARPA can't be, or it's harder for DARPA to be, the originator of all these technologies.
So their place in the [00:43:35] ecosystem, the metabolism of technology, has changed, which requires them to rethink how they want to influence the system. And it may be more influence at the point of things getting out to market than at basic research in the lab, at least for some classes of technology where a lot of the work is happening in private industry, like AI. [00:43:55] Ben: I think the concept of the metabolism of science and technology is really powerful. How would you map that to the idea of there being a research ecosystem? This is incredibly abstract, but is it that the metabolism view says: we're going to ignore institutions for now, and the metabolism is literally just the flow [00:44:35] of ideas and outcomes; and then the ecosystem adds another layer and says there are institutions interacting with that flow? Tim: Sure. I think what's powerful about the metabolism view, or you might even think of it as a supply-chain view, to move it away from just gesturing at bio for no reason, is this. Particularly in foundation land, which I'm most familiar with,
There's a notion of like we're going to field build, and what that means is we're going to name a field and then researchers are going to be under this tent that we call this field, and then the field will exist. Yeah, and then the proper critique of a lot of that stuff is like researchers are smart. They just like go where the money is, and they're like, you want to call up, like, I can pretend to be nanotech for a few years to get your money. Like, that's no problem. I can do that. And so there's kind of a notion that, like, if you take the economy of science as, like, institutions at the very beginning, you actually miss the bigger [00:45:35] picture. Yes. Right? And so the metabolism view is more powerful because you literally think about, like, the movement of, like, an idea to an experiment to a practical technology to, like, something that's out in the world. Yeah. And then we basically say, how do we influence those incentives before we start talking about, like, oh, we announced some new policy that people just, like... cosmetically align their agendas to, yeah, and like if you really want to shape science, it's actually maybe arguably less about like the institution and more about like, yeah, the individual. Yeah, exactly. Like I run a lab. What are my motivations? Right? And I think this is like, again, it's like micro macro, right? It's basically if we can understand that, then are there things that we could do to influence at that micro level? Yeah, right. Which is I think actually where a lot of macro econ has moved. Right. Which is like, how do we influence like the individual firm's decisions. Yeah. To get the overall aggregate change that we want in the economy. Yeah. And I think that's, that's potentially a better way of approaching it. Right. A thing that I desperately [00:46:30] Ben: want now is, uhhuh, a. I'm not sure what they're, they're [00:46:35] actually called. Like the, you know, like the metal, like, like, like the [00:46:37] Tim: Krebs cycle.
Yeah, exactly. Like, like, like the giant diagram of, of like metabolism, [00:46:43] Ben: right. I want that for, for research. Yeah, that would be incredible. Yeah. If, if only, I mean, one, I want to have it on [00:46:50] Tim: my wall, and two, to just get across the idea that. [00:46:56] Ben: It is like, it's not, you know, basic research, applied [00:47:01] Tim: research. Yeah, totally. Right, right, right. When it goes to like, and what I like about kind of metabolism as a way of thinking about it is that we can start thinking about like, okay, what's, what's the uptake for certain types of inputs, right? We're like, okay, you know like one, one example is like, okay, well, we want results in a field to become more searchable. Well what's really, if you want to frame that in metabolism terms, is like, what, you know, what are the carbs that go into the system that, like, the enzymes or the yeast can take up, and it's like, access to the proper results, right, and like, I think that there's, there's a nice way of flipping in it [00:47:35] that, like, starts to think about these things as, like, inputs, versus things that we do, again, because, like, we like the aesthetics of it, like, we like the aesthetics of being able to find research results instantaneously, but, like, the focus should be on, like, okay, well, because it helps to drive, like, the next big idea that we think will be beneficial to me later on. Or like, even being [00:47:53] Ben: the question, like, is the actual blocker to the thing that you want to see, the thing that you think it is? Right. I've run into far more people than I can count who say, like, you know, we want more awesome technology in the world, therefore we are going to be working on [insert tool here] that actually isn't addressing, at least my, [00:48:18] Tim: my view of why those things aren't happening. Yeah, right, right.
And I think, I mean, again, like, part of the idea is we think about these as, like, frameworks for thinking about different situations in science. Yeah. Like, I actually do believe that there are certain fields, because of, like, ideologically how they're set up, institutionally how [00:48:35] they're set up, funding-wise how they're set up, that do resemble the block diagram you were talking about earlier, which is like, yeah, there actually is the, the basic research, like we can put, that's where the basic research happens. You could like point at a building, right? And you're like, that's where the, you know, commercialization happens. We pointed at another building, right? But I just happen to think that most science doesn't look like that. Right. And we might ask the question then, like, do we want it to resemble more of like the metabolism state than the block diagram state? Right. Like both are good. Yeah, I mean, I would [00:49:07] Ben: argue that putting them in different buildings is exactly what's causing [00:49:10] Tim: all the problems. Sure, right, exactly, yeah, yeah. Yeah. But then, again, like, then, then I think, again, this is why I think, like, the, the macro view is so powerful, at least to me, personally, is, like, we can ask the question, for what problems? Yeah. Right? Like, are there, are there situations where, like, that, that, like, very blocky way of doing it serves certain needs and certain demands? Yeah. And it's like, it's possible, like, one more argument I can make for you is, like, progress might be [00:49:35] slower, but it's a lot more controllable. So if you are in the, you know, if you think national security is one of the most important things, you're willing to make those trade offs. But I think we just should be making those trade offs, like, much more consciously than we do.
And [00:49:49] Ben: that's where politics, in the term, in the sense of, a compromise between people who have different priorities on something, can actually come in, where we can say, okay, like we're going to trade off, we're going to say like, okay, we're going to increase like national security a little bit, like in, in like this area, in compromise with being able to like unblock this. [00:50:11] Tim: That's right. Yeah. And I think this is the benefit of like, you know, when I say lever, I literally mean lever, right. Which is basically like, we're in a period of time where we need this. Yeah. Right? We're willing to trade progress for security. Yeah. Okay, we're not in a period where we need this. Like, take the, take, ramp it down. Right? Like, we want science to have less of this, this kind of structure. Yeah. That's something we need to, like, have fine tuned controls over. Right? Yeah. And to be thinking about in, like, a, a comparative sense, [00:50:35] so. And, [00:50:36] Ben: to, to go [00:50:36] Tim: back to the metabolism example. Yeah, yeah. I'm really thinking about it. Yeah, yeah. [00:50:39] Ben: Is there an equivalent of macro for metabolism in the sense that like I'm thinking about like, like, is it someone's like blood, like, you know, they're like blood glucose level, [00:50:52] Tim: like obesity, right? Yeah, right. Kind of like our macro indicators for metabolism. Yeah, that's right. Right? Or like how you feel in the morning. That's right. Yeah, exactly. I'm less well versed in kind of like bio and medical, but I'm sure there is, right? Like, I mean, there is the same kind of like. Well, I study the cell. Well, I study, you know, like organisms, right? Like at different scales, which we're studying this stuff. Yeah. What's kind of interesting in the medical case is, like, you know, it's like, do we have a Hippocratic, like oath for like our treatment of the science person, right?
It's just like, first do no harm to the science person, you know? [00:51:32] Ben: Yeah, I mean, I wonder about that with like, [00:51:35] with research. Mm hmm. Is there, should we have more heuristics about how we're [00:51:42] Tim: Yeah, I mean, especially because I think, like, norms are so strong, right? Like, I do think that, like, one of the interesting things, this is one of the arguments I was making in the long science piece, it's like, well, in addition to funding certain types of experiments, if you proliferate the number of opportunities for these low scale projects to operate over a long period of time, there's actually a bunch of like norms that might be really good that they might foster in the scientific community. Right. Which is like you learn, like scientists learn the art of how to plan a project for 30 years. That's super important. Right. Regardless of the research results. That may be something that we want to put out into the open so there's more, like your median scientist has more of those skills. Yeah, right, like that's another reason that you might want to kind of like percolate this kind of behavior in the system. Yeah, and so there's kind of like these emanating effects from like even one offs that I think are important to keep in mind. [00:52:33] Ben: That's actually another [00:52:35] I think use for simulations. Yeah. I'm just thinking like, well, it's very hard to get a tight feedback loop, right, about like whether you manage, you planned a project for 30 years [00:52:47] Tim: well, right, [00:52:48] Ben: right. But perhaps there's a better way of sort of simulating [00:52:51] Tim: that planning process. Yeah. Well, and I would love to, I mean, again, to the question that you had earlier about like what are the metrics here, right? Like I think for a lot of science metrics that we may end up on, they may have these interesting and really curious properties like we have for inflation rate. Right.
We're like, the strange thing about inflation is that we, we kind of don't like, we have hypotheses for how it happens, but like, part of it is just like the psychology of the market. Yeah. Right. Like you anticipate prices will be higher next quarter. Inflation happens if enough people believe that. And part of what the Fed is doing is like, they're obviously making money harder to get to, but they're also like play acting, right? They're like. You know, trust me guys, we will continue to put pressure on the economy until you feel differently about this. And I think there's going to be some things in science that are worth [00:53:35] measuring that are like that, which is like researcher perceptions of the future state of the science economy are like things that we want to be able to influence in the space. And so one of the things that we do when we try to influence like the long termism or the short termism of science, it's like, there's lots of kind of like material things we do, but ultimately the idea is like, what does that researcher in the lab think is going to happen, right? Do they think that, you know, grant funding is going to become a lot less available in the next six months or a lot more available in the next six months? Like influencing those might have huge repercussions on what happens in science. And like, yeah, like that's a tool that policymakers should have access to. Yeah. Yeah. [00:54:11] Ben: And the parallels between the, the how beliefs affect the economy, [00:54:18] Tim: and how beliefs [00:54:19] Ben: affect science, I think may also be a [00:54:21] Tim: little bit underrated. Yeah. In the sense that, [00:54:24] Ben: I, I feel like some people think that it's a fairly deterministic system where it's like, ah, yes, this idea's time has come. And like once, once all the things that are in place, like [00:54:35] once, once all, then, then it will happen. And like, [00:54:38] Tim: that is, that's like how it works.
[00:54:40] Ben: Which I, I mean, I have, I wish there was more evidence to my point or to disagree with me. But like, I, I think that's, that's really not how it works. And I'm like, very often a field or, or like an idea will, like a technology will happen because people think that it's time for that technology to happen. Right. Right. Yeah. Obviously, obviously that isn't always the case. Right. Yeah. Yeah. There's, there's, there's hype [00:55:06] Tim: cycles. And I think you want, like, eventually, like. You know, if I have my druthers, right, like macro science should have like its Chicago school, right? Which is basically like the idea arrives exactly when it should arrive. Scientists will discover it on exactly their time. And like your only role as a regulator is to ensure the stability of scientific institutions. I think actually that that is a, that's not a position I agree with, but you can craft a totally reasonable, coherent, coherent governance framework that's based around that concept, right? Yes. Yeah. I think [00:55:35] like [00:55:35] Ben: you'll, yes. I, I, I think like that's actually the criteria for success of meta science as a field, uhhuh, because like once there's schools, then, then, then it will have made it, [00:55:46] Tim: because [00:55:47] Ben: there aren't schools right now. Mm-hmm, like, I, I feel, I almost feel I, I, I now want there to b
Nadia Asparouhova talks about idea machines on idea machines! Idea machines, of course, being her framework around societal organisms that turn ideas into outcomes. We also talk about the relationship between philanthropy and status, public goods and more. Nadia is a hard-to-categorize doer of many things: In the past, she spent many years exploring the funding, governance, and social dynamics of open source software, both writing a book about it called “Working in Public” and putting those ideas into practice at GitHub, where she worked to improve the developer experience. She explored parasocial communities and reputation-based economies as an independent researcher at Protocol Labs and put those ideas into practice as employee number two at Substack, focusing on the writer experience. She's currently researching what the new tech elite will look like, which forms the basis of a lot of our conversation. Completely independently, the two of us came up with the term “idea machines” to describe the same thing — in her words: “self-sustaining organisms that contains all the parts needed to turn ideas into outcomes.” I hope you enjoy my conversation with Nadia Asparouhova. Links Nadia's Idea Machines Piece Nadia's Website Working in Public: The Making and Maintenance of Open Source Software Transcript [00:01:59] Ben: I really like your way of, of defining things and sort of bringing clarity to a lot of these very fuzzy words that get thrown around. So, so I'd love to sort of just get your take on how we should think about so a few definitions to start off with. So I, in your mind, what, what is tech, when we talk about like tech and philanthropy what, what is that, what is that entity? [00:02:23] Nadia: Yeah, tech is definitely a fuzzy term. I think it's best defined as a culture, more than a business industry.
And I think, yeah, I mean, tech has been [00:02:35] associated with startups historically, but, but like, I think it's transitioning from being this like pure software industry to being more like, more like a, a way of thinking. But personally, I don't think I've come across a good definition for tech anywhere. It's kind of, you know? [00:02:52] Ben: Yeah. Do, do you think you could point to some like very sort of like characteristic mindsets of tech that you think really sort of set it apart? [00:03:06] Nadia: Yeah. I think the probably best known would be, you know, failing fast and moving fast and breaking things. I think like the interest in the sort of like David and Goliath model of an individual that is going up against an institution or some sort of complex bureaucracy that needs to be broken apart. Like the notion of disrupting, I think, is a very tech sort of mindset of looking at a problem and saying like, how can we do this better? So it, in a [00:03:35] weird way, tech is, I feel like it's sort of like, especially in relation, in contrast to crypto, I feel like it's often about iterating upon the way things are or improving things, even though I don't know that tech would like to be defined that way necessarily, but when I, yeah, sort of compare it to like the crypto mindset, I feel like tech is kind of more about breaking apart institutions or, or doing, yeah, trying to do things better. [00:04:00] Ben: A, as opposed. So, so could you then dig into the, the crypto mindset by, by contrast? That's a, I think that's a, a subtle difference that a lot of people don't go into. [00:04:10] Nadia: Yeah. Like I think the crypto mindset is a little bit more about building a parallel universe entirely. It's about, I mean, well, one, I don't see the same drive towards creating monopolies in the way that and I don't know if that was like always a, you know, core value of tech, but I think in practice, that's kind of what it's been of.
You try to be like the one thing that is like dominating a market. Whereas with crypto, I think people are [00:04:35] because they have sort of like decentralization as a core value, at least at this stage of their maturity. It's more about building lots of different experiments or trying lots of different things and enabling people to sort of like have their own little corner of the universe where they can, they have all the tools that they need to sort of like build their own world. Whereas the tech mindset seems to imply that there is only one world, the world is sort of like dominated by these legacy institutions and it's tech's job to fix those problems. So it's like very much engaged with what it sees as kind of like that, that legacy world or [00:05:10] Ben: Yeah, I, I hadn't really thought about it that way. But that, that totally makes sense. And I'm sure other people have, have talked about this, but do, do you feel that is an artifact of sort of the nature of the, the technology that they're predicated on? Like the difference between, I guess sort of, the internet and the, the internet of, of like SaaS and servers and then the [00:05:35] internet of like blockchains and distributed things. [00:05:38] Nadia: I mean, it's weird. Cause if you think about sort of like early computing days, I don't really get that feeling at all. I'm not a computer historian or a technology historian, so I'm sure someone else has a much more nuanced answer to this than I do, but yeah. I mean, like when I think of like sixties computing or whatever, it, it feels really intertwined with like creating new worlds. And that's why like, I mean, because crypto is so new, it's maybe, it, we can only really observe what's happening right now. I don't know that crypto will always look exactly like this in the future. In fact, it almost certainly will not.
So it's hard to know like, what are, its like core distinct values, but I, I just sort of noticed the contrast right now, at least, but probably, yeah, if you picked a different point in, in tech history, sort of like pre startups, I guess and, and pre, or like that commercialization phase or that wealth accumulation phase, it was also much more, I guess, like pie in the sky. Right. But yeah, it feel, it feels like at least the startup mindset, or like whenever that point of [00:06:35] history started, all the sort of like big successes were really about like overturning legacy industries, the, yeah. The term disruption was like such a buzzword. It's about, yeah, taking something that's not working and making it better, which I think is like very intertwined with like programmer mindset. [00:06:51] Ben: It's yeah, it's true. And I'm just thinking about sort of like my impression of, of the early internet and it, and it did not have that same flavor. So, so perhaps it's an artifact of like the stage of a culture or ecosystem than like the technology underlying it. I guess [00:07:10] Nadia: And it's strange. Cause I, I feel like, I mean, there are people today who still sort of, maybe fetishize is too strong a word, but just like embracing that sort of early computing mindset. But it almost feels like a subculture now or something. It doesn't feel, yeah. I don't know. I don't, I don't find that that's like sort of the prevalent mindset in, in tech. [00:07:33] Ben: Well, it, it feels like the, the sort of [00:07:35] like mechanisms that drive tech really do sort of center. I mean, this is my bias, but like, I feel like the, the way that that tech is funded is primarily through venture capital, which only works if you're shooting for a truly massive result, and the way that you get a truly massive result is not to build like a little niche thing, but to try to take over an industry. [00:08:03] Nadia: It's about arbitrage [00:08:05] Ben: yeah.
Or, or like, or even not even quite arbitrage, but just like the, the, to like, that's, that's where the massive amount of money is. And, and like, [00:08:14] Nadia: This means her like financially. I feel like when I think about the way that venture capital works, it's it's. [00:08:19] Ben: yeah, [00:08:20] Nadia: ex sort of exploiting, I guess, the, the low margin like cost models. [00:08:25] Ben: yeah, yeah, definitely. And like then using that to like, take over an industry, whereas if maybe like, you're, you're not being funded in a way [00:08:35] that demands, that sort of returns you don't need to take as, as much of a, like take over the world mindset. [00:08:41] Nadia: Yeah. Although I don't think like those two things have to be at odds with each other. I think it's just like, you know, there's like the R and D phase that is much more academic in nature and much more exploratory and then venture capital is better suited for the point in which some of those ideas can be commercialized or have a commercial opportunity. But I don't think, yeah, I don't, I don't think they're like fighting with each other either. [00:09:07] Ben: Really? I, I guess I, I don't know. It's like, so can I, can I, can I disagree and, and sort of say, like, it feels like the, the, the stance that venture type funding comes with, like forces on people is a stance of like, we are, we might fail, but we're, we're setting out to capture a huge, huge amount of value and like, [00:09:35] And, and, and just like in order for venture portfolios to work, that needs to be the mindset. And like there, there are other, I mean, there are just like other funding, ways of funding, things that sort of like ask for more modest returns. And they can't, I mean, they can't take as many risks. They come with other constraints, but, but like the, the need for those, those power law returns does drive a, the need to be like very ambitious in terms of scale. 
[00:10:10] Nadia: I guess, like what's an example of something that has modest financial returns, but massive social impact that can't be funded through philanthropy and academia or through, through venture capital [00:10:29] Ben: Well, I mean, like are, I mean, like, I think that there's, [00:10:35] I think that, that, that, [00:10:38] Nadia: or I guess it [00:10:39] Ben: yeah, I think the philanthropy piece is really important. Sorry, go ahead. [00:10:42] Nadia: Yeah. I guess always just like, I feel like it was like different types of funding for different, like, I, I sort of visualized this pipeline of like, yeah. When you're in the R and D phase, venture capital is not for you. There's other types of funding that are available. And then like, you know, when you get to the point where there are commercial opportunities, then you switch over to a different kind of funding. [00:11:01] Ben: Yeah. Yeah, no, I, I definitely agree with that. I, I, I think, I think what we're like where, where, where I was at least talking about is like that, that venture capital is sort of in the tech world is, is like the, the, the thing, the go to funding mechanism. [00:11:16] Nadia: Yeah. Yeah. Which is partly why I'm interested in, I guess, idea machines and other sources of funding that feel like they're at least starting to emerge now. Which I think gets back to those kinds of roots that, I mean, it's actually surprising to me that you can talk to people in tech who don't always make the connection that tech started as an, [00:11:35] you know, academically and government funded enterprise, and not venture. Venture capital came along later, right? And so, yeah, maybe we, we're kind of at that point where there's been enough wealth generated that can kind of start that cycle again. [00:11:47] Ben: yeah. And, and speaking of that another distinction that, that you've made in your writing that I think is really important is the difference between charity and philanthropy.
Do you mind unpacking how you think about that? [00:12:00] Nadia: Yeah. Charity is, is more like direct services. So you're not, there's sort of like a one to one, you put something in, you get sort of similar equal measure back out of it. And there's, I mean, charity is, you know, you can have like emergency relief or disasters or yeah, just like charitable services for people that need that kind of support. And to me, it's, it's just sort of strange that it always gets lumped in with philanthropy, which is a different enterprise entirely. Philanthropy is more of the early stage pipeline [00:12:35] for it. It's, it's more like venture capital, but for public goods. In the same way that venture capital is very early stage financing for private goods, philanthropy is very early stage financing for public goods. And if those public goods show promise or, yeah, need to be scaled, then you can go to government to get, to get more funding to sustain it. Or maybe there are commercial opportunities or, you know, there are multiple paths that can, they can branch out from there. But yeah, philanthropy at its heart is about experimenting with really wild and crazy ideas that benefit public society, that, that could have massive social returns if successful. Whereas charity is not really about risk taking, charity is really about providing a stable source of financing for those who really need it in the moment. [00:13:21] Ben: And, and the there's, there's two things I, I, I want to poke at there. Is like, do, so, so you describe philanthropy as like crazy risk taking, do, do you think that most [00:13:35] philanthropists see it that way? [00:13:37] Nadia: Today? No. And yeah, philanthropy has had this very varied history over the last, like let's say like modern philanthropy in its current form has only really existed since the late 1800s, early 1900s. So we've got whatever, like a hundred, hundred fifty years.
Most of what we think about in philanthropy today for, you know, most, let's say, adults that have really only grown up in the phase of philanthropy that you might call like late stage modern philanthropy, to be a little cynical about it, where it has. And, and part of that has just come from, I mean, just an abridged history of philanthropy, but you know, early on, or premodern philanthropy, we had the, the church was kind of maybe, more played more of that, that role or that, that force in both like philanthropic experiments and direct services. And then like when, in the age of sort of like, yeah, post gilded age, post industrial revolution, you had people who made a lot of, lot of self-made wealth. And you had people that were experimenting with new ideas [00:14:35] to provide public goods and services to society. And government at the time was not really playing a role in that. And so all that was coming from private citizens and private capital. And so those are, yeah, there was a time in which philanthropy was much more experimental in that way. But then as government sort of stepped in around, you know, the mid-1900s to become sort of like that primary provider and funder of public services, that diminished the role of philanthropy. And then in the late 1960s, foundations just became much more heavily regulated. And I think that was sort of like the turning point where philanthropy went from being this like highly experimental and, and just sort of like aggressive risk taking sort of enterprise to much more like safe, because it was just sort of like hampered by all these like accountability requirements. So yeah, I think like philanthropy today is not representative of what philanthropy has been historically or what it could be. [00:15:31] Ben: A, and what are, what are some of your favorite, like weird, [00:15:35] risky pre regulation, philanthropic things. [00:15:40] Nadia: Oh, I don't do favorites, but [00:15:42] Ben: Oh, okay.
Well what, what are, what are some, some amusing examples of, of risky philanthropic takes. [00:15:51] Nadia: one I mean, [00:15:52] Ben: Take a couple. [00:15:54] Nadia: Probably like the most famous example would be like Carnegie public libraries. So like our public library system started as a privately funded experiment. And for each library that was created, Andrew Carnegie would ask the government, the, the local government or the local community: he would help fund the creation of the libraries, and then the government would have to find a way to like continue to sustain it and support it over the years. So it was this nice sort of like, I guess, public private type partnership. But then you have, I mean, also scientific research and public health initiatives that were philanthropically supported and funded. So Rockefeller's eradication of hookworm as a, yeah, public health initiative, finding a cure for yellow fever. Those are some [00:16:35] examples. Yeah. I mean, the public school education system in the south did not exist until there was sort of like an initiative to say, why aren't there public schools in the south and how do we just create them and, and fund them. So and then also like the state of American private universities, which were sort of modeled after European universities at the time. But also came about after private philanthropists were funding research into understanding, like, why is our American higher education not very good, you know, at the time it was like, not that good compared to the German university models. And so there was a bunch of research that was produced from that. And then they kind of like set out to, yeah, reform American universities and, yeah. So, I mean, there, there're just like so many examples of people just sort of saying, and, and I think like, I, I, one thing I do wanna caveat is like, I'm not regressive in the sense of, wow, this thing, you know, worked really well a hundred years ago.
And why don't we just do the exact same thing again? I feel like that's like a common pitfall in history. It's not that I think, you know, [00:17:35] everything about the world is completely different today versus, let's say, 1900, but [00:17:39] Ben: in the past. And so it could be different again in the [00:17:41] Nadia: exactly, that, that's sort of the takeaway is like, where we're at right now is not a terminal state or it doesn't have to be a terminal state. Like philanthropy has been through many different phases and it can continue to have other phases in the future. They're not gonna look exactly like they did historically, but yeah. [00:17:56] Ben: That, that's that such a good distinction. And it goes for, for so many things where like, like when you point to historical examples I don't know. Like, I, I think that I, I suffer the same thing where I, you know, it's like you point to, to historical examples and it's like, not, it's not bringing up the historical examples to say, like, we should go back to this, it's to say, like, it has been different and it could be different. [00:18:18] Nadia: Something I think about, and this is a little, it just, I don't know. I, I just think of like any, any adult today in, like, let's say, the, the, who's like active in the workforce. We're talking about the span of like a, you know, like a 30 year institutional memory or something. Like, and so [00:18:35] like anything that we think about, like, what is like possible or not possible is just like limited by like our biological lifespans. Like anyone you're talking, like, all we ever know is like what we've grown up with in like, let's say, the last 30 ish years for anyone.
And so the reason why it's important to study history is to remind yourself that everything you know about, say, philanthropy right now, based on the inputs you've been given in your lifetime, is very different from if you study history and go: oh, actually, it's only been that way for a pretty short amount of time, only a few decades. [00:19:06] Ben: Yeah, totally. And I guess, and people might disagree with this, but from my perspective there's been sort of less institutional change within the lifetime of most people in the workforce, and especially most people in tech, which tends to skew younger, than there was in the past. [00:19:30] Nadia: Yeah. [00:19:32] Ben: Or, to put a finer point on it, [00:19:35] there seems to have been less institutional change in the latter half of the 20th century than in the first, like, two-thirds of it. [00:19:44] Nadia: Yeah, I think that's right. It feels much more stagnant. [00:19:49] Ben: Yeah. And the last thing, to pull us back to definitions real quick: how do you like to describe idea machines to people? If someone was like, Nadia, what is an idea machine, besides this podcast, how would you describe that? [00:20:05] Nadia: I would point them to my blog post so I don't have to explain it. [00:20:08] Ben: Okay. Excellent. Perfect. Everybody.
[00:20:14] Nadia: If I had to, I mean, if I had to explain the short version, I would say it's kind of like the modern successor to philanthropic foundations; maybe, depending who I'm talking to, I might say that. Or, yeah, it's sort of a framework for understanding the interaction between funders and communities that are [00:20:35] centered around a similar ideology, and how they turn ideas into outcomes. There's a whole bunch of soft social infrastructure that it takes for someone to say, hey, I have an idea, why don't we do X, and for that to actually happen in the world. There are so many different inputs that come together to make that happen, and that was just sort of my attempt at creating a framework for it. [00:20:54] Ben: Yeah, no, I think it's a really good framework. And one of the powerful things in it, I think, is that you say there are these five components: an ideology, a community, ideas, an agenda, and people who capitalize the agenda. And, I guess I'll caveat this for the listeners: in the piece you use effective altruism, or EA for short, as kind of a case study in idea machines. And so it is sort of very topical right now. I think what we will try to avoid is the topical topics about it, but use it as an object of study; I think it's actually a very good object of study [00:21:35] for thinking about these things. And actually, one of the things that stood out to me about EA, as opposed to many other philanthropies, is that EA feels like one of the few places where the people who are capitalizing the agenda are willing to capitalize other people's agendas, as opposed to sort of imposing their own. Do you get a sense of that? [00:22:03] Nadia: Yeah. Yeah. It feels like there's, mm, yeah.
Some sort of shift there. So, I mean, if you think about, you know, someone who got super wealthy in the, let's call it, heyday of the 501(c)(3) foundation, like, I don't know, let's say the fifties or something. Yeah, someone makes a ton of money, and the next step is at some point they end up setting up a charitable foundation. They appoint a committee of people to help them figure out: what should my agenda be? But it's all kind of flowing from the donor, saying, I want to [00:22:35] create this thing in the world, I wanna fund this thing in the world, because it's sort of my personal interest. Whereas I feel like we're starting to see some examples today of, sure, there has to be alignment between a funder's interest and maybe a community's interest, but in some ways the agenda is being driven not just by the funder or foundation staff but by a community of people that are all talking to each other and saying: here's what we think is the most important agenda. And so it feels in some ways, yeah, much more organic. And it's not to say that the funder is not influencing that or doesn't have an influence in that, but I sort of like seeing now that it feels like it's much more, yeah, intertwined, or like it could go in a lot of different directions. So, yeah, you see that with EA, which was the example I had used, where the agenda is very strongly driven by its community. It's not like there's one foundation of people that are just sitting in an ivory tower saying, here's what we think we should fund, and then they just go off and do it. And I think that just creates a lot more [00:23:35] possibilities for serendipity around what kinds of ideas end up getting funded. [00:23:38] Ben: Yeah.
And it also feels like, at least to me, and I'd be interested if you agree with this, it makes for situations where you can actually pool capital more easily for sort of larger projects. When there's not sort of a broader agenda, the funding gets very dispersed, whereas if there's a way for multiple funders to say, okay, this is an important thing, then it makes it much easier to pool capital for bigger ideas. [00:24:19] Nadia: Yeah, I think that's right. I think the world of philanthropy just tends more naturally towards zero-sum games and competitiveness over funding, because there's just less funding available, and because there is always this sort of [00:24:35] reputation or status aspect intertwined with it, where you wanna be, you know, the funder that made something happen in the world. But I agree that the boundaries feel a little bit more porous when it's not just, you know, two distinct foundations that are competing with each other, or two distinct funders, but there are multiple funders, you know, bigger fish, smaller fish, or whatever, that are sort of amplifying the agenda of a separate community that is not even formally affiliated with any of these funders. [00:25:08] Ben: Yeah. And do you have a sense of, almost, what the necessary preconditions are for that level of community to come about? Right? Like, with EA, I think it's maybe under-discussed how it has, you know, a hundred years of thinking behind it, [00:25:35] you know, different utilitarian and consequentialist philosophers really working out how we prioritize things.
And so I guess it's like: for creating new, powerful, useful idea machines, what are sort of the bricks that need to be created to lay the groundwork for them? [00:26:01] Nadia: Yeah. I mean, you've seen it come out in different sorts of ways. So for EA, as you said, I mean, it already existed before any major funders came in. First you have sort of its historical roots in utilitarianism, which go way back, but then even effective altruism itself, you know, started in Oxford and was an academic discipline right at its outset. So there was already a seed of something there before they had major funders coming in. But then there are other types of idea machines, I think, where that community has to be actively nurtured. And it's weird, cause, [00:26:35] yeah, I mean, I don't think there's anything wrong with that. I think people tend to underestimate how many communities had a lot of elbow grease put in to get them going, right? You need to create some initial momentum to build a scene. It's not always just, you know, a handful of people got together and decided to make a thing. I think that's sort of the historical story that gets glorified: we like thinking about a bunch of artists and creatives that are just sort of hanging out at the same cafe, and then, you know, this scene starts to organically form. That's definitely a thing, but, right, you know, in many cases there are funders behind the scenes who are helping make these things happen. There are convenings that are organized; there are individual academics or creatives or writers that are being funded in order to help bring these sorts of ideas to [00:27:35] the forefront of people's minds.
So yeah, like starting anything, there's a lot of work that can go on behind the scenes to help these communities even start to exist. But then they start to have these compounding returns for funders, I think, where it's like: okay, now, instead of hiring a couple of program officers at my foundation, I am starting this community of people that is now a beacon for attracting other people I might not have even heard of, that are sort of flocking to this cause. And it's sort of a talent well in itself. [00:28:08] Ben: Yeah. To change tracks a little bit: with these sort of new waves of potential philanthropists in both the tech world and the crypto world, do you have any sense of risky philanthropic experiments that you would want to see people do? Just sort of any kind of wishlist? [00:28:32] Nadia: I don't know. I don't know if that's the role that I am trying to play [00:28:35] necessarily. I mean, personally, the way I think about it is, I just think about, you know, what are the different components of the public sector, and what areas are being more or less covered right now. And so we see funders that are getting more involved in politics and policy. We see funders that are, you know, replicating or trying to field-build in academia. I feel like media is still strangely kind of overlooked, or just this big enigma to me, at least when I think about, yeah, how do funders influence different aspects of the public sector? And I don't think it's even necessarily a lack of interest, because I see a lot of...
You know, again, that sort of tech mindset. And yeah, I guess I'm more specifically thinking about tech right now, but going back to, you know, tech wanting to break apart institutions, or tech sort of being this antsy teenager that is railing against the institution: you see a lot [00:29:35] of that, and there's, you know, a lot of tension between the tech industry and media right now. So you see that sort of chomping at the bit. But then it's not clear to me what they're doing to replace that. And some of that is just maybe more existential questions about: what is the future of media? What should that be? Is it this sort of focus on individual media creators, where instead of, you know, going to the mainstream newspaper or the mainstream TV network or whatever, you're going to Joe Rogan? Let's say that's relevant today, cuz I just saw Mark Zuckerberg did an interview on Joe Rogan. So, you know, is that what the future looks like? Is that the vision of what tech wants media to look like? It's not totally clear to me what the answer is yet, and I also feel like I'm seeing sort of a lack of interest in, and funding towards, that. So that's sort of one area. And it's sort of unsurprising to me, I guess, that, you know, tech is gonna be interested in, like, science or [00:30:35] politics, and maybe tech is just not great at thinking about cultural artifacts. But, you know, in terms of my personal wishlist, or just areas where I think there are deficiencies on the sort of public-sector checklist, that's one of them. [00:30:49] Ben: Yeah. And I think the important thing is to flag these things, right? Cuz it's sort of hard to know what the counterfactuals are, but, yeah, media as a public good does seem kind of underrated as an idea, right? It's like, would... I don't know.
It's like, I think Sesame Street's really important, and that was publicly funded, right? [00:31:17] Nadia: Mm-hmm. And even education is sort of a weird one. I mean, there's talk about homeschooling; there's talk about how universities aren't, you know, really adequate today. I mean, you have the one effort to [00:31:35] build a new university, but it feels, I don't know... I'm still sort of waiting for: what are the really big, ambitious efforts that we're gonna see in terms of tech people trying to rebuild either, you know, primary and secondary education or higher education? I just, yeah, I don't know. [00:31:53] Ben: Yeah, that's a great one. It does not feel like there have been a lot of ambitious experiments there, in terms of, right, anything along the lines of building all the public schools in the South. Right? [00:32:06] Nadia: Right, like at that level. And this is actually, I mean, this is where I think you and I may not agree, but I do genuinely wonder, you know: at the same time that you have these cycles of wealth that come in and shape public society in different ways on a broader scale, you also have the, you know, hundred-year institutional cycle, where institutions are built, and then they kind of mature, and then they start to stagnate and die down. What have we learned from the last hundred [00:32:35] years of institution building? Maybe we learned that institutions are not as great as they seem, or that they inevitably decline, and maybe people are interested in ways to avoid that. In other words, you know, do we need to build another CNN in the realm of media?
Or do we need to build another Harvard? Or is maybe the takeaway that institutions themselves are falling out of favor, and the philanthropically funded experiments might not look like the next Harvard, but like some, yeah, some sort of more broken-down version of that? [00:33:05] Ben: Ooh. [00:33:06] Nadia: I don't know. And yeah. Yeah. I don't know. [00:33:10] Ben: Sorry, go ahead. [00:33:11] Nadia: Oh, I was just gonna say, I mean, this is where I feel like history only has limited things to teach us. Right? Because, yeah, the sort of copy-paste answer would be: there used to be better institutions; let's just build new institutions. But I think this is actually where crypto is thinking more critically about this than tech, where crypto says: yeah, why are we [00:33:35] just gonna repeat the same mistakes over and over again? Let's just do something completely different. Right? And I think that is maybe part of the source of their disinterest in what legacy institutions are doing, where they're just like: we're not even trying to do that. We're not trying to replicate that. We wanna just rethink that concept entirely. I feel like, yeah, in tech there's still a bit of LARPing, you know, without sort of the critical question of: what did we take away from that? Maybe what we did in the past wasn't so good. [00:34:04] Ben: Yeah. Well, I guess my response is that I think, definitely, institutions are not functioning as well as they have. I think the question is: what is the conclusion to draw from that? And maybe the conclusion I draw is that we need, like, newer, different [00:34:35] institutions. And I feel like there's different levels of implicitness or explicitness to an institution, but broadly, it is some way of coordinating people that lasts through time. Right?
And so even what people are doing in crypto is, I would argue, building institutions. They just are organized wildly differently than ones we've seen before. [00:35:00] Nadia: Yeah. Yeah. And again, the history is so short in crypto, it's hard to say what exactly anyone is trying to do until maybe we can understand it in retrospect. Yeah, I mean, I don't know. I feel like there's probably some learning from open source, where I spent a lot of my brain space in the past: it was just an entirely different type of coordination model from, like, centralized Coasean firms. [00:35:34] Ben: Yeah. [00:35:34] Nadia: [00:35:35] And there's some learning there, and crypto is modeling itself much more after open source projects than it is after, like, Coase's theory of the firm. And so I think there's probably some learnings there of, like, yes, they're building things. I don't know. I mean, in the world of open source, a lot of these projects don't last very long. You don't sort of iterate upon existing projects; a lot of times you just build a new project and then eventually try to get people to switch over to that project. So it's these much shorter lifespans. And I don't know what that looks like in terms of institutional design for the public sector or social institutions, but I just, yeah, I don't know. I think I just sort of wonder what that looks like. And yeah, I do see that there are some experiments within the sort of non-crypto tech world as well. Like, I was just thinking about the Institute for Progress; they're a policy think tank in DC. And I think one of the things that they're doing well is trying to iterate [00:36:35] upon the sort of, you know, existing think tank model.
And one of the things that they acknowledge better than maybe, you know, one of the stodgy older think tanks: there, your brand is the think tank, right? You are an employee of that place and you are representing their brand. Whereas my sense, at least with the Institute for Progress, is they've been a little bit more like: you are someone who is already an expert in your domain, you already have your own audience, you're someone who's already widely known, and we're kind of the infrastructure that is supporting you. I don't wanna speak on their behalf; that's sort of the way I've been understanding it. And yeah, so, you know, even outside of crypto, I think people are still contending with that whole atomization of the firm, et cetera: how do you balance individual reputation versus firm reputation? And maybe that is where it plays out, like, to my question about, you know, are you trying to build another media institution, or is it just about supporting lots of individual influencers? But yeah, [00:37:35] I just wonder: are we sitting here waiting for new institutions to be built, and actually there are no more? Maybe institutions, period, are dying, and that's the future. Or, yeah, at the same time, they do provide this sort of history and memory that is useful. So I don't know. [00:37:51] Ben: Yeah, I mean, it sounds to me, from what you're saying, like there's a much more subtle way to look at it, where there's a number of different sort of sliders or spectra, right? Where it's like, how, I don't know, internalized versus externalized the institution is. Right? Where you think of, like, your 1950s company, where people subsume themselves to it. Right? And that's on one end of the spectrum.
And then on the other end of the spectrum, it's, like, I don't know, YouTube, right? Where, yeah, all YouTubers are technically YouTubers, but beyond that [00:38:35] they have no coordination or real connection. And that's one axis. And then new institutions could come in, and maybe we're moving towards an era of history where there just is more externalization, but then, like, sort of explicitly acknowledging that, and then figuring out how to do a lot of good and have that sort of institutional memory, given a world where everybody's a brand. [00:39:09] Nadia: Yeah. [00:39:10] Ben: So it seems like it's not necessarily that institutions are dead; it's just that institutions are, like, structurally different. [00:39:23] Nadia: Yeah. Yeah. Like, I wondered: if we just sort of embrace the fact that maybe we are moving towards having much shorter memories, what does a short-term-memory [00:39:35] institution look like? I dunno, maybe that's just sort of where we're at, right? You know, I try to sort of observe what is happening versus kind of being like, it should be different. And so, if that just is what it is, then how do we design for that? I have an idea, and I think that actually gets to part of what crypto is trying to do differently, in saying: okay, this is where we have, sort of, trustlessness, and where we have the rules that are encoded into a protocol, where you don't need to remember anything; the network is remembering for you. [00:40:03] Ben: Yeah, I'm just thinking, I haven't actually watched it, but do you know the movie Memento, which I [00:40:09] Nadia: Yes, [00:40:10] Ben: yeah, exactly, about a guy who has short-term memory loss and just, like, tattoos notes all over his body.
So what is the institutional version of that, I guess? I guess, like, yeah, exactly, that's where the note-taking goes. [00:40:25] Nadia: Your... [00:40:27] Ben: Yeah, exactly. So, down another separate track, something that I've noticed is, [00:40:35] I guess: how do you think about what is and is not a public good? And I ask this because, in my experience talking to many people in tech, there's sort of this attitude that almost, like, public goods don't exist; that everything can sort of be done by a for-profit company, and if you can't capture the value of what you're doing, it might not be valuable. [00:41:06] Nadia: Yeah, that's a frustrating one. Yeah, I mean, public goods have a very literal and simple economic definition of being a good that is non-rivalrous and non-excludable. Non-excludable, meaning that you can't prevent anyone from accessing it, and non-rivalrous, meaning that if someone uses the public good, it doesn't diminish someone else's ability to use that public good. And that sort of stands in contrast to private goods and other types of goods. So, you know, there's that definition to start with, but then, of course, in [00:41:35] real life, things are much more complex than that. Right? And so I noticed there are, yeah, just a lot of assumptions that get rolled up in that. So, one of the things: open source code, for example. In the book that I wrote, I tried to sort of break it apart. People think of open source code as a public good, and that's it, right? And that carries a bunch of implications: well, if open source is, you know, freely accessible, it's not excludable; that means we should not prevent anyone from contributing to it. And that, you know, then leads to all these sort of management problems.
And so I kind of try to break that apart and say: the consumption of open source code, the actual code itself, can be a public good that is freely accessible, but the production of open source, like who actually contributes to an open source community, could be, you know, more like a membership-style community where you do exclude people. That's just one example that comes to mind of how public goods are not as black and white as they seem. I think another assumption that I see is that public goods have to be funded by government. And government has, again, [00:42:35] you know, especially since the mid-1900s, been kind of the primary provider of public goods, but there are also public goods that are privately funded. Like, you know, roads can be funded through public-private partnerships or privately funded. So just because something is a public good doesn't say anything about how it has to be funded. And then, yeah, as you're saying, within tech, I think, because the vehicle of change in the world that is sort of the defining vehicle for the tech industry is startups, right, it's understandable why everything gets filtered through that lens of: why is it not a startup? But then, you know, as we both know, that kind of minimizes tech's history. The reason that we even, you know, got to the commercial era of startups is because of the years and years of academic and government-funded research that led up to it. And same with sort of the open source work that I [00:43:35] was doing: it was to say, okay, all these companies that are developing their software products, every single one of these private companies is using open source code. They're relying on this public digital infrastructure to build their software.
So it's not quite as clean-cut. Especially since, by some estimates, for any private software company, let's say 70% of their code, and it varies so much between companies, but certainly a majority of the code that is quote-unquote written, is actually just shared public code. So it's, you know, it's not quite as simple as saying public goods have no place in tech. I think they still have a very, very strong place. [00:44:16] Ben: Yeah, no. And it's also just thinking about, sort of, the publicness of different things, right? Cuz, like, there are profitable private schools, right? And yet, [00:44:35] I think most people would agree that, if all schools were for-profit and private... I mean, yeah, I guess separating it out: even if schools were for-profit and private, it would probably still be a good thing to have government getting money into those schools. Right? Like, even people who don't like public schooling still think that it is worthwhile for the government to give money towards schools. Right? [00:45:12] Nadia: Mm-hmm. [00:45:13] Ben: Is that... [00:45:14] Nadia: Yeah. And this is a distinction: for the example of education, it's like, you know, the concept of education might be a public good, but then education might get funded in different ways, including privately. [00:45:27] Ben: Yeah, exactly. And, yeah: the concept of education [00:45:35] as a public good.
Yeah, that's a good way of putting it. But I guess there are fuzzier places where it's less clear to what extent something is a public good. Like, I think infrastructure may be one, where you could imagine a system where everybody who uses, say, a sewer line buys into it, versus having it be publicly funded. And I think research might be another one. [00:46:11] Nadia: I mean, even education, if you go far back enough, right? Not everyone went to public schools before; not everyone got an education. It was seen as something for privileged people to get; it was not something that was just part of the public sector. So, yeah, our notions of what the public sector even is, or what's in and out of it, have definitely evolved over the years. [00:46:32] Ben: Yeah, no, that's a really good point. So [00:46:35] that again is where it's complicated: it's not just some attribute of the world, right? It's, like, some kind of social consensus [00:46:45] Nadia: Right. [00:46:46] Ben: around public goods. And something I also wanted to talk about: I know you've been thinking a lot about the relationship between philanthropy and status. And I guess, do you have a sense of why, and it's different for everybody, but why do people do philanthropy now, when you don't have, like, a religious mandate to do it? [00:47:21] Nadia: I actually think, yeah, I think this question is more complicated than it seems, because there are so many different types of philanthropists. You know, the old adage: if you've met one philanthropist, you've met one philanthropist.
And so motivations [00:47:35], I mean, there are a lot of different motivations, and also there's some spectrum here that I still kind of lack the vocabulary for. But a lot of philanthropy, if you just look at the numbers, is done at the local level, right? Or it's done within a philanthropist's sort of local sphere. When you think about philanthropy, you think about the biggest billionaires in the world; you think about Bill Gates or Warren Buffett or whatever. But we forget that there are a lot of people that are wealthy that just aren't part of the quote-unquote global elite. Right? So, yeah, one example I think about is the Koch family. We all know the Koch brothers, but they were not the original philanthropists in their family. Their father was, and their father, I mean, they had a family foundation, and they just kind of focused on their local area, doing local philanthropy. And it was only with the next generation that they ended up sort of expanding into this more global focus. But, yeah, there's so much philanthropy like that. So when we say, you know, what are the motivations of a philanthropist, it really [00:48:35] depends on who you're talking about. But I do think one aspect that gets really under-discussed or underappreciated in philanthropy is the kind of cohort nature of at least the philanthropy that operates on a more global scale. And I don't mean literally global in the sense of international; I just mean, I don't know what the right term is for this, but, like, outside of your, yeah, nonlocal sphere, right? [00:48:59] Ben: Yeah. [00:49:00] Nadia: And yeah, I don't know. That feels unsatisfying too.
I don't really know what the term is, but there is a distinction there. One open question I have is: what makes a philanthropist convert from the more local focus to some expanded, quote-unquote global focus? And when people talk about the motivations of philanthropists, they tend to focus on the individual motivations of that person. So, you [00:49:35] know, the classic answer to why people give philanthropically is always something about altruism and wanting to give back, or it's the edgy, self-interested model of people being motivated by status and wanting to look good. Those answers are just not fully satisfying to me. I think there's an aspect of a more power-relational theory that is maybe under-discussed or underappreciated: if you think about these wealth generations, rather than just individuals who are wealthy, you can see these cohorts of people who all became wealthy in similar sorts of ways. You have Wall Street wealth, you have tech wealth, you have crypto wealth. These are very large buckets, but you can group people together based on the fact that they got wealthy because they had some unique insight that the previous paradigm did not have. And I think [00:50:35] there are these cycles that wealth moves in, where first you're sort of the outcast working out of your garage, you know, to use the startup example. No one really cares about you. You're very counterculture.
Then you become more popular, but you're still a counterculture for people who are in the know, right? You're showing traction, you're showing promise, whatever. And then there's some explosion into the mainstream, this frenzied period where everyone wants to do startups or join a startup or start a startup. And then there's the crash, right? This mirrors Carlota Perez's Technological Revolutions and Financial Capital, where she talks about how technological innovations influence financial markets, and about these cycles that we move in. And then after the crash there's a backlash, right? There's a reckoning where the public says, how could we have been misled by these crazy new people, or whatever. But that moment is actually the moment in which the new paradigm starts to cement its power and become the dominant force in the field. It needs to start [00:51:35] switching over and thinking about its public legacy. One learning we can take from looking at startup wealth now is how interesting it is that in the last couple of years, suddenly a lot of people in tech are starting to think about culture building and institution building and their public legacies. That wasn't true, you know, ten years ago. What has actually changed? I think a lot of that really was influenced by the tech backlash that was experienced in 2016 or so. So you look at these initiatives now, and there are multiple examples of philanthropic initiatives happening, and I don't find it satisfying to just say, oh, it's because these individuals want to have a second act in their career, or because they're motivated by status.
I think those are certainly components of it, but it doesn't really answer the question of why so many people are doing it together right now. Not literally coordinated together, but it's happening independently in a lot of different places. And so I feel like we need some kind of cohort analysis or cohort explanation to say: okay, I actually think this is kind of a defense mechanism, because you have this [00:52:35] clash between a rising new paradigm and the incumbents, and the new paradigm needs to find ways to wield its influence in the public sector, or else it's just going to be regulated out of existence, or it's going to be facing this hostile media landscape. They need to learn how to actually put their fingers into that and grapple with that role. It's this sort of coming of age for a counterculture: tech is used to being in this safe enclave in Silicon Valley and is now being forced to reckon with the outside world. So that is one answer for me to why philanthropists do these things. We can talk about individual motivations for any one person, but in my particular area of interest, trying to understand why tech wealth is doing this, or what crypto wealth will be doing in the future, I find that kind of explanation helpful. [00:53:25] Ben: Yeah. I feel like that has a very Peter Turchin vibe, in the good way, in the sense of identifying [00:53:35] ... like, I don't think that history is predictive, but I do think that there are patterns that repeat. And I've never heard anybody point out that pattern, but it feels really truthy to me.
I think the really cool thing to do would be, as you dig into this, to set up some kind of bet with yourself on: what are the conditions under which crypto people will start heavily going into philanthropy? Right? Like, [00:54:09] Nadia: Yes, totally. I think about this now. To me, crypto wealth is the specter in the future, but they're not actually in the same boat that tech wealth is in right now. They're not yet really motivated to deal with this stuff, because that moment, if I had to make a bet on it, is going to be the moment when crypto really faces a public [00:54:35] backlash. Right now I think they're still in the "we're counterculture, but we're cool" kind of moment. And then they had a little bit of this frenzy and the crash, but yeah, I think it's still... [00:54:44] Ben: Like 2016 for tech, right? Or 2000. [00:54:46] Nadia: Yeah, exactly. And despite, you know, the same thing as in 2001, where people were like, ah, Pets.com, it was all a scam, this was all bullshit. Oh, sorry, I don't know if I can say that. [00:54:57] Ben: You can say that. [00:54:57] Nadia: But then, you know, startups had a whole other renaissance after that; it was far from being over. And people still, by and large, love crypto. There are the loud, negative people criticizing it, in the same way that people criticized startups in 2001. But by and large, a lot of people are still engaging with it and are interested in it. And so I don't feel like it's hit that public backlash moment yet, the way that startups did in 2016.
So I feel like once it gets to that point, and then the reckoning after that, is the point where crypto wealth will be motivated to act philanthropically in this larger cohort [00:55:35] kind of way. [00:55:36] Ben: Yeah. And I don't think the timescales will be the same, but the timescale for that in tech, if we map it onto the 2000 crash, is like, you know, about 15 years. So that'd be 2037, when we need to check back in and see: okay, is this right? [00:55:56] Nadia: It's going to be faster. So I'm going to cut that in half or something. I feel like the cycles are getting shorter and moving faster. [00:56:01] Ben: That definitely feels true. Looking to the future is a good place for us to wrap up. I really appreciate this.
Seemay Chou talks about the process of building a new research organization, ticks, hiring and managing entrepreneurial scientists, non-model organisms, institutional experiments, and a lot more! Seemay is the co-founder and CEO of Arcadia Science — a research and development company focusing on under-researched areas in biology, specifically new organisms that haven't been traditionally studied in the lab. She's also the co-founder of Trove Biolabs — a startup focused on harnessing molecules in tick saliva for skin therapies — and was previously an assistant professor at UCSF. She has thought deeply not just about scientific problems themselves, but about the meta-questions of how we can build better processes and institutions for discovery and invention. I hope you enjoy my conversation with Seemay Chou. Links Seemay on Twitter (@seemaychou) Arcadia's Research Trove Biolabs Seemay's essay about building Arcadia Transcript [00:02:02] Ben: So, since a lot of our conversation is going to be about it, how do you describe Arcadia to a smart, well-read person who has never actually heard of it before? [00:02:12] Seemay: Okay, I actually don't have a singular answer to this. Smart and educated in what realm? [00:02:19] Ben: Oh, good question. Let's assume they have taken some undergraduate science classes, but perhaps are not deeply enmeshed in academia. So, like, [00:02:31] Seemay: Enmeshed in the meta-science community? [00:02:35] Ben: No, no, they're aware that it's a thing, but... [00:02:40] Seemay: Yeah. Okay. So for that person, I would say we're a research and development company that is interested in thinking about how we explore under-researched areas in biology, new organisms that haven't been traditionally studied in the lab. And we're thinking from first principles about all the different ways we can structure the organization around this to also yield outcomes around innovation and commercialization.
[00:03:07] Ben: Nice. And how would you describe it to someone who is enmeshed in the meta-science community? [00:03:13] Seemay: In the meta-science community, I would say Arcadia is a meta-science experiment on how we enable more science in the realm of discovery, exploration, and innovation. That's where I would start, and then there's so much more that we could click into on that. [00:03:31] Ben: And we will absolutely do that. But before we get there, I'm actually really [00:03:35] interested in Arcadia's backstory, because when we met, I feel like you were already well down the path of spinning it up. There's always a good story there. What made you want to go do this crazy thing? [00:03:47] Seemay: So the backstory of Arcadia is actually Trove. Trove was my first startup, which I spun out together with my co-founder Kira. It started from a point of frustration around a set of scientific questions that I found challenging to answer in my own lab in academia. We were very interested in my lab in thinking about all the different molecules in tick saliva that manipulate the skin barrier when a tick is feeding. But the ideal form of a team around this was, you know, a very collaborative, highly skilled team, a strike team for biochemical fractionation, mass spec, developing itch assays, to get this done. It was [00:04:35] not a PhD-style project of one person open-endedly exploring a question. So I was struggling to figure out how to get funding for this, but that wasn't even the right question, because even with the right money, it's still very challenging to set up the right team for this in academia. And so it was during this frustration that I started exploring with Kira what is even the right way to solve this problem, because it's not going to be through writing more grants.
There's a much bigger problem here, right? And so we started actually talking to people outside of academia: here's what we're trying to achieve, and actually the outcome we're really excited about is whether it could yield information that could be acted on for an actually commercializable product. There are skin diseases galore that this could potentially be helpful for. So I think that transition was really important, because it went from a passive idea to: oh wait, how do we act as agents to figure out how to set this up correctly? [00:05:35] We started talking to angel investors, VCs, people in industry. And that's how we learned that, you know, itch is a huge area, an unmet need, and we had tools at our disposal to potentially explore that. So that's how Trove started. And that, I think, was the beginning of the end, or the start of the beginning, however you want to think about it. Because the process of starting Trove was so fun, and it was not at all in conflict with the way I was thinking about my science. The science that was happening on the team was extremely rigorous, and I experienced a different structure. And that was the light bulb in my head: not all science should be structured the same way. It really depends on what you're trying to achieve. And then I went down this rabbit hole of trying to study the history of what you might call meta-science. What are the different structures and iterations of this that have happened over the history of even the United States? It hasn't always been the same, right? And then I think, [00:06:35] as a scientist, once you grapple with the fact that the way things are now is not how they always have been, suddenly you have an experiment in front of you. And so that is how Arcadia was born, because I realized:
Couched within this Trove experiment were so many things that I've been frustrated about. I don't feel like I've been maximized as the type of scientist that I am, and I really want to think in my career now not about how I fit into the current infrastructure, but about what other infrastructures are available to us. Right? [00:07:08] Ben: Nice. [00:07:09] Seemay: Yeah. So that was the beginning. [00:07:11] Ben: And so, I'm just going to extrapolate one more step: you looked at the type of work that you really wanted to do and determined that the structure of Arcadia that you've built is perhaps the right way to go about enabling that. [00:07:30] Seemay: Okay, so a couple things. I don't even know yet if Arcadia is the right way to do it, so I [00:07:35] feel like it's important for me to start this conversation there: I actually don't know. But yes, it's a hypothesis. And I would also say that that is a beautiful summary, but it was still a little clunkier than the way you described it. There was this gap of: okay, what is the optimal place for me to do my science? How do we experiment with this? And I was still acting in a pretty passive way. You know, I was around people in the Bay Area thinking about new orgs. I had heard about this from, like, Ju and Patrick Collison and others, people very interested in funding and experimenting with new structures. So I thought, oh, if I could find someone else to create an organization that I could maybe help advise and be a part of. And so I started writing up this proposal that I was trying to pitch to other people: would you be interested in leading something like this?
[00:08:35] And the more that went on, and I had lots and lots of conversations with other scientists in academia trying to find who would lead this, it took probably about six months for me to realize: oh, in the process of doing this, I'm actually leading this, I think. I was trying to find someone to hand the keys over to when actually I seemed to be the most invested so far. So I wrote up this whole proposal trying to find someone to lead it, and it came down to: oh, I've already done this legwork, maybe I should consider myself leading it. And I've definitely asked myself a bunch of times, was that some weird internalized sexism on my part? Because I was looking for someone, some other dude or something, to actually be in charge here. So that's actually how it started. And a couple of people started suggesting to me: if you feel so strongly about this, why aren't you doing this? And I know [00:09:35] it's always an important question for a founder to ask themselves. [00:09:38] Ben: Yeah, no, that's really clutch. I appreciate you going into the not-so-straight paths of it. Because when we put these things into stories, we always like to make them nice and linear: okay, then this happened and this happened, and here we are. But in reality, there's always that ambiguity. Can I actually ask two questions based on that story? One is, you mentioned that in academia, even if you had the money, you wouldn't be able to put together that strike team that you thought was necessary. Can you unpack that a little bit? [00:10:22] Seemay: Yeah.
I mean, I think there are a lot of reasons why. One of the important reasons, which is absolutely not a criticism of academia (in fact, it's maybe my support of the [00:10:35] mission of academia), is around training and education. Part of our job as PIs, and of the research projects we set up, is to provide an opportunity for a scientist to learn how to ask questions, how to answer them, how to go through the whole scientific process. And that requires a level of openness and willingness to let the person take the reins, which I think is very difficult if you're trying to hit very concrete, aggressive milestones with a team of people. Another challenge is the way we set up incentive structures around publishing. We don't set up the way we publish articles in journals to be very collaborative, or as collaborative as you would want in this scenario. At the end of the day, there's a first author and there's a last author, and that is just a reality we all struggle with, despite everyone's best intentions. And so that inherently sets up [00:11:35] another situation where you're trying to figure out how you weave this collaborative effort together with that reality, and even in the best-case scenario, it doesn't always feel great. It just makes it harder to do the thing. And then finally, for the way we fund projects in academia: this wasn't a very hypothesis-driven project. It's very hard to lay out specific aims for it beyond just the things we're going to be trying, like, what is our process that we can lay out? [00:12:08] Ben: Yeah, it's a... [00:12:09] Seemay: I can't tell you what the outcomes are going to be. So I did write grants on that, and that was repeatedly the feedback.
And then finally, there's this other thing, which is that we didn't want to accidentally land on an opportunity for innovation. We explicitly wanted to find molecules that could be engineered for products. That was [00:12:35] our hypothesis: if there are any, then by borrowing the innovation from ticks, who have evolved to feed for days to sometimes over a week, we are skipping steps to figure out the right natural product for manipulating processes in the skin that have been so challenging to solve. So we didn't want it to be an accident. We wanted to be explicitly quote-unquote translational. And that again poses another challenge within an academic lab, where you have a different responsibility, right? [00:13:05] Ben: Yeah. And there's that tension there between setting out to do that and setting out to do something that is publishable, right? [00:13:14] Seemay: Mm-hmm. Yeah. And one of the hard things that I'm always trying to think about is: out of the things that I just listed, what are things that are appropriately different about academia, and what are things that are maybe worth a second look? [00:13:31] Ben: Mm. [00:13:32] Seemay: They might actually be holding us back even [00:13:35] within academia. So the first thing I would say is non-negotiable: there's a training responsibility. That has to be true, but it's not necessarily mutually exclusive with also having the opportunity for this other kind of team. For example, we don't really have great ways in academia to properly support staff scientists at a high level. There's a very limited opportunity for that. And I'm not arguing with people about the millions of reasons why that might be. That's just a fact, you know. So that's not my problem to solve.
I just see that as a challenge. Also, of course, publishing, right? I think, [00:14:13] Ben: Yeah, [00:14:14] Seemay: in a best-case scenario, science should be in the driver's seat and publishing should be supporting those activities. I think we do see, and I know there's a spectrum of opinions on this, but there are definitely more and more cases now where publishing seems to be in the [00:14:35] driver's seat, [00:14:36] Ben: Yeah, [00:14:36] Seemay: dictating how the science goes on many levels. And I can only speak for myself, but I felt that to be increasingly true as I advanced in my career. [00:14:47] Ben: Yeah. And just to make it really explicit: publishing is in the driver's seat because that's how you make your tenure case, that's how you build any sort of credibility. Everybody's going to be judging you based on what you're publishing, as opposed to anything else. [00:15:08] Seemay: Right. And actually, the reason it felt increasingly heavy as I advanced in my career was not even those reasons, to be honest. It was because of my trainees. [00:15:19] Ben: Hmm. [00:15:20] Seemay: If I want to be out there doing my crazy thing, I have a huge responsibility now to my students, and that is something I'm not willing to take a risk on. So now my hands are tied in this other way. Their [00:15:35] careers are important to me, and if they want to go into academia, I have to safeguard that. [00:15:40] Ben: Yeah. I mean, it suggests a distinction, regardless of academia or not, between training labs and maybe focused labs. And you could say, yes, you want trainees to be exposed to focused research, but at least thinking about those differences seems really important. [00:16:11] Seemay: Yes. Yeah.
And in fact, because I don't like to spend too much time criticizing people in academia: we even grapple with this internally at Arcadia. [00:16:25] Ben: Yeah. [00:16:25] Seemay: There is a fundamentally different phase of a project where we're talking about creating new ideas, [00:16:35] exploring, de-risking, and then some transition happens where it becomes a strike-team effort: how do you expand on this? How do you make sure it's executed well? And there are probably many more buckets than just the two I said, but it's worthy of a little more thought around the way we set up approvals and budgets and management, because they're two fundamentally different things, you know? [00:17:01] Ben: Yeah, that's actually something I wanted to ask about more explicitly, and this is a great segue: where do ideas come from at Arcadia? There's some spectrum from everybody working on their own thing to you dictating everything, and everything in between. So can you go into how that flow works? [00:17:29] Seemay: So I might reframe the question a little bit, to [00:17:35] not where do ideas come from, but how do ideas evolve. Because it's [00:17:39] Ben: Please. Yeah. That's a much better reframing. [00:17:41] Seemay: because it's rarely the case, regardless of who the idea is coming from at Arcadia, that it ends where it starts. And I think that fluidity is the magic sauce, right? So, by and large, the ideas tend to come from the scientists themselves. Occasionally, of course, I will have a thought, or Prachee will have a thought, but I see our roles as much more being there to shepherd ideas in the most strategic and productive direction.
And so I spend a lot of time thinking about what kind of resources this would take, and Prachee definitely thinks about that piece as well, along with what the impact would actually be if it worked, in terms of both our innovation and the knowledge base outside of Arcadia. Practically speaking, something we've started doing has been really helpful (we've already gone through different iterations of this too). We [00:18:35] started out with: oh, let's put out a Google survey people can fill out where they pitch a project to us. And that fell really flat, because there's no conversation to be had there, and they're basically writing a proposal. More streamlined, but not that qualitatively different a process. So then we started doing these things called sandboxes, which I'm actually really enjoying right now. Every Friday we have an hour-long session. The entire company goes, and someone's up at the dry-erase board. We call it throwing them in the sandbox. They present some idea, or a set of ideas, or even something they're really struggling with, for everybody to basically converse with them about. And this has actually been a much more productive way for us to source ideas, and also for me to think collaboratively with them about the right level of resources and the right inflection points for when we decide go or no-go on things. So that's how we're currently doing it. I mean, we're [00:19:35] just shy of about 30 people. This process will probably break again once we hit 50 people or something, because it's logistically a lot of people to cram into a room, and there's a level of formality that starts to happen when there are that many people in the room.
So we'll see how it goes, but that's how it's currently working today. [00:20:00] Ben: That's really cool. And so then let's keep following the evolutionary path, right? An idea gets sandboxed, and you collectively come to some conclusion that it's well worth pursuing. Then what happens? [00:20:16] Seemay: So then, and actually we're very much still under construction right now around this, we're trying to figure out how we think about budget and stuff for this type of step. But then, presumably, the person starts working on it. I can tell you where we're trying to go; I'm not sure we're there yet. Where we're trying to go is turning our [00:20:35] publications into a way to actually integrate into this process. Ideally, I would love it as CEO if I could be updated on what people in the org are doing through our pub site. [00:20:49] Ben: Oh. [00:20:50] Seemay: And I'm not saying they publish every single thing they do every day. Of course, that's crazy talk. But if it's somewhat in line with what's happening in real time, that is an appropriate place for me to catch up on what they're doing, think about high-level decisions, get feedback, and see the feedback from the community as well, because that matters, right? If our goal is to generate either actual products in the world that we commercialize, or knowledge products that are useful to others and can stimulate more thought or be used by others directly, then I need to actually see that data in the form of the outside world interacting with their releases. [00:21:35] So that's what we're trying to move toward, but there are a lot of challenges associated with that.
If a scientist needs to publish very frequently, how do we make sure we have the right resources in place to help them with that? There may be some aspects of it that anyone can help with, like formatting or website issues, or even schematic illustrations, to try to reduce the amount of friction around this process as much as possible. [00:22:00] Ben: And I guess my concern with publishing everything openly very early, and this is almost where I disagree with some people, is that there's what I believe Safi Bahcall called the ugly baby problem: ideas, when you're first poking at them, are just really ugly, and you can barely justify them to [00:22:35] anybody on your team who trusts you, let alone to people who don't have any insight into the process. And so, do you worry at all about people just being completely demoralized? It's so much easier to point out why something won't work early on than why it will. [00:22:56] Seemay: Yeah, totally. [00:22:59] Ben: How do you... [00:22:59] Seemay: Well, I mean, yeah, I think that's a hard challenge. And I would say at a meta level, I get a lot of that too, people pointing out all the ways Arcadia [00:23:09] Ben: Yeah. [00:23:10] Seemay: is failing or potentially going to fail. So, a couple things. One is that, of course, I'm not asking our scientists to have a random thought in the shower and put that out into the world. Right? There's some balance: go through some amount of thinking and feedback with your most local peers on it.
More than anything, [00:23:35] it's just to make sure that by the time it goes out into the world, you're capturing precious bandwidth strategically. Right? [00:23:41] Ben: Yeah, [00:23:41] Seemay: On the other hand, though, while we don't want the totally raw thing, we are so far at the other end of the spectrum right now in terms of forgiveness of some warts. And it also ignores the fact that it's the process, right? Ugly baby? Great. The uglier the better. Put it out there, because you want that feedback. You're not trying to be right; you're trying to get to some ground truth here. And rigor happens through lots of feedback throughout the entire process, especially at the beginning. It's not that rigor doesn't happen in our current system; it's just that it doesn't make it out into the public space. People do share their thoughts with others. They do it at the dry-erase board. They share proposals with each other. There's a lot of this happening; it's just not visible. So the other thing, culturally, that I've been trying to emphasize at [00:24:35] Arcadia is process, not outcomes. We talk about it directly, and we had an exercise at the beginning of thinking about what the correct quote-unquote failure rate is, and what productive failure is. If we are actually doing high-risk, interesting science that's worth doing, there has fundamentally got to be some inherent level of failure built in that we expect. Otherwise, we are answering questions we already know the answer to, and then what's the fucking point? Right? [00:25:05] Ben: Yeah, [00:25:06] Seemay: So it almost doesn't matter what the answer to that question is. Some people said 20%, some people said 80%; there's a very wide range in people's heads.
Cuz this isn't a precise question, right? So there's not gonna be precise answers. But the point is the acceptance of that fact. Right? [00:25:24] Ben: Yeah. And also, I think, I'm not sure if you would agree with this, but I feel like even failure is a very fuzzy concept in this context, [00:25:35] right? [00:25:35] Seemay: Totally. I actually really hate that word. We are trying to rebrand it internally to pivots. [00:25:42] Ben: Yeah. Yeah. I like that. I also hate, in this context, the idea of risk, right? Like, risk makes sense when you're getting cash-on-cash returns, but [00:25:54] Seemay: right. [00:25:54] Ben: when [00:25:55] Seemay: Yeah. Yeah. I mean, you can redefine that word in this case to say it's extremely risky for you to go down this safe path, because you will very likely be, you know, uncovering boring things. That's a risk, right? [00:26:13] Ben: Yeah. And then, just in terms of process, I wanna go one step further into the strike teams around an idea. Is it something where people just volunteer? How do you actually form those teams? [00:26:30] Seemay: Yeah. So far there has not been sort of top-down forcing of people into things. I [00:26:35] mean, we are a small org at this point, but I think, personally, my philosophy is that people do their best work when they feel agency and sort of their own deep, inner inspiration to do it. And so I try to make things more ground-up because of that. Not just because of some fuzzy feeling, but because I actually think you'll get the best work from people if you set it up that way. Having said that, you know, there are starting to be situations where we see an opportunity for a strike-team project where we need to hire someone to come in.
[00:27:11] Ben: Mm-hmm [00:27:12] Seemay: because no one existing has that skill set. So that's a level of flexibility that not everybody has in other organizations, right? That you have an idea, and now you can hire more people onto it. So, I mean, that's obviously a huge privilege we have, to be able to do that, where now we can just transparently be like, here's the thing, who wants to do it? You know? [00:27:32] Ben: yeah, yeah. [00:27:35] That's very cool. [00:27:36] Seemay: Can I just say one more thing about that? [00:27:39] Ben: Of course, you can say as many things as you want. [00:27:40] Seemay: Yeah. Actually, the fact that that's possible, I feel, really liberates people at Arcadia to think more creatively, because something very different happens when I ask people in the room: what other directions do you think you could go in, versus what other directions do you think this project could go in that we could hire someone from the outside to come do? Because now they're like, oh, it doesn't have to be me. Maybe it's because they don't have the skill set, or maybe they're attached to something else that they're working on. So making sure that in their mind, it's not framed as an either/or, but as an and, and that they can stay in their lane with what they most wanna do if we decide to move forward on that, you know? Cause I think that's often something that in academia we don't get to think about; we don't get to think about things that way. [00:28:30] Ben: Yeah, absolutely. And then the people that you would hire onto a [00:28:35] project, say the project then ends, it reaches some endpoint. Do they then sort of go back into the pool of people who are sandboxing? How does that work? [00:28:49] Seemay: So we haven't had that challenge on a large scale yet.
I would say, from a human perspective, I would really like to avoid the situation you see at standard biotech companies where, you know, if an area gets closed out, there's a bunch of layoffs. Like, it would be nice to figure out how we can sort of reshuffle everybody. One of the ways this has happened, though it's not a problem yet, is we have these positions called arcade scientists, which are kind of meant for this, to allow people to move around. So there's actually a couple of scientists at Arcadia that are, quote unquote, arcade scientists. It's meant to be a playful term for someone who's a generalist in some area: a biochemistry [00:29:35] generalist, a computational generalist, something like that, where their job is literally to just work on the first few months of any project. [00:29:44] Ben: oh, [00:29:45] Seemay: And help kind of de-risk it. They're really tolerant of that process. They like it. They like trying to get something brand new off the ground. And then once it becomes more mature, with clear milestones, then we can hire someone else, and they move on to the next thing. I think this is a skill in itself that doesn't really get highlighted in other places. And I think it's a skill set that actually resonates with me very much personally, because if I were applying to Arcadia, that is the position that I would want. [00:30:14] Ben: I think I'm in the same boat. Yeah, and that's critical. There aren't a lot of organizations where you get to come in for a stage of a project. In research, it's generally: you're on this project. [00:30:29] Seemay: And how often do you hear people complain about that in science? Like, oh, so-and-so, they're [00:30:35] really great at starting things but not finishing things. It's like, well, how do we capitalize on that then? [00:30:39] Ben: yeah. Make it a feature and not a bug.
Yeah, no, it's sort of like having different positions on a sports team, for example. And I was thinking the other day that analogies between research organizations and sports teams are sort of underrated, right? Like, you don't expect the goalie to go and score, right? You don't say, oh, you're an underperforming goalie, you didn't score any goals. [00:31:08] Seemay: Right. That's so funny. I literally just had a call with Sam Aman before this where we were talking about this a little bit, in a slightly different context, about a role that I feel is important in our organization: someone to help connect the dots across the different projects. What we were sort of conceptualizing in my call with him was the cross-pollinators, like the bees in the organization that get in the [00:31:35] mix, know what everyone's doing, and help everybody connect the dots. And I feel like this is some sort of a supportive role that's better understood on sports teams. Like, there's always someone that's the glue, right? Maybe they're not the MVP, but they're the other guy, or, you know, girl, whatever, ungendered, but very important. Everybody understands that, and it's celebrated, you know? [00:31:58] Ben: Yeah. Yeah. And the trick is really seeing it more like a team, right? So that's the overarching thing. [00:32:07] Seemay: And then, I don't know, just to highlight again, though, how these realities that you and I are talking about, which I think are actually very well accepted across scientists, we all understand these different roles, those don't come out in the very hierarchical authorship byline of publications, which is the main currency of the system.
And so, yeah, that's been fascinating to sort of relearn, because when we started this publishing experiment, [00:32:35] I was primarily thinking about the main benefit being our ability to do different formats in a very open way. But now I see there's this whole other thing that's probably had the most immediate impact on Arcadia Science, which is the removal of the authorship byline. [00:32:52] Ben: Mm. So you don't say who wrote the thing at all? [00:32:57] Seemay: We do. It's at the bottom of the article, first of all. And then it's listed in a more descriptive way of who did what. It's not this line that's hierarchical, whether implicitly or explicitly. And from my conversations with the scientists at Arcadia, that has been a really wonderful release for them in terms of thinking about how they contribute to projects and interact with each other, because it doesn't matter anymore. That currency is off the table. [00:33:27] Ben: Yeah. That's very cool. And can I change tracks a little bit and ask you about model organisms? [00:33:34] Seemay: sure [00:33:34] Ben: [00:33:35] So, and this is coming really from my naivete, but what are model organisms, and why is having more of them important? [00:33:47] Seemay: So, this is super, super important for me to clarify: there's model organisms and there's non-model organisms, but there's actually two different ways of thinking about non-model organisms. Okay, so let me start with model organisms. A model organism is some organism that provides an extremely useful proxy for studying, typically, either human biology or some conserved element of biology. So, you know, the fact that we have very similar genetic makeup to mice or flies.
Like, there's some shortcuts you can take in these systems that allow you to quickly ask experimental questions that would not be easy to ask in a human being. Right? We obviously can't do those kinds of experiments there. [00:34:35] And the same is true for Arabidopsis, which can be a model for plants, or for biology more generally. And so these are really, really useful tools, especially if you think about historically how challenging it's been to set up new organisms. Like, think about the fifties, before we could sequence genomes as quickly or anything, you know. You really had to band together to build some tools in a few systems that give you useful shortcuts, in general, as proxies for biology. [00:35:11] Ben: Can I just double-click right there? What does it mean to set an organism up? [00:35:18] Seemay: Yeah. I mean, there's everything from culturing, right? Like, you have to learn how to cultivate the organism, grow it, proliferate it. Yeah, you gotta learn how to do basic processing of it, whether that's dissections or [00:35:35] isolating cell types or something. Usually some form of genetics is very useful, so you can perturb the system in some controlled way and then ask precise questions. So those are kind of the range of things that are typically challenging to set up in different organisms. Like, you can think of them as video game characters: they have different strengths, right? Different stat bars. Some are [00:35:56] Ben: Yeah. [00:35:59] Seemay: fantastic for some other reason, you know, whether it's cultivation or maybe something related to their biology. And so that's model organisms, and I am very much pro model organisms. Like, our interest in non-model organisms is in no way in conflict with my desire to see model organisms flourish, right? They fulfill an important purpose.
And we need more, I would say, non-model organisms now. This is where it gets a little murky with the semantics. There's two ways you could think about it, at least. One is that these are organisms that haven't quite risen to the level of the [00:36:35] canonical model organisms in terms of tooling and sort of community effort around them. And so they're on their way to becoming model organisms; they're just kinda, like, hipster model organisms. Maybe you could think about it like that. There's a totally different way to think about it, which is actually how Arcadia's thinking about it, which is to not use them as proxies for shared biology at all, but to focus on the biology that is unique about that organism, that signals some unique biological innovation that happened for that organism or clade of organisms or something. So, for example, ticks releasing a bunch of, like, crap in their saliva into your skin. That's not a proxy for us, like, feeding on other, you know, vertebrates. That is an innovation that happened because ticks have this enormous job they've had to evolve to do well, which is to manipulate everything about your [00:37:35] circulation and your skin barrier to make sure its one blood meal at each of its life stages happens successfully, and can happen for days to over a week. It's extremely prolonged. It can't be detected. So that is a very cool facet of tick biology that we could now leverage to learn something different. That could be useful for human biology, but it's not a proxy, right? [00:37:58] Ben: Yeah. And so I was gonna ask you why ticks are cool, but I think that's sort of self-explanatory. [00:38:05] Seemay: Oh, they're wild. Like, they have this one job to do, which is to drink your blood and not get found out.
[00:38:15] Ben: And I guess, so with ticks, I'm trying to frame this: is there something useful in comparing, like, ticks and mosquitoes? Do they work by the same mechanisms? Are they completely different? [00:38:30] Seemay: Yeah. There's definitely something interesting here to explore, because blood [00:38:35] feeding as a behavior is in some ways a very risky behavior. Right? Any sort of parasitism like that. And actually, blood [00:38:42] Ben: That's trying to drink my blood. [00:38:44] Seemay: Yes. That's the appropriate response. Blood feeding actually emerged multiple times over the course of evolution in different lineages, and mosquitoes, leeches, and ticks are in very different clades of organisms, and they have different strategies, evolved independently, for solving the same problem. So there's some convergence there, but there's a lot of divergence there as well. So, if you think about mosquitoes, leeches, and ticks, this is a great spectrum, because what's critically different about them is the duration of the blood [00:39:18] Ben: Mm, [00:39:19] Seemay: meal. Mosquitoes feed for a few seconds, if they're lucky maybe in the range of minutes. Leeches are minutes to hours. Ticks are days to over a week. Okay? So, temporally, they have to deal with very different things. Mosquitoes tend to focus on [00:39:35] immediately numbing the local area and getting in and out, right, undetected. Leeches, they're there for a little bit longer, so they have very cool molecules around blood flow, like vasodilation, speeding up the amount of blood that they can take in during that period. And then ticks have to deal with not just the immediate response, but also the longer-term response: inflammation, wound healing, all these other sensations that happen.
Imagine if you stuck a needle in yourself for a week. A lot more is going on, right? [00:40:08] Ben: Yeah. Okay. That makes a lot of sense. And so they really are sort of unique in that temporal sense, which is actually important. [00:40:17] Seemay: Yeah. And whether it's causal or not, it does seem to track: the duration of that blood meal at least correlates with sort of the molecular complexity in terms of saliva composition from each of these different sets of organisms. So there's way more proteins and other molecules that [00:40:35] have been detected in tick saliva as opposed to mosquito saliva. [00:40:39] Ben: And so one of your high-level goals is figuring out which of those are important, what mixture of them is important, and how to replicate that for useful purposes? [00:40:51] Seemay: Yeah. Right, exactly. Yeah. [00:40:54] Ben: And are there other, I mean, I guess we can imagine farther into Arcadia's future and think about, do you have almost like a wishlist or roadmap of what other really weird organisms you want to start poking at? [00:41:13] Seemay: So actually, that is originally how we were thinking about this problem for non-model organisms: which organisms, which opportunities. And that itself has evolved in the last year, in part because of our just total paralysis around this decision, because [00:41:35] what we didn't wanna do is say, okay, now Arcadia's basically decided to double down on these other five organisms; we've increased the canon by five now. Great. Okay. But actually, that's not what we're trying to do. Right? We're trying to highlight the totally different way you could think about capitalizing on interesting biology, and our impact will be felt more strongly if this happens not just in Arcadia, but beyond Arcadia, and becomes a more common way.
And I think, like, synbio is really pushing for this as a field in general. So we've gone from sort of which organisms to thinking that maybe one of our most important contributions is to ask the question: how do you decide which organism? Like, what is even the right set of experiments to help you understand that? What is the right set of data that you might wanna collect that would help you decide? Let's say, for example, cuz this is an actual example: we're very interested in diatoms, algae, other things. Which [00:42:35] species should you settle on? I don't know. Like, there's so many, right? So then we started collecting as many as we could get our hands on through publicly available databases or culture collections. And now we are asking the meta question: okay, we have these, what experiments should we be doing in a high-throughput way across all of these to help us decide? And that process, that engine, is something that I think could be really useful for us to share with the world, and that is hard for an individual academic lab to think about. It is not aligned with the realities of grants and journal publications and stuff. And so, yeah. Is it RNA-seq datasets? What kind of phenotypic assays might you want to collect? We now broadly call this the organismal onboarding process. Like, what do you need in the profile of the different organisms? Is it phenomics? Now there's structural [00:43:35] prediction pipelines that we could be running across these different genomes. Depending on your question, it may also be a different set of things. But wouldn't it be nice to sort of just slightly turn the serendipity around? Like, you know, instead of what was around you, can we go in and actually systematically ask this question and get a little closer to something that is useful? You know, [00:43:59] Ben: Yeah. [00:43:59] Seemay: and I think the amazing thing about this is.
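[Editor's note: the "organismal onboarding" engine Seemay describes, collect candidate organisms, run a shared battery of high-throughput assays, then compare the resulting profiles to decide which organisms to invest in, can be sketched as a simple weighted scorecard. This is a minimal, purely illustrative sketch, not Arcadia's actual pipeline; the organism names, criteria, weights, and scores below are all hypothetical.]

```python
from dataclasses import dataclass, field

# Hypothetical onboarding criteria and weights; a real lab would choose
# these per scientific question. Weights here sum to 1.0.
WEIGHTS = {
    "culturability": 0.3,         # how easily does it grow in the lab?
    "genetic_tractability": 0.3,  # can we perturb it in a controlled way?
    "rnaseq_quality": 0.2,        # do we recover usable transcriptomes?
    "phenotype_throughput": 0.2,  # do phenotypic assays run at scale?
}

@dataclass
class CandidateOrganism:
    name: str
    # criterion -> normalized 0..1 score from a hypothetical assay
    scores: dict = field(default_factory=dict)

def onboarding_score(org: CandidateOrganism) -> float:
    """Weighted sum of assay scores; a missing assay counts as 0."""
    return sum(w * org.scores.get(k, 0.0) for k, w in WEIGHTS.items())

def rank_candidates(orgs: list) -> list:
    """Return candidates sorted with the highest-scoring first."""
    return sorted(orgs, key=onboarding_score, reverse=True)

if __name__ == "__main__":
    candidates = [
        CandidateOrganism("diatom A", {"culturability": 0.9, "genetic_tractability": 0.4,
                                       "rnaseq_quality": 0.8, "phenotype_throughput": 0.7}),
        CandidateOrganism("alga B", {"culturability": 0.6, "genetic_tractability": 0.7,
                                     "rnaseq_quality": 0.9, "phenotype_throughput": 0.5}),
    ]
    for org in rank_candidates(candidates):
        print(f"{org.name}: {onboarding_score(org):.2f}")
```

The point of the sketch is the shape of the decision, running the same assay battery across every candidate so the comparison is systematic rather than serendipitous; the actual assays (RNA-seq, phenomics, structure prediction) would each feed one normalized score.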
You know, and I don't wanna ignore the fact that there's been tons of work on this front from the fields of integrative biology and evolutionary biology. Like, there's so much cool stuff that they have found. What I wanna do is couple their thinking and their efforts with the latest and greatest technologies, to amplify it and broaden the reach of the way they ask those questions. And the thing that's awesome about biology is, even if you didn't do any of this and you grabbed a random butterfly, you would still find extremely cool stuff. So that's the [00:44:34] Ben: [00:44:35] Right. Yeah. [00:44:36] Seemay: like, where can we go from here now that we have all these different technologies at our disposal? [00:44:41] Ben: Yeah. No, that's extremely cool. And I wanted to ask a few questions about Arcadia's business model. So, it's a public fact that, unlike a lot of research organizations, Arcadia is a for-profit organization. Now, of course, you and I know that that's a legal designation, and I almost think of there being some multidimensional space where, on the one hand, you have, like, the Chan Zuckerberg Initiative, which is nominally a for-profit, right, in the sense of [00:45:12] Seemay: Yeah. [00:45:13] Ben: it's not a non-profit organization. And then on the other end of the spectrum, you have maybe something like a hedge fund, where the only purpose of the organization in the world is to turn money into more money. Right? And so I guess I'd love to know how you think about sort of where in that domain you sit [00:45:34] Seemay: [00:45:35] Yeah. Yeah. So, okay. This [00:45:38] Ben: and how you sort of came to that decision. [00:45:41] Seemay: Yeah.
This was not a straightforward decision, because actually I originally conceived of Arcadia as a non-profit entity. And I think there were a lot of assumptions and also some ignorance on my part going into that. So, okay, lemme try and think about the succinct way to tell all this. So I [00:45:58] Ben: Take your time. [00:46:00] Seemay: Okay. I started talking to a lot of other people at organizations, like new-science types of organizations, and I'll refrain from naming names here out of respect for people, but they ran into a lot of issues around being a nonprofit. You know, for one, it impacted just sort of the operational aspects of maintaining a nonprofit, which, if you haven't done it before, and I learned by reading about all this and learning about all this, maintaining that status is in and [00:46:35] of itself an effort. It requires legal counsel. It requires boards, it requires oversight, it requires reporting. There's a whole level of operations. [00:46:45] Ben: Yeah. And you always sort of have the government looking over your shoulder. [00:46:49] Seemay: Yep. And you have to go into it prepared for that. So it also introduces some friction around how quickly you can iterate as an organization on different things. The other thing is that, let's say we started as a nonprofit and we realized, oh, there's a bunch of for-profit-type activities we wanna be doing. The transition of converting a nonprofit to a for-profit is actually much harder than the other way around. [00:47:16] Ben: Mm. [00:47:17] Seemay: And so that sort of reversibility was also important to me, given that I didn't know exactly what Arcadia would ultimately look like, and I still dunno. [00:47:27] Ben: Yeah. So it's just more optionality. [00:47:29] Seemay: Yeah. And another point is that I do have explicit for-profit interests for [00:47:35] Arcadia.
This is not a maybe. Like, we really do want to commercialize some of our products one day. And it's not because we're trying to optimize revenue; it's because it's very central to the financial experiment we're running: trying to think about new structures where basic scientists and basic science can capture their own value in society a little bit more efficiently. And so, if we believe the hypothesis that discovering new biology across a wide range of organisms could yield actionable lessons that could then be translated into real products, then we have to make a play for figuring out how to make all this work. And I also see an opportunity to figure out how I can make it work such that, if we do have revenue, I make sure our basic scientists get to participate in that. You know, because that is a huge frustration for me as a basic scientist, that we haven't solved this problem. [00:48:35] Basic science is the bedrock for all downstream science, yet we have to be siloed away from it. Like, we don't get to play a part in it. And also, the scientists at Arcadia, I would say, are not traditional academic scientists. My estimate would be that at least a third of them have an intentional, explicit interest in being part of a company one day that they helped found or spin out. And so that's great: we have a lot of very entrepreneurial scientists at Arcadia. So I'm not shying away from the fact that we are interested in a for-profit mission. Having said all of that, I think it's important to remember that mission and values don't stem from tax structure, right? Like, there are nonprofit organizations that have rotten values, and there are also for-profit organizations that have rotten values. That is not the [00:49:35] dividing line for this.
And so I think it puts the onus on us at Arcadia to continuously be rigorous with ourselves, accountable to ourselves, to define our values and mission. But I don't think that they are necessarily reliant on the tax structure, especially in a for-profit organization where there's only two people on the cap table and their original motivating reason for doing this was to conduct a metascience experiment. So we have a unique alignment with our funders on this that I think also makes us different from other for-profit orgs. We're not a C corp, we're an LLC. And actually, we're going through the process right now of exploring B corp status, which means that you have a fundamental mix of mission and for-profit. [00:50:21] Ben: Yeah. That was actually something that I was going to ask about, just in terms of, I think, what's sort of implicit. One of the reasons that people wonder about [00:50:35] the mixture of research and for-profit status is that the time scales of research are just long, right? Like, research takes a long time and is expensive. And if you're answering to investors who are really primarily looking for a return on their investment, I feel like, and at least in my experience and my thinking about this, that's my worry about it. So having a small number of really aligned investors seems pretty critical to being able to stick to your values. [00:51:18] Seemay: Yeah, no, it's true. I mean, there were actually other people interested in funding Arcadia, and every once in a while I still get reached out to, but me, Jed, and Sam, and Che, like, we went through the wringer together. We went on this journey together to get here, to [00:51:35] decide on this.
And I think there is built in an understanding that there's a chance this will fail, financially and otherwise. But I think the important case to consider, which we discussed, is: what would happen if we are a scientific success but a financial failure? What would each of you be interested in doing? And that's such an important question to answer, right? So for both of them, the answer was: we would consider the option of endowing this into a nonprofit, but only if the science is interesting. Okay. And I'm not saying that we're gonna target that end goal; like, I'm gonna fight with all my might to figure out another way. But that is a super informative answer, right? Because [00:52:27] Ben: yeah, [00:52:27] Seemay: it's delineating what the priorities are. The priority is the science; the revenue is [00:52:35] subservient to that. And if it doesn't work, fine, we will still iterate on that top priority. [00:52:42] Ben: Yeah. It would also be, I mean, that would be cool. It would also be cool if, I mean, everybody thinks about growing forever, but I think it would be incredibly cool if you all just managed to make enough revenue that you can just keep the cycle going, right? [00:52:58] Seemay: Yeah. It also opens us up to a whole new pocket of investments that is difficult in more standard, sort of LP-funded situations. So, you know, given that our goal is sustainability, things that are, like, two to five X ROI are now totally on the table. [00:53:22] Ben: Yeah. Yeah, yeah. [00:53:24] Seemay: And actually that opens up a huge competitive edge for us in areas like tools or products that are not really that interesting to [00:53:35] LPs that are looking to achieve something else. [00:53:38] Ben: Yeah, versus a normal startup. And I think that that's really important.
Like, I think that is a big deal, because there's so many things that I see where it's a two to five X on the amount of money that you could capture, but the amount of value that you create could be much, much larger than that. Right? And this is the whole problem. The thing that I always run into is, you look at the ability of people to capture the value of research, and it's just very hard to capture the whole thing. And often, when people try to do that, it ends up sort of constraining it. And so if you're okay with getting a reasonable return, it just lets you do so many other cool things. [00:54:27] Seemay: Yeah. I think that's the vibe. [00:54:32] Ben: That is an excellent vibe. And speaking [00:54:35] of the vibe, and you mentioned this, I'm interested in both how you find and then convince people to join Arcadia. Right? Because you are, to some extent, asking people to play a completely different game, right? Like, you're asking people who have been in this, you know, citations-and-papers game to say, okay, you're gonna stop playing that and play this other thing. And so, yeah. [00:55:04] Seemay: Yeah. It's funny, I get asked this all the time: how do you protect the careers, or whatever, of people that come to Arcadia? And the solution is actually pretty simple, even though people don't think of it, which is: you don't try and convince people to come. Like, we are not trying to grow into an infinitely large organization. I don't even know if we'll ever reach that number, 150. Like, I was just talking to Sam about how we may break before that point. That's just sort of my cap. We may find that [00:55:35] 50 people is the perfect number, or 75 is.
And, you know, we're actually just trying to figure out: what are the right ingredients for the thing we're trying to do? And so, therefore, we don't need everybody to join. We need the right people to join, and we can't absorb the risk of people who ultimately see a career path that is not well supported by Arcadia. If we absorb that, it will pull us back to the mean, because we don't want anyone at Arcadia to be miserable. We want scientists to succeed. So actually, the easiest way to do that is to not try and convince people to do something they're not comfortable with, and to find the people for whom it feels like a natural fit. So actually, I think I saw on Twitter someone ask this question in your thread about, oh, an important question you ask during your interviews. And one of the most important questions I ask someone is: where else have you applied for jobs? [00:56:35] And if they literally haven't applied anywhere outside of academia, that's an opportunity for me to push. [00:56:43] Ben: Mm. [00:56:44] Seemay: I'm very worried about that. Like, I don't want them to be, quote unquote, making a sacrifice that doesn't resonate with where they're trying to go in their career, cuz I can't help them after, like, once they come. Arcadia has to evolve like its own organism, and sometimes that means things that are not great for people who wanna be in academia, including the publishing and journal bit. And so, yeah, what I tell them is: look, you have two jobs at Arcadia, and both have to be equally exciting to you. And you have to fully understand that they're both your responsibility. Your job is to be a scientist and a metascientist. And those two things have to be equally yours.
You understand what that second thing is: that your job is to evolve with me, provide me with feedback on what is working and not working [00:57:35] for you, and actively participate in all the meta-science experiments that we're doing around publishing, translation, technology, all these things, right? It can't be passive. It has to be active. If that sounds exciting to you, this is a great place for you. If you're trying to figure out how you're going to have your cake and eat it too, and still have a CV that's competitive for academia in case, like, in a year, you know, you go back, this is not the place for you. And I can't, as a human being, absorb that, because I can't help but have some empathy for you once you're here as an individual. I don't want you to suffer. Right. And so we need to have those hard conversations early, before they join. And there's been a few times where, yeah, I think I sufficiently scared someone away. So I think it was better for them, right? Like, it's better for [00:58:25] Ben: Yeah, totally. [00:58:25] Seemay: if that happens. Yeah, it's harder once they're here. [00:58:29] Ben: And so they tend to be people who are sort of already [00:58:35] thinking, who already have one foot out the door of academia, in the sense that they're already exploring that possibility. So you don't have to get them to that point. [00:58:48] Seemay: Right. Yes. Because that's a whole journey they need to go on on their own, because there's so many reasons why someone might be excited to leave academia and go to another organization like this. I mean, there's push and pull, right?
So I think that's a challenge: separating out what is just push, because they're upset with how things are going there, versus, do they actually understand what joining us will entail? And do they have the optimism and the agency to help me do this experiment? It does require optimism. Right. [00:59:25] Ben: Absolutely. [00:59:25] Seemay: So sometimes, you know, I push people: where else have you applied for jobs? And if they can't seem to answer that very well, I say, okay, let me change [00:59:35] this question. You come to Arcadia and I die. Arcadia dissolves. It's an easier way of framing it, like, I can own it. Okay, I died, and me and Che and Jed die. Okay, now what are you gonna do with your career? And it's a silly question, but it's kind of a serious question. Like, how does this fit into the context of how you think about your career, and is it actually going to move you towards where you're trying to go? Because, I mean, I think that's another problem we're trying to solve: scientists need to feel more agency, and they won't feel agency by just jumping to another thing that they think is going to solve problems for them. [01:00:15] Ben: Yeah, that's a really good point. And so this is almost a selfish question, but where do you find these people? Right? Like, you seem to be very good at it. [01:00:26] Seemay: Yeah. I actually don't know the answer to that question fully, because we [01:00:35] only just recently said, oh my God, we need to start collecting some data through voluntary surveys from applicants of how they heard about us, you know? It seems to be a lot of word of mouth, social media; maybe they read something that I wrote or that Che wrote or something.
And while that's been fine so far, we also wanna think about how we broaden that reach further. It's definitely, for the most part, not through their institutes or PIs that I know of. [01:01:03] Ben: Yeah. But it sounds like it does tend to be inbound, right? It tends to be people reaching out to you, as opposed to the other way around. [01:01:16] Seemay: Yeah. And that's not for lack of effort. I mean, there have definitely been times where we have proactively gone out and tried to scout people, but it does run into that problem that I just described before of, like, [01:01:29] Ben: Yeah. [01:01:30] Seemay: if you find them yourself, are you trying to pull them in, and have they gone through their own [01:01:35] journey yet? And so in some of those cases, while we entertained conversations for a while with a couple of candidates we tried to scout, ultimately that's where it ended: they needed to go off on their own and fully explore for a bit, you know, because this would be a bit risky. It hasn't all been a failure like that, but it happens a lot. [01:01:58] Ben: Yeah, no, I mean that frankly squares with my experience roughly trying to find people who fit a similar mold. And that suggests a strategy, right? Be good at setting up some kind of lighthouse, which you seem to have done. [01:02:17] Seemay: The only challenge with this, I would say, and we are still grappling with this, is that that sort of approach does make it hard to reach candidates that are historically underrepresented, because they may not see themselves as strong candidates for such and such.
And [01:02:35] so now we have this other challenge to solve: how do we make sure people have gone through their own process on their own, but also make sure that the opportunity is getting communicated to the right people, and that everybody understands that they're a candidate, you know? [01:02:53] Ben: Yeah. And I guess, as long as we're recording this podcast: what does that process even look like? Like, if someone were asking, what would I start doing? What would you tell them? [01:03:08] Seemay: Oh, to explore a role at Arcadia? [01:03:11] Ben: Yeah. Or just to start going through that process. [01:03:16] Seemay: Yeah. I mean, I guess there's probably a couple of different things. One is just some deep introspection on, what are your priorities in your life? What are you trying to achieve in your career, beyond just the sort of ladder thing? What are the most important north stars for you? And I think [01:03:35] for a place like Arcadia, or any of the other sort of meta-science experiments, that has to be part of it somehow: being really interested and passionate about being part of finding a solution, and being one of the risk takers for it. I think the other thing is very pragmatic: just literally go out there and explore other jobs, please. Like, what is your market value, you know? [01:04:05] Ben: Don't, don't [01:04:05] Seemay: Yeah. Go get that information for yourself. And then you will also feel a sense of security, because even if I die and Arcadia dissolves, you will realize through that process that you have a lot of other opportunities and your skillset is highly valuable.
And so there is solid ground underneath you, regardless of what happens here; they need to absorb that. Right. And then also, just trust me, your negotiations with me will go way better if you come in [01:04:35] armed with information. Like, one of my goals with compensation, for example, is to be really accurate about making sure we're hitting the right market value.
William Bonvillian does a deep dive about his decades of research on how DARPA works and his more recent work on advanced manufacturing. William is a Lecturer at MIT and the Senior Director of Special Projects at MIT's Office of Digital Learning. Before joining MIT he spent almost two decades as a senior policy advisor for the US Senate. He's also published many papers and a detailed book exploring the DARPA model. Links William's Website The DARPA Model for Transformative Technologies Transcript [00:00:35] In this podcast, William Bonvillian and I do a deep dive about his decades of research on how DARPA works and his more recent work on advanced manufacturing. William is a Lecturer at MIT and a Senior Director of Special Projects at MIT's Office of Digital Learning. Before joining MIT, he spent almost two decades as a senior policy advisor for the US Senate. He's published many papers and a detailed book exploring the DARPA model. I've wanted [00:01:35] to compare notes with him for years, and it was a pleasure and an honor to finally catch up with him. Here's my conversation with William. [00:01:42] Ben: The place that I'd love to start off is: how did you get interested in DARPA and the DARPA model in the first place? You've been writing about it for more than a decade now, and you're probably one of the foremost people who've explored it. So how'd you get there in the first place? [00:01:58] William: You know, I worked for the US Senate as an advisor for about 15 years before coming to MIT. And I worked for a US Senator who was on the Armed Services Committee. And so I began doing a substantial amount of that staffing, given my interest in science, technology, and R&D, and, you know, got early contact with DARPA, with some of DARPA's program managers and the DARPA directors, and kind of got to know the agency that way, spent some time with them over in their [00:02:35] offices.
You know, really kind of got to know the program and began to realize what a dynamic force it was. And, you know, we're talking 20-plus years ago, when frankly DARPA was a lot less known than it is now. So, yeah, kind of suddenly finding this jewel box. It was a real discovery for me, and I became very, very interested in the kind of model they had, which was so different than the other federal R&D agencies. [00:03:05] Ben: Yeah. And actually, in your mind, for people who I think tend to see different federal agencies that give money to researchers as all being in the same bucket, what would you describe the difference between DARPA and the NSF as being? [00:03:24] William: Well, I mean, there's a big difference. So the NSF model is to support basic research. And they have, you know, the equivalent of project [00:03:35] managers there, and they don't do the selecting of the research projects. Instead they queue up applicants for funds and then they supervise a peer review process of experts, largely from academia, who evaluate a host of proposals in a given R&D area and make evaluations as to which ones would qualify: which are the best, most competitive applicants for NSF's basic research. So DARPA's got a different project going on. It doesn't work from the bottom up. It has strong program managers who are in effect empowered to go out and create new things. So they're not just, you know, responding to grant applications for basic research; they come into DARPA and develop a [00:04:35] vision of a new breakthrough technology area they wanna stand up. And there's no peer review here. It's really, you hire talented program managers and you unleash them, you turn them loose, you empower them to go out and find the best work that's going on in the country.
And that can be from universities, often in this breakthrough technology area they've identified, but it also could be from companies, often smaller companies. And typically they'll construct kind of a hybrid model where they've got academics and companies working on a project. The companies are always oriented to getting the technology out the door, right, 'cause they have to survive, but the researchers are often in touch with some of the more breakthrough capabilities behind the research. So bringing those two together is something that the program manager at DARPA does. So while at [00:05:35] NSF the program manager equivalent's big job is getting grants out the door and supervising a complex selection process by committee, for the DARPA program manager, selecting the award winners is just the beginning of the job. Then, in effect, you move into their home, right? You work with them on an ongoing basis. DARPA program managers are spending at least one third of their time on the road, linking up with their grantees, the folks they've contracted with, sort of helping them along in the process. And since they typically fund a group of research awards in an area, they'll also work on putting together kind of a thinking community amongst those award winners and contract winners, so that they begin to share their best ideas. And that's not easy, right? Yeah. If you're an academic [00:06:35] or a company, trading ideas is a complicated process, but that's one of the tasks that the DARPA program manager has: to really build these thinking communities around problems. And that's what they're driven to do. So it's a very, very different situation.
This is the different world here that DARPA has created. [00:07:01] Ben: And actually, to click on how DARPA program managers interact with ideas: do you have a sense of how they incentivize that idea sharing? Is it just the concept that if you share these ideas, they might get funded in a way that they wouldn't? How do they construct that trust, so that people could actually be sharing those ideas? [00:07:28] William: Yeah. In some ways it starts out at an early stage. So before, you know, a new [00:07:35] program manager arrives at DARPA, and often, I mean, this could be ARPA-E, it could be IARPA, which work in slightly different ways but with a similar kind of approach; ARPA-E is our energy DARPA, IARPA is our intelligence DARPA, right, and then soon we'll have a health DARPA, which has now been funded. [00:07:55] Ben: Yeah, I wanna get your opinion on that later. [00:07:57] William: Okay. Well, we're working away on this model here. You know, you hire a program manager, somebody who's gonna be talented and dynamic and kind of entrepreneurial in standing up a new program. They get to DARPA and they begin to work on this new technology area. And a requirement of DARPA is that it really be a breakthrough. They don't wanna fund incremental work that somebody else may be doing. They wanna find new territory. That's their job: revolutionary breakthroughs. To get there, they'll often convene workshops, one, two, three workshops, with some of the best thinkers around the country, including [00:08:35] people who may be applying for the funding. But they'll look for the best people, bring them together, and get, you know, a day-long process going, often in several different locations, to kind of think through the technology advance opportunity: how it might shape up, what might contribute to it, how might you organize it?
What research might go into it, what research areas. And that kind of begins the thinking process of building a community around a problem. And then they'll make grant awards, and then similarly they're gonna be frequently convening this group. And everybody can sit on their hands and keep their mouth shut, but, you know, that's not often the way technologists work. They'll get into a problem and start wanting to share ideas and brainstorm. And that's typically what then takes place, and part of the job of the program manager at DARPA is to really encourage that kind of dialogue and get a lot of ideas on the table and really promote it. Yeah. [00:09:34] Ben: [00:09:35] And then also with those ideas, having looked at this so much, do you have a sense of how much there's this tension? You know, people generally do the best research when they feel a lot of ownership over their own ideas and they feel like they're really working on the thing that they want to work on. But at the same time, for a project to play into a broader program, you often need to adjust ideas towards a bigger system or a bigger goal. Do you have an idea of how much program managers shape what people are working on, versus just enabling people to work on things that they would want to work on otherwise? [00:10:24] William: Yeah. The program manager works in communication with DARPA's office directors and director, right? So it's a very flat organization. You know, and [00:10:35] there'll be an office director and a number of program managers working with that office director, for example in the field of biological technologies, a fairly new DARPA office set up about a decade ago. Yeah.
You know, there'll be a group of DARPA program managers with expertise in that field, and they will often have a combination of experiences. They'll have some company experience as well as some academic research experience, so they're kind of walking on both sides. They'll come into DARPA often with some ideas about things they want to pursue, and then they'll start the whittle-down process to get after what they really wanna do. And that's a very, very critical stage. They'll do it often in dialogue with fellow program managers at DARPA, who will contribute ideas, and often with their office director, who oversees the portfolio and can feed that DARPA program manager into other areas of expertise around DARPA. So you come up with a big breakthrough idea, then [00:11:35] you test it out in these workshops, as I mentioned, as well as in dialogue with your colleagues at DARPA. And then if it looks like it's gonna work, you can move it rapidly to the approval process. But DARPA is, you know, what its name says: it's the Advanced Research Projects Agency. So it's not just doing research; it very much wants to do projects. And it's an agency, and it's a defense agency, so the projects are gonna have to be related to the defense sector, although there's often spillover into huge areas of the civilian economy, like in the IT world, where they really pioneered a lot. But essentially the big idea to pursue is developed by the program manager and refined by the program manager. And then they'll put out what's often called a broad area announcement, a BAA: we wanna get a technology that will do this, right? Give us your best [00:12:35] ideas. And they put this broad area announcement out and get people to start applying.
And if the area is somewhat iffy, they can, you know, proceed with smaller awards to see how it tests out, rather than going into a full, larger award process, with kind of seedlings they'll plant. So there's a variety of mechanisms that it uses, but getting that big breakthrough, revolutionary idea is the key job of a program manager. And then they're empowered to go out and do it. And look, DARPA's very cooperative. The program managers really work with each other. Yeah. But in addition, it's competitive, and everybody knows whose technology is getting ahead, whose technology is moving out, and what breakthroughs it might lead to. So there's a certain amount of competition amongst the program managers too, as to how their revolution is coming along. Nice. [00:13:28] Ben: And then, to go one level down the hierarchy, if you will: when [00:13:35] they put out these BAAs, do you have a sense of how often the performers will shift their focus towards an ARPA program, or how much haggling there is between the performer and the program manager, in terms of finding this balance between work that supports the broader program goals and work that supports a researcher's already existing agenda? Right, because, you know, people in their labs sort of have the things that they're pursuing, and maybe they're roughly in the same direction as a program, but need to be shifted. [00:14:20] William: Yeah. You know, the role of the program manager is to put out a new technological vision, some kind of new breakthrough territory that's gonna really be a very significant [00:14:35] advance that can be implemented. It's gonna be applied. It's not discovery; it's implementation that they're oriented to. They want to create a new thing that can be implemented.
So they're gonna put the vision out there, and look, the evaluation process is gonna look hard at this exact question you're raising. It's gonna look hard at whether or not the applicant researcher is kind of doing their own thing or can actually contribute to the implementation of the vision. And that's gonna be the cutoff: will it serve the vision or not? And if not, it's not gonna get the award. So look, that's an issue with DARPA. DARPA is going after their particular technology visions. NSF's funding is driven by the applicants: they will think of ideas they wanna pursue and see if they can get NSF funding for them. At DARPA it's the other way around: the program manager has the vision [00:15:35] and then sees who's willing to pursue that vision with him or her. Yeah. Right. So I won't say it's top-down, because DARPA's very collaborative, but it's more of a top-down approach, as opposed to NSF, which is bottom-up. They're going for technology visions, not to see what neat stuff is out there. Right. [00:15:56] Ben: Yeah. And just to shift a little bit: you mentioned IARPA and ARPA-E as other government agencies that use the same model. You wrote an article in 2011 about ARPA-E, and I'm interested in how you think that it has played out over the past decade. How well do you think they have implemented the model? Do you think that it does work there? And what other places... I guess, do you have a sense of how to know whether the DARPA [00:16:35] model is applicable to an area more broadly? [00:16:39] William: Yeah. I mean, look, that's kind of a key question: if you wanna do a DARPA-like thing, is it gonna work in the territory that you wanna work in? But let's look at this energy issue.
You know, I was involved in some of the early discussions about creating an ARPA for energy, and the net result of that was that a Congressman named Bart Gordon led an effort on the House Science Committee to really create an ARPA for energy. And that approach had been recommended by a National Academies committee, and it seemed to make a ton of sense. So what was going on in energy at the time of the formulation of this, like the 2007 rough time period, [00:17:35] 2008? What was happening was that there was a significant amount of venture capital investment moving towards new energy, clean tech technologies. So the venture capital sector was ramping up its venture funding in clean tech in the 2006, 2007 time period. And that's when ARPA-E was being proposed and considered. So it looked to us, looked to everybody, like there would be a way of doing the scale-up. Right. In other words, it's not enough just to have, you know, cool things that come out of an agency; you need to implement the technology. So who's gonna implement it? Who's gonna do that scale-up into actual implementation? And that's a very key underlying issue to consider when you're trying to set up a DARPA model. DARPA has the advantage of a huge defense procurement budget. So it can formulate a new technology breakthrough, like, [00:18:35] say, stealth, or, you know, UAVs and drones, and then it can turn to the defense department, which will spend procurement money to actually stand up the model, on a good day. 'Cause that doesn't always happen, doesn't always go right. But it's there. What's the scale-up model gonna be for energy? Well, we thought there was gonna be venture capital money to scale up clean tech. And then the bottom fell out of the clean tech venture funding side in the 2008, 2009 time period, and venture money really pulled out.
So, you know, 2009 is when ARPA-E first received its significant early funding: 400 million had been authorized via the science committee, and then it got an appropriation. And there was a big risk there. So look, ARPA-E was then created, and it had a very dynamic leader named Arun Majumdar, who's now at Stanford leading the energy initiatives there. Arun [00:19:35] saw the challenge and he frankly rose to it. So if they weren't gonna get these technologies scaled up through venture capital, like everybody assumed would work, how were they gonna do scale-up? So he did a whole series of very creative things. There was some venture left, so they maintained, you know, good relations with the venture world, but also with the corporate world, because there were a lot of corporations that were interested in moving in some of these directions, if these new technologies complemented technologies they were already pursuing. So Arun created this annual ARPA-E summit where all of its award winners would present their technologies, with, you know, fabulous presentations and booths all around this conference. It rapidly became the leading energy technology conference in the US, widely attended by thousands of people. Venture capital may not have been funding much, but they were there. But more importantly, [00:20:35] companies were there, looking at what these technologies were, to see how they could get stood up. So that was a way of exposing what ARPA-E was doing in a really big way. Right. Another approach they tried, very successfully, was to create what they call the tech-to-market group. So in addition to your program manager at ARPA-E, when you stand up a new project, assigned to that project would be somebody with expertise in the commercialization of technology, by whatever route the financing might be obtained.
And they brought in a series of experts who had done this, who knew venture, who knew startups, who also knew federal government contracting in case the feds were gonna buy this stuff, particularly DOD. And this tech-to-market group became part of the discipline of standing up a project: to really make sure there was gonna be a pathway to commercialization. In fact, that approach [00:21:35] was so successful that DARPA, a number of years later, hired away ARPA-E's tech-to-market director to run and set up its own tech-to-market program. Right. Which was, you know, the new child just taught the parent a lesson here, is what the point was. So there's now a tech-to-market group at DARPA as well. Another approach they took: you know, there's a substantial amount of other, more incrementally oriented R&D funding at the Department of Energy. The EERE program, and other programs in different energy technology areas, will support company research as well as academic research. So ARPA-E built very good ties with EERE, the applied research wing for renewable energy, and other applied research arms of the Department of Energy, so that they could provide the kind of next stage in funding. So you do the [00:22:35] prototyping through ARPA-E, and then some of the scale-up could occur through some of the applied agencies within the Department of Energy. So there were other things they attempted as well, but those were some of the most creative, and, you know, they got around this problem. Now, there's an underlying issue in energy technology, and it's true for many DARPA-like approaches: the technologies don't stand up overnight. In other words, you don't do your applied work and end up with an early prototype and expect it to become a major business within two weeks. Right. Right.
That process can take 10 years or 15 years, particularly in the hard tech area, anything that requires manufacturing. Yeah. An energy technology stand-up, that's a 10 to 15 year process in the United States. So ARPA-E's only been around, what, 11, 12 years, something like that. Their technologies are still emerging. They have made a lot of [00:23:35] technology contributions in a lot of technology areas that have helped expand opportunity spaces. Yeah. In many interesting areas. So they really helped, I believe, in identifying new territories where there can be advances. But, you know, have we transformed the world and solved climate change because of ARPA-E yet? No. That's a longer-term project. So you have to have that expectation when you look at these different areas. In the story of software and some IT sectors, DARPA's played a huge role in the evolution of those; those could be shorter. Yeah, but anything really in the hard tech area is gonna take a much more extended period. Yeah. So you have to be patient. The politicians can't expect change in two weeks or two years. They're gonna have to be a little more patient. [00:24:24] Ben: And another issue that I'm not sure is a real thing, but that I've noticed, is a difference between DARPA and ARPA-E: [00:24:35] with DARPA, when you have the DOD acquiring technologies, they can gather together all the different projects that were within a program and integrate them into an entire system, where when you have an ARPA-E program ending, there are a number of different projects, but there isn't a great way of integrating all the different pieces of a program. Is that an accurate assessment, or am I off base on that? [00:25:07] William: No, Ben, I think that's accurate. I mean, the Department of Energy doesn't have a procurement budget, right?
Like the defense department does. It's not spending 700 billion a year to make things, so it can't play that system scale-up kind of role in the way the defense department does. Now, look, I don't wanna overstate this, because DARPA has definitely stood up technologies outside of defense procurement. So [00:25:35] most of its IT revolution stuff, where it played a big role, for example, as you know, in the development of desktop computing, and a huge role in supporting the development of the internet. Absolutely. You know, those got stood up not particularly through DOD; they got stood up in the civilian sector. So DARPA works on both sides of the street here. If it appears advantageous to stand it up on the civilian side, let it scale up, and then defense can buy it, right, it'll do that. But on the other hand, there's very critical areas defense is gonna have to be the lead on, like, you know, GPS, for example, and really scale up the system, and then it can be shifted over to serve a dual use. [00:26:22] Ben: And then, looking forward to the future: how do you see all these considerations playing out with ARPA-H, the health ARPA that has, I think, been approved [00:26:35] but hasn't actually started doing anything yet? [00:26:39] William: Yeah. It's got money appropriated, and it's a priority of the current administration. So, you know, I believe it's gonna happen here. I mean, look, there are some things that just need to be in place for a DARPA model to work well. Scale-up is one that we've talked about, and, you know, there is a pathway to scale-up for new breakthroughs in biomedicine and medical devices. We've got strong venture capital support in that area for a series of historical reasons.
So that follow-on pickup, right, is gonna be available in many biomedical kinds of fields. But there are issues. There was a big debate about an issue that I'll call island-bridge, right? What you wanna do [00:27:35] with your DARPA is put your DARPA team on an island. You wanna protect that island and keep the bureaucracy away from it. Let 'em do their thing out there and do great stuff, and don't let the bureaucracy, the suits, interfere with them. Yeah. On the other hand, they really need a bridge back to the mainland to get their technologies scaled up. So DARPA, for example, reports in effect to the secretary of defense, and can undertake projects that the secretary of defense can then, in effect, force the military services to pick up, or use budgeting authority to encourage the military services to pick up. DARPA is an island. It's got a separate building about five miles away from the Pentagon. It's got its team there. It's got its own established culture. But then it's got a bridge back to the mainland, through the secretary of defense, into the defense procurement system. What's gonna be ARPA-H's [00:28:35] relationship there? So there's been a lot of debate about where to put ARPA-H. Do you put it in NIH, which, like NSF, is another peer-review basic research agency, by far the biggest? It's got its own culture, and that culture, frankly, is not a DARPA culture, right? It's not a strong program manager culture; it's a peer review culture. Do you really want to put your DARPA-like thing within NIH, and within that NIH culture? On the other hand, where else are you gonna put it? Right. So at the moment we've got a compromise: ARPA-H is gonna report to the secretary of HHS, but the secretary of HHS doesn't have money to scale up new technologies to speak of. Right. Right.
There is an assistant secretary of health who oversees BARDA and some other entities. So that's a possibility. But NIH has got a lot of ongoing research going on. [00:29:35] There could be a lot of follow-on research that comes out of NIH. So this is a challenge, a challenge to set up the right kind of island-bridge model for this new ARPA-H. We've kind of got a compromise there at the moment. It will be located somewhere on the NIH campus, hopefully in a separate building or location. Yeah. And then report to the secretary of HHS. But how is this scale-up gonna work here? What's the bridge to the mainland gonna be, and will it be protected enough from a very different culture at NIH? With lots of jealousies, look: when ARPA-E was created for energy, you know, there's 14 major energy labs, right? They saw ARPA-E as a big competitor for funding that was gonna take money away from the labs. It took a long time to build those relationships, so that the labs saw ARPA-E not as a competitor but as a way in which their stuff could move ahead. [00:30:35] Yeah. And that took a while to sort out. So there's a series of these issues that are gonna have to get well thought through for this new ARPA-H. That opening culture is absolutely critical. Say more about that? Yeah, in other words, the culture of strong program managers that are empowered and ready to pursue breakthrough technologies. That's the heart of the DARPA culture. That culture locks in in the opening months. If you get it wrong, it's very hard to fix it later. You really can't go back. So hiring the right people, having an ARPA-H director who really understands the DARPA model and how to implement it, that's gonna be key in setting that culture up right. [00:31:23] Ben: Yeah.
And you've mentioned a couple of times the sort of effect of physical location on the culture. Have you seen that, like, where [00:31:35] people are physically located really have an effect on resulting cultures? [00:31:41] William: Yeah. I mean, look, obviously post-pandemic we're exploring remote work a lot. Yes. But there's a lot to be said for getting your thinking team in one place, where they're bouncing ideas off each other all the time. Yeah. Where they're exposed and critiqued and evaluated, and they can just see each other and remind each other kind of all the time. So creating that island, with your talent on it, so that they can interact and inevitably work pretty intensively together. Yeah. I think that's something of a prerequisite to getting these kinds of organizations together. You've gotta build that early esprit de corps and that early culture that's very empowered. [00:32:30] Ben: And so, just to take a right turn [00:32:35] and talk a little bit about your work on advanced manufacturing. This is an area I personally know much less about. But I guess one sort of basic thing is, I think a lot of people don't have a good sense of what advanced manufacturing actually means. Like, what does "advanced" actually entail in this situation? [00:33:00] William: Yeah, let me tell you a little bit of a story here. Yeah, please. There are a suite of new technologies and corresponding processes that are kind of emerging, right? Some have emerged; some are at an earlier stage. But areas like robotics, 3D printing and additive manufacturing, and obviously digital production technologies, where IT is built into kind of every stage: all of your factory floor equipment is [00:33:35] linked.
You're doing continuous analytics on each machine, but then you're able to connect them to see the processes as a whole. That's the kind of IT revolution side. Then there's a whole series of advances in critical materials that will enable us to do kind of designer materials in a way we've never done before, because we can now really operate at the molecular level in designing materials. So, in the clean tech space or automotive space, for example, we can have much lighter, much stronger materials. And in a related area, composites are now an emerging opportunity space for a lot of new manufacturing. We may be able to do photonics, a whole new generation of electronics based on light, with a whole range of new speeds for electronics as a result of that, and new efficiencies. So there's a lot of technologies that are [00:34:35] available. Some are starting to enter; some are further back, like photonics, for example. But they could completely transform the way in which we make things. And that's what advanced manufacturing is: can we move to these new technologies, and the processes that go with them, in completely transforming the way in which we make things? [00:34:57] Ben: Yeah. And so I'm very interested in this, and it feels like answering that question involves real research, right? Because you sort of need to rethink processes, you need to rethink how you do design. But at the same time, there aren't a lot of institutions that are organized to do that sort of research. [00:35:23] William: Yeah, look, this has been a big gap in our R&D portfolio in the United States. So at the end of World War II, Ben, you know, Vannevar Bush designs the postwar [00:35:35] system for science. Right, right. We do this amazing connected system in World War II.
We have industries working with universities working with government, closely tied. We do incredible advances that lead to the electronics industry, lead to the whole aerospace industry at the kind of scale we have now, lead to nuclear power. Amazing stuff comes out of World War II. And we had a very connected system. Then we dismantled the military at the end of the war, because we thought, mistakenly, there was gonna be world peace, and all those 16 million soldiers, sailors, airmen that were overseas start to come home. And Vannevar Bush steps in and he says, wait a minute, let's hang on to some of this. We built this amazing R&D capability in the course of the war. Let's hold on to some of it. So he says, let's support basic research. That's the cheapest stage, right? Applied research costs a lot more. Yep. So we decided, let's hang onto that. [00:36:35] And we'd begun during the war, really for the first time, with a lot of federal research funding in universities. So my school, MIT, got 80 times the amount of federal research funding in four years of World War II as it did in all of its previous 80 years of history. Wow. That's happening at a whole bunch of schools. We're creating this incredible jewel in the American system: the federally funded research university. So it leads to that, which is a big positive. But Vannevar Bush's basic research model leaves out the applied side. And the assumption he's got is kind of what he, and what others, refer to as a pipeline model. The federal government's role is: let's dump basic research into one end of the innovation pipeline, let's hope that mysterious things occur and great products emerge, and it's the job of industry to do that interim stage. That's kind of the model. You're [00:37:35] crossing your fingers hoping something is gonna happen in that pipeline.
Whereas in World War II, every stage of that pipeline was pretty well organized in a coordinated kind of way. So we move away from that World War II connected system to a very disconnected system. We in effect institutionalized the valley of death, right? There's gonna be a gap, with the research side on one side of the valley and the actual technology implementation, the late-stage applied side, on the other side, with a big valley gap in between the two and very few bridging mechanisms across. So we built that into our system. And look, Vannevar Bush was worried about science. How are we gonna fund basic science? That's his worry. And the US wasn't the science leader going into World War II. Yeah. Germany and Britain were. We managed [00:38:35] to bring over lots of immigrants to help lead science in the US, and they took up the reins, and we trained a lot of great talent here in the course of the war. And we got ourselves in a position where the US was the science leader by the end of the war. Going into the war, we were the world manufacturing leader. We weren't the science leader; we were the world manufacturing leader. We had built a system of mass production that nobody else had ever seen. Right? Yeah. We went into the war with eight times the production capacity of Japan and four times the production capacity of Germany. You can only imagine what we had coming out of the war. Yeah, exactly. So the last thing on Vannevar Bush's mind was manufacturing. That's in great shape. He sort of [00:39:24] Ben: took that as a given [00:39:25] William: almost, right? That's a given; we're always gonna have that. Right. But he was wrong. We weren't always gonna have that. And Japan taught us that. [00:39:35] It ended up costing the US its leadership in the electronics sector and its leadership in the auto sector, two industry sectors that we had completely dominated.
So then, you know, along comes China, and we have further erosion as well. So the reason why advanced manufacturing is important is this: we've got two moves to compete with China. China's lower wage, lower cost. We can lower our wages to Chinese wage levels; that's probably not gonna happen, although we've been working on it, because we've definitely stagnated wages in US manufacturing, believe me. Or, secondly, we can get much more efficient, much more productive. We can apply our innovation system to manufacturing. Right. So NSF doesn't have an R&D portfolio related to manufacturing. DOE doesn't have an R&D portfolio that's terribly related to manufacturing either. NIH certainly [00:40:35] doesn't. We don't do manufacturing; we don't do these manufacturing technologies and processes in our R&D system. Let's get that very talented, still very able US innovation system onto manufacturing. So that's the basic idea, and that's the way we're gonna have to compete. We've sort of got no other move, right? We can just have continued erosion, with all kinds of social disruption and a real decline in the American working class; we can continue to do that and watch what that's doing to our democracy. Or we can get our act together and do advanced manufacturing. [00:41:12] Ben: And I guess, what are some of the most promising efforts in that area that you've seen? [00:41:21] William: Well, there's amazing work going on that we already see in a whole new kind of robotics. You know, the old industrial robots weighed tons. They were very dangerous; you had to put cages around them and make sure that the workers didn't go near them. [00:41:35] And they'd lift up something heavy and do like one perfect spot weld, and then move to the next piece coming down the assembly line. Yeah. That's the old kind of robotics.
The new kind of robotics is lightweight, collaborative robotics. Just as, you know, we're talking on cell phones: it's like the relationship between me and the cell phone. It's a big enabler for me; it helps me. I can do voice commands to the robot, and it can work in a precision kind of way, but it also knows me, works around me, doesn't endanger me. It's a helper, not a caged beast that has to be behind a fence. So we're moving to that kind of new robotics. That's a whole sea change in manufacturing. We're doing 3D printing, which, instead of subtractive manufacturing, where you cut away a huge piece of metal [00:42:35] and end up with a smaller part, with real limits on what the shape and dimensions and content of that part can be, additive enables you to build a part from scratch with powders, shaping it to exactly the role you want, often with new materials. And we're moving into metal 3D printing, so it's no longer plastics and resins only; it's a whole new kind of metal production. And look, we haven't figured out yet how to get volumes for 3D printing that are similar to mass production, but there are plenty of product lines where you're making limited numbers that have to be extremely precise, right? Yeah. Like jet engines. You're not turning out millions of jet engines every day; you're turning out small numbers, but the precision that additive [00:43:35] can bring, potentially with new materials like ceramics, to creating those turbine blades is really quite dramatic. So there's a whole series of industrial sectors that'll be suited to additive, and it's already moving in on some of these sectors. And we're learning how to use all kinds of new materials for additive, particularly on the metal side and the new materials side. So that's another huge territory of opportunity to transform the way we actually make things.
[00:44:03] Ben: And something that I'm particularly interested in is, so, you could think of many of these new technologies as sort of components in a broader system. And what I don't personally see a lot of is kind of the process research work to really rethink the entire, call it, manufacturing line, the entire system, and sort of ask: how would you redesign the product around how you're making it? Have you seen any [00:44:35] institutions that are trying to do that sort of work? [00:44:40] William: Yeah. I mean, this whole idea: for a long time, the design had to fit the manufacturing, right? So we moved to design-for-manufacturing, to make it easily manufacturable. But now the manufacturing can be much more embedded into the design process, because you can come up with a whole new suite of capabilities that will effectuate new design opportunities. Right? So rather than manufacturing being a limiting factor on design, it's now an enabler of design, and additive manufacturing is an example of that. So a whole new relationship between the production process and the design process is really possible here with these new technologies. And then, getting back to your systems point: we've now got the opportunity, through digital [00:45:35] technologies, to really take a look at a production operation not as a series of isolated machines, where material has to be carted from one machine suite to the next machine suite. Now we've got the ability to integrate them in ways that we have never had before, running that kind of level of data analytics on performance for each machine, but also running a new level of analytics on the system itself. Right? So we're now in a position to really collect the metrics.
To a very fine scale and level on the production process itself, in a way that we've never really had before. So the opportunities for efficiencies here, I think, are quite dramatic. And I think that's the way we're gonna have to compete. But a lot of people worry, you know, are we gonna eliminate all work? Are the robots gonna displace the workers? The reality of advanced manufacturing is actually something [00:46:35] of the opposite. The robot will displace some jobs, but much more frequently the robot will create all kinds of new possibilities within existing jobs. Yeah. And then, thirdly, there will be jobs that get created because we need to make robots, right, and operate and program them. So there are gonna be a lot of jobs. So the net job loss problem, I just don't think, is a real one. Right. Yeah. Instead we get these new possibilities of moving ahead. And look, at the center of these kinds of new factory systems are gonna be people, right? Yeah. People are the ones that have ideas. You know, software and AI and robotics just can't do a whole lot of things that people are able to do. They don't have the kind of conceptual frameworks and the ability to intuit [00:47:35] change that people have got. So I think in a way the new manufacturing system is gonna be more people-centric than it's been before. [00:47:47] Ben: Instead of people just acting like robots. [00:47:49] William: Yeah, a lot less people acting like robots. It's people doing the organization and design and management, and the systems and the programming and the processes, that we're gonna need. Yeah. [00:48:07] Ben: This was awesome. I'm so grateful. And now a quick word from our sponsors.
If you listen to podcasts, you've surely heard advertisements for all sorts of amazing mattresses: ones that can get hot or cold, firmer or softer. But now, with the Pod, you can sleep in a tank of hydrostatic fluid and make gravity while you sleep a thing of the past. [00:48:35]
In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades. They've funded everything from iPython Notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding gives him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. Links - The Sloan Foundation - Adam Falk on Wikipedia - Philanthropy and the Future of Science and Technology Highlight Timestamps - How do you measure success in science? [00:01:31] - Thinking about programs on long timescales [00:05:27] - How does the Sloan Foundation decide which programs to do? [00:08:08] - Sloan's Matter to Life Program [00:12:54] - How does the Sloan Foundation think about coordination? [00:18:24] - Finding and incentivizing program directors [00:22:32] - What should academics know about the funding world and what should the funding world know about academics?
[00:28:03] - Grants and academics as the primary way research happens [00:33:42] - Problems with grants and common grant applications [00:44:49] - Addressing the criticism of philanthropy being inefficient because it lacks market mechanisms [00:47:16] - Engaging with the idea that people who create value should be able to capture that value [00:53:05] Transcript [00:00:35] In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades. They've funded everything from iPython [00:01:35] Notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding gives him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. [00:02:06] Ben: Let's start with a really tricky thing that I'm myself always thinking about, which is that it's really hard to measure success in science, right? You know this better than anybody. So at the foundation, how do you think about success? What does success look like? What does the difference between success and failure mean to you? [00:02:34] Adam: [00:02:35] I mean, I think that's a really good question.
And I think it's a mistake to think that there are some magic metrics, that if only you were clever enough to build them out of citations and publications, you could get some fine-tuned measure of success. I mean, obviously, if we fund in a scientific area, we're funding investigators who we think are going to have a real impact with their work, individually and then collectively. And so of course, if they're not publishing, it's a failure. We expect them to publish; we expect people to publish in high-impact journals. But we look for broader measures as well if we fund a new area. So, for example, a number of years ago we had a program in the microbiology of the built environment, kind of studying all the microbes that live inside, which turns out to be a very different ecosystem than outside. When we started in that program, there were a few investigators interested in this question, and there weren't a lot of tools that were good for studying it. [00:03:35] By 10 years later, when we left, there was a journal, there were conferences, there was a community of people who were doing this work. And that was another really tangible measure of success: that we entered a field that needed some support in order to get going, and by the time we got out, it was going strong, and the community of people doing that work had an identity and funding paths and a real future. Yeah. [00:04:01] Ben: So I guess one way that I've been thinking about it is that it's almost like counterfactual impact, right? Where if you hadn't gone in, then it wouldn't be [00:04:12] Adam: there. Yeah, I think that's the way we think about it. Of course, that's hard to measure. Yeah. But since a lot of the work we fund is not close to technology, we don't have available to ourselves, you know, did we spin out products? Did we spin out?
Companies? Did we do a lot of the things that might directly connect that work to [00:04:35] activities outside of the research enterprise, the kinds of things that in other fields you can measure impact with? So the impact is pretty internal. That is, for the most part, it's: has it had an impact on other parts of science that, again, we think might not have happened if we hadn't funded what we funded? As I said before, have communities grown up? Another interesting measure of impact, from a project that we've funded for about 25 years now, the Sloan Digital Sky Survey, is in papers published, in the following sense: one of the innovations when the Sloan Digital Sky Survey launched in the early 2000s was that the data that came out of it, which was all, for the first time, digital, was shared broadly with the community. That is, this was a survey of the night sky that looked at millions of objects, so these are very large databases. And the investigators who built the [00:05:35] telescope certainly had first crack at analyzing that data. But there was so much richness in the data that the decision was made, at Sloan's urging, early on, that this data should be made public after a year. 90% of the publications that came out of the Sloan Digital Sky Survey have not come from collaborators but have come from people who used that data after it was publicly released. Yeah. So that's another way of seeing the impact and success of a project: it's reached beyond its own borders. [00:06:02] Ben: And you mentioned just that timescale, right? That 25 years. Something that I think is really cool about the Sloan Foundation is how long you've been around and your capability of thinking on a quarter-century timescale. And I guess, how do you think about timescales on things? Right.
Because, on the one hand, obviously science can take [00:06:35] 25 years; on the other hand, you can't just sort of do nothing for 25 years. [00:06:44] Adam: So if you had told people back in the nineties that the Sloan Digital Sky Survey was going to still be going after a quarter of a century, they probably never would have funded it. So, you know, I think that you have an advantage in the foundation world, as opposed to federal funding, which is that you can have some flexibility about the timescales on which you think. And so you don't have to simply go from grant to grant, and you're not kind of at the mercy of a Congress that changes its funding commitments every couple of years. We at the Sloan Foundation tend to think that it takes five years at a minimum to have an impact in any new field that you go into. And when we enter a new science field (as we just did: we just started a new program, Matter to Life, which we can talk about), [00:07:35] that's initially a five-year commitment to put about $10 million a year into this discipline, understanding that if things are going well, we'll re-up for another five years. So we kind of think of that as a decadal program. And I would say the timescale we think on for programs is decades. The timescale we think on for grants is about three years, right? But a program itself consists of many grants made to a large number of investigators, and that's really the timescale where we think you can have an impact. But we're constantly re-evaluating. I would say the timescale for rethinking a program is shorter; that's more like five years. So in our ongoing programs, about every five years we'll take a step back and do a review.
You know: are we having an impact with the program? We'll get some outside perspectives on it, and on whether we need to keep it going exactly as it is, or adjust in some [00:08:35] interesting ways, or shut it down and move the resources somewhere else. [00:08:39] Ben: I like that you almost have a hierarchy of timescales, right? You have sort of multiple going at once. I think that's underappreciated. And so one thing I want to ask about, and maybe the Matter to Life program is a good case study in this, is: how do you decide what programs to do, right? Like, you could do anything. [00:09:04] Adam: That is a terrific question and a hard one to get right. And we just came out of a process of thinking very deeply about it, so it's a great time to talk about it. Let's do it. So, to frame the problem in the largest sense: if we want to start a new grantmaking program where we are going to allocate about $10 million a year over a five-to-ten-year period, which is typical for us, the first thing you realize is that that's not a lot of money on the scale that the federal government [00:09:35] invests. So if your first thought is, well, let's figure out the most interesting science that people are doing, you quickly realize that those are things where there's already a hundred times that much money going in, right? I mean, quantum materials would be something that everybody is talking about. The Sloan Foundation putting $10 million a year into quantum materials is not going to change anything interesting. So you start to look for structural reasons that there's a field or an emerging field where an investment at the scale that we can make can have a real impact. And so what might some of those areas be?
There are fields that are very interdisciplinary in ways that make it hard for individual projects to find a home in the federal funding landscape. One overly simplified but maybe helpful way to think about it is that the federal funding landscape [00:10:35] is organized largely by disciplines: if you look at the NSF, there's a division of chemistry and one for physics and so forth. But many questions don't map well onto a single discipline. And sometimes questions, such as some of the ones we're exploring in the Matter to Life program, which I can explain more about, require collaborations that are not naturally fundable in any of the silos the federal government has. So very interdisciplinary work is one area. Second is emerging disciplines. And again, that often couples to interdisciplinary work, in that disciplines often emerge in interesting ways at the boundaries of other disciplines. Sometimes the subject matter is the boundary. Sometimes it's a situation where techniques developed in one discipline are migrating to being used in another discipline. And that often happens with physics: the [00:11:35] physicists figure out how to do something, like grab the end of a molecule and move it around with a laser, and suddenly the biologists realize that's a super interesting thing for them, and they would like to do that. So then there's work that's at the boundary of those disciplines. A third area is scale issues, where work needs to happen at a certain scale that is too big to be a single investigator but too small to qualify for the kind of big-project funding that you have in the federal government. And so you're looking. You could also certainly find things that are not funded because they're not very interesting.
And those are not the ones we want to fund, but you often have to sift through quite a bit of that to find something. So that's what you're looking for. Now, the way you look for it is not that you sit in a conference room and get real smart and think you're going to see [00:12:35] things other people aren't going to see. Rather, you source it out in the field. So we had an 18-month process in which we invited proposals for what you could do in a program at that scale from major research universities around the country. We had more than a hundred ideas. We had external panels of experts who evaluated these ideas. And that's what led us in the end to this particular framing of the new program that we're starting. And that process was enough to convince us that this was interesting, that it was emergent as a field, that it was hard to fund in other ways, and that the people doing the work are truly extraordinary. That's what you're looking for. And I think in some ways there are pieces of that in all of the programs, particularly the research programs. [00:13:29] Ben: So could you describe the Matter-to-Life program and [00:13:35] sort of highlight how it fits into all of those buckets? [00:13:38] Adam: Absolutely. The Matter-to-Life program is an investigation into the principles, particularly the physical principles, that matter uses in order to organize itself into living systems. The first distinction to make is that this is not a program about how life evolved on Earth; it's actually meant to be a broader question than how life on Earth is organized. The idea behind it is that life on Earth is a particular example of some larger phenomenon, which is life. And I'm not going to define life for you. We know what things are living, and we know what things aren't living, and there's a boundary in between.
And part of the purpose of this program is to explore that. Think of it as being out there in the field, [00:14:35] mapmaking: over here is a block of ice, that's not alive; over here is a frog, and that's alive; and there's all sorts of intermediate space in between. And there are interesting ideas out there about, for example, at the cellular level, how is information communicated around a cell? What role might things like non-equilibrium thermodynamics be playing? Can systems that are non-biological be induced to evolve in interesting ways? And so we're studying both biotic and non-biotic systems. There are three strands in this. One is building life. It was said by, I believe, Feynman, that if you can't build something, you don't understand it. And there are people who want to build an actual cell. I think that's a hard thing to do, but we have people who are building little biomolecular machines in the laboratory and understanding how that might [00:15:35] work. We fund people who are constructing protocells, thinking about the ways that liquids separating might provide divisions between inside and outside within which chemical reactions could take place. We've funded people who have made tiny, micron-scale magnets that you mix together and can get to organize themselves in interesting ways. What are the ways in which emergent behaviors couple into this? So that's building life: can you build systems that have features that feel essential to life, and by doing that, learn something general about, say, the reproduction of DNA, or something simple about how inside gets differentiated from outside?
The second strand is principles of life, and that's a little bit more around: are [00:16:35] there physics principles that govern the organization of life? Are there ways in which the kinds of thinking that informed thermodynamics, which is the study of piles of gas and liquid and so forth, that kind of thinking about bulk properties and emergent behavior, can tell us something about the difference between matter that's alive and matter that's not alive? And the third strand is signs of life. We have all of these telescopes out there now discovering thousands of exoplanets, and of course the thing we all want to know is: is there life on them? We're never going to go to them, or maybe if we go, we'll never come back. And yet we can look and see the chemical composition of these planets; we're just starting to be able to see that. As they transit in front of a star, the atmospheres of these planets absorb light from the star, [00:17:35] and the light that's absorbed tells you something about the chemical composition of the atmosphere. So there's a really interesting chemical question: are there elements of the chemical composition of an atmosphere that would tell you that life is present there, life in general? If you're going to look for DNA or something, that might be way too narrow a thing to look for. So we've made a very interesting grant to a collaboration that is trying to understand the general properties of atmospheres of rocky planets. If you knew all of the things that the atmosphere of an Earth-like planet might look like, and then you saw something that isn't one of those, you'd think, well, something else might have done that. So that's a bit of a flavor. What I'd say about the nature of the research is that it is, as you can tell, highly interdisciplinary.
So this last project I mentioned requires geoscience and astrophysics and chemistry and geochemistry and volcanology and ocean science, [00:18:35] and who's going to fund that? It's also a very emerging area, because it comes at the boundary between geoscience, the understanding of what's going on on Earth, and absolutely cutting-edge astrophysics, the ability to look out into the cosmos and see other planets. People are working at that boundary, and that's where interesting things often happen. [00:18:59] Ben: And you mentioned that when you're looking at programs, you're looking for things that are bigger than a single PI. How do you think about the individual projects within a program becoming greater than the sum of their parts? There's one end of the spectrum where you just say, go do your things, and everybody runs off. And then there's another end of the spectrum where you very explicitly tell people who should be working on what and [00:19:35] how to collaborate. So how do you... [00:19:37] Adam: So one of the wonderful things about being at a foundation is you have convening power. In part because you're giving away money, people will want to come gather when you say let's come together, and in part because you just have a way of operating that's a bit independent. And the issue you're raising is a very important one. At, say, a science grantmaking program, we will fund a lot of individual projects, which may be a single investigator or may be big collaborations, but we also are thinking from the beginning about how to help create a field. And it may not always be obvious how that's going to work.
I think with Matter-to-Life we're early on, and we're not sure: is this a single field? Are there subfields here? But we're already thinking about how to bring our PIs together to share the work they're doing and share perspectives. I can give you another example from a program we recently [00:20:35] closed, which was the chemistry of the indoor environment, which we funded coming out of our work on the microbiology of indoor environments. It turns out that there's also very interesting chemistry going on indoors, which is different from the environmental chemistry that we think about outdoors. Indoors there are people and all the stuff that they exude, and there's an enormous number of surfaces, so surface chemistry is really important. And again, there were people who were doing this work in isolation, interested in these kinds of topics, and we were funding them individually. But once we had funded a whole community of people doing this work, they decided it would be really interesting to do a project, which they called HOMEChem, where they went to a test house and did all sorts of indoor activities, like cooking Thanksgiving dinner, and studied the chemistry together. And this was an amazing collaboration. So many of our grantees came together in one [00:21:35] place around one experiment, or one experimental environment, and did work that could really speak to each other. They'd done experiments that were similar enough that the people studying one aspect of the chemistry and another could do so in a more coherent way. And I think that never would have happened without the Sloan Foundation having funded this chemistry of indoor environments program, both because of the critical mass we created and because of the community of scholars that we helped foster.
[00:22:07] Ben: So you're playing a very important role, but then it is very bottom-up, almost like saying, oh, you people all actually belong together, and then they look around and go, oh yeah, [00:22:24] Adam: we do. I think that's exactly right. And you don't want to be too directive, because we're just a foundation. We've got some program directors, and [00:22:35] we do know some things about the science we're funding, but the real expertise lives with these researchers who do this work every day. So when we think we can see some things that they can't, it's not going to be in the individual details of the work they're doing. But maybe, from up here on the 22nd floor of Rockefeller Center, we can see the landscape a little bit better and are in a position to make connections that will then be fruitful. If we were right, they'll be fruitful because the people on the ground doing the work, with the expertise, believe that they're fruitful. Sometimes we make a connection and it's not fruitful, in that it doesn't fruit, and that's fine too. We're not always right about everything either, but we have an opportunity to do that, and it comes from the particular and special place that we happen to sit. Yeah. [00:23:28] Ben: Yeah. And speaking of program directors: [00:23:35] you're sort of in charge, so how do you think about directing them? How do you think about setting up incentives so that they do good work on their programs? How much autonomy do you give them? How does all of that work? [00:23:56] Adam: Absolutely. So I spent most of my career in universities and colleges.
My own background is as a theoretical physicist, and I spent quite a bit of time as a dean and a college president. And I think the key to being a successful academic administrator is understanding, deep in your bones, that the faculty are the heart of the institution. They are the intellectual heart and soul of the institution, and you will have a great institution if you hire terrific faculty and support them. They don't require a lot of telling them what to do, but the [00:24:35] leadership role does require a lot of deciding where to allocate the resources, and figuring out how, and in what ways, and at what times you can be helpful to them. Yeah. The program directors at the Sloan Foundation are very much like the faculty of a university. We have six right now: five PhDs and a Rhodes Scholar. And each of them is a deeply respected intellectual leader in the fields in which they're making grants. My job is first to hire and retain a terrific group of program directors who know way more about the things they're doing than I do, and then to help them figure out how to craft their programs. And there are different kinds of help that different program directors need. Sometimes they just need resources. Sometimes they need a collaborative conversation. [00:25:35] Sometimes we talk about the ways in which their individual programs are going to fit together into the larger programs at the Sloan Foundation. Sometimes we talk about ways in which we can and should, or shouldn't, change what we do in order to build a collaboration elsewhere. But I don't do much directing of the work that program directors do, just like I never did much directing of the work that the faculty did.
And I think what keeps a program director engaged at a place like the Sloan Foundation is the opportunity to be a leader. Yeah. [00:26:10] Ben: To double-click on that, on hiring program directors: I would imagine that it is sometimes tough to get really good program directors, because people who would make good program directors could probably have their pick of [00:26:35] amazing roles. They do get to be a leader, but to some extent they're not directly running a lab, right? They don't have that direct power. And they're not making as much money as they could be, you know, working at Google or something. So how do you both find, and then convince, people to come do that? [00:26:57] Adam: So that's a great question. Different foundations work differently, but in our case, the people who are meant to be program directors are not people who would otherwise rather be spending their time in the lab. Many of them have spent time as serious scholars in one discipline or another, but much like faculty who move into administration, they've come to a point in their careers, whether earlier or later, [00:27:35] where the larger scope afforded by being a program director compensates for the fact that they can't focus on a particular problem the way a faculty member or a researcher does. Yes. The other thing you have to feel really in your bones, which again is much like being an academic administrator, is that there's a deep reward in finding really talented people and giving them the resources they need to do great things. Right.
And if you're a program director, what you're doing is finding grantees. When a grantee does something really exciting, we celebrate that here at the foundation as a success of the foundation. Not that we're trying to claim their success, but because that's what we're trying to do: find people who can do great things and give them the resources to do those great things. So you have to get a great kind of professional satisfaction from that. [00:28:35] These are people who have a broader view, or want to move into a time in their careers when they can take that broader view, about a field or an area they already feel passionate about, and who have the disposition that wanting to help people is deeply rewarding to them. And how do you find these folks? It's hard, just like it's hard to find people who are really good at academic administration. You have to look really hard for people who are going to be great at this work, and you persuade them to do it precisely because they happen to be people who want to do this kind of work. Yeah. [00:29:09] Ben: You're highlighting a lot of parallels between academic administration and your role now. At the same time, I think there are many things that academics don't understand about science funding and that world, and many things that science funders don't understand about [00:29:35] research, and you're one of the few people who've done both. So, a very open-ended question: what do you wish that more academics understood about the funding world and the things you have to think about here? And what do you wish more people in the funding world understood about research?
Yeah, [00:29:54] Adam: that is great. I can give you a couple of things. At a high level, I always wish that on both sides of that divide there was a deeper understanding of the constraints under which people on the other side are operating, and those are both material constraints and what I might call intellectual constraints. So there's a parallelism here. If I first answer from the point of view of a foundation president: what do I wish that academics really understood? I'm always having to reinforce to people that we really do mean it when we say we fund X and we don't fund Y. [00:30:35] Please don't spend time trying to persuade me that the Z you do really is close enough to X that we should fund it, and get offended when I tell you that's not what we fund. We say no to a lot of things that are intrinsically great, but that we're not funding because they're not what we fund. We make choices about what to fund, and what areas to fund in, that are very specific, so that we can have some impact, and we don't make those decisions lightly. For almost any work someone is doing, we're not the only foundation who might fund it. So move on to someone else if you don't fit our program, rather than arguing with us, and just understand why it is that we do that. I come across that a lot. There's a total parallel, which I think is very important for people in foundations who have very strong ideas about what they should fund to understand: academics are not going to drop what they're doing and start doing something else because there's a [00:31:35] little bit of money available. As an academic, of course, you're trying to match your questions to things that can be supported, but usually you're driven because some question is really important to you.
And if some foundation comes to you and says, well, stop doing that and do this instead, and I'll fund it, then unless you're pretty desperate, you're not going to do that. So the best program directors spend a lot of time looking for people who already are interested in the thing that the foundation is funding, and really understand that you can't bribe people into doing something that they otherwise wouldn't do. So I think those are very parallel: both sides need to understand the set of commitments that people on the other side are operating under. I would say the other thing that I think is really important for foundations to understand about universities and other institutions is that these institutions [00:32:35] are not just platforms on which one can do a project. They are institutions that require support in their own right. Somebody has to pay the debt service on the building and take out the garbage and cut the grass and clean the building and hire the secretaries and do all of the infrastructure work that makes it possible for a foundation such as Sloan to give somebody $338,000 to hire some postdocs and do some interesting experiments. Somebody is still turning on the lights. Overhead is really important, and overhead is not some kind of profit that universities are taking. It is the money they need in order to operate in ways that make it possible to do the grants. And there's a longer story here. Even foundations like Sloan don't pay the full overhead, and we can do that because [00:33:35] we typically are a very small part of the funding stream. But during the pandemic, we raised our overhead permanently from the 15% we used to pay to the 20% that we pay now, precisely because we felt it was important to signal our support for the institutions. And some of those aren't universities; some of those are nonprofits, right?
Other kinds of nonprofits that were housing the activities that we were interested in funding. And I just think it's really important for foundations to understand that. And I do think that my own time as a dean and a college president, when I needed that overhead in order to turn on the lights so some chemist could hire the postdocs, has made me particularly sensitive [00:34:16] Ben: to that. Yeah, that's a really good point that I don't think about enough, so I really appreciate that. And implicit in our conversation have been two core things: one, that the way you [00:34:35] fund work is through grants, and two, that the primary people doing the research are academics. I guess the actual question there is: do you think that is the best way of doing it? Have you explored other ways? Because it feels like those are both, you know, the way people have done it for a long time. [00:35:04] Adam: So there are two answers to that question. The first is just to acknowledge that at the Sloan Foundation, probably $50 million out of the $90 million a year in grants we make are for research, and almost all of that research is done at universities, I think primarily because we're really funding basic research, and that's where basic research is done. If we were funding other kinds of research, a lot of use-inspired research, research that was closer to technology, we might be [00:35:35] funding people who work in different spaces. But for the kind of work we fund, that's really where it's done.
But we have another significant part of the foundation that funds things that aren't quite research: the public understanding of science and technology; diversity, equity, and inclusion in STEM higher ed, where of course much of that money goes into universities, but also into other institutions that are trying to bring about cultural change in the sciences, badly needed cultural change; and then our technology program, which looks at all sorts of modern technologies that support scholarship, such as software and scholarly communication, but has increasingly come to support modes of collaboration and other, more social-science aspects of how people do research. A lot of that funding is not being given to universities. A lot of that funding is given to other sorts of institutions, nonprofits always, because we're a [00:36:35] foundation and can only fund nonprofits, but institutions that go beyond the kind of institutional space that universities occupy. We're not driven by a sense of who we should fund, followed by what we should fund. We're interested in funding problems and questions, and then we look to see who it is that is doing that work. So in public understanding, some of that's in the universities, but most of it isn't. [00:37:00] Ben: Actually, to go back, one thing that I wanted to ask about: if you're primarily wanting to find people who are already doing the sort of work that is within scope of a program, it almost raises a chicken-and-egg problem. What if there's an area where people really should be doing work, but nobody is doing that work [00:37:35] because there is no funding to do that work? This is just something that I've struggled with. So how do you bootstrap that? Yes.
[00:37:46] Adam: I mean, I think the way to think about it is that you work incrementally. You're quite right: in some sense, we are looking for areas that are underinhabited scientifically because people aren't supporting that work. That's another way of saying what I said at the beginning about how we're looking for, maybe, interdisciplinary fields that are hard to support. One way you can tell that they're hard to support is that there isn't support and people aren't doing the work. But typically you're working in from the edges. There are people on the boundaries of those spaces chomping at the bit. And you say to them: what is the work you can't do? What would you do if you had some funding, and tell [00:38:35] us why it's super interesting. That's the question you're asking, and that's the question that drives what we talked about before, which is how you identify a new area. But to your point, precisely, it's not the area where everybody already is, because there's already a lot of money there. So I would say, if you really had to bootstrap it out of the vacuum, you would have to have insights that we don't pretend to have. You'd have to have the ability to look out into the vacuum of space, conjure something that should be there, then conjure who should do it, and have the resources to start the whole thing. That's not the Sloan Foundation; we don't operate at that scale. But there's another version that is more incremental, and recognizes the excitement that researchers adjacent to an underfunded field have to go into a new [00:39:35] area that's just adjacent to where they are, and is responsive to that. [00:39:39] Ben: That ties back, in my mind, to why you need to do programs on that ten-year timescale, right?
The first three years you go a little bit in, the next three years you go a little bit more in, and by the end of the 10 years, you're actually in [00:39:59] Adam: that new area. No, I think that's exactly right. And the other thing is, you can be more risky, or more speculative. I like the word speculative better than risky. Risky makes it sound like you don't know what you're doing. Speculative is meant to say you don't know where you're going to go. So I don't ever think the grants we're funding are particularly risky in the sense that the projects will fail. They're speculative in the sense that you don't know if they're going to lead somewhere really interesting. And this is where the current federal funding landscape is really challenging, because [00:40:35] the competition for funding is so high that you really need to be able to guarantee success, which doesn't just mean guarantee that your project will work, but that it will contribute in some really meaningful way to moving the field forward, which means that you actually have to have done half the project already. That's what's called preliminary data. As far as I'm concerned, preliminary data means: I already did it, and now I'm just going to clean it up with this grant. That's a terrible constraint, and we're not bound by that kind of constraint in funding things. So we can have failures, failures in the sense that something didn't turn out to be as interesting as we hoped it would be. Yeah. [00:41:17] Ben: I love your point on the risk. Especially with science, what even is the risk? You're going to discover something. You might discover that the phenomenon we thought was a [00:41:35] phenomenon is not really there, right?
But it's still not risky, because you weren't investing for [00:41:43] Adam: an ROI. Can I give you another example? I think it's a really good one, and it's in the Matter-to-Life program. We made a grant to a guy named David Baker at the University of Washington. David Baker builds these little nanoscale machines, and he has an enormous institute for doing this. It's extraordinarily exciting work, and almost all of the work that he is able to do is directed toward applications, particularly biomedical applications. Totally understandable: there's a lot of money there, there's a lot of need there. Everybody wants to live forever. I don't, but everybody else seems to want to. So why do we think that we should fund him, with all of the money that's in the Institute for Protein Engineering, which I think is what it's called? It's because we actually funded him to do some basic science: [00:42:35] to build machines that don't have an application, but to learn something about the kinds of machines and the kinds of machinery inside cells, by building something that doesn't have an application but has an interesting basic-science component to it. And that's actually a real impact. It was a terrific grant for us, because all of this architecture has already been built, but there's a new direction he can go with his colleagues that, for all of the funding he has, he can't pursue under the umbrella of biomedicine. So that's another way in which things can be more speculative. It's speculative in that he doesn't know where it's going; he doesn't know the application it's going to. And so even for him, that's a lot harder to do unless something like Sloan steps in and says, well, this is more speculative. It's certainly not risky. I don't think it's risky to fund David Baker to do anything, but it's speculative about where this particular
I don't think it's risky to fund David bay could do anything, but it's speculative about where this particular [00:43:35] project is going to lead. [00:43:36] Ben: Yeah, no, I like that. It's just like more, more speculation. And, and you, you mentioned just. Slight tangent, but you mentioned that, you know, Sloan Sloan operates at a certain skill. Do you ever, do you ever team up with other philanthropies? Is that, is that a thing? [00:43:51] Adam: Yeah, we, we do and we love, we love co-funding. We've, we've done that in many of our programs in the technology program. We funded co-funded with more, more foundation on data science in the, we have a tabletop physics program, which I haven't talked about, but basically measuring, you know, fundamental properties of the electron in a laboratory, the size of this office rather than a laboratory. You know, the Jura mountains, CERN and there we, it was a partnership actually with the national science foundation and also with the Moore foundation we have in our energy and environment program partnered with the research corporation, which runs these fascinating program called CYA logs, where they bring young investigators out to Tucson, Arizona, or on to zoom lately, but [00:44:35] basically out to Tucson, Arizona, and mix them up together around an interesting problem for a few days, and then fund a small, small kind of pilot projects out of that. We've worked with them on negative emission science and on battery technologies. Really interesting science projects. And so we come in as a co-funder with them there, I think, to do that, you really need an alignment of interests. Yeah. You really both have to be interested in the same thing. And you have to be a little bit flexible about the ways in which you evaluate proposals and put together grants and so forth so that, so that you don't drive the PIs crazy by having them satisfy two foundations at the same time, but where that is productive, that can be really exciting. 
[00:45:24] Ben: Cause it seems like, I'm sure you're familiar with, like, the Common Application for college. It just seems like, I mean, one of my biggest [00:45:35] criticisms of grants in general is that, you know, you sort of need to be sending them everywhere. And there's the well-known issue where, you know, PIs spend some ridiculous proportion of their time writing grants. And sort of a philanthropic network, where a proposal just got routed to the right people and a lot happened behind the scenes, that seems like it could be really powerful. Yeah. [00:46:03] Adam: I think that actually would be another level of kind of collective collaboration, like the common app. I love the idea. I have to say it's probably hard to make it happen, for a couple of reasons that don't make it a bad idea, but are just kind of what planet Earth is like. You know, one is that we have these very specific programs, and so almost any grant has to be a little bit re-engineered in order to fit into a new foundation's [00:46:35] program, because the programs are so specific. And the second is, we are certainly, at the Sloan Foundation, very finicky about what review looks like, and foundations have different processes for assuring quality. And the hardest work I find in a collaboration is aligning those processes, because we get very attached to them. It's a little like the tenure review processes at universities. Every single university has its own, right? They have their own tenure process, and they think that it was crafted by Moses on Mount Sinai and can never be changed, that it's the best it possibly ever could be. And then you go to another institution, the thing is different, and they feel the same way. That is a feature, I mean really a bug, of foundations, but it's kind of part of the reality.
And we certainly, what we really need in order for there to be more collaboration, I strongly feel, is for everyone to adopt the Sloan Foundation grant proposal guidelines and review practices. And then all this collaboration stuff would be a piece of cake.[00:47:35] It's like, [00:47:35] Ben: like standards anywhere, right? Where it's like, oh, of course I'm willing to use the standard. It just has to be exactly mine. [00:47:41] Adam: We have a standard, we're done. If you would just recognize that we're better, this would be so much simpler. It's like the way you make a good marriage work. [00:47:51] Ben: And speaking of foundations and philanthropic funding more generally, one of the criticisms that gets leveled against foundations, especially in Silicon Valley, is that because there's sort of no market mechanism driving the process, you know, it can be inefficient and all of that. And I personally don't think that market mechanisms are good for everything, but I'd be interested in just your sort of response to [00:48:23] Adam: that. Yeah. So let me broaden that criticism, because I think there's something there that's really important. The enormous discretion that [00:48:35] foundations have is both their greatest strength and, I think, their greatest danger. That is, you know, because there is not a discipline that is forcing them to make certain sets of choices in a certain structure, right, whether that's markets or whether you think of it more generally as a kind of other disciplining force, too much freedom, or I shouldn't say too much freedom, but I would say a lot of freedom, can lead to decision-making that is idiosyncratic and inconsistent and inconstant, right?
That is, a nicer, more direct way to say it is that if no one constrains what you do and you just do what you feel like, maybe what you feel like isn't the best guide for what you should do. And you need to be governed by a context which assures strategic [00:49:35] consistency, strategic alignment with what is going on at other places, in ways that serve the field, a commitment to quality, other kinds of commitments that make sure that your work is having high impact as a funder. And those don't come from the outside, right? And so you have to come up with ways, internally, to assure that you keep yourself on the straight and narrow. Yeah. I think there's a similar consideration, which goes beyond science funding and philanthropy, about the necessity of doing philanthropic work for the public good. Yeah. Right. And I think that's a powerful ethical commitment that we have to have. The money that we have at the Sloan Foundation, or that the Ford Foundation or the Rockefeller Foundation have, I didn't make that money. What's more, Alfred P. Sloan, who left us this money, made the money in a context in which lots of people did a lot of work [00:50:35] who don't have that money, right? A lot of people working at General Motors plants. And, you know, he made that money in a society that supported the accumulation of that fortune, and it's all tax-free. So the federal government is subsidizing this implicitly. The society is subsidizing the work we do because it's tax-exempt. So that imposes on us, I think, an obligation to develop a coherent idea of what using our funding for the public good means. And not every foundation is going to have that same definition, but we have an obligation to develop that sense in a thoughtful way, and then to follow it. And that is one of the governors on simply following our whims. Right?
So we think about that a lot here at the Sloan Foundation, the ways in which our funding is justifiable as having a positive good [00:51:35] that, you know, attaches to the science we fund, or just society in general. And if we don't see that, you know, we think really hard about whether we want to do that grantmaking. Yeah. So it's [00:51:47] Ben: like, I think about things in terms of systems engineering. And so it's like, you sort of have these self-imposed feedback loops. Yes. While it's not an external market giving you that feedback loop, you can still sort of set up these loops, so [00:52:09] Adam: that, so one of the program directors here, my colleague Evan Michelson, has written an entire book on science philanthropy, and on applying a certain framework that's been developed and largely used in Europe, but also known here in the States. It's called responsible research and innovation, and it provides a particular framework for asking these kinds of questions about who you fund and how you fund, what sorts of funding you do, what [00:52:35] sorts of communities you fund into, and how you would think about doing that in a responsible way. And it's not a book that provides answers, but it's a book that provides a framework for thinking about the questions. And I think that's really important. And as I say, I'm just going to say it again: I think we have an ethical imperative to apply that kind of lens to the work we do. We don't have an ethical imperative to come up with any particular answer, but we have an ethical imperative to do the thinking, and I recommend Evan's book to all. [00:53:06] Ben: I will read it. Recommendation accepted. And I think, broadly, and this is just something that, I mean, sort of selfishly, but I also think, like, there's a lot of people who have made a lot of money, especially in technology.
And it's interesting because you could think of Alfred P. Sloan and Rockefeller and a lot of the [00:53:35] Carnegies as these people who made a lot of money and then started these foundations. But you don't see as much of that now, right? Like, you have some, but really the sentiment that I've engaged with a lot is that, again, sort of prioritizing market mechanisms, an implicit idea that anything valuable should be able to capture that value. And I don't know, it's just like, how do you, like, have you [00:54:08] Adam: talked to people about that? Yeah, I think that's a really interesting observation. And I think it's something we think about a lot: the differences in the ways that today's, you know, newly wealthy business people, particularly the tech entrepreneurs, think about philanthropy as it relates to the way that they made their money. So if we look at Alfred [00:54:35] P. Sloan, he basically built General Motors, right? He was a brilliant young engineer who manufactured the best ball bearings in the country for about 20 years, which turned out, in the nascent automobile industry, to matter enormously. As you can imagine, reducing friction is incredibly important, and ball bearings were incredibly important, and he made the best ball bearings, right? That is real nuts and bolts, nothing sexy about ball bearings, right? And that is the perspective you get on auto manufacturing: that the little parts need to work really well in order for the whole thing to work. And he built a big, complicated institution. General Motors is the case study in American business about how you build a large business that has kind of semi-autonomous parts as a way of getting to scale, right? How do you get General Motors to scale?
You have, you know, Chevy, and you have Buick and [00:55:35] Pontiac, and you have Olds and you have Cadillac and GMC and all, you know. And he was a relentlessly practical and institutional thinker, right, across a big institution. And the big question for him was: how do I create stable institutional structures that allow individual people to exercise judgment and intelligence so they can drive their parts of that thing forward? So he didn't believe that people were cogs in some machine, but he believed that the structure of the machine needed to enable the flourishing of the individual. And that's how he built General Motors. That does not describe the structure of a tech startup, right? Those are move fast and break things, right? That is the mantra there. You have an idea, you build it quickly. You don't worry about all the things, you get to scale as fast as you can with as little structure as you can. You [00:56:35] don't worry about the collateral damage, or frankly much about the people who are kind of maybe the collateral damage. You just get to scale and follow your kind of single-minded vision. And people can build some amazing institutions that way. I mean, I think it's been very successful, right, for building over the last decades, you know, this incredible tech economy. Right? So I don't fault people for thinking about their business that way. But when you turn that thinking to funding science, there's a real mismatch, I think, between that thinking about institutions, that institutions don't matter, that the old ones are broken and new ones can be created immediately, right, and the fact that real research, while it often requires individual leaps forward and acts of brilliance, requires a longstanding, functioning community.
It [00:57:35] requires institutions to fund that research and to host that research. The best research is actually done by people who are engaged, through very long, decades-long careers, in doing a certain thing, and it takes a long time to build expertise. And even as brilliant as you are, you need people around you with expertise and experience. There's a real mismatch. And so there can be a reluctance to fund, a reluctance to commit to those timescales, a reluctance to invest in institutions. There has developed, I think, a sense that we should fund projects rather than people and institutions. And that's really good for solving certain kinds of problems, but it's actually a real challenge for basic research and moving basic research forward. So I think there's a lot of opportunity to educate people, and these are super smart people in the tech sector, right, about the [00:58:35] differences between universities, which are very important institutions in all of this, and tech startups. They really are different sorts of institutions. So I think that's a challenge for us in this sector right now. [00:58:48] Ben: What I'd like to do is tease apart why this is different. Like, why can't you just put in more nights on your research and come out with the brilliant insight faster? [00:59:01] Adam: Yeah. I mean, these are people who are already working pretty hard, I would say. I mean, you, of course, know this really well: science has different parts that work on different sorts of problems. And there are problems where there's a much more immediate goal of producing a technology that would be usable and applicable, and those require organizing efforts in different ways.
And, you know, as you well know, the private laboratories like Bell Labs and Xerox's labs and so forth [00:59:35] played a really important role in doing basic research that was really inspired by a particular application. And they were in the ecosystem in a somewhat different way than the basic research done in the universities. You need both of them. And so, the way that, say, the Sloan Foundation funds science: if everybody only funded science that way, that would not be good, right? But the big money that's coming out of the newly wealthy has the opportunity to have a really positive impact on basic science, but only if it can be deployed in ways that are consistent with the way that basic science is done. And I think that requires some education. [01:00:22] Ben: And sort of speaking of institutions, as I know you're aware, there's sort of this Cambrian explosion of people trying stuff. And I guess, in addition [01:00:35] to just your thoughts on that, I'm interested particularly in whether you see gaps that people aren't trying to fill, but that you would sort of want to shine spotlights on, from your overview position. [01:00:52] Adam: I mean, that's a great question. I'm not going to be able to give you any interesting insight into what we need to do. I do think I'm in great favor of trying lots of things. I mean, I love what's going on right now, that people are trying different experiments in how to fund science. I have a couple of thoughts. I do think that most of them will fail, because in a Cambrian explosion, most things fail, right? That's fine; if they all succeeded, people aren't trying interesting enough things. Right. So that's fine.
I think that there is a danger in too much reinventing the wheel. And, you know, one of the things I notice is [01:01:35] that, you know, some of the new organizations, many of them are set up as a little bit hybrid organizations: they do some funding, but they also want to do some advocacy. They're not 501(c)(3)s; they maybe want to monetize the thing that they're doing. And I think, you know, if you want to set up a Bell Labs, set up a Bell Labs. There aren't magic bullets, some magic hybrid organization that's going to span research all the way from basic to products, right, and that is going to mysteriously solve the problem of plugging all of the holes in the kind of research ecosystem. And so I think it's great that people are trying a lot of different things. I hope that people are also willing to invest in the sorts of institutions we already have, and that there is kind of a balance. There's [01:02:35] a little bit of a narrative that you start to hear that kind of runs down the way we're doing things now, that takes the perspective that everything is broken in the way we're doing things now. And I don't think that everything is broken in the way we do things now. I don't think that the entire research institution needs to be reinvented. I think interesting ideas should be tried, right? There's a distinction between those two things. And I would hate to see the money disproportionately going into inventing new things. Yeah, I don't know what the right balance is, and I don't have a global picture of how it's all distributed. I would like to see both of those things happening. But I worry a little bit that we might get a kind of narrative that the tech billionaires all start to buy into, that the system is broken and they shouldn't invest in it.
If that happens, then it will be broken, and we'll [01:03:35] miss a great opportunity to do really great things, right? I mean, what Carnegie and Rockefeller left behind were great institutions that have persisted long after Carnegie and Rockefeller were long gone, and in forms that Carnegie and Rockefeller could never have imagined. And I would like that to be the aspiration and the outcome of the newly wealthy tech billionaires: the idea that you might leave something behind that, 50 or a hundred years from now, you don't recognize, but that is doing good, right, long past your own ability to direct it. Right. And that requires a long-term sense of your investment in society, your trust in other people to carry something on after you. To think more institutionally, and less about what's wrong with institutions, I think would be a [01:04:35] helpful corrective to much of the narrative that I see there. And that is not inconsistent with trying exciting new things. It really isn't. And I'm all in favor of that. But the system we have has actually produced more technological progress than any other system at any other point in history, by a factor that is absolutely incalculable. So we can't be doing everything wrong. [01:04:58] Ben: I think that is a perfect place to stop. Adam, thanks for being part of Idea Machines. And now a quick word from our sponsors. Is getting into orbit a drag? Are you tired of the noise from rockets? Well, now with Zipple, the award-winning space elevator company, you can get a subscription service for only $1,200 a month. Just go to zipple.com/ideamachines for 20% off your first two months. That's zipple.com/ideamachines.
In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, working at different levels of abstraction, and a lot more! Semon is currently a postdoc in mathematics at Harvard where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of abstraction — doing extremely hardcore math while at the same time paying attention to *how* he's doing that work and the broader institutional structures that it fits into. Semon is worth listening to both because he has great ideas and also because in many ways, academic mathematics feels like it stands apart from other disciplines. Not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. Links Semon's Website Transcript [00:00:35] Welcome back to Idea Machines. Before we get started, I'm going to do two quick pieces of housekeeping. I realize that my updates have been a little bit erratic. My excuse is that I've been working on my own idea machine. That being said, I've gotten enough feedback that people do get something out of the podcast, and I have enough fun doing it, that I am going to try to commit to a once-a-month cadence, probably releasing on the first or second [00:01:35] day of the month. The second thing is that I want to start doing more experiments with the podcast. I don't hear enough experiments in podcasting, and I'm in this sort of unique position where I don't really care about revenue or listener numbers. I don't actually look at them, and I don't make any revenue. So with that in mind, I want to try some stuff. The podcast will continue to be a long-form conversation; that won't change. But I do want to figure out if there are ways to experiment. Maybe something like fake commercials for lesser-known scientific concepts, or micro interviews.
If you have ideas, send them to me in an email or on Twitter. So that's the housekeeping. In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, and working at different levels of abstraction. Semon is currently a postdoc in mathematics at Harvard, where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of [00:02:35] abstraction, doing extremely hardcore math while at the same time paying attention to how he's doing the work and the broader institutional structures that it fits into. He's worth listening to both because he has great ideas, and also because in many ways academic mathematics feels like it stands apart from other disciplines, not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. So it's worth sort of poking at why that happened, and perhaps how other fields might be able to replicate some of the healthier parts of mathematics. So without further ado, here's our conversation. [00:03:16] Ben: I want to start with the notion that I think most people have, that the way that mathematicians go about (a) working on things and (b) thinking about how to work on things, like what to work on, is that you go in a room and you maybe read some papers and you think really hard, and then [00:03:35] you find some problem. And then you spend some number of years at a blackboard, and then you come up with a solution. But apparently that's not how it actually works. [00:03:49] Semon: Okay. I don't think that's a complete description. So definitely people spend time in front of blackboards. I think the typical length of a project can definitely vary between disciplines, and, yeah, within mathematics. But on the other hand, it's also hard to define what is a single project.
As you know, there might be kind of a single intellectual arc through which several papers are produced, where you don't even quite know the end of the project when you start. And so, you know, two years on a single project is probably kind of a significant project for many people, because that's just a lot of time. But it's true that even a graduate student might spend several years working on a single kind of larger set of ideas, because the community does have enough [00:04:35] sort of stability to allow for that. But it's not entirely true that people work alone. I think these days mathematics is pretty collaborative. Yeah. If you're doing math, you know, in the end, you probably are making a lot of stuff up and sort of doing self-consistency checks through this sort of formal algebra, this sort of technique of proof; it helps you stay sane. But when other people can think about the same objects from a different perspective, usually things go faster, and at the very least it helps you kind of decide which parts of the mathematical ideas are really important. So often, you know, people work with collaborators, or there might be a community of people who are kind of talking about some set of ideas, and maybe they're misunderstanding one another a little bit, and then they're kind of biting off pieces of a sort of collectively imagined [00:05:35] mathematical construct to kind of make real on their own or with smaller groups of people. So all of those [00:05:40] Ben: happen. And how do these collaborations, like, come about, and [00:05:44] Semon: how do you structure them? That's a great question. So I think there are probably several different models. I can tell you some that I've run across. So sometimes there are conferences, and then people might start there.
So recently I was at a conference, and I went out to dinner with a few people. And after dinner, we were talking about some of our recent work and trying to understand where it might go. And somebody, you know, was like, oh, you know, I didn't get to ask you any questions; here's something I've always wanted to know from you. And they were like, oh yes, this is how this should work, but here's something I don't know. And then somehow we realized that, you know, there was some reasonable, a very reasonable, guess as to what the answer to something that needed to be known would be. So I guess now we're writing a paper together, [00:06:35] hopefully that guess works. So that's one way to start a collaboration: you go out to a fancy dinner, and afterwards you're like, hey, I guess we maybe solved a problem. There are other ways. Sometimes two people might just realize they're confused about the same thing. So a collaboration like that can start when people from somewhat different technical backgrounds both realize they're confused about a related set of ideas, and they're like, okay, well, I guess maybe we can try to get unconfused together. [00:07:00] Ben: Can I interject? Like, I think that realizing that you are confused about the same problem as someone who's coming at it from a different direction is actually hard in and of itself. Yes. Yes. How does, like, what is actually the process of realizing that the problem that both of you have is in fact the same problem? Well, [00:07:28] Semon: you probably have to understand a little bit about the other person's work, and you probably have to, in some [00:07:35] way, have some basic amount of rapport with the other person first, because, you know, you're not going to get yourself to engage with this different foreign language unless you kind of like them to some degree. So that's actually a crucial thing: the personal aspect of it.
Then, you know, because maybe you kind of like this person's work, and maybe you like the way they go about it, that's interesting to you. Then you can try to, you know, talk about what you've recently been thinking about. And then, you know, the same mathematical object might pop up. And that, you know, truly any mathematical object worth studying usually has incarnations in different formal languages, which are related to one another through kind of highly non-obvious transformations. So for example, everyone knows about a circle, but a circle, you could think of that as the set of points at distance one, or you could think of it as some sort of closed knot, right? There are many different concrete [00:08:35] intuitions through which you can grapple with this sort of object. And usually if that's true, that sort of tells you that it's an interesting object. If a mathematical object only exists because of a technicality, it maybe isn't so interesting. So that's why it's maybe possible to notice that the same object occurs in two different people's misunderstandings. [00:08:53] Ben: Yeah. But I think the cruxy thing for me is that, at the end of the day, it's a really human process. There's not a way of sort of colliding what both of you know without hanging out. [00:09:11] Semon: So people can try to communicate what they know through texts. So people write reviews. I gave a few talks recently, and a number of people have asked me to write like a review of this subject. There's no subject, just to be clear. I kind of gave a talk with the kind of impression that there is a subject to be worked on, but nobody's really done any work on it. You're [00:09:35] kind of willing this subject into existence. That's definitely part of your job as an academic.
But, you know, that's one way of explaining, and I think that can be a little bit less one-on-one, less personal. A different version of that is that people write problem statements: like, I think these are interesting problems. So there are all these famous lists of conjectures in any given discipline. Usually when people decide, oh, there's an interesting mathematical area to be developed, at some point they have a conference and somebody writes down a list of problems. And the conditions for these problems are that they should kind of matter, they should help you understand the larger structure of this area, and the problems to solve should be precise enough that you don't need some very complex motivation to be able to engage with them. So that's part of, I think, the trick in mathematics: you know, different people have very different internal understandings of something, but you reduce the statements, or [00:10:35] the problems, or the theorems, ideally, down to something that you don't need a huge superstructure in order to engage with, because then people with different techniques or perspectives can engage with the same thing. So that depersonalizes it. Yeah, that's kind of a deliberate, I think, tactic. And [00:10:51] Ben: do you think that mathematics is unique in its ability to sort of have those clean problem statements? And I get the sense that it's almost higher status in mathematics to just declare problems. Whereas it feels like in other disciplines, one, the problems are much more implicit; like, anybody in some specialization has an idea of what they are, but they're very rarely made explicit.
And, two, pointing out [00:11:35] problems is fairly low status, unless you simultaneously point out the problem and then solve it. Do you think there's like a cultural difference? [00:11:45] Semon: Potentially. So I think, yeah, anyone can make conjectures, but usually if you make a conjecture, it's either wrong or uninteresting, or it's true but the resulting proof is boring. So to get anyone to listen to you when you state problems, you need to have a certain amount of kind of credibility. Simultaneously, you know, maybe if you have a cell, well, you're in: it's clear, okay, you don't understand the cell, you don't understand what's in it, it's a blob that does magic. Okay, the problem is: understand the magic. In math, you can't see the thing, right? So in some sense, defining problems is part of that. It's very similar to somebody showing somebody: look, here's a protein. Oh, interesting. That's a very [00:12:35] similar process. And I do think that pointing out, like, look, here's a protein that we don't understand, and you didn't know about the existence of this protein, that can be fairly high status work in, say, biology. So that might be a better analogy. Yeah. [00:12:46] Ben: Yeah, no, I like that a lot: that math does not have, you could almost say, the substrate, the context, of reality. [00:12:56] Semon: I mean, it's there, right? It's just that you have to know what to look for in order to see it. So, right, like, you know, number theorists love examples like this. You know, like, oh, everybody knows about the natural numbers, but they just love pointing out: here's this crazy pattern. You would never think of this pattern, because you don't have this kind of overarching perspective on it that they have developed over a few thousand years. [00:13:22] Ben: Has number theory really been around for a few thousand years? It's pretty [00:13:25] Semon: old. Yeah.
[00:13:27] Ben: What would you... [00:13:30] this is just curiosity. What would you call the first [00:13:35] instance of number theory in history? [00:13:38] Semon: I'm not really sure; I'm not a historian in that sense. I mean, certainly Pell's equation is related to all kinds of problems in, I think, Greece or something. I don't exactly know when the Chinese remainder theorem is from; I'm just not a historian, unfortunately. But I do think the basics are very old. I mean, the irrationality of the square root of two is really ancient, so number theory must predate that by quite a bit, because that's a very sophisticated question. [00:14:13] Ben: Okay. Yeah. So then, going back to collaborations: I think a surprising thing you've told me about in the past is that in mathematical collaborations, people have different specializations, in the sense that the collaborations are not just completely flat, with everybody just sort of [00:14:35] stabbing at a place, and that you've actually had pretty interesting collaboration structures. [00:14:43] Semon: Yeah. So I think different people are naturally drawn to different kinds of thinking, and so they naturally develop different thinking styles. So some people, for example, are very interested in certain parts of mathematics, like analysis or algebra, or, you know, technical questions in topology or whatnot, and some people just happen to know certain techniques better than others. That's one axis you could classify people on. A different axis is a question of taste, of what they think is important. So some people want to have a very rich, formal structure.
Other people want to have a very concrete, intuitive structure, and those lead to very different questions. Which, you know, is something I've had to navigate recently, where there's a group of people who are mathematical physicists, and they like a very rich, formal structure, and there are other [00:15:35] people who do geometric analysis, kind of geometric objects defined by partial differential equations, and they want something very concrete. And there are relations between questions in both areas, so I've spent some time trying to think about how one can profitably move from one to the other. But that sort of forces you to navigate a certain kind of tension. So maybe you have different axes of whether people like... here's one: there's the frogs-and-birds dichotomy. And, you know, this is a very strong phenomenon in mathematics. [00:16:09] Ben: That was originally Dyson? [00:16:11] Semon: Maybe, I'm not sure, but it's certainly a very helpful framework. I think some people really want to take a single problem and kind of stab at it; other people want to see the big picture and how everything fits. And both of these types of work can be useful or useless, depending on the flavor of the way the person approaches it. So, you know, often collaborations have one person who's obviously more [00:16:35] birdlike and one who's more froglike, and that can be very productive. [00:16:40] Ben: Let's dig into that a little bit. What are the success and failure modes of birds, and the success and failure modes of [00:16:54] Semon: frogs? Great. I feel like this is somehow very clearly known.
So what frogs fail at is that they can get stuck on a technical problem which does not matter to the larger structure of the mathematical universe. And so, in the long run, they can spend a lot of work resolving technical issues which then end up not really being looked at, because in the end they didn't matter for progress. What they can do well is discover something that is not obvious from any larger superstructure, by directly [00:17:35] engaging with the lower-level details of mathematical reality. So they can show the birds something the birds could never see. And simultaneously, they often have a lot of technical capacity, so there might be some hard problem which no large perspective can help you solve, you just have to actually understand that problem, and they can remove the problem. So that can open up a new world. That's the frog. The birds have the opposite success and failure modes. The success mode is that they point out: oh, here's something that you could have done that was easier, here's kind of a missing piece in the puzzle, and then it turns out that's the easy way to go. So, you know, mathematical physicists have a history of being birds in this way, where they point out: well, you guys were studying this equation to study the topology of four-manifolds, and instead you should study a different equation, which is much easier and will tell you all this. And the reasoning for this is sort of incomprehensible to mathematicians, but it made it much easier to solve a lot of problems. That's kind of the [00:18:35] ultimate bird success. The failure mode is that you spend a lot of time piecing things together, but then you only work on problems which make sense from this huge perspective.
And those problems end up being uninteresting to everyone else, and you end up being trapped by the elaborate complexity of your own perspective. So you start working on something abstruse: you're computing some quantity which is interesting only if you understand this vast picture, and it doesn't really shed light on anything that's simple for people to understand. That's usually not good. If you develop a new formal world, maybe it's fine to work on it, but in the end it's partially validated by solving problems that other people could ask without any of this larger understanding. [00:19:26] Ben: Yeah. Like, you can actually be too... [00:19:31] Semon: Too general, almost. That's very often a [00:19:35] problem. So, you know, one bit of mathematics that is popular among non-mathematicians, for interesting reasons, is category theory. A lot of computer scientists are familiar with category theory because it's been applied to programming languages fairly successfully. Now, category theory is extremely general. The mathematical joke description of it is that it's abstract nonsense. So that's a technical term: a proof by abstract nonsense. There are a number of interesting technical terms, like "morally true" and "proof by abstract nonsense" and so forth, which have, I think, interesting connotations. A proof by abstract nonsense is: you have some concrete question you want to answer, and you realize that its answer follows from the categorical structure of the question. Like, if you fit this question into the [00:20:35] framework of categories, there's a very general theorem in category theory which implies what you wanted.
What that tells you, in some sense, is that your question was not interesting, because it really wasn't a question about the concrete objects you were looking at at all. It was a question about relations between relations, right? So, you know, there's this other phrase, that the purpose of category theory is to make the trivial trivially trivial. And this is very useful, because it lets you skip over the boring stuff, and the boring stuff is something you could actually get stuck on for a very long time, and it can have a lot of content. So category theory in mathematics is, on one hand, extremely useful, and on the other hand can be viewed with a certain amount of suspicion, because people can start working on these very abstruse categorical constructions, some more complicated than the ones that appear in programming languages, which most mathematicians can't make heads or tails of. And some of those [00:21:35] are not necessarily developed in a way that makes them relevant to the rest of mathematics, so there is a natural tension that anyone interested in category theory has to navigate: how far do you go into the land of abstract nonsense? So, you know, even as mathematicians are viewed as the abstract-nonsense people by most other people, within mathematics the hierarchy continues; it's fractal. The hierarchy is preserved for the same reasons. [00:22:02] Ben: That actually goes back to, I think, what you mentioned when you were talking about the failure mode of frogs, which is that they can end up working on things that ultimately don't matter. And I want to poke at how you think about what things matter and don't matter in mathematics, because I think about this a lot in the context of technologies. People always think technology needs to be useful to [00:22:35] some end consumer. But then...
You often need to do some useless things in order to eventually build a useful thing. But in mathematics, the concept of usefulness, in the sense of "I'm going to use this for a thing in the world," is not the metric. But there are still things that matter and don't matter. So how do you think about that? [00:23:01] Semon: So it's definitely not true that people decide which mathematics matters based on its applicability to real-world concerns. That might be true in applied mathematics, actually, inasmuch as there's a distinction, and it's sort of a distinction of values and judgment. But in pure mathematics... So, I said that a mathematical object is more real, in some sense, when it can be viewed from many perspectives. So there are certain objects which many different kinds of mathematicians can grapple with, and there are certain questions which any mathematician can [00:23:35] understand. And that is one of the ways in which people decide that some mathematics is important. So, for example, here's a question which I would think is important. I'm just going to say something technical, but I can kind of explain what it means: understanding statements about the representation theory of the fundamental group of a surface. Okay, so what that means is: if you have any loop in a surface, then you can assign to that loop a matrix. And the condition on this assignment is that if you compose two loops, by going around one after the other, then you assign to the composed loop the product of the two matrices. And if you deform a loop, then the matrix you assign is preserved under the deformation. So the question is: can you classify these things? Can you understand them?
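The verbal description above has a standard formalization; as a minimal sketch in conventional notation (my notation, not from the conversation), a representation of the fundamental group of the surface:

```latex
% A loop \gamma in the surface S, considered up to deformation,
% is an element of the fundamental group \pi_1(S).
% A representation assigns a matrix to each loop,
\rho : \pi_1(S) \longrightarrow \mathrm{GL}_n(\mathbb{C}),
% compatibly with composing loops one after the other:
\rho(\gamma_1 \cdot \gamma_2) = \rho(\gamma_1)\,\rho(\gamma_2).
% Deformation invariance is built in: deforming a loop does not
% change its class in \pi_1(S), so \rho(\gamma) is unchanged.
```

Classifying these representations up to change of basis (conjugation) is the classification question being posed; the resulting spaces are known as character varieties.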
They turn out to be relevant to differential equations, to partial differential equations of all different kinds, to physics, to topology, and so on. So progress on that is kind of [00:24:35] obviously important, because it turns out to be connected to other questions all over mathematics. So that's one perspective: the questions that any mathematician would find interesting, because they can understand them and they're like, oh yeah, that's nice. That's one way of measuring importance. A different one is about the narrative. You know, mathematicians spend a lot of time making sure that all of mathematics is, in practice, connected with the rest of it, and there are all these big narratives which tie it together. So those narratives often tell us things that go far beyond what we can prove. We know a lot more about numbers than we can prove; in some sense, we have much more evidence. So maybe one example: the Riemann hypothesis is important, and we have much more evidence for the Riemann hypothesis, in some sense, than we have for [00:25:35] any physical belief about our world. And it's not just important because it's some basic question; it's important because it's a keystone in some much larger narrative about the statistics of many kinds of number-theoretic questions. So there are other questions which might sound abstruse and are not so simple to state, but which would clarify a piece of this larger conceptual understanding, with all these conjectures and heuristics and so forth. You know, making a heuristic rigorous can be very valuable, and that statement might be extremely complex, but it tells you whether this larger understanding of how you generate all the heuristics is correct or not. And that is important.
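For reference, the Riemann hypothesis mentioned here can be stated precisely; a standard formulation (not from the transcript):

```latex
% The Riemann zeta function, \zeta(s) = \sum_{n \ge 1} n^{-s} for
% \operatorname{Re}(s) > 1, extends to the complex plane.
% The Riemann hypothesis:
\zeta(s) = 0 \ \text{ with } \ 0 < \operatorname{Re}(s) < 1
\;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}.
% Equivalently, a statistical statement about the primes:
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right),
% where \pi(x) counts primes up to x and \operatorname{Li} is the
% logarithmic integral.
```

The equivalent form in terms of the prime-counting function is exactly the kind of "statistics of number-theoretic questions" the larger narrative concerns, and the overwhelming numerical evidence for it, without a proof, is an instance of knowing more about numbers than we can prove.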
There's also surprise. People might have questions where they expect the answer to be something, and then you show it's not that. That's important, if there are strong expectations. It's not that easy to form expectations in mathematics, but... [00:26:30] Ben: But, as you were saying, there are these narrative arcs. [00:26:35] What if you do something that is both correct and defies the narrative? [00:26:39] Semon: That's interesting; that means there must be something there. Or maybe not. Maybe it's only because there was some technicality, and the technicality doesn't enlighten the rest of the narrative. So that's a balance which people argue about, and it's determined in the end, I guess, socially, but also through the production of results and theorems and mathematical experiments and so forth. [00:27:04] Ben: So I'm going to yank us back to collaborations. In the past, we've talked about how you actually do program management around these collaborations, and I got the impression that mathematics actually has pretty good standards for how this is done. [00:27:29] Semon: What do you mean by program management? [00:27:31] Ben: Like, [00:27:35] how you're basically managing your collaborators. You were talking about how you need to wrangle people. So, yeah, just how to manage your collaborators. [00:27:51] Semon: So I guess... [00:27:54] Ben: We were developing a theory on that. [00:27:56] Semon: Yeah, a little bit. So on one hand, in mathematics... so, in the sciences, there's usually somebody with money, and then they kind of determine what happens.
[00:28:08] Ben: Is this a funder, or is this like a... [00:28:10] Semon: A PI? Yeah. So in the sciences, maybe the model is funding agencies, PIs, and lab members, and often the PIs are setting the direction. The grant people are essentially putting constraints on what's possible, so they steer the direction in some much larger way, but they can't really see the ground truth. And [00:28:35] then a bunch of creative work happens at the lowest level, but you're very constrained by what's possible in your lab. In mathematics, there aren't really labs. You know, there are certainly places where people know more than other places about certain parts of mathematics, so it's hard to do certain kinds of mathematics without people around you who know something, because most of the mathematics isn't written down. And [00:28:58] Ben: that statement is shocking in and of itself. [00:29:01] Semon: It's also similar in the sciences, right? Most things people know about the natural world aren't really that well documented. That's why it sometimes pays to be lower down the chain: you might find something that isn't known. Yeah. But because of that, people can work very independently and even misunderstand one another, which is good, because the misunderstanding can then lead to creative developments, where people with different tastes might find different aspects of the same problem interesting, and the whole thing is better that way. And then [00:29:34] Ben: [00:29:35] resolving the confusion in a legible way... [00:29:40] Semon: ...sort of pushes the field. But also, because everyone can work on their own, coordination involves a certain amount of narrative alignment.
And so you have to understand: oh, this person is naturally suited to this kind of question, that person is naturally suited to that kind of question. So what are questions where both people are needed? First of all, needing both people to make progress gives you competitive advantage, which is extremely important in any scientific landscape. And secondly, if you can find a question of overlap, then there's some natural division of labor, or some natural way in which both people can enlighten each other in surprising ways. If you can do everything yourself and just have some other person write it up, that's not much of a collaboration. So yeah. And then on a [00:30:35] larger scale... that's a single-project collaboration. To do larger collaborations, you essentially have to assign social value to questions. Math is small enough that it can just barely survive with a credit-assignment system based almost entirely on the social network of mathematicians. It is certainly important to have papers refereed, because it's important for somebody to read a paper and check the details, so the journals do matter, but a lot happens socially. So, you know, it doesn't have the same scaling that biology or machine learning has, in part because it's small. [00:31:20] Ben: Do you know roughly how many mathematicians there are? I can look this [00:31:25] Semon: up? I mean, it depends on who you count as a mathematician, and that's the question I'd ask you back. The reason I say that is because, of course, there's the American Mathematical Society, and they publish, like, this is the number of mathematicians. And the thing is, they count quite a lot of people, so that decision actually dramatically changes your answer.
I would say there are on the order of tens of thousands of mathematicians, if you think about the number of attendees of the ICM, the International Congress of Mathematicians. It depends on pure mathematicians, and how pure; that number is going to go up and down. But that's the right order of magnitude. [00:32:12] Ben: Which is very small compared to most other disciplines, especially compared to science as a whole, research [00:32:20] Semon: as a whole. Yeah. So I think if you look at, say, Harvard Business School, they have an MBA program, which my impression is is serious, [00:32:35] and then you also look at all the math PhD graduates from, like, the top 15 or so US schools, I think the MBAs are several times larger. So maybe I was surprised to learn that. [00:32:50] Ben: That's also a good... [00:32:51] Semon: You can look at the output rate, the flow rate; that's a very easy way to decide. Yeah. So, if you want to work with people, you have to find them. You can't really be a PI in mathematics, but if you are good at talking to people, you can encourage people to work on certain questions, so that over time a larger set of questions gets answered. And you can also make public statements, which are in some ways invitations: [00:33:35] if you do these things, then it'll be better for you, because they fit into a larger context, and therefore your work is more significant. So you're actually doing them a service by explaining some larger context.
And simultaneously, by pointing out that maybe some problem is easy, or comparatively easy, for some people, a problem that you yourself might not do, that helps you if they then solve the problem, because you made a correct prediction that there was good mathematics there. Yeah. So this is some complicated social game. Mathematicians are kind of strange socially, but they do play this game, and the way in which they play it depends on their personal preferences and how social they are. [00:34:13] Ben: And actually, speaking of the social nature of mathematics: I get the impression that mathematics as a discipline feels much closer to what one might think of as old academia than many other disciplines, in the sense that my impression is [00:34:35] that your tenure isn't as much based on how much grant money you're getting in, and it's not quite as much a paper-mill, up-and-out [00:34:46] Semon: game. Yeah. There's definitely pressure to publish, and the expected publishing rate definitely depends on the area. So, you know, probability publishes more; in some ways it's a little bit more like applied mathematics, which has more of a paper-mill quality to it. I don't want to overstate that. But there is space for people to write just a few papers, if they're good, and get a job. Yeah. And it's definitely true, as I think it is in the rest of the sciences, that high quality trumps quantity, modulo the fact that you do have to produce a certain amount of work in order to stay in academia. And, you know, in the end, where you end up is very much determined by the significance of your work, and being consistently very productive certainly helps; then people are kind of not as [00:35:35] worried.
But yeah, it's definitely not determined based on grant money, because essentially there's not that much grant money to go around. So that gives it more of this old-school flavor. And it's also true that it's genuinely not strange for people to graduate from a PhD program with just their thesis, and they can do very well, so long as during grad school they learn something that other people don't know and that matters. So that allows for this... there's this weird trick that mathematicians play, where proofs are supposedly a universal language that everyone can read. That's not quite true, but it tries to approximate that ideal. But everyone is sort of allowed to go on their own little journey, and the community does spend a lot of work trying to defend that. [00:36:25] Ben: What does that work [00:36:27] Semon: actually look like? Well, it is actually true that grad students are not required to publish a paper a year. Yeah, [00:36:35] that's true, and people do defend that kind of position; they are willing to put their reputation on the line in the larger hiring process to defend it. Separately, it's true that work that is not coming out of one of the top few people or places can still be considered legitimate, because a proof is a proof; no one can disagree with it. So if some random person makes some progress, then, if people can understand it, it's very quickly accepted. And this allows communities to work without quite understanding one another for a while, and maybe make progress that way, which can be [00:37:18] helpful. Ben: Yeah. And most of the funding for math departments actually comes from teaching.
Is that right? [00:37:26] Semon: Yeah, I think a lot of it comes from teaching. A certain chunk of it comes from grants; basically, people use grants in order to teach less. Yeah, that's more or [00:37:35] less how it works. You know, of course, mathematics also has this current phenomenon where rich individuals fund a department or something, or they fund a prize. But by and large, it seems to be less dependent on these gigantic institutional handouts from, say, the NSF or the NIH, because the expenses aren't as large. But it does also mean that it is sort of constrained. You know, big biology has kind of so much money... maybe not enough, not as much as it needs; I mean, the grant acceptance rates are extremely low. [00:38:13] Ben: What if, for some reason, every mathematician magically had, say, an order of magnitude more funding? [00:38:21] Semon: Would it matter? Yeah. So it's not clear that they would know what to do with that. I have thought a lot about the question of to what degree mathematics is some kind of social enterprise, and that's maybe true of every research [00:38:35] program, but it's particularly true in mathematics, because it's so dependent on individual creativity. So I've thought a lot about to what degree you could scale the social enterprise, and in what directions it could scale, because it's true that producing mathematicians is essentially an expensive and ad hoc process. But at the same time, it's plausibly true that people might be able to do research of a somewhat different kind, just in terms of collaborations, or in terms of what they felt free to do research on, if they had access to a different kind of funding. Like, math itself is cheap, but the freedom to say, okay, well, these next two years I'm going to do this kind of crazy, different thing...
And that does not have to fit with my existing research program: that, you have to fight for. And that's a more basic, structural thing about math academia. [00:39:27] Ben: I feel like that's structurally baked into almost the entire world. It's just [00:39:35] very hard to do something completely different from the things that you have done, right? People are much more inclined to help you do things like what you've done in the past, and they are inclined to push against you doing different things. Yeah. [00:39:50] Semon: That's true. [00:39:50] Ben: And, sort of speaking of money: in the past, you've also pointed out that math is terrible at capturing the value that it creates. [00:40:02] Semon: Well, yeah. You know, it may be hard to estimate the human-capital value; maybe all mathematicians should be doing something else, I don't really know how to reason about that. But it's definitely objectively very cheap, just in the sense that all the funding that goes into mathematics is very little. [00:40:21] Ben: And arguably, the sort of downstream... basically every technical anything we have is to some extent downstream of mathematics. [00:40:32] Semon: There is an argument to be made of that kind. You know, [00:40:35] I don't think one should over... there are extreme versions of this argument which I think are maybe not helpful for thinking about the world. Like, you shouldn't think, ah yes, computer science is downstream of this Turing thing. I don't really know that it's fair to say that. But it is true that whenever mathematicians produce something that's more pragmatically useful for other people, it tends to be easy to replicate, and it tends to be very robust.
So there are lots of other ideas of this kind. And separately, a bunch of the value of mathematics to the larger world seems to me to not even be about specific mathematical discoveries, but about the existence of this larger language and culture. So, you know, neural-network people now have all of these equivariant neural networks. That's all very old mathematics, but it's very helpful to have that stuff there: you need those kinds of ideas to be completely explored [00:41:35] before a totally different community can really engage with them. And that underlying cultural substrate does allow for different kinds of things, because doing that exploration takes a few people a lot of time. So in that sense, it's very hard... you know, most mathematicians do things which will have no relevance to the larger world, although they may be necessary for the progress of the more useful, basal things. Like, the idea of a manifold came out of studying elliptic functions, historically, and manifolds are a very useful idea. And elliptic functions are also useful, but maybe less well known; certainly, I think a typical scientist does not know about them. But the manifold did come out of studying transformation laws for elliptic functions, which is a pretty abstruse-sounding thing. So because of that, it's very hard to find a way for mathematicians to kind of dip into the future. You can't exactly have a startup: the work is not going to be industrially useful, but it is [00:42:35] clearly on this sort of path, in a way that's very hard to imagine removing completely. Yeah.
[00:42:42] Ben: I like it also because it's, again, this extreme example of some kind of continuum, where everybody knows that math is really important, but everybody also knows that it's not immediately [00:43:02] Semon: applicable. Yeah. And there's this question of how do you make navigating that continuum smoother, and that's, you know, a cultural issue and an institutional issue to some degree. It's probably true that mathematicians do know lots of stuff; empirically, they get hired, and their lives are fine. So it seems that people recognize that. But, in part because mathematicians try to kind of preserve this sort of space for [00:43:35] people to explore, there is a lot of resistance in the pure mathematics community to people trying random stuff and collaborating with outsiders. And there is probably some niche for interactions between mathematically minded people and things which are more relevant to the contemporary world, or the near-contemporary world, and that niche is one whose navigation is a little bit obscure. There are some institutions around it, but it doesn't seem to me to be completely systematized, and that's in part because of the resistance of the pure mathematics community. Historically, I mean, it's true that statistics departments used to be part of pure mathematics departments, and then they got kicked out. Probably they left; they were like, we can make more money than you. No, seriously, I don't know. Isn't the Berkeley stats department famously one of the first ones to have done this? I don't know the detailed history, but there was definitely some kind of conflict, and it was a cultural conflict. Yeah.
So these sorts of cultural [00:44:35] issues are things that I guess everyone has a say in, and I'm very curious how they will evolve in the coming 50 years. Yeah. [00:44:42] Ben: To change the subject just a bit again: can you dig into... do you call them retreats? The thing where you get a bunch of mathematicians and you get them to all live in a place [00:44:56] Semon: for a while. So there are a couple of versions of this. There are research programs, where some institute flies together postdocs, maybe some grad students, maybe some senior faculty, and they all spend time in one area for a couple of months in order to maybe make progress on some kind of idea or question. That is something that there are dedicated institutes for doing. In some sense, this is one of the places where external [00:45:35] funding has changed the structure of mathematics, because the Institute for Advanced Study is basically one of these things. It's this institute at Princeton where, like, basically a few old people... I mean, I'm kind of joking, but there are a few kind of totemic people, people who have gone there because they did something famous, and they sit there. And then what the Institute for Advanced Study actually does in mathematics is it has these semester-long or year-long programs, which are just housing and funding for a bunch of people to spend a year or half a year there, or to fly in for a few weeks a few times in the year.
And that gets everyone together in one area, and maybe by interacting they can figure out what's going on in some theoretical question. A different thing that people have done, on a much shorter time scale, is a kind of interesting conference format, which reminds me a little bit of, like, unconferences, but it's actually quite serious. People choose, you know, a hot topic in [00:46:35] contemporary research, and then they rent out a giant house, and they have, I don't know, 20 people live in this house and maybe cook together and stuff. And it's like a week-long learning seminar, where there are some people who are real experts in the area and a bunch of people who don't know that much but would like to learn. And then everyone has to give a talk on subjects that they don't know, and the serious people, the older people, can point out if there is a confusion. So there are talks from nine to five, and it's pretty exhausting. And then afterwards everyone goes on a hike or sits in the hot tub and talks about life and mathematics. And that can be extremely productive and very fun. And it's also extremely cheap, because it's much cheaper to rent out a giant house than to rent out a bunch of hotel rooms. If you're willing to do that, which most mathematicians are. [00:47:25] Ben: And a story... like, I don't know if I'm misremembering this, but I remember you telling me a story where there were two people who needed [00:47:35] to figure something out together, and they never would have done it except for the fact that they were just sitting at dinner together every night for some number of nights. [00:47:45] Semon: I mean, there are definitely apocryphal stories of that kind, where eventually people realize that they're talking about the same thing.
I can't think of an example, right? I think I told you... you asked me, you know, is there an example of a research program where it's clear that some major advance happened because two people were in the same area. And I gave an example which is a very contemporary example, far outside of my area of expertise, which is the Peter Scholze and Laurent Fargues work on the geometrization of the local Langlands correspondence. Basically, at one of these programs at this institute in Berkeley, these two people were there, and Scholze was a really technically visionary guy and Fargues had thought very deeply about certain ideas. And then they realized that basically, like, Fargues' dream could actually be made real. And I think before that [00:48:35] people didn't quite realize how far this would go. So I just gave you that as an example, and that happens on a regular basis. That's maybe the reason why people have these programs and conferences, but it's hard to predict, so... I wish I could measure a rate. Yes. [00:48:50] Ben: You just need that marination. Okay, a weird thought that just occurred to me: this sort of just getting people to hang out and talk works in mathematics because you can actually do real work by talking and writing on a whiteboard. And if you wanted to replicate this in some other field, you would actually need that house to be, like, stocked with laboratory equipment or something, so that instead of just talking, people could actually poke at whatever the [00:49:33] Semon: subject is. That would [00:49:35] be ideal, but that would be hard, because experiments are slow.
The thing that you could imagine doing, or I could imagine doing, is: if people are willing to share very preliminary data, then they could both look at something and figure out, oh, I have something to say about your data. And I don't know to what degree that really happens at, say, biology conferences, because there is a lot of competitive pressure to be very deliberate in the disclosure of data, since it's sort of your biggest asset. Yeah. [00:50:05] Ben: And how does mathematics not fall into that trap? [00:50:11] Semon: That is a great question. In part, there are somewhat strong norms against that. Because the community is small enough, everyone finds out, like, oh, this person just scooped someone. Yeah, there's a very strong norm against scooping. That's lovely. It's okay in [00:50:35] certain contexts: if it's clear to everyone that somebody could do this thing, and then somebody does the thing, it's sort of not really scooping. Sure. But word gets around about who had which ideas and when, and people who behave in a way that seems particularly adversarial face consequences for it. So that's one way in which mathematics avoids that. Another way is that it's maybe actually true that different people have different skills. It's a little bit less competitive structurally, because it isn't like everyone is working on the same three problems and everyone has all the money to just go and do the thing. And [00:51:16] Ben: it's, like, small enough that everybody can have a specialization, such that you can always do something that someone else can't. [00:51:24] Semon: That might depend on who you are. But yeah, it's more like it's large enough for that to be the case, right?
Like you [00:51:35] can develop some intuition about some area where, yeah, other people might be able to prove what you're proving, but you might be much better at it than them. So people will be like, yeah, why don't you do it? That's helpful. Yeah, that's useful. I mean, it certainly can happen that, oh, there's some area where everyone has the same tools, and then it does get competitive and people do start scooping. So I think in some ways it has to do with a diversity of tools. If every lab has a tool which the other labs don't have, then there's less reason to compete. But also that has to do with the norms, right? Like, the pressure of being the first person to a result is a very harsh constraint. And that, I guess, is largely imposed by the norms of the community itself, in the sense that, like, a lot of NIH grants are actually determined [00:52:35] by committees of scientists. So, [00:52:38] Ben: I mean, you could argue about that, right? Because, I mean, yes, but then those committees are sort of mandated by the structure of the funding agencies, right? And there's of course a feedback loop, and they've been so intertwined for decades that I'm not clear which way the causality runs. [00:53:02] Semon: Yeah. So those are my two guesses for how it works: one, there's just a very strong norm against this. And two, if you're the person with the idea and you put the other person on the paper because they were helpful, you don't lose that much. So you're just not that disincentivized from doing it. In the end, people will find out who did what work to some degree, even though officially credit is shared.
And that means that, you know, everyone can kind of get credit. [00:53:35] Ben: It seems like a lot of this depends on [00:53:38] Semon: scale. Yeah, it's very scale-dependent, because you can actually find out, right? And that's a trade-off, obviously. But maybe not as bad a trade-off in mathematics, because it's not really clear what you would do with a lot more scale. On the other hand, you don't know. Like, if you look at, say, machine learning, this is a subject that's grown tremendously. And in part, you know, they have all these crazy research directions, which I think in the end can only happen because they've had so many different kinds of people look at the same set of ideas. So when you have a lot of people looking at something and they're empowered to try it, it is often true that progress goes faster. I don't really know why that would be false in mathematics. [00:54:23] Ben: Do you want to say anything about choosing the right level of meta-ness? [00:54:28] Semon: Yeah. I guess this is like a personal question [00:54:35] for almost everyone. I mean, everyone who has some freedom over what they work on, which is actually not that many people. In any problem domain, whether that's science research, or career, or whatnot, or even in a company, this kind of bird-frog dichotomy is replicated at all altitudes. So for example, in mathematics, you could either be someone who puts together lots of pieces and spends lots of time understanding how things fit together, or you can be someone who looks at a single problem and makes hard progress at it. Similarly, maybe, in biology. Or: I have a friend who was trying to decide whether she should be an individual contributor at a machine learning research company, or faculty.
And that for her is in part a meta versus non-meta choice. She [00:55:35] really likes doing explicit work on something, being down on the ground; as faculty, she would have to do more coordination-based work. But then, you know, you have more scope. And in many areas, though not all, doing the meta thing is higher status, or maybe not higher status but better compensated. So on a larger scale, obviously, we have people who work in finance who in some ways do the most meta work, and they're compensated extremely well by society. But you need, you know, very talented people to work on problems down on the ground, because otherwise nothing will happen. You can't actually make progress by just rearranging incentive flows. And having both sides of this, having the incentives be appropriately structured, is a very, very challenging balancing act, because you need both kinds of people. But you need a larger system in which they work, and there's just no structural reason why the [00:56:35] system would be compensating people appropriately, unless there are specific people who are really trying to arrange for that to be the case. And that's, you know, very hard. Yeah. So everyone kind of struggles with this, and I think it sort of gets resolved based on personal preference. [00:56:54] Ben: Yeah, I like that idea that, sort of by default, both status and compensation will flow to the more meta people, but that ultimately will be disastrous if taken to its logical conclusion. And so it's like, we need to push back against the trend. [00:57:35]
Professor Michael Strevens discusses the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more. Michael is a professor of Philosophy at New York University where he studies the philosophy of science and the philosophical implications of cognitive science. He's the author of the outstanding book “The Knowledge Machine” which is the focus of most of our conversation. Two ideas from the book that we touch on: 1. “The iron rule of science”. The iron rule “directs scientists to resolve their differences of opinion by conducting empirical tests rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power.” In the book Michael makes a strong argument that scientists following the iron rule is what makes science work. 2. “The Tychonic principle.” Named after the astronomer Tycho Brahe who was one of the first to realize that very sensitive measurements can unlock new knowledge about the world, this is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in star positions during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity. Links Michael's Website The Knowledge Machine on BetterWorldBooks Michael Strevens talks about The Knowledge Machine on The Night Science Podcast Michael Strevens talks about The Knowledge Machine on The Jim Rutt Show Automated Transcript [00:00:35] Ben: In this conversation, Professor Michael Strevens and I talk about the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more.
Michael is a professor of philosophy at New York University, where he studies the philosophy of science and the philosophical implications [00:01:35] of cognitive science. He's the author of the outstanding book The Knowledge Machine, which is the focus of most of our conversation. A quick warning: this is a very Tyler Cowen-esque episode. In other words, it's the conversation I wanted to have with Michael, not necessarily the one that you want to hear. That being said, I want to briefly introduce two ideas from the book, which we focus on pretty heavily. First is what Michael calls the iron rule of science. A direct quote from the book: the iron rule “directs scientists to resolve their differences of opinion by conducting empirical tests, rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power.” In the book, Michael makes a strong argument that scientists following the iron rule is what makes science work. The other idea from the book is what Michael calls the Tychonic principle, named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new [00:02:35] knowledge about the world. This is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in a star's position during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity. So with that background, here's my conversation with Professor Michael Strevens. [00:02:58] Ben: Where did this sort of conceptual framework that you came up with come from? What's, like, almost the story behind the story here? [00:03:10] Michael: Well, there is an interesting origin story, or at least it's interesting in a nerdy kind of way.
So I was interested in teaching what philosophers call the logic of confirmation: how evidence supports or undermines theories. And I was interested in getting across some ideas from the 1940s and fifties that philosophers of science these days [00:03:35] look back on and think of as being a little bit naive and clueless. And at some point, in trying to make this stuff appealing in the right sort of way to my students, so that they would see it's really worth paying attention to and not just completely superseded, I had a bit of a gear shift looking at it, and I realized that in some sense what this old theory was a theory of wasn't the thing that we talk about now, but a different thing. So it wasn't so much about how to assess how much a piece of evidence supports a theory or undermines it; it was more a theory of just what counts as evidence in the first place. And that got me thinking that this question alone could be an important one to think about. Now, I ended up, as you know, in my book The Knowledge Machine, putting my finger on that as the most important thing in all of science. I can't say that at that point I had yet had that idea, but it was [00:04:35] kind of puzzling me why it would be that there would be this very objective standard for something counting as evidence that nevertheless offered you more or less no help in deciding what the evidence was actually telling you. Why would this be so important? At first I thought maybe it was just the sheer objectivity of it that's important, and I still think there's something to that, but the objectivity alone didn't seem to be doing enough. And then I connected it with this idea in Thomas Kuhn's book The Structure of Scientific Revolutions that science is a really difficult pursuit. Of course it's wonderful some of the time, but a lot of it
requires just that kind of perseverance in the face of sometimes very discouraging results. I got the idea that this very objective standard for evidence could be playing the same role that Kuhn thought was played by what he called the paradigm: providing a kind of very objective framework, which is also a kind of safe framework, [00:05:35] like a game where everyone agrees on the rules and where people could feel more comfortable about the validity and importance of what they were doing. Not necessarily because they would be convinced it would lead to the truth, but just because they felt secure in playing a certain kind of game. So it was a long process that began with this sense that something didn't seem right, that these ideas from the 1940s and fifties could be so wrong as answers to the question philosophers of my generation were answering. [00:06:11] Ben: Yeah, no, I love that. I feel like what you did is, step one, sort of synthesized Kuhn and Popper, and then went one step beyond them. It's this thing where, whenever you have two theories that seem equally right but are [00:06:35] contradictory, that is a place where, you know, you need more theory, right? Because you look at Popper and it's like, oh yeah, that seems right. But then you look at Kuhn and you're like, oh, that seems right. And then you're like, wait a minute, because they sort of can't both live in the room without [00:06:56] Michael: adding something. Although there is actually something, I think, Popperian about Kuhn's ideas.
There are lots of things that are very un-Popperian, but, you know, Popper's basic idea is that science proceeds through refutation, and Kuhn's picture of science is a little bit like a very large-scale version of that. Scientists now, unlike in Popper's story, where scientists are all desperately trying to undermine theories in that great Popperian negative spirit, just assume that the prevailing way of doing things, the paradigm, is going to work out okay. But in presuming that, they push it to its breaking point. And [00:07:35] that process, if you take a few steps back, has the look of Popperian science, in the sense that scientists, now unwittingly rather than with their critical faculties fully engaged, are taking the theory to a point where it just cannot be sustained anymore in the face of the evidence. And progress is made because the theory just becomes untenable; some other theory needs to be found. So at the largest scale, there's this process of successive refutation of theories. Now, for Kuhn, refutation is not quite the right word. That sounds too orderly and logical to capture what's going on. But the theory is nevertheless being annihilated by facts, in a way that's actually quite Popperian, I think. Interesting. [00:08:20] Ben: So you could almost phrase Kuhn as, like, systemic Popperianism, right? No individual scientist is trying to do refutation, but then you have the system eventually [00:08:35] refute, and that is what the paradigm shift [00:08:37] Michael: is. That's exactly right. [00:08:39] Ben: Oh, that's fascinating. Another thing that I wanted to ask before we dig into the actual meat of the book is, and this is almost a very selfish question: why should people care about this? Like, I really care about it.
And by this I mean sort of theories of how science works, right? But I know many scientists who don't care. I've tried talking to them about it, and they're just like, you know... [00:09:12] Michael: And that's completely fine. You know, to drive a car, you don't need to know how the engine works, and in fact the best drivers may not have very much mechanical understanding at all. And it's fine for scientists to be a part of the system and do what the system requires of them without really grasping how it works, most of the time. One way it becomes important is when people start wondering whether [00:09:35] science might be improved in some ways. There's always a little bit of that going on at the margin. So some string theorists now want to relax the standards for what counts as an acceptable scientific argument, so that the elegance or economy of an explanation can officially count in favor of a theory alongside the empirical evidence. Or there's quite a bit of momentum for reform of the publishing system in science, coming out of things like the replicability crisis: the idea that, talking about science as a game, science has been gamified to the point where it's being gamed. Yes. And so a certain kind of ambitious individual goes into science, and not necessarily one who has no interest in knowledge, but once they see what the rules are, they cannot resist playing those rules to the limit. And what you get is what scientists sometimes call the least publishable unit: [00:10:35] tiny little results that are designed more to be published and cited and to advance a scientist's career than to be the most useful summary of research.
And then, even worse, you get scientists choosing their research direction less out of curiosity, or the sense that they can really do something valuable for the world at large, than because they see a narrower and shorter-term opportunity to make their own name. Now, that's not always a bad thing, but no system of rules is perfect, and as people exploit the rules more and more, the direction of science as a whole can start to veer a little bit away. Now, it's a complicated issue, because if you change the rules, you may lose a lot of what's good about the system. It may all look very noble and so on, but you can still lose some of what's good about the system as well as fixing what's bad. So I think it's really important to understand how the whole thing works before just charging in and making a whole series of reforms. [00:11:34] Ben: [00:11:35] Yeah, okay, that makes a lot of sense. It's like, what are the actual core pieces that drive the engine? [00:11:42] Michael: So that's the practical side of the answer to your question of why people should care. I also think it's a fascinating story. I mean, I love these kinds of stories, like the Kuhn story, where everything turns out to be working in a completely different way from the way it seems to be working, where the ideology turns out to be not such a great guide to the actual mechanics of the thing. [00:12:03] Ben: Yeah, no, I like that there are some people who just think it's fascinating. My bias is also, like, how it sort of weaves between history, right? You have to look at all of these fascinating case studies and be like, oh, what's actually going on there?
So actually, to build on two things you just said: could you make the argument that with the replicability crisis and [00:12:35] this idea of P-hacking, you're actually seeing the mechanisms that you described in the book in play? It used to be that having a good P value was considered sufficient evidence, but we now see that having that P value isn't actually predictive. And so now everybody is sort of starting to say, well, maybe using the P value as evidence is no longer sufficient. And so, because the observations didn't match what is considered evidence, what is considered evidence is evolving. Is that basically a case of that? [00:13:29] Michael: Exactly. That's exactly right. So significance testing is a [00:13:35] particular kind of instantiation of this whole rule-based approach to science, where you set things up so that it's very clear what counts as publishable evidence: you have to have a statistically significant result, and P value testing is the most widespread way of thinking about statistical significance. So it's all very straightforward; you know exactly what you have to do. I think a lot of great scientific research has been done under that banner. Having the rules be so clear and straightforward, rather than just a matter of the referees who referee for journals making their own minds up about whether a result looks like a good one or not, has really helped science move forward and given scientists the security they need to set up the research programs that they've set up.
It's all been good, but because it sets up this very specific rule, it's possible for the right kind of Machiavellian mind to [00:14:35] look at those rules and say, well, let me see. I see some ways, at least in some domains of research where data is plentiful or fairly easy to generate, that I can officially follow the rules, and technically speaking what I'm doing is publishing something that's statistically significant. And yet, if you take a step back, what happens is you may end up with a result... you know, John Ioannidis, one of the big commentators on this stuff, has “Why Most Published Research Findings Are False” as the title of one of his most famous papers. So you need to step back and say, okay, well, the game was working for a while; the game aligned people's behavior with what was good for all of us. Right. But once certain people started taking advantage of it, in certain fields at least, it started not working so well. We want to hang on to the value we get out of having [00:15:35] very clear objective rules, objective in the sense that anyone can make a fair judgment about whether the rules are being followed or not, but somehow get the alignment back. [00:15:46] Ben: Yeah. So that game went out of whack, but there's the broader metagame, and that's the part that stays consistent. And then also, you mentioned string theory earlier, and as I was reading the book, I don't think you call this out explicitly, but I feel like there are a number of domains that people would think of as science now, but that by your iron rule would not count. So, string theory being one of them, where it's very hard... we've sort of reached the limit of observation, at least until we have better equipment.
Another [00:16:35] one that came to mind was a lot of evolutionary arguments: because they're based on something that is in the past, there's sort of no way to gather additional evidence. Would you say that you actually have a fairly strict bound on what counts as science? [00:16:59] Michael: It is strict, but it's not in any way my formulation; this is the way science really is. Now, okay, the point of science is to develop theories and models and so on, and then to empirically test them. And part of that activity is just developing the theories and models. So it's completely fine for scientists to develop models in string theory and so on, and to develop evolutionary models, that run way ahead of the evidence. Yeah. Where it's practically very difficult to come up with evidence to test them, I don't think that in itself is [00:17:35] unscientific. But then the question immediately comes up: okay, so now what do we do with these models? And the iron rule says there's only one way to assess them, which is to look for evidence. So what happens when you're in a position, with string theory or with some models in evolutionary psychology in particular, where there just is no evidence right now? There's a temptation to find other ways to advance those theories. And so the string theorists would like to argue for string theory on the grounds of its unifying power, for example, and the evolutionary psychologists, I think, rely on a kind of intuitive appeal, or just a sense that there's something about the model that sort of feels right, that it really captures the experience of being a human being who is, say, I don't know, sexually jealous or something like that. And that's just not science, and that is not the sort of thing that is,
in general gets published in scientific journals. But yeah, the [00:18:35] question has come up: well, maybe we are being too strict. Maybe we would encourage the creation of more useful, interesting, illuminating, explanatorily powerful models and theories if we allowed them to get some prestige and scientific momentum in ways other than the very evidence-focused way. Or maybe that would just open the gates to a bunch of idle speculation that would weigh science down and distract scientists from doing the stuff that has actually resulted in 300 years or so of scientific progress. [00:19:12] Ben: And your argument would be for the latter — that is, well, don't — [00:19:21] Michael: Don't rush in, I would say. Think carefully before you do it. [00:19:25] Ben: Another place where I felt like there was some friction with your framework — [00:19:35] I'm not quite sure what the right word is — is with the Tychonic principle of needing to find very minute differences between what the theory would predict and reality. There are areas you might call complex systems, or emergent behavior, where being able to explain how the building blocks of a system work does not actually help you make predictions about that system. Do you have a sense of how you expect that to work out with the iron rule? Because when there are so many parameters, you could argue either way: we predicted it, or we didn't. [00:20:34] Michael: Yeah, [00:20:35] right.
Sometimes the predictions are so important that people will do the work necessary to really crank through the model. Weather forecasting is the best example of that: to get a forecast for five days' time, you just spend a lot of money gathering data and running simulations on extremely expensive computers. But for almost all of science there just isn't the funding for that, so it's never going to be practically possible to make those kinds of predictions. But I think these models are capable of making other kinds of predictions. Even in the case of the weather models, without being able to predict ten days in advance, as long as you relax your demands and just want a general sense of, say, whether the climate is going to get warmer, you can make do with many fewer parameters. In a way that's not the greatest example, because the climate is so complicated that to [00:21:35] make even these much less specific predictions you still need a lot of information and computing power. But I think most science of complex systems hinges on relaxing the demands for specificity of the prediction while still demanding some kind of prediction or explanation. And sometimes you say, well, never mind prediction — just give me a retrodiction, and let's see if we can explain what actually happened. But the explanation has to be anchored in observable values of things. Maybe with some sorts of economic models; evolutionary models are a good example of this.
Once we've built the model, after the fact we can dig up lots of bits and pieces that show us the course of events: we never could have predicted that evolutionary change would move in a certain direction, but by getting the right fossil evidence and so on, we can see that it actually did [00:22:35] move in that direction and conforms to the model. But what we're often doing is getting the parameters in the model from the observation of what actually happened. So these are all ways that complex systems science can be tested empirically, one way or another. [00:22:52] Ben: Yeah. The thing that I'm hung up on is: if you relax the specificity of the predictions you demand, it then becomes harder to compare theories, right? Newton and Einstein had drastically different models of the world, but in reality you need very, very specific predictions to compare between them. And so if, in order [00:23:35] to get evidence, you need to relax specificity, that makes it harder to compare theories. [00:23:41] Michael: No, that's very true. If all you demand is that theories explain why things fall to the floor when dropped, then Newton, Einstein, and Aristotle all look exactly the same. And one reason physics has been able to make so much progress is that the models are simple enough that we can make these very precise predictions that distinguish among theories. The thing is that in the complex systems sciences there's often a fair amount of agreement on the underlying processes. With Newton versus Einstein, what you have is a difference in the fundamental picture of space and time and force and so on.
But if you're doing something like economics or population ecology — looking at ecosystems, animals eating one another and so on — [00:24:35] the underlying processes are in some sense fairly uncontroversial. The hard part is finding the right kind of model to put them together: one that is much simpler than the way they're actually put together in reality, but that still captures enough of those underlying processes to make good predictions. Because that problem is a little bit different, the situation is less a matter of distinguishing between really different fundamental theories and more a case of refining models to see what needs to be included, or what can be left out, to make the right kinds of predictions in particular situations. You still need a certain amount of specificity, obviously. If you really don't care about anything beyond the fact that things fall downwards rather than up, then you're not going to be able to refine your models very far before they run out of [00:25:35] anything to give you further guidance. But typically complex systems models are rather more specific than that. Usually they're too specific: they say something very precise that doesn't actually happen. And what you're doing is trying to bring that particular prediction closer to what really happens. That gives you something to work towards — bringing the prediction towards the reality — while at the same time not demanding of the model that it already make a completely accurate prediction. [00:26:10] Ben: Yeah, that makes sense. So, down another track: what do you think about theory-free predictions?
The extreme example would be: could a very large neural net do science? If you had no theory at all but [00:26:35] incredibly accurate predictions, how does that square with the iron rule in your mind? [00:26:41] Michael: That's a great question. When I formulate the iron rule, I build the notion of explanation into it. And I think that has functioned in an important way in the history of science, especially in fields where explanation is actually much easier than prediction, like evolutionary modeling, as I was just saying. Now, if your model is in effect a neural net that just makes these predictions, it looks like it's not really providing you with an explanatory theory. The model is not in any way articulating, let's say, the causal principles according to which the things it's predicting actually happen. And you might think for that reason it's not science — though of course such a thing could always be an aid; almost anything can have a place in science as a tool, as a stepping stone. But could you [00:27:35] say: okay, we've now finished doing the science of economics, because we've found out how to build these neural networks that predict the economy, even though we have no idea how they work? I don't think so. I don't think that's really satisfying, because it's not providing us with the kind of knowledge that science is working towards. But I can imagine someone saying, well, maybe that's all we're ever going to get, and what we need is a broader conception of empirical inquiry that doesn't put so much emphasis on explanation. I mean, what do you want — to be blindsided by the economy every single time, because you insist on an explanatory theory?
Or do you want to actually have some ability to predict what's going to happen, to make the world a better place? Well, of course we want to make the world a better place. I think we've focused on building these explanatory theories — we've put a lot of emphasis, I would say, on getting explanations right. [00:28:35] But scientists have always played around with theories that seem to get the right answer for reasons they don't fully comprehend. And one possible future for science, or for empirical inquiry more broadly speaking, is that that kind of activity comes to predominate, rather than just being, as I said earlier, a stepping stone on the way to truly explanatory theories. [00:29:00] Ben: I sort of think of it in terms of compression: the thing that is great about explanatory theories is that they take all the evidence and reduce its dimension drastically. I'm just thinking it through — in a world in which non-explanatory prediction is fully admissible, you get some exponential [00:29:35] explosion of, I don't know, of whatever is doing the explaining, right? Because there's never a compression from the evidence down to a theory. [00:29:47] Michael: Although it may be, with these very complicated systems, that even an explanatory model is incredibly uncompressed — inflated. This is one of my other interests: the degree to which it's possible to build simple models of complicated systems and still get something out of them. Not precise predictions about what's going to happen to particular components in the system — whether this particular rabbit is going to get eaten
tomorrow or the next day — but more general claims about how, say, increasing the number of predators will have certain effects on the dynamics of the system. The kinds of things population ecologists do with these models is answer questions. This is an example of what I was saying earlier [00:30:35] about making predictions that are real predictions, but a bit more qualitative. One of the very first uses of these models was to answer the question of whether generally killing a lot of the animals in an ecosystem will lead the prey populations to increase, relatively speaking, or decrease. It turns out that in general they increase. This was in the wake of World War One, in Italy. During the war there was less fishing — the sailors were off at war, and also there was naval warfare, I guess, though maybe not so much in the Mediterranean. In any case there was less fishing, so it was the opposite of killing off a lot of animals in the ecosystem. And the idea was to explain why certain patterns of increase and decrease in the populations of predator and prey were observed. So some of the first population ecology models were developed to explain that. And these models are tiny. [00:31:35] Here you are modeling this ocean full of many, many different species of fish, and yet you just have a few differential equations — equations that look complicated, but the amount of compression is unbelievable. The fact that you get anything sensible out of it at all is truly amazing. So we've kind of been lucky so far. Maybe we've just been picking the low-hanging fruit, but there's a lot of that fruit to be had. Eventually, though, maybe we're just going to have to — well, thankfully there are supercomputers — do science that way. Yeah.
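The wartime-fishing result Michael describes is Volterra's principle, usually illustrated with the classic Lotka-Volterra equations: the cycle-averaged populations equal the model's interior fixed point, so harvesting both species raises average prey and lowers average predators, and reduced fishing does the reverse. Here is a minimal sketch of that calculation; the parameter values are illustrative assumptions, not Volterra's Adriatic data.

```python
# Volterra's principle in the Lotka-Volterra model with harvesting:
#   dx/dt = a*x - b*x*y - h*x      (prey x, harvested at rate h)
#   dy/dt = c*b*x*y - d*y - h*y    (predator y, harvested at rate h)
# The populations cycle, and their averages over one cycle equal the
# interior fixed point, so the effect of harvesting can be read off
# directly without simulating the differential equations.

def lv_averages(a, b, c, d, h):
    """Cycle-averaged (prey, predator) levels for harvesting rate h.

    Interior fixed point: x* = (d + h)/(c*b), y* = (a - h)/b.
    Parameters are illustrative, chosen only so that 0 <= h < a.
    """
    prey = (d + h) / (c * b)
    predator = (a - h) / b
    return prey, predator

# Normal fishing (h = 0.2) versus wartime reduced fishing (h = 0).
peace = lv_averages(a=1.0, b=0.1, c=0.5, d=0.5, h=0.2)
war = lv_averages(a=1.0, b=0.1, c=0.5, d=0.5, h=0.0)

# Less fishing -> relatively more predators and fewer prey on average,
# which is the pattern in the wartime catch statistics.
print("peace:", peace, "war:", war)
```

The striking part, echoing Michael's point about compression, is that two short equations stand in for an entire multi-species fishery and still yield a qualitative prediction that held up against real catch data.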
[00:32:06] Ben: Or develop an entirely different way of attacking those kinds of systems. I feel like our science has been very good at going after compressible systems — I'm not even sure how to describe it — and we're starting to run into all of these systems that aren't as amenable [00:32:35] to the Tychonic move of going down to more and more detail. So I always speculate about whether we actually need new philosophical machinery to grapple with that. [00:32:51] Michael: Well, first of all, there might be new modeling machinery, new kinds of mathematics, that make it possible to compress things that were previously incompressible. But it may just be — look at a complicated system, like an ecosystem or the weather, and you can see that small differences in the way things start out can have big effects down the line. What seems to happen in the cases where we can have a lot of compression is that the various effects of small variations in initial conditions kind of cancel out. [00:33:35] Maybe you change things around and it's different fish being eaten, but the overall number of each species being eaten is about the same — it all evens out in the end, and that's what makes the compression possible. But if that's not the case, if these small changes make differences to the kinds of things we're trying to predict — people often associate this with the metaphor of the butterfly effect — then I don't know if compression is even possible.
If you really want to predict whether there's going to be an increase or a decrease in inflation in a year's time, and that really does hinge on the buying decisions of some single parent somewhere in Ohio, then you just need to figure out the buying decisions of every single person in the economy and build them in. And yet, at the same time — everyone loves the butterfly effect, [00:34:35] but the idea that the rate of inflation is going to depend on this one decision by somebody walking down the aisles of a supermarket in Ohio just doesn't seem right. It does seem that things kind of cancel out, that these small effects mostly get drowned out, or that they shift things around without changing the high-level qualitative patterns. [00:34:56] Ben: This is a diversion, but I feel like that touches right on whether you believe in the forces theory of history or the great man theory of history, right? People make arguments both ways, and I think we just haven't figured that out. Actually, speaking of the great man theory of history: an amazing thing about your book is that it's very humanistic, in the sense of — oh, scientists are people; they do lots of things; they're [00:35:35] not just science machines. And you have this beautiful analogy of a coral reef: scientists are like the living polyps, and they build up these artifacts of work, and then they go away, and the new scientists continue to build on that.
And I was wondering: do you see that being at odds with the fact that there's so much tacit knowledge in science? In the sense that, for most fields, you probably could not reconstruct them based only on the papers, right? You have to talk to the people who have done the experiments. Do you see any tension there? [00:36:23] Michael: Well, it's true that the metaphor of the coral reef doesn't capture that aspect of science. What is captured by the metaphor is the idea that what science leaves behind, in terms of evidence, is interpreted anew every generation. Each new generation of scientists comes along and looks at the accumulated facts — this makes it sound a little bit fanciful, but in some sense that's what's going on — and says, well, okay, what are these really telling me? They bring their own human preconceptions and biases, and these preconceptions and biases are not necessarily bad things. They look at it in the light of their own minds and they reinterpret things. So the scientific literature is always just a kind of starting point for this thought, which really changes from generation to generation. On the other hand, at the same time, as you just pointed out, scientists are being handed certain kinds of knowledge [00:37:35] which are not for them to create anew but rather just to learn: how to use various instruments, how to use various statistical techniques, and so on. So there's a continuity to the knowledge that is, as I say, not captured at all by the reef metaphor. Both of those things are going on. There's the research culture, and maybe one way to put it is this:
the culture both changes and stays the same. It's important that it stays the same in the sense that people retain the know-how they have for using these instruments — until eventually an instrument becomes obsolete, and then that bit of culture is completely lost, which is okay most of the time. But on the other hand, there is always this fresh reinterpretation of the evidence, simply because the interpretation of evidence is a rather subjective business. And what the preceding generations are handing on should be seen more as a kind of [00:38:35] data trove than as a body of established knowledge. [00:38:43] Ben: But then I think the question is: if what counts as evidence changes, and all you are getting is this data trove of things that people previously thought counted as evidence — all the things that were thrown out and not included in the papers — doesn't that make it harder to reinterpret? [00:39:12] Michael: Well, the standards for what counts as evidence I think of as being unchanging, and that's an important part of the story here. So what's being passed on is supposed to be evidence. Now, of course, some of it will turn out to be the result of faulty measurements, some of it will be suspicious, some of it perhaps even outright fraud. [00:39:35] To some extent that's why you wouldn't want to just take it for granted, and that side of things is not really captured by the reef metaphor either. But I think the important thing that is captured by the metaphor is this idea that what really is the heritage of science, in terms of theory and evidence, is the evidence itself.
It's not so much a body of knowledge — although it's not that everyone has to start from scratch every generation — but this incredibly valuable information, which may be a little bit complicated in some corners, that's true, but which has still been generated according to the same rules, or [00:40:35] tended to satisfy the same rules, that we're trying to satisfy today. So it's just as trustworthy, or untrustworthy, as the evidence we're getting today, and there it is, recorded in the annals of science. [00:40:41] Ben: So the thing that's important is the process and the filtering mechanism, more than the specific artifacts that [00:40:55] come out. Michael: Yeah. Part of what I'm getting at with that metaphor is that scientists produce the evidence, and they have their own interpretation of that evidence, but then they retire, they die, and that interpretation doesn't need to be important anymore, and isn't important anymore. Of course, they may persuade some of their graduate students to go along with their interpretation; they may be very politically powerful, and their interpretation may last for a few generations. But typically, ultimately, that influence wanes, and what really matters is the data trove. It's not perfect, as you said — we have to regard it with a [00:41:35] somewhat skeptical eye, but not too skeptical. And that's the real treasure house [00:41:43] of science. Ben: Something I was wondering: you have a sentence where you say that a non-event such as science's non-arrival happens, so to speak, almost everywhere — and I would add, it happens almost everywhere all the time. This is wildly speculative.
But do you think there would have been any way to predict that science would happen, or to know there was something missing? Could we, then or now, say: we're missing something crucial? Could we look at the fact that [00:42:35] science consistently failed to arrive and ask: is there some other kind of intellectual machinery that also has not arrived? Is it possible to look for that? [00:42:51] Michael: Oh, you mean [00:42:52] Ben: now? Yeah. Or could someone have predicted science in the past? [00:42:57] Michael: In the past? Okay. Clearly there were a lot of highly motivated, insightful thinkers who, I assume, would have loved to settle the question of, say, the configuration of the solar system. You had these various models floating around for thousands of years. I'm not sure everyone knows this, but by the time of the Roman empire the model with the sun at the center was well known. The model with the earth at the center was of course well known. And the model where the earth is at the center, but the [00:43:35] sun rotates around the earth and the inner planets rotate around the sun, was also well known — and in fact, this always surprises me, it was if anything the predominant model in the early Middle Ages in Western Europe. It had been received from late antiquity, from the writers at the end of the Roman empire, and was thought to be the going story. There are many historical complications, of course, but I take it that someone like Aristotle would have loved to have really settled that question and figured it out for good. He had his own ideas.
Of course: he thought the earth had to be at the center because that fit with his theory of gravity, for example, and for various other reasons; having the sun at the center just wouldn't have worked. So it would have been great to have invented this technique for generating evidence that in time would be seen by everyone as deciding decisively in favor of one of these theories over the others. [00:44:35] They must have really wanted it. Did they themselves think that something was missing, or did they think they had what they needed? I think maybe Aristotle thought he had what was needed. He had philosophical arguments based on establishing coherence among his many amazing theories of different phenomena — his theory of falling bodies, his story about the solar system (as, of course, he would not have called it), the planets and so on. It all fit together so well, and it was so much better than anything anyone else came up with, that he may have thought: this is how you establish the truth of the geocentric system, with the earth at the center. So I don't need anything like science, there doesn't need to be anything like science, and I'm not even thinking about the possibility of something like science. And to some extent that explains why someone like Aristotle, who seemed capable of having almost any idea that could be had, nevertheless did [00:45:35] not seem to see a gap — to see the need, for example, for precise quantitative experiments, or even the point of doing them. That's the most I can say: I don't, myself, looking back in history, see that people felt there was a gap.
And yet at the same time they were very much aware that these questions were not being settled. [00:46:04] Ben: It just makes me wonder whether, at some period in the future, people will look back at us and say: oh, that thing — how could you not have figured out that method? I just find it thought-provoking to think: how do you see your blind spots? [00:46:32] Michael: Yeah. Well, I'm a philosopher, and in [00:46:35] philosophy it's still much like it was with Aristotle. We have all these conflicting theories of, say, justice — what really makes a society just, what makes an act just — or even of what makes one thing the cause of another thing. And we don't know how to resolve those disputes in a way that will establish any kind of consensus. We also feel very pleased with ourselves, as I take it Aristotle was: we have these really great arguments for the views we believe in. Maybe we're rather more optimistic than we ought to be that we'll be able to convince everyone else we're right. In fact, what we really need — and philosophers do have this thought from time to time — is some new way of distinguishing between philosophical theories. This was one of the great movements of early twentieth-century philosophy: logical positivism can be looked at as an attempt to build a methodology where it would be possible to use, [00:47:35] in effect, scientific techniques to adjudicate among philosophical theories — mainly by throwing away most of the theories as meaningless and insufficiently connected to empirical facts. So it was a brutal method, but it was an idea: that there was a new method to be had that would do for philosophy
what science did for natural philosophy — for physics and biology and so on. That's an intriguing thought. Maybe that's what I should be spending my time thinking about. [00:48:12] Ben: I do want to be respectful of your time, so one last thing I'd love to ask about, which you talk about a bit in the book: do you think the way we communicate science has become almost too sterile? One of my going concerns [00:48:35] is the way in which everybody has become super specialized. Once a debate is settled, creating these very sterile artifacts is useful and powerful, but, as you pointed out, as a mechanism for actually communicating knowledge they're not necessarily the best. And because we've held up these sterile papers as the most important thing, it's become hard for people in one specialization to understand what's going on in another. So do you think we've over-sterilized it? We talked earlier about people who want to change the rules, and I'm very much with you that we should be skeptical about that, but at the same time you see this going [00:49:35] on. [00:49:35] Michael: Yeah. Well, I think there's a real problem here regardless, whatever the rules: the problem of communicating something as complicated as scientific knowledge — or really, I should say, the state of scientific play, because often what needs to be communicated is not something that's now been established beyond any doubt, but: here's what people are doing right now.
Here's the kind of research they're doing, here are the kinds of obstacles they're running into. To put that in a form where somebody can just come along and digest it all easily is incredibly difficult, no matter what the rules are. And it's probably not the best use of most scientists' time to try to present their work in that way; it's better for them to just go to the rock face and start chipping away at their own little local area. So what you need is for scientists to take time out from time to time — and there do exist these review [00:50:35] publications, which try to do this job, so that people in related fields (typically that means a PhD in the same subject; they're usually for the nearest neighbors) can see what's going on, and often they're written in ways that are pretty accessible, I find. So you create a publication that simply has a different set of rules: the point here is not in any way to evaluate the evidence, but simply to give a sense of the state of play. To reach further afield, you have science journalists — though what's going on with newspapers and magazines right now is not very good for serious science journalism. And then you have scientists, and people like me, who for whatever reason take time out from what they usually do to make a kind of self-standing project of explaining what's going on. Those activities all, to some extent, take place outside the narrow view of the [00:51:35] iron rule. And I think it's going okay, given the difficulty of the task. It seems to me that the knowledge, the information, is being communicated in a somewhat effective, accessible way. If anything, the real barriers to
— some kinds of fruitful interdisciplinary thinking are not just that it's hard for one mind to take on all the stuff that needs to be taken on, no matter how effectively, even brilliantly, it's communicated. The world is just this very complicated place. Yeah. You know, one thing I'm interested in historically — I just find it fascinating — is the fruitfulness of certain kinds of research programs that came out of fighting serious wars, in particular the Second World War. You threw a bunch of people together and they had to solve some problem, like [00:52:35] building an atom bomb — it's usually something horrendous — or a device for the guns on bombers. Rather than having to aim very skillfully — I forget the word for it — you know, you have to put your sights ahead of where the enemy fighter is, so that by the time your bullets get there, the plane arrives at the same time. They built these really sophisticated analog computers that would basically do the job, so they could give it to some nineteen-year-old who just pointed at the plane. Yeah. And a lot of problems to do with logistics and weather forecasting and so on — the need to have that done threw together people from very different areas in engineering and science, and it resulted in this amazing explosion, I think, of knowledge. [00:53:35] It's a very attractive period in the history of human thought. When you go back and look at some of the things people were writing in the late forties and fifties — about computers, how the mind works, and so on — I think some of that came out of this almost scrambling process that happened when these very specific military engineering problems were solved by throwing together people who never normally would have talked to one another. Maybe we need a little bit of that. Not the war. Yeah.
But [00:54:08] Ben: I have a friend who described this as "a serious context of use." And I mean, I'm incredibly biased towards looking at that period. [00:54:20] Michael: I guess it's connected to what you're doing. [00:54:23] Ben: Absolutely. Do you know who — ? Yeah. So he actually wrote a series of memoirs, and they're just reprinting it; I wrote the foreword to it. [00:54:35] So I agree with you very strongly. And I always find that fascinating, because there's this paradigm that sort of got cemented after World War II, where you think, oh, theory leads to applied science, which leads to technology. But you actually see all these places where trying to do a thing makes you realize a new theory. Right? And you see a similar thing with the steam engine — that's how we get thermodynamics. Michael: That's a great piece of work. That's right. Ben: So that absolutely plays to my biases: not doing interdisciplinary things for their own sake — not just saying, let's get these people in a room — but having very serious contexts of use that can drive people, having [00:55:32] Michael: a problem to solve. It's not just a case [00:55:35] of kind of enjoying chatting about what you each do and then going back to the thing you were doing before — feeling enriched, but otherwise unchanged. [00:55:46] Ben: It's interesting, though, because the incentives in that situation sort of fall outside the iron rule, right? You don't care about — I mean, I guess to some extent you could argue the thing needs to work.
And so if it works, that is evidence that your theory is [00:56:09] Michael: correct. That's true. But, you know, I think as you were about to say, engineering is not science — the iron rule is not overseeing engineering. Engineering is about making things that work; producing evidence for or against various ideas is just a kind of side effect. [00:56:27] Ben: But then it can spark ideas that people then take up. In [00:56:35] my head it's all part of what I call phenomena-based cycles, where there's this big cyclical movement: you discover a phenomenon, then you theorize it, and you use that theory to, I don't know, build better microscopes, which then let you make new observations, which let you discover new phenomena. [00:57:00] Michael: It's really difficult to tell where things are going. Yeah. I think the discovery of plate tectonics is another good example of this: you see all of these scientists doing things that were certainly not looking into possible mechanisms for continental drift, but instead getting interested, for their own personal reasons, in things that don't sound very exciting — like measuring the ways the orientation of the magnetic field has changed over past history, by basically digging up bits of rock and looking at the orientations of the [00:57:35] iron molecules, or whatever, locked in there. I mean, it's not completely uninteresting, but in itself it sounds like a respectable but probably fairly dull sideline in geology. And then things like developing the ability to make very precise measurements of the gravitational field. Those things turned out to be —
— key to understanding this amazing fact about the way the whole planet works. Yeah. But nobody could have understood in advance that they would play that role. What you needed was for a whole bunch of — it's not exactly chaos, but a kind of diversity that might look rather wasteful from a very practical perspective — to blossom. [00:58:29] Ben: I truly do think that moving knowledge forward involves being almost [00:58:35] irresponsible, right? If you had to make a decision — should we fund these people who are going in and measuring magnetic fields just for funsies? — from a purely rational standpoint, it's no. [00:58:51] Michael: The reason that sort of thing happens is because a bunch of people decide they're interested in it, and persuade their students to do it too — whether or not they could explain it to the rest of the world. Actually, there was also a military angle on that. I don't know if you know this, but some of the mapping of the ocean floors that was also crucial to the discovery of plate tectonics in the fifties and sixties was done during the war by people with the first sonar systems, who were supposed to be, you know, finding submarines or whatever, but decided, hey, it would be kind of interesting just to turn the thing on and leave it on and see what's down there. Yeah. And that's what they did, and that's how some of those first maps started being put together. [00:59:35] [00:59:36] Ben: That's actually one of my concerns about trying to do science with neural networks: how many times do you see someone just go, "huh, that's funny"? So far, computers can't do that.
They can sort of find what they're setting out to find, or they have a very narrow window of what is considered to be evidence. And perhaps, in your framework, the thought of "huh, that's funny" is someone's brain all of a sudden taking something as evidence that wasn't normally supposed to be evidence. Right? You're doing one set of experiments, and then you notice this completely different thing, and you go, oh, maybe that's actually a piece of evidence for something completely different. And then it opens up a rabbit hole. [01:00:31] Michael: Yeah. This is another one of those cases, though, with [01:00:35] some kind of creative tension, because I do think it's incredibly important that scientists not get distracted by things like this. On the other hand, it would be terrible if scientists never got distracted by things like this. And I guess one way I see the iron rule is as a kind of social device for making scientists less distracted, without putting on the kind of mental fetters that would make it impossible for them ever to become distracted. [01:01:05] Ben: And maybe the distraction — the saying "oh, that's funny" — is the natural state of human affairs. [01:01:12] Michael: Well, I think so. Otherwise we would all be like Aristotle, and it turns out it was better for science for us to actually be a little bit less curious. [01:01:24] Ben: So one could almost say — would you say it's accurate that the iron rule is absolutely necessary?
But so [01:01:35] is breaking it — in the sense that if you could somehow enforce that every single person obeyed it all the time, science would suffer. We actually make serendipitous discoveries, and in order to make those, you need to break the rule. But you can't have everybody running around breaking the rule all the [01:01:57] Michael: time. Right — I'd put it a little bit differently, because I see the rule as not so much a rule for life and for thinking as a rule for publishing activity. So you're not technically breaking the rule when you think, "huh, that's funny," and go off and start thinking your thoughts. You may not be moving towards [01:02:35] the kind of scientific publication that satisfies the rule, but nor are you breaking it. But if all scientists, as it were, lived the iron rule — not just when they took themselves to be playing the game, but in every way they thought about the point of their lives as investigators of nature — well, people are just not like that; it's hard to imagine that would ever really happen, although, you know, to some extent I think our science education system does encourage it. But if that really happened, it would probably be disastrous. It's like the pinch of salt: you only want a pinch, but without it, it's not good. Yeah. [01:03:06] Ben: That seems like an excellent place to end. Thank you so much for being part of Idea Machines. [01:03:35]
A conversation with the VitaDAO core team. VitaDAO is a decentralized autonomous organization — or DAO — that focuses on enabling and funding longevity research. The sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain and then use those tokens to vote on various action proposals for VitaDAO to take. This voting-based system contrasts with the more traditional model of a company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team who are responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment! The members of the core team in the conversation, in no particular order: Tyler Golato, Paul Kohlhaas, Vincent Weisser, Tim Peterson, Niklas Rindtorff, Laurence Ion. Links: VitaDAO Home Page; An explanation of what a DAO is; Molecule. Automated Transcript VitaDAO [00:00:35] In this conversation, I talked to a big chunk of the VitaDAO core team. VitaDAO is a decentralized autonomous organization, or DAO, that focuses on enabling and funding longevity research. We get into the details in the podcast, but a sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain, and then they use those tokens to vote on [00:01:35] various action proposals for VitaDAO to take. This voting-based system contrasts with the more traditional model of a company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team who are responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment.
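The token-weighted voting described above can be sketched in a few lines. This is a hypothetical, heavily simplified illustration — VitaDAO's real governance runs in smart contracts on Ethereum, not in Python, and the class name, token counts, and proposal text here are all invented for the example.

```python
# Minimal sketch of token-weighted DAO voting: each token held
# counts as one vote for or against a proposal. Purely illustrative.

class ProposalVote:
    def __init__(self, description: str):
        self.description = description
        self.for_votes = 0
        self.against_votes = 0

    def cast(self, holder_tokens: int, support: bool) -> None:
        # One token equals one vote, as in the simple model described above.
        if support:
            self.for_votes += holder_tokens
        else:
            self.against_votes += holder_tokens

    def passed(self) -> bool:
        # Simple majority of tokens cast; real DAOs add quorums, time
        # windows, and delegation on top of this basic rule.
        return self.for_votes > self.against_votes

proposal = ProposalVote("Fund early-stage longevity project X")
proposal.cast(holder_tokens=200, support=True)        # small contributor
proposal.cast(holder_tokens=1_000_000, support=True)  # large contributor
proposal.cast(holder_tokens=5_000, support=False)
print(proposal.passed())  # True
```

The design point worth noticing is that influence scales with tokens held, which is exactly the contrast with one-person-one-vote corporate board governance drawn in the intro.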
I realize it can be hard to tell voices apart on a podcast, so I'll put a link to a video version in the show notes. So without further ado, here's my conversation with VitaDAO. Ben: What I want to do, so that listeners can put a voice to a name, is go around and have everybody say their name and then how they would pronounce the word V-I-T-A-D-A-O. Tim, would you say your name and then pronounce the word? [00:02:35] Tim: That's kind of how I've done it, yeah — "VitaDAO." I'm the longevity steward; we help kind of figure out deal flow on — [edited out]. Ben: Awesome. All right, Tyler, you're next. Tyler: It is definitively "VitaDAO." I also help out with the longevity steward group — I started the longevity group — and I'm the chief scientific officer and co-founder at Molecule as well. Ben: And then Niklas, you're next on my screen. Niklas: It's definitely "VitaDAO." I'm also a member of the longevity working group and the science communication group, and I'm also currently initiating LabDAO. Ben: Great. And then Vincent. Vincent: Yeah, it's the same pronunciation, "VitaDAO," but I'm helping on the design side and also on special projects, like the one we had recently. Ben: And Laurence? Laurence: Laurence Ion — "VitaDAO." And I [00:03:35] also steward the deal-flow group within the longevity working group. Ben: And I think we should all now say, as a hive mind: "Paul." Paul: Hi everyone, my name is Paul Kohlhaas. I would say "VitaDAO." I actually wonder what the demographics are on who says "Vita" — we should look into that; it's an interesting community metric. I'm the CEO and co-founder of Molecule and one of the co-authors of the VitaDAO whitepaper. I also work very deeply on the economic side and help finalize deal structures.
So essentially the funding deals that we've been carrying through with Molecule. And yeah, very excited to be here today. Ben: And maybe we can jump back to what Laurence just said. [00:04:35] The thing that's confusing to me is that I always assumed that the "Vita" came from the word vitality, right? And that's where I got the idea of pronouncing it "vy-ta-dow" — because I don't say "vee-tality," I say "vy-tality." Paul: In German it's actually "Vitalität." So it's just the stupid Anglo-centrism. It's from the Latin — from the word for life. Ben: Cool. So to really jump right in, and to be very direct: can we walk through the mechanics of how everything actually works? I think listeners are probably familiar with the high-level abstract concept — there's a bunch of people, they have tokens, they vote on deals, you give researchers money to do work — but [00:05:35] very mechanically, how does the DAO work? Could you walk us through a core loop of what you do? Tyler: Yeah. So the core goal of the DAO is really to try to democratize access to the decision-making, funding, and governance of longevity therapeutics. Mechanically, there are a few different things going on — and anyone, feel free to interrupt me or jump in as well. I would start from the base layer, which is really this broad community of decentralized token holders, who ultimately provide governance functions to the community. And the community's goal is to deploy the funding it has raised into early-stage, preclinical proof-of-concept-stage longevity therapeutics projects. And these basically fall between two points where some tension exists when it comes to translating academic science.
So you have this robust early-stage — let's say basic-research — funding mechanism through things like NIH [00:06:35] grant funding, and that gets you to the point of being able to do, let's say, very early-stage drug discovery. And there's also a downstream ecosystem consisting of venture capital, company builders, and pharmaceutical companies that does late-stage funding and incubation of ideas that are more well-vetted. But in between there's this problem where a lot of innovation gets lost; it's known as the translational valley of death. What we try to do is identify, as a community, academics who are working on — let's say have stumbled onto — a potentially promising drug, but aren't really at the point yet where they can create a startup company. And what we want to do, basically, by working together as a community, is provide them the funding, the resources, in some cases even the incubation functions, to be able to do a series of killer experiments, really de-risk the project, and then file intellectual property. In exchange for the funding, the DAO — and this is mechanically enabled by a legal primitive that we've been developing at Molecule called the IP-NFT [00:07:35] framework, which basically consists on one side of a legal contract, typically in the form of a sponsored research agreement between a funder and the party receiving the funding, the laboratory, and on the other side of a federated data-storage layer. And so the way this works is basically: VitaDAO receives applications; some of these projects could, for example, be listed on Molecule's marketplace and have an IP-NFT created; VitaDAO would send funds via the system to the university, and in exchange it would hold this license, in essence, for the IP that results from that project. And then within the community, we have domain experts.
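The two-sided IP-NFT structure Tyler describes — a legal agreement on one side, a federated data-storage pointer on the other, bound to an on-chain token — can be sketched as a plain data structure. Every field name below is invented for illustration; the actual framework is Molecule's, and its on-chain representation is a smart contract, not a Python dataclass.

```python
# Hypothetical sketch of an IP-NFT record: a token that binds together
# (a) a reference to a signed legal agreement and (b) a pointer into a
# federated data-storage layer. Field names are assumptions, not
# Molecule's real schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class IPNFT:
    token_id: int
    legal_agreement_hash: str  # digest of the sponsored research agreement
    data_storage_uri: str      # pointer into the federated data layer
    funder: str                # party providing funds (e.g. the DAO, via an agent)
    recipient: str             # party receiving them (e.g. a university lab)

ipnft = IPNFT(
    token_id=1,
    legal_agreement_hash="<hash of signed agreement>",  # placeholder
    data_storage_uri="<storage URI>",                    # placeholder
    funder="VitaDAO (via agent)",
    recipient="University lab",
)
```

The point of the pairing is that whoever holds the token holds a verifiable claim on both the legal rights and the research data, which is what lets the license travel with the NFT.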
For example, we have a longevity working group which consists of MDs, postdocs, PhDs — basically anyone who has deep domain experience in the longevity space. They work to evaluate projects, do due diligence, and ultimately serve as a sort of quality-control filter for the community, which consists of non-experts as well — maybe just people who are enthusiastic about longevity. And beyond that, there's also additional domain expertise in the [00:08:35] form of people who have worked at biotech VCs, for example, or people with entrepreneurial experience. Through this community you basically try to form a broad range of expertise that can then coach the researcher, work with them, and really help the academic move the IP and the research project towards the stage where it can be commercialized. And with VitaDAO stewarding this process, it has ownership in the IP, and basically what would happen is: if that research is out-licensed, co-developed, or sold on to another party — just made productive, in essence — and VitaDAO is successful in commercializing those efforts and receives some funds from the commercialization of that asset, that goes back into the treasury and is continuously deployed into longevity research. So the long-term goal is to really create this self-sustaining, circular funding mechanism to continue to fund longevity research over time. Ben: There are a bunch of specific mechanics in there that I would love to [00:09:35] rabbit-hole on. I think Vincent, yes? Vincent: To add on the very simple technical layer: initially we started off just having this idea and putting it out there, and then having a kind of Genesis auction where everyone could contribute funds. Some people contributed 200 bucks and others contributed millions. And in exchange for that —
— just as an example, for every dollar they gave, they got one vote in the organization. And this initial group of people that came together to pool their resources to fund longevity research got votes in exchange. And with these votes, basically, they can then do what Tyler described: on the proposals that are vetted through the longevity working group, they can vote on whether it should get funding. And that's, of course, the traditional model of a DAO and of token-based governance and voting, [00:10:35] and it was a very easy mechanism that got things started. But the token, of course, can also be useful for other purposes, and can also incentivize people working on specific projects — researchers also getting tokens, and so getting a governance right in the organization in exchange for contributing good work. Ben: Niklas, did I see your hand? Niklas: Yes. Maybe one thing to add here takes a bit of a step back, to the question: why does any of this matter? Why does the DAO framework matter at all? I think when you look at the way academic research currently works, the incentives for the scientists basically end the moment that something is published in a peer-reviewed journal — the system is optimized for peer-reviewed publication. And on the other hand, on the translational side, when something is being [00:11:35] turned into a medicine, investors look at return on investment; they're basically calculating a risk-adjusted net present value of the project. Now, the problem with a lot of biomedical research is that the science has been done, the paper is published, but the risk-adjusted net present value of the project is still approaching zero, because there are still some key experiments missing to get the project off the ground.
And actually this is where the DAO can come in, using new technologies to basically financialize the IP and make it more liquid. And maybe more specifically: the asset isn't created. A lot of research — you know, the NIH is not focused on therapies, on the creation of new therapies, which is where value is actually created. They'll do clinical trials on existing therapies, but the real value-inflection points are not reached through basic research. So that's what we hope to solve. Ben: Got it. So [00:12:35] in my mind, the thing that's really interesting about VitaDAO, as opposed to other DAOs, is this interface with the world of atoms — that's a pretty unique and exciting thing. So there are a lot of mechanics there that I'm interested in digging into. One thing is: in order to give money to a researcher, at some point somebody needs to turn it into dollars or euros to buy the equipment they need to do the research. So are they taking the VitaDAO token and then converting that into currency? How does that work? Tyler: Yeah, I can speak to this — or Paul, if you want to. I can maybe kick it off. One of the things that's really important, and that we've been really focused on at Molecule, is ensuring that the process of working with researchers — which goes [00:13:35] well beyond just working with the researcher, right? You need to work with the university, with the tech transfer office; you need to negotiate a licensing agreement — all of this can happen in a way that is somewhat seamless, and that doesn't require them to do all of their interactions with, let's say, this sort of ephemeral entity that exists on the Ethereum blockchain.
So we've basically created rails via Molecule for things like fiat forwarding, for negotiations with the tech transfer office, for a lot of the legal structures, to ensure that it's as smooth as possible. The VITA tokens themselves don't actually play into it. We can give those to researchers as an incentive, and to people who perform work for the community, but that is not what is given to researchers. When a proposal is passed within the community, we have a certain treasury — in ether, for example — that we've raised over a period of time; that is liquidated and sold for USDC, and then that USDC travels via off-ramps that Molecule has created to ensure that the university [00:14:35] can just receive fiat currency. So I mean, a big part of this: DeFi in a lot of ways has an advantage in that it never really has to interact with real-world banking systems. That's a challenge in the DeSci space — we still have to interface with tech transfer offices, we still have to, you know, speak to general counsel at universities and make sure that people are comfortable working this sort of way. And I would say this is probably one of the most significant challenges, and the reason that a lot of legal engineering and a lot of thinking went into how to create the base-layer infrastructure that allows us to actually operate in this space. So yeah, it's a challenge; it's something we're always trying to iterate on. I mean, we imagine a future where universities do have wallets, where researchers do have wallets, but it's going to take some time for that future to be realized. And in the mid-term, I think it's really important to show the world that DAOs can work effectively — especially these types of DAOs that have a core mission and vision of funding research — and that they can do so productively, even given the constraints of the current system.
And [00:15:35] Ben: so negotiating with tech transfer offices — they, I assume, need to sign a sort of analog legal agreement with an analog legal entity. Is that correct? And is Molecule that legal entity? How does that work? Yeah — so, to reiterate what Tyler said, there's actually nothing stopping, say, a university from directly engaging with a DAO. It's more that those systems don't exist yet, and there's not enough precedent to enable them. There's also the much larger question of, for example, to what extent the DAO could litigate over a patent and actually enable its protection. So VitaDAO operates through a set of different agents — analog, real-world legal partners — and Molecule is one of those legal partners, in essence. So we can ensure that we are the licensing party, for example, with a tech transfer office, and then we enter into a sub-licensing agreement, for [00:16:35] example, with VitaDAO. And in the same sense as what Tyler explained, we also ensure that all of the payment flows are compliant. Something we've realized is that it's really important to bridge this emerging web3 world with the real world, to make it as seamless as possible — and not, for example, to force a university to go through the process of opening a Coinbase account and figuring out what USDC actually is. But fundamentally, I like to use this analogy: if you can make an international EFT with an IBAN and a SWIFT number, crypto is actually much easier than that by now — it's just a much less adopted system. Even from an accounting perspective, accounting for funding flows in this decentralized system is very simple.
The proof of funds is very easy to provide, because you can visually see where every single transaction can be traced back to. So the way we've tried to design the flow of funding within [00:17:35] VitaDAO and within Molecule is to make it as seamless and interoperable with the real world of today as possible, and also to ensure that we have the highest degree of legal standards and legal integrity. So we work with specialized IP counsel and IP law firms across the world, in different jurisdictions, to really ensure that any IP the DAO funds — and that is encapsulated within these IP-NFT frameworks — is future-proof. Because that's something that became very apparent for us: when you work with IP, you can't really make mistakes in how you protect the intellectual property. And you also have a responsibility to the therapeutics that are being developed, because if anything were to invalidate the IP, that could fundamentally influence whether a potential therapeutic can ever actually reach patients. Ben: Yeah. And so I think the one [00:18:35] question is: there has to be a lot of trust between the DAO itself and the organization or people doing the negotiation, holding the IP, and enforcing the IP. Because at that DAO-analog interface, my impression is that there's no enforceable legal contract. Is that correct? I'm just trying to wrap my head around the actual mechanics. It is an enforceable legal contract, actually. So the initial agreement between, let's say, Molecule and the university is a typical, stock-standard sponsored research agreement, as you would see between two parties like a pharmaceutical company and a university. These are the same agreements that universities use.
In many cases, we plug into their pre-existing templates. Those typically have within them an assignment agreement, or an ability to sub-license, where the company or whoever is doing the initial licensing then has [00:19:35] the right to exclusively license the resulting intellectual property — or in some cases even the full rights of the agreement. Molecule then engages in a fully contractual, fully enforceable sub-licensing agreement with the DAO — typically in the context of Switzerland, where the company is based — via the election of this agent process. Now, I would say the weakest part of that — if you want to think about where the core breaking points in that process are — would be around the fact that a large amount of trust is required in the agents. But really, what the agent is doing is putting themselves at risk: they're taking on legal liability, in some cases, on behalf of the DAO. And if, let's say, that agent made off with something or wasn't able to honor their agreement, there is full legal recourse that [00:20:35] could be taken. But again — when you look at patent enforceability in the intellectual property landscape, most of these things you find out what works through litigation. These things have not been litigated yet; there's not really precedent for enforcement here. But this is also what it takes to innovate in the intellectual property landscape. So there is a tension between these things. But yeah, to your original question: there is certainly a lot of trust involved. Ben: What I keep thinking about with web3 stuff is that there are no first principles for it; you just sort of poke it and see what happens.
Yeah, maybe as an interesting point, there will be interesting case studies before it becomes relevant to us, because in the space some of the core protocols, like Uniswap or Curve, are actually governed by DAOs now, and they are now enforcing their IP in the courts. So even before it becomes necessary for us, there will be cases and case studies of very big organizations like [00:21:35] Uniswap or Curve enforcing and going through the courts, even this year or next year. So it will be really interesting to see what the legal precedents are when a DAO enforces its IP through agents, basically. And I think there will be precedent before we will have to enforce our IP. Yeah. Well, one thing to add there, to reiterate what Vincent said as well: DAOs can very quickly become powerful economic agents, and enforcing processes in our legal system is often a function of capital. So if VitaDAO, for example, were ever to get to a point where it had to enforce one of its IP cases, it would definitely have the financial backing to do so, and it can operate through agents to enforce the validity of its IP. And the remaining processes, the relationships between agents, are really [00:22:35] subject to the same legal processes that we have today. When two companies enter into an agreement, or if a biotech company enters a sponsored research agreement with a university, the trust arrangements that are set up there are not different, and the underlying legal contracts that we are using are also the same. And, back to Vincent's point, there are actually first cases where DAOs are enforcing their IP.
This is in the context of open-source software development, where a DAO, let's say, has developed a certain protocol, but that protocol is open source, running under a specific software license, and the DAO is now choosing to actively enforce its IP against someone who infringed that license. One additional aspect here, when we think through where trust and power are concentrated in the DAO, is to note that although there are these agents that are available for the DAO to interact with the real world, the capital is [00:23:35] concentrated within the network of token holders. On a technical level, there's this multisignature wallet that holds all the funds, and it's controlled by members of the community, all in a token-gated way. And that network structure, that social network, which is basically the DAO, can be very well compared to some kind of association where you have people all across the world collaborating, all aligned by a token incentive to pursue one shared mission. And then the DAO, the network, starts agreements with various agents. So it's not really relying on one particular agent to fulfill its mission. If there were a situation in which trust or an agreement with one individual real-world agent were broken, most of the capital would still lie with the DAO, and the DAO would have the ability to engage in an agreement with a different entity. It's not like there's one entity or one vulnerability. When you think [00:24:35] through the contact zone between the digital DAO and the physical company, and speaking of agents: at what level does the entire membership of the DAO vote? Like, are they voting on every decision, like we want this person as our lawyer, we want this person...? Yeah.
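The multisignature treasury described here follows a standard m-of-n approval rule. A minimal sketch, with hypothetical signer names and a 3-of-5 threshold chosen for illustration (real DAO treasuries use audited on-chain contracts such as Gnosis Safe, not application code like this):

```python
# Illustrative m-of-n multisig approval logic. SIGNERS and THRESHOLD
# are hypothetical; a real treasury enforces this rule on-chain.
SIGNERS = {"alice", "bob", "carol", "dan", "erin"}
THRESHOLD = 3  # 3-of-5: any three distinct signers can release funds

def can_execute(approvals: set[str]) -> bool:
    """A transaction executes only once enough distinct signers approve."""
    valid = approvals & SIGNERS  # ignore approvals from non-signers
    return len(valid) >= THRESHOLD

print(can_execute({"alice", "bob"}))           # False: only 2 approvals
print(can_execute({"alice", "bob", "carol"}))  # True: threshold met
```

The point of the threshold is the one made in the conversation: no single agent can move the capital, so trust is spread across the token-gated community rather than concentrated in one entity.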
Yeah. Basically, to make it concrete: there is of course a core team and stewards who are actively working, for example on the longevity side, helping to source deal flow and doing all of these activities. The community mostly votes on the bigger funding decisions, for example, should we fund this project with a certain amount of dollars. It won't vote on, should we hire this designer; that is left to the autonomy of, for example, the design team, within budgets that have been voted through. So it's not micromanaging in an in-depth sense, but [00:25:35] more the key, overall big decisions that the community votes on. Early in the community's formation, in the DAO's formation, there was a governance framework that laid out a series of decisions about how governance actually functions in the DAO. In VitaDAO there's a three-tier governance system: moving from conversation that is quite stream-of-consciousness oriented in Discord, to semi-formalized proposals for community input on a forum platform called Discourse, and then ultimately, for things that make it past that stage, moving onto a software platform for a token-based vote. Part of that governance framework that was initially created also vested a certain amount of decision-making power in working groups, and set thresholds on what those working groups were able to spend, what sort of budgets they had, and where they needed permission from the community to make decisions. So, for example, decisions greater than $2,500 might require a [00:26:35] soft vote, and things greater than $50,000 require a token-based vote.
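The spending tiers just described can be sketched as a simple routing rule. The thresholds below are the illustrative figures from the conversation, not VitaDAO's actual on-chain parameters:

```python
# Hypothetical sketch of tiered governance routing by proposal budget.
# $2,500 and $50,000 are the example thresholds from the conversation.

def route_proposal(budget_usd: float) -> str:
    """Decide which governance tier a spending proposal must pass."""
    if budget_usd <= 2_500:
        return "working-group discretion"  # within delegated budget
    elif budget_usd <= 50_000:
        return "soft vote"                 # semi-formal community poll
    else:
        return "token-based vote"          # full token-holder vote

print(route_proposal(1_000))    # working-group discretion
print(route_proposal(10_000))   # soft vote
print(route_proposal(100_000))  # token-based vote
```

The design choice is the one the speaker names next: routing small decisions away from community-wide votes keeps an early organization from grinding to a halt.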
And this is really important because, as you can imagine, early on in an organization it can be super chaotic and really, really unproductive if every single decision needs to go through this sort of laborious community-wide vote. But this is also a really interesting iterative experiment that I think many DAOs are participating in at the moment, which is trying to figure out to what extent you can involve the community in a productive way in the day-to-day operation. What differentiates a token holder from a contributor, from a core team member, from a working group member? How do people move along that funnel and traverse those worlds in a way where you get the most productive organization? And this is something that is, I would say, being iterated on and improved constantly based on the dialogue happening within the team. And actually, on that note, I have one vaguely silly question, which is: why are all DAOs run on Discord? [00:27:35] This is my biggest complaint: I cannot pay attention to streaming walls of text. So how did that emerge? Has anybody done a DAO just run on a forum, or by email, or something? Yeah, Discord is actually the biggest bagholder in most DAOs that operate. I'm just kidding. It's of course almost memetic; that's how a lot of crypto projects, even three, four years ago, began to organize. And ultimately, I think it's just the tooling. There were Slack and Discord to coordinate, and Discord was much better at enabling people to participate in a lot of different channels very easily. And I think it's a lot about things like file sharing, all of these things you need, which go beyond chat.
But ultimately there are some leading DAOs that emerged just as a Telegram chat between [00:28:35] five friends. The leading art-collecting DAO, PleasrDAO, was just five friends on a Telegram or something. So of course you can envision every possible way and model; ultimately, I think it just became a pattern that a lot of projects organized like this. Yeah. And I think there's also this feedback loop that occurs: the more people organized via Discord in the early days, the more people started to create token integrations and token gating and things like Snapshot. Because of that, there's now a bunch of tooling, from an integration perspective, that makes it easier to operate in a community like that than it would be to have a Slack channel, for example. Yeah, and there is a serious lock-in effect. If you start your new DAO, the best choice is to go with Discord, because that's where all the other folks who are already active are, plus you can leverage a lot of bots that let you token-gate access or [00:29:35] send notifications and similar things. And another question is: how did you all become the core team? Just show up? Tyler and Paul could probably start telling that. I think maybe one interesting thing is that every journey is kind of individual, but ultimately most people either saw it very early or had a similar idea; it's almost, I think, a Schelling point. Like, I literally tried to register LongevityDAO, just the domain, two years ago, before I even met anyone who was in a DAO.
And I think it's a similar story even for Tim. Then ultimately, of course, there's some mechanism of discovering it, or hearing about the idea, or meeting people; for me, it was meeting Tyler and Paul because of Molecule. And for a lot of people, actually, they just saw an interview, they saw [00:30:35] an article about it, jumped into the Discord, introduced themselves, and said, yeah, we would love to help on the website, I would love to help on the deal flow, and then started helping. Through that mechanism, people bubbled up, basically: they started writing an article or doing something, and then became more and more integrated, worked themselves into it. And of course, a lot of people have never met each other in person, but this trust, I think, emerges and builds up through just engaging and helping progress the DAO as a whole. I think it's actually really interesting and exciting to see this global coordination emerging out of a shared purpose or mission, with a lot of people just stepping up. Initially we didn't have a token, we had $0, and there were people who spent weeks [00:31:35] building a website pro bono without expecting anything, really good researchers joining in, before we even had $1 of funding to give towards research. So that's kind of the inspiring part, I think, about a lot of DAOs: they just naturally emerge and everyone can contribute, with no boundaries, but then, yeah, almost self-selected. Oh, Nicholas is raising his hand; I was going to give him a chance to say something, right? Yeah.
So there's a saying I read a couple of days ago, that some ideas occur in multiple different brains at the same time. And I think that's really what also happened with VitaDAO. Vincent had been thinking about this for some time; Laurence had basically stopped developing mobile applications to focus on aging research; Paul and Tyler had thought about this topic, a marketplace for ideas and intellectual property; Tim had been, I think, thinking about this idea of basically crowdfunding academic or fundamental research as a community for some [00:32:35] time. And I had been sufficiently frustrated with the way academia currently works, and had also been thinking: okay, can there be some kind of mechanism where a community bootstraps itself into existence and funds scientists and entrepreneurs within its community? Everybody pays a little, and then you can actually allocate a lot to the really good ideas. In some ways, we all had some kind of predecessor to this idea, and when we each, at these individual time points, heard about it, it was a very intuitive decision to join. I think it's a certain amount of serendipity, a certain amount of Twitter network effects, a weird variety of things. You know, we started out with just a white paper and an idea, and then through that got in touch with a couple of different people, and then people just started showing up. The most interesting thing for me about the DAO experiment is that early on we had this [00:33:35] sort of, okay, people want to be working group members. This is pre-DAO; as Vincent was saying, no token yet, nothing; trying to figure out, how do you organize this community? How do you do something meaningful? We were trying to collect applications or something.
And then some people would apply, and we were like, how do we know who's going to be good, or whatever. One person, who's now the lead of the tech working group, this guy Audi Sheridan, applied and was rejected, but then just made himself super valuable. He started doing things that no one else could do and became an invaluable member of the community. And then we sort of realized: why are we doing this application thing? People just show up; there are things that need to be done; sometimes we don't even see what those things are; people have good ideas, they make proposals. All of a sudden, you know, it's not like a company where there's a hiring process. Anyone can show up on the Discord tomorrow, identify some pain points, make a proposal, and just demonstrate to all these other people [00:34:35] that they have value to add to the community. And then there's a sort of process there, but that process is still very loose. So most people who are here, even on this call, showed up through something like that. As Nicholas and Vince were saying, they had been thinking about this before, were attracted to this magnet that is now a Schelling point for crypto and longevity, and just had really great ideas about how to improve the community, and elevated it. And that's sort of, for me, the magic. I mean, VitaDAO is six months old now, roughly, I guess it'll be about six months, and the community is like 3,500 people or so, with hundreds of researchers, dozens of people who are contributing pretty often, and, I mean, some people full time at this point. And that growth cycle, to go from a white paper and nothing to a bunch of money to fund R&D, a bunch of intellectual capital, a pretty strong political force, in that amount of time, would be [00:35:35] unprecedented,
I think, for a company, especially something that's bootstrapping from a community, not raising money from VCs or anything like that, just holding an auction for a token. To me this is really interesting, and it sort of proves that, in terms of organizing intellectual capital and monetary capital, it's a really, really powerful mechanism. And so, sort of related to the company point: are you worried about the SEC? I mean, a huge amount of thought has gone into the legal structuring and legal engineering of the DAO. The way it basically works is that the intellectual property that the DAO holds, in the form of its IP-NFTs, is not owned by the token holders. The token holders can sort of govern it by proxy through the governance token, and dividends are not paid out either. So the idea is, you know, it's not that it's a nonprofit organization; the DAO as an organization is trying to make a profit to further fund longevity research, but those dividends don't flow to [00:36:35] token holders. So there are several prongs of the Howey test that are essentially not met, whether it's making profits from the efforts of others, or the fact that no one in the organization is directly profiting from the commercialization efforts the DAO is doing. But yeah, thinking about the interaction between the DAO and the SEC, securities concerns played a pretty big role in the design thinking around the entire organization and its structure. Because you can also go different routes, like some security token route, but if you go those sorts of routes you really end up just excluding huge numbers of people from participating.
So the goal here was: how do you maximize participation in a way that is still ultimately creating value, but not necessarily creating value for individual token holders; really for the field of longevity as a whole, to move the needle on research. Got it. [00:37:35] Maybe to add a couple of points here. The Vita token is fundamentally designed as a governance and utility token, and at its highest level you can think of it as something that is actively used by all members to curate the IP and the projects that they want to fund. As was touched on earlier, with typical security-like assets you have a direct flow of dividends; you have a very clear expectation of profits. In this case, first of all, you need to actively do something to be a member of VitaDAO, and to then actively help curate the IP and the rights that come with the Vita token. There's no way that you could say, okay guys, I'm out, and I want to take my share of the IP that I helped create with me, which is a typical thing you could have as a shareholder, or if you're in a more limited-liability-partnership type setting. In this case the DAO owns the IP, and there are also no dividends, nor any expectation of profits, because first of [00:38:35] all the goal here is to fund research, really open up that research, and then try to make it accessible for the world, which could actually mean open-sourcing the research or open-sourcing the IP, thus killing its commercial value. Say VitaDAO discovered something, and deemed that discovery to be so important that it had to be open-sourced and made accessible, and thus it could never become a patentable therapeutic down the line: token holders have full rights to do that.
Whereas, I think, if you had a typical setting where you had a company held by shareholders, and those shareholders had a very clear expectation of profits, that would never fly in most normal companies. So there is no direct expectation of any potential returns, there's not even the potential for a return per se, and then there's the full governance option to essentially not commercialize anything. Yeah, that's really cool. And, actually, sort of related: I would say that therapeutics are [00:39:35] a very special case in the sense that the field is very IP-based; there's very much a one-to-one correlation between IP and product, and those products can be very lucrative. That's sort of why therapeutics as an industry works. Do you think that the VitaDAO approach could work for research and development outside of the therapeutic world? Maybe rephrase your question, Ben? Yeah, I guess the question is: the idea that you can create incredibly valuable IP is fairly unique to the world of therapeutics, and in many other technological domains the value really comes from building a company around some IP, and the IP itself is [00:40:35] not that important. So, yeah, who wants to go for it? Go for it, Tyler. I was just gonna say quickly: I think absolutely, because a DAO doesn't need to be IP-centric. For example, VitaDAO could end up holding data that was being produced by some project, and that data could have intrinsic value. Similarly, VitaDAO could try to get involved in manufacturing or create products. There are many different design flavors for these DAOs.
And I think the governance framework around this, the organizational capacity and the coordination capacity, can be applied to many different problems in many different industries. And even the intellectual property thing does hold true well beyond therapeutics. With therapeutics, you're right, they're very, very expensive to develop, which is why you tend to get this enforceable monopoly to try and incentivize people to develop them. But in textiles or engineering or [00:41:35] any sort of field where IP plays a role, you could apply almost a one-to-one sort of model here. Beyond that, there are many different flavors of assets that a DAO could hold. The one I'm probably most excited by is things like data, which I think can be really, really powerful, or software, which could be similarly powerful, and which I think a lot of DAOs are already doing. Maybe one other point, in addition to activities like funding IP directly and having a self-fulfilling, sustainable funding cycle there: we also, for example, had these efforts that are completely philanthropic, just using our community. We put together this donation round on longevity, exploring quadratic donations, basically. I had this idea even [00:42:35] before VitaDAO existed, and then it was like, okay, now there are enough people and enough attention to do this. The DAO itself donated $65,000,
but then, for example, we collectively donated $400,000, and we helped curate projects which are all purely philanthropic: open-source projects, even NGOs doing different projects. We basically helped get our community together to donate to these different projects. For me, that's one example of where this is really powerful, because you have this Schelling point of crypto people who are interested in funding longevity, and they're not just interested in funding IP-NFTs in a sustainable loop, but also in exploring other funding experiments. Another one we were discussing is a longevity prize, or grants and fellowships for young people entering the field. All of that actually advances the whole cause, the whole community, [00:43:35] and the core focus and activity of funding IP, because it grows our community and the whole field. So I think that's actually an interesting point: we are not limited to funding IP, but it's of course one of the core mechanisms we're engaging in. Yeah. I would add that there's also value in the community itself. Imagine Bitcoin, right? Anyone can fork it. Instagram is a simple app; anyone could have made a copy, but most of the value is in the network that gets built. So here we have a team, right, a stellar team, and the DAO itself is ultimately an autonomous organization; it got born in a genesis by itself. It's a smart contract. So it's sort of unique in that way. Of course, someone interacted with the smart contract, it can be someone anonymous, but it issued 10% of its tokens, and the total supply, by the way, is [00:44:35] 64 million, which is about the lifespan in minutes of the longest-lived person, Jeanne Calment. And that's sort of the cap, right?
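The token-supply anecdote checks out arithmetically. Jeanne Calment, the longest-lived person on record, lived 122 years and 164 days, commonly cited as 44,724 days in total:

```python
# Back-of-the-envelope check of the ~64 million token supply anecdote:
# Jeanne Calment's lifespan, converted from days to minutes.
days_lived = 44_724               # 122 years, 164 days
minutes_lived = days_lived * 24 * 60
print(minutes_lived)              # 64,402,560 minutes, i.e. roughly 64M
```

So a 64-million-token supply lands within about half a percent of her lifespan in minutes, matching the figure quoted in the conversation.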
We can only extend that if someone lives longer than that. But anyone could buy those tokens, right? It was a fair auction, including us, including random people. And then there was a vote to empower a core team like us. Yes, most of us here got involved before, but the cool part is anyone can start showing up and contributing a lot of value, and ultimately the community can decide to make them a core contributor, to make them a steward of even some other effort, right, even something that we haven't thought about. There's always room; it's permissionless. That's something special, definitely a meta-experiment right here. And it's an experiment in organizing people towards a common goal in a different way, to run experiments, scientific experiments, and figure out how to advance the therapeutics we need to extend our healthy [00:45:35] lifespan. I would actually be curious, if I could ask you a question, Ben: how does something like VitaDAO fit into your thinking on new institutions for funding science? Because, as you mentioned, it could also be a model, and we're potentially exploring it, for different areas. Ultimately, for me, if there's a big enough community interested in funding something, one very public example could be something like climate change, or something exciting like space, there would probably at some point be a community that pools resources to fund those research areas. I would be curious to hear how you think VitaDAO and the framework we're outlining fit with the theme you're exploring. Yeah. I mean, frankly, one of the reasons I wanted to have this conversation was to form those thoughts.
So I [00:46:35] will be able to answer that much better after sitting with this. So, some of the tricky pieces outside the domain of longevity: longevity is very exciting to a lot of people with money, both in the crypto community and outside of it. There are lots of people who are excited about space, but from my experience, space geeks tend to not be that wealthy. So there's a question there: you can have a very excited community, but the real thing is, how much are those people really willing to put their money where their excitement is? That's a big question. Another question for me is coordination around research. Another great thing about therapeutics is that there's this nice one-to-one-to-one mapping, where you can have one [00:47:35] lab develop one therapeutic, which corresponds to one piece of IP, which corresponds to one product. Obviously it doesn't always work that way, but that's a pretty strong paradigm, whereas with a lot of other technology that attribution chain is very hard to establish; it involves lots of different groups contributing different things, and you need someone coordinating them. So this is a lot to say: I think there's very much something here. That's why I'm interested in it, why I want a lot more people to learn about it, and why we're talking about this. But it needs a lot of thought as is. I don't think you could literally take what you all have done and just copy-paste it to other domains. But that isn't to say you couldn't modify it and do something, because I think it's actually really, really promising. Yeah.
Maybe if I can speak on that [00:48:35] quickly: I think DAOs will be highly use-case specific. It's actually been an interesting journey. I started writing about DAOs in mid-2016; there was an article that I wrote on what would happen if we combined, let's say, conscious AI systems with a DAO, so kind of having a DAO operated by autonomous agents, in essence. And then what happened after The DAO launched, which was one of the first DAOs on Ethereum: it was a big, complex, autonomous kind of setup, where the DAO was almost entirely controlled by token holders. But that also enabled an attack vector that allowed someone to hack those core contracts, and the DAO space went into a long period of considering whether something like this should ever be attempted again, and people began to very cautiously build out these systems. There are a couple of projects that, over really five years already, have tried to build generalizable DAO frameworks, and many of those projects have [00:49:35] failed at actually providing frameworks that got to mass adoption. When you start building a DAO, it's kind of like when you say, I want to build a company: there are many ways to build companies, and the difficult thing is not incorporating in Delaware or getting the bank account set up. That's what people sometimes think today when they set up a DAO: oh, okay, it's a multisig, it's a Discord. But you obviously need that entire ecosystem that you're building. You need to think about: what is the value creation model for this DAO? What's its unique value proposition? And based on that value proposition, what type of community do I want to build?
What type of culture do I need to implement that value proposition, one that will attract that community to help me? For example, we've been very conscious about the type of open community that we wanted to build. And this goes into all sorts of follow-on questions, like: where do you actually get funding from to do what you do? Where that funding comes from will influence the culture of the community. For example, a DAO that's funded by [00:50:35] several groups of larger VCs will be very different, from a cultural perspective and also in its goals, than a DAO that is funded by an open auction, where the individual members are much more engaged because they put some of their own funding in, so they want to have a say in how it's controlled and what it gets used for. It's going to be very interesting to see, in the coming years, whether generalizable frameworks emerge where you just press a button and spin up a DAO. You can already do that, there are many systems that do it, but I keep being surprised that they're actually not being very actively used. What I think is really important, for example, is to build basic infrastructure that can serve industries. This is something that we've been very focused on at Molecule: drug development isn't that different whether you're developing longevity therapeutics or cancer therapeutics; the base infrastructure, and how you interact with the real world, for example through IP, is the same. We realized that decentralized drug development through DAOs, for example, could only really work if there was a way to own IP. But now, I think, a [00:51:35] community like VitaDAO will be very different from, let's say, a DAO focused on rare diseases, where you're working with several patient advocacy groups.
And unfortunately there isn't huge general excitement about diseases that only affect small patient populations, whereas aging affects all of us. And the DAO that we're currently building out at Molecule, for example, is called PsyDAO, which will be a DAO focused on exploring and essentially democratizing access to psychedelics and mental health. Again, because we feel this is a topic that has a very broad appeal, where you can very effectively scale culture and also apply some of the same frameworks. Yeah, maybe just one other thing I think is important to highlight in terms of how we think about this. The reason that DAOs are interesting, even for me, the reason that crypto is interesting, is because it's effectively just a sandbox environment to try experiments that create behavioral outcomes. Token engineering and token economics are [00:52:35] simply a way to motivate certain outcomes and certain behaviors in real time: sort of building in production and testing in production, versus academia. If I said I want to change drug development, I want to change the way that pharmaceutical companies behave, I could probably write a paper in, like, Nature Reviews Drug Discovery, and maybe kick off a policy discussion that ultimately isn't really going to move the needle, at least on a tangible timeline, around how these things get solved. But what's interesting about DAOs is that you can basically say: I have this idea, these are the stakeholders I want to incentivize to behave a certain way and achieve a certain outcome. And you can just deploy this with software and start doing it. It's really crazy. I mean, one of the most interesting comments that Vitalik made when we hosted him on this topic, the comment that resonated, was that he felt the biggest sort of gift to humanity that crypto provided was this sandbox environment for experiments.
And I think [00:53:35] as a scientist, this is one of the things that really, really strongly resonates. It's like: move beyond the theoretical, go directly to the applied, and start testing things in production, seeing what works. And I don't think we can say confidently that DAOs, biotech DAOs, are better than biotech companies at achieving goals in drug development. But I think in a couple of years we'll have a bunch of data points to suggest the things that DAOs are really good at, at least with this design implementation, and we'll know what they aren't good at. And because these organizations are so flexible, and because they operate through this very iterative governance model, you have the ability to always be tweaking and always be improving. And so this, for me, is what's really, really exciting. It's this crazy experiment that you're doing, pulling in people from all over the world, independent of geography. Like, if there was another toolkit to do it that wasn't crypto, we probably would have built it using that. But really the point here is: I haven't seen a better way to [00:54:35] scale incentives to a large group of people than web3 and crypto. So to me, it comes down to the point made before, that ultimately it's about a community. Even with PsyDAO, there's no token, there's nothing. We literally just set up, like, a Telegram chat, invited some interesting people, they self-selected themselves in, and now it's like 500 people. We hosted, like, meetups, and there are ideas emerging out of all these people. And ultimately it doesn't really matter how it's implemented, or if there's a token. What matters is that the community shares the values and the culture of it and, like, a shared mission also.
So I think that's really, for me, also an interesting takeaway: looking at, like, the most successful projects in crypto, which have probably been projects like Bitcoin and Ethereum. And I think a big part of their success was their community and their culture persevering through thick and thin, like building and improving the protocol [00:55:35] together, building on it, being incentivized to build on it. I think that's, like, a major takeaway: it's all about communities and, yeah, shared missions. Something I'm curious about is: how have tech transfer offices responded to this? I assume that there've been many conversations with them. To put cards on the table, I don't have the highest opinion of the innovativeness of tech transfer offices, and so I'm wondering, how have those interactions gone? They are surprisingly technophobic organizations for groups that are supposed to be focused on innovation, supposed to be helping professors and researchers bring innovation into the real world. But I would say on the whole, you know, not necessarily by fault of their [00:56:35] own, but rather just because tech transfer is largely a failed business model. Institutionally it's not operated well. It's a couple of general counsels sitting in an office who are not domain experts in any one field and typically have grossly inflated ideas of what innovation is worth. It's challenging. That said, we've been super lucky to engage some amazing people at tech transfer offices. And this is self-selecting, right? If you're interacting with us, you're probably amongst the most forward-thinking, let's say, tech transfer people. So keep a list of them, right, so that then you can get some kind of feedback loop where, like, you say, okay, these are the best tech transfer offices to work with.
And then people start working with them, and then all the other tech transfer offices start seeing it. Totally. I mean, this is what happens. [00:57:35] Like, the first one does it, and then they've sort of de-risked it for the others. And this is what we see happening with every subsequent one that goes for it: it's easier to have the next conversation. We also learn more about how to work with them, how to structure these deals. I would say the main thing here is that tech transfer is largely not profitable. There are very, very few tech transfer offices in the world that are cashflow positive. Their business model is in danger, their existence is in danger, and they desperately need new ways of innovating. Outside of Harvard, MIT, Stanford, Oxford, Cambridge, there are not that many that are really doing big things. And I think what we see is that there are people, even in smaller tech transfer offices around the world, who recognize this and are actually really, really hungry for a different way of doing things. And those are the people we hope to work with. But yeah, you're right, it's not the easiest, let's say, stakeholder group to engage. Yeah. Sorry, go ahead. Having said that, though, [00:58:35] this is also, for example, a core role that we see at Molecule: working with tech transfer can be standardized. It doesn't matter if you are out-licensing a longevity asset or something else. And what we've actually done is develop systems that are as close as possible to what they're used to today, which makes life massively easier. So the kind of thing to avoid is to create the wrong impression, let's say. So even within VitaDAO, in terms of negotiating contracts and next steps around the IP, it's important for them to realize that there are not a thousand people in a Discord who will then contact the university or try to get involved in the research and make decisions.
It's also then important for them to realize that these funds are not coming from kind of anonymous accounts in this, like, weird ether that is the cryptocurrency space, but kind of to give those stakeholders the assurance that we're using the same processes that they're used to, that we've developed sophisticated legal [00:59:35] standards, and that all of this can run through the existing banking system once it's bridged into it. And actually, once you provide those assurances, it's surprisingly easy to work with them. In some cases, not in all of them. But I think as an organization we can, for example, be much easier for them to work with than, let's say, a venture capital firm that wants to out-license the IP, is setting up a company, and then engages in three-to-six-month-long negotiations. I think the tech transfer offices that we have engaged have been pleasantly surprised by how quick and easy it can actually be to work with a DAO or a decent-sized community, if the right structures and processes are in place. And, like, one out of every twenty is just some person who's like, oh my God, this is so cool, I also love to play around in DeFi, I'm also into it. It happens rarely, but when that happens, you're like, okay, this has got to work. [01:00:35] We also work with companies that have themselves negotiated with the TTOs, and they can sub-license a stake. And, first of all, they can also work with Molecule, and the TTO doesn't even necessarily need to know about VitaDAO that way initially, right? Molecule can have a sponsored research agreement with that startup or with the TTO. Some TTOs might prefer to work directly with a company, right? Or even a revenue share: we can have royalty agreements with the company or startup as well, right?
And if the deals are too slow, we can work directly with startups initially. And as things open up and this gets more popular, they'll see that there's a better place to go, that there was a bidder. You know, maybe other people in the crypto community can become bidders [01:01:35] for these IP-NFTs, and it can be a much better way to decide, as a market, what the value of assets is. And so if you have an asset that this more and more liquid market would value higher, why would you go with the traditional players when you can get much, much better terms? And so I think they will get convinced once they see that. Yeah. And one other thing: today we funded, like, a new project as well, and the researcher said that he was pleasantly surprised by how quickly it went from application to funding. I think it was within four weeks or something, which I think is not common for going from application to funding. And a lot of researchers are also really excited to have a community behind them that is really excited to follow the progress, to publicize the process, to do interviews and videos about their research, and to connect to the other research we are funding. So I think that's [01:02:35] also, like, a huge value proposition to the researchers. And speaking of applications, this is a question from Twitter: all of your proposals seem to have passed with, like, resounding consensus. Not necessarily, not necessarily, no. I think there were one or two that were almost 50/50. On some there was resounding, like almost a hundred percent, voting in favor; on two or three there was only 60% voting in favor.
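The token-weighted voting pattern described here (some proposals passing near-unanimously, others with only about 60% in favor) reduces to a simple weighted tally. The sketch below is a toy illustration only; the `tally` function and its quorum and threshold parameters are hypothetical assumptions, not VitaDAO's actual governance mechanics.

```python
# Toy sketch of a token-weighted DAO proposal tally.
# All names and parameter values are hypothetical illustrations.

def tally(votes, quorum=0.0, threshold=0.5):
    """votes: list of (token_weight, in_favor) pairs."""
    total = sum(w for w, _ in votes)          # total voting weight cast
    yes = sum(w for w, ok in votes if ok)     # weight voting in favor
    if total == 0 or total < quorum:
        return "no quorum"
    return "passed" if yes / total > threshold else "rejected"

# A near-unanimous proposal versus a contested ~60% one:
landslide = [(100, True), (80, True), (5, False)]
contested = [(60, True), (40, False)]
```

Real on-chain governance frameworks differ in detail (quorums, voting periods, delegation), but the pass/fail decision usually comes down to a weighted tally of this shape.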
And what I think is interesting, what I observed as a pattern, is that on the ones people voted against, it was mostly working group members voting against, but the community was oftentimes voting in favor. So, like, my feeling was that the community wants to fund a lot of things, and thinks everything that is getting listed for funding should be funded. But then the people who, in turn, like, [01:03:35] some of them who might've looked and help you
Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, combinatorial evolution more broadly, and dig into some fascinating technological case studies that informed his book The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics, but I wanted to talk to him because of the fascinating work he's done building out theories of technology. As we discuss, there's been a lot of theorizing around science, with the works of Popper, Kuhn, and others, but there's been less rigorous work on how technology works despite its effects on our lives. Brian currently works at PARC (formerly Xerox PARC, the birthplace of personal computing), has also worked at the Santa Fe Institute, and was a professor at Stanford University before that. Links W. Brian Arthur's Wikipedia Page The Nature of Technology on Amazon W. Brian Arthur's homepage at the Santa Fe Institute Transcript Brian Arthur [00:00:00] In this conversation, Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, and combinatorial evolution more broadly, and we dig into some fascinating technological case studies that informed his book, The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics. Uh, but I wanted to talk to him [00:01:00] because of the fascinating work he's done building out theories of technology. Uh, as we discussed in the podcast, there's been a lot of theorizing around science, you know, with the works of Popper and Kuhn and others, but there has been much less rigorous work on how technology works despite its effect on our lives. As some background, Brian currently works at PARC, formerly Xerox PARC, the birthplace of the personal computer, and has also worked at the Santa Fe Institute and was a professor at Stanford University before that.
Uh, so without further ado, here's my conversation with Brian Arthur. I'm now far less interested in technology, so if anybody asks me about technology, I immediately switch. Sure. But so the background to this is that mostly I'm known for a new framework in economic theory, which is called complexity economics. I'm not the [00:02:00] only developer of that, but certainly one of the fathers. Well, grandfather. One of the fathers, definitely. I was thinking one of the co-conspirators. I think every new scientific theory, like, starts off as a little bit of a conspiracy. Yes, yes, absolutely. Yeah, this is no exception. Anyway, so that's what I've been doing. I think I've produced enough papers and books on that. And so, I've been in South Africa lately, for many months since last year, and got back about a month ago. And now, as these things work in life, I think there are arcs, you know: you get interested in something, you work it out, whatever it would be, businesses you [00:03:00] start, children. There's a kind of arc, and you work all that out, and very often that reaches some completion. So most of the things I've been doing have reached a completion. I thought maybe it's because I'm getting ancient, but I don't think so. I think it was that I just kept working at these things. And for some reason technology is coming back up. Come to think of it, in 2009, when this book came out, I stopped thinking about technology. People normally think, oh yeah, you wrote this book, you must be incredibly interested. Yeah, but it doesn't mean I want to spend the rest of my life just thinking about it. It's sort of like writing Harry Potter, you know, it doesn't mean you have to do that forever. Right, like, writing the book is like the whole [00:04:00] point of writing the book, so you can stop thinking about it, right? Like, you get it out of your head into the book. Yeah, you're done. So, okay.
So this is very much Silicon Valley, and I left academia in 1996. I left Stanford. I think I'm not really an academic; I'm a researcher. It's sad that those two things have diverged a little bit. Stanford treated me extraordinarily well, I've no objections, but anyway, I'd been to the Santa Fe Institute, and it was hard to come back to standard academia after that. So why should people care about not just the output of the technology creation process, but the theory behind technology? Why does that matter? Well, [00:05:00] what I find in general, whether it's in Europe or China or America: people use a tremendous amount of technology. If you ask the average person what technology is, they'll tell you it's their smartphone, or it's the gadgetry in their cars, or something. But most people are content to make heavy use of technology. I count everything from frying pans to cars. We make, directly or indirectly, enormously heavy use of technology, and we don't think about where it comes from. And so there's a few kinds of tendencies and biases. You know, we have incredibly good retinal displays these days on our computers. [00:06:00] We can do marvelous things with our smartphones. We switch on GPS in our cars, and presumably in a few years we won't have to drive at all. And so all of this technology is doing marvelous things, but for some strange reason we take it for granted, in the sense that we're not that curious as to how it works. People trained in engineering, as I am, are. I can actually tell you that throughout my entire life I've been interested in how things work, how technology works, even if it's just something like radios. I remember when I was 10, I, like many other kids, constructed a radio from a few instructions. I was very curious how all that worked. But people in general are not curious. So I [00:07:00] invite them quite often to do the following thought experiment.
Sometimes when I'm giving talks: All right, technology. Is it important? Does it sort of matter? Probably, they allow, it would matter. And a lot of people manage to be mildly hostile to technology, but they're some of the heaviest users: they're blogging, they're on Facebook railing about technology, and then getting into their technology-laden cars and things like that. So the thought experiment I like to pose to people is: imagine you wake up one morning, [00:08:00] and for some really weird or malign reason, all your technology has disappeared. So you wake up in your PJs and you stagger off to the bathroom, but there's no toilet. You try to wash your hands or brush your teeth, but there is no sink in the bathroom, there's no running water. You scratch your head and just sort of shrug, and you go off to make coffee, but there's no coffee maker, et cetera. In exasperation you leave your house and go to climb into your car to go to work, but there's no car. In fact, there are no gas stations. In fact, there are no cars on the roads. In fact, there are no roads, and there are no buildings downtown, and you're just standing there in naked fields, wondering: where did this all go? And really what's happened in this weird sci-fi setup is that, let's say, all technologies that were cooked up after, say, 1300, so what would that be, the last 700 years or so, have disappeared. And you're [00:09:00] just left there. People then said to me: well, I mean, wouldn't there have been technologies then? Sure. So, you know, if you're a really good architect, you might know how to build cathedrals. You might know how to do some stone bridges. You might know how to produce linen, so that you're not walking around without any proper warm clothes, and so on. But my whole point is that if you took away everything invented
in the last few hundred years, our modern world would disappear. And you could say, well, we'd still have science. But without technology, you wouldn't have any instruments to measure anything; there'd be no telescopes. Well, we'd still have our conceptual ideas. Well, we would still vote Republican or not, as the case may be. Yeah. And I'd still have my family. Yeah, but how long are your kids going to [00:10:00] live? Because there's no modern medicine. Yeah, et cetera. So my point is that not only does technology influence us, it creates our entire world. And yet we take this thing that creates our entire world totally for granted. I'd say by and large, there are plenty of people who are fascinated, like you or me, but we tend to take it for granted. And so there isn't much curiosity about technology. And when I started to look into this seriously, I found that there's no '-ology' of technology. There are theories about where science comes from, and there are theories about music, musicology, and endless theories about architecture, and even theology. But there isn't a very [00:11:00] well-developed set of ideas or theories on what technology is and where it comes from. Now, you could say, Arthur, is that true? I could mention 20 books on it in the Stanford library. But when I went to look for them, I couldn't find very much compared with other fields, archaeology or petrology, you name it, on technology or technological knowledge. I went to talk to a wonderful engineer at Stanford. I'm sure he's no longer alive, because this was about 15 years ago and he was 95 or so. I couldn't remember his name at first, it's an Italian name, just a second... [00:12:00] Walter Vincenti. So I went to see one of the really top-notch aerospace engineers of the 20th century and had lunch with him.
And I said: have engineers themselves worked out a theory of the foundations of their subject? And he sort of looked slightly embarrassed. He said, no. I said, why not? And he paused. He was very honest. He just paused, and he said: engineers like problems they can solve. So, compared with other fields, there isn't as much thinking about what technology is, how it evolves over time, where it comes from, how invention works. We've had a theory of how new species come into existence since 1859 and Darwin. [00:13:00] We don't have much theory at all, at least as of 10, 15 years ago, about how new technologies come into being. So I started to think about this. And I reflected a lot, because I was writing this book, and people said, what are you writing about? I said, technology. That was always followed by "why?" You know, I mean, if I'd said I was writing a history of baseball, nobody would have said why. But "why?", you know, what could be interesting about that? And I reflected further, and I argue in my book, The Nature of Technology, that technology's not just the backdrop but the whole foundation of our lives. We depend on it. Two hundred years ago, the average length of life might have been 55 in this country, or 45. [00:14:00] Now it's 80-something, and maybe this was a bad year, like the last year. So, and that's technology: medical technology. We have really good diagnostics, great instruments, very good methods, surgical procedures. Those are all technology. And by and large they assure you fairly well that if you're born this year in normal circumstances, or let's say born this decade, you have reasonable luck to live to see your grandchildren, and you might live to see them get married. So life is a lot longer. So I began to wonder who did research on technology, and strangely enough, maybe not that strangely, it turns out to be, if not engineers, a lot of sociologists and economists.
[00:15:00] And then I began to observe something further: a lot of people wondering about how things change and evolve had really interesting thoughts about science, what science is and how it evolves. Thomas Kuhn's work, for instance; many people speculated in that direction, whether they're correct or not, and that's very insightful. But with technology itself, I discovered that the people writing about it were historians, sociologists, and economists, and nearly always they talked about it in general: we have the age of the steam engine, or when railroads came along they allowed the expansion of the entire United States economy, connected the east coast and west coast, and [00:16:00] so on. So they were treating the technology as sort of an exogenous effect, sitting there. I also discovered there are some brilliant books by economic historians and sociologists. Edward Constant is one; he wrote about the turbojet. Super good studies about Silicon Valley, how the internet started, and so on. So I don't want to make too sweeping a statement here, but by and large I came to realize that nobody looked inside technologies. It's as if you were set in the 1750s, and biologists, certain biologists, they would have been called... natural philosophers? That's right, thank you. They would have been called natural philosophers, and they would have been interested, if they were interested [00:17:00] in different species, say giraffes and zebras and armadillos or something, as if they were trying to understand these just from looking at the outside. And it wasn't until a few decades later, the 1790s, the time of Georges Cuvier, that people started to look inside. And they found striking similarities. So something might be a Bengal tiger and something might be some form of cheetah, and you could see very similar structures, and postulate, as Darwin's grandfather did.
There might be some relation as to how they evolved, some evolutionary tree. By the time Darwin was writing, he wasn't that interested in evolution; he was interested in how new species are formed. So I began to realize that in [00:18:00] technology, people were just, by and large, looking at the technology from the outside, and it didn't tell you much. I remember a seminar at Stanford, it was on technology every week, and somebody decided that they would talk about modems, those items that used to connect your PC to the internet. They're now unheard of; actually, they're built into your machine, I'm sure. And we talked for an hour and a half about modems with an expert from Silicon Valley who'd been behind inventing these, and never was the question asked: how does it work? Really? Did everybody assume that everybody else knew how it worked? No. Oh, they just didn't care? No, no. Not quite. It was [00:19:00] more that you didn't open the box. You assumed there was a modem. Who was adopting modems? How fast were modems? What was the efficiency of modems? How would they change the economy? What was in the box itself was by and large never asked about. Now, there are exceptions. There are some economists who really do get inside. I remember one of my friends, the late Nate Rosenberg, a superb economist of technological history here at Stanford, wrote a book called Inside the Black Box. But even in that book, he didn't really open up too many technologies. So then I began to realize that people really didn't understand much about biology or zoology or evolution, for that matter, until they began to open up organisms [00:20:00] and see similarities between species of toads, say, and start to wonder how these different species had come about, by getting inside. So to set up my book, I decided that the key thing I was going to do, though I didn't mention it much in the book, was to get inside technologies.
So if I wanted to talk about jet engines, I wasn't just going to talk about thrust and about manufacturers and about the people who brought them into being. I was going to talk about, you know, heat pumps, anti-surge systems for compressors, different types of combustion systems and materials, whole trains of compressors, assemblies of compressors, the details of the turbines that drove the compressors. [00:21:00] And I found that in technology after technology, once you opened it up, you discovered many of the same components. Yeah. So let me hold that thought for a moment. I thought it was amazing that when you look at technologies from the outside, you know, like kangaroos and giraffes, they don't look at all similar. Yeah. But they all have the same basic construction: in their case, they're mammals, they have skeletons, they're vertebrates, et cetera, whatever they are. And so with technologies, I decided quite early on with the book that I would understand maybe 25 or so technologies pretty well, and of those [00:22:00] I'd understand at least a dozen very well indeed, meaning spending maybe years trying to understand certain technologies. And then what I was going to do is see how they had come into being and what could be said about them, from particular sources. So I remember calling up the chief engineer on the Boeing 747 and asking him questions personally. The cool thing about technology, unlike evolution, is that we can actually go and talk to the people who made it, right? If they're still alive. Yes. And so I decided that it would be important to get inside technologies. When I did that, I began to realize that I was seeing the same components [00:23:00] again and again. So in some industrial system, say for pumping fresh air into coal mines or something, you'd see compressors taking in air and piping it down.
And again and again, you see piston engines or steam engines, or sometimes turbines, powering something. On the outside they may look very different; on the inside you are seeing the same things again and again. And I reflected that in biology, say in mammals, we have roughly the same numbers of genes. Very roughly, we have a kind of Lego kit of genes, maybe 23,000 in the case of humans, slightly different for other creatures. [00:24:00] And these genes are put together to express proteins and express different bone structures, skeletal structures, organs, in different ways. But they are all put together from, or originated from, roughly the same set of pieces. Put together differently, expressed differently, actuated differently, they result in different animals. And I started to see the same thing with technology. So again: you take, maybe in the 1880s, some kind of a threshing machine or harvester that worked on steam. Inside there'd be a boiler, there'd be cranks, there'd be a steam engine. If you looked into a railway locomotive, you'd see much the [00:25:00] same thing: boilers and cranks and the steam engine, a place to keep fuel and to feed it with coal or whatever it was operating on. So once I started to look inside technologies, I realized it wasn't a very different set of things; it ceased to be a mystery. And so the whole theme of what I was looking at was, see if I can get this into one sentence: technologies are means to human purposes, normally created from existing components at hand. So if I want to put up some structure in Kuala Lumpur, say a high-rise building, I've got all the pieces I need: pre-stressed concrete, whatever posts are needed to create [00:26:00] foundations, the kinds of bolts and fasteners that fasten together a concrete high-rise, cranes and equipment, et cetera.
Assemblies made of steel to reinforce the whole thing and to make sure the structure stands properly. It's not so much that these are all standardized, but every technology, I thought, is made with pieces and parts, and they tend to come from the same toolbox, used in different ways. They may be used in Kuala Lumpur in slightly different ways than in Seattle, but the whole idea is the same. So technology then ceased to be a mystery. It was a matter of combining, or putting together, things from a Lego set. In [00:27:00] the UK, where I grew up, we'd call them Meccano sets. What are they called here? Erector sets, or, well, I mean, Legos, or... there's metal ones. I think the metal ones are Erector sets. There's also, like, the wood ones that are Tinker Toys. Anyway, I like Legos. Okay. So, and then you could get different sorts of Lego sets. You know, if you were working in high pressure and high temperature, there'd be different types of things; if you're working in construction, there'd be a different set of Lego blocks for that. I don't want to say this is all trivial. It's not a matter of just throwing together these things; there's a very, very high art behind it. But it is not these things being born in somebody's attic. And in fact, [00:28:00] we're sitting here in what used to be Xerox PARC, and xerography was invented, not by Mr. Xerox, anyway, but xerography was invented by someone who knew a lot about processes, a lot about paper, a lot about chemical processes, a lot about developing things and shining light on paper, and then using that maybe chemically at first, and in modern xerography, electrostatically.
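Arthur's "Lego set" picture, where each technology is assembled from components already at hand and then becomes a component itself, can be captured in a toy model. The component names and the `combine` helper below are illustrative assumptions, not anything from the book:

```python
# Toy model of combinatorial evolution: a "technology" is a set of
# components, and each new technology is built from components already
# in the toolbox, then enters the toolbox as a component itself.

toolbox = {
    "boiler": set(), "crank": set(), "piston": set(),  # primitive parts
}

def combine(name, parts):
    """Register a new technology built only from existing components."""
    missing = parts - set(toolbox)
    if missing:
        raise ValueError(f"components not yet invented: {missing}")
    toolbox[name] = set(parts)
    return name

combine("steam_engine", {"boiler", "crank", "piston"})
combine("locomotive", {"steam_engine", "crank", "boiler"})
```

The key property is recursion: once `steam_engine` is registered, it is available as a single building block for `locomotive`, which mirrors the combinatorial view Arthur describes.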
And so what came about was really reflecting light off a known component, marks on paper, think of a copier machine, focused with a lot of lenses [00:29:00] onto something that was fairly new, a xerographic drum, which was electrostatically charged. And so you arranged it so the light affected the electrostatic charges on the drum, and as the drum revolved, the charged places picked up particles of printing ink, like dust, where it was differentially charged, then imprinted that on paper and fused it. All of those pieces were known. I think the man's name was Carlson, by the way. It's not a matter of somebody working in an attic; that guy actually was more like that, but usually it's a small team of [00:30:00] people who see a principle to do something. Say, okay, we want to copy something. Well, you could take a cathode ray tube and maybe project it onto that, and then there might be electron-sensitive or heat-sensitive paper, and it could make copies that way. But certainly here at Xerox itself, or Xerox PARC, the idea was to say, let's use an electrostatic method, combined with powder and a lot of optics, to write on a xerographic drum and then fuse that under high heat into something where the particles stuck to paper. So all of those things were known and given. So I guess, sorry, there are so many different directions that I want to go. One: [00:31:00] on the idea of modularity for technology. Yeah. It feels like there are almost two kinds of modularity. One is the modularity where you take a slice in time and break the technology down into its different components. Yeah.
And then there's almost a modularity through time, progressing over time, where you have to combine different ideas, but those ideas are not necessarily contained in the technology; there's precursor technology. For example, you have the moving assembly line, right, which was a technology originally for butchering meat. Yup. And so you had car manufacturing [00:32:00] and you had the moving assembly line. Yep. And then Henry Ford came along and fused those together. And that feels like a different kind of modularity from the modularity of looking at the components of a technology. Do you think that they're actually the same thing? How do you think about those two types of modularity? I'm not quite sure what the difference is. So, I guess the Ford factory doesn't contain a slaughterhouse, right? It contains some components from the slaughterhouse, I guess. Let's see. [00:33:00] As I was thinking through this, it felt like, when you think of the intellectual lineages of technology, a technology does not always contain the thing that inspires it. And so there's this kind of evolution over time of the intellectual lineage of a technology that is not necessarily the same as the direct lineage of the final components of that technology. Does that make sense? Or am I seeing a difference where there is no difference, which could be completely possible? Well, I'm not sure. I think maybe the latter. Let me see if I can explain the way I see it; please stop me again if it [00:34:00] doesn't fit with what you're talking about.
I was fascinated by the whole subject of invention. Where do radically new technologies come from, not just tweaks on a technology? So we might have a Pratt and Whitney jet engine in 1996, and then ten years later have a different version of that with somewhat different components. That's fine. That's innovation, but it's not really invention. Invention is something quite radical. You go from air piston engines, which are like standard car engines, driving propeller systems in the 1930s, and that gets replaced by a jet engine system working on a different principle. So the question really is... I've [00:35:00] begun to realize that what makes an invention is that it works on a different principle. When clocks came along, the really primitive ones in the 1200s, or a bit later than that, they were water clocks, relying on the idea that a drip of water is fairly regular if you set it up that way. And around the time of Galileo, and in fact Galileo himself, people realized that the pendulum had a particular regular beat, and if you could harness that regularity, it might turn into something that could measure time: a clock. And that's a different principle. The principle is to use the idea that something on the end of a string, or on the end of a piece of wire, gives you a regular [00:36:00] frequency, a regular beat. So I came to realize that inventions themselves carry out some necessary purpose using a different principle. Before the Second World War in Britain, in the mid 1930s, people got worried about aircraft coming from the continent.
They thought there could well be a war, with bombers coming over to bomb England, and the standard method then to detect bombers over the horizon was to get people with incredibly good hearing, quite often blind people, and attach to their ear an enormous ear trumpet affair that went from their ear to some big concrete collecting amplifier, an ear trumpet that was maybe fifty or a hundred [00:37:00] feet across, to listen to what was going on in the sky. And a few years later, in the mid thirties, they began to look for something better, and made use of a discovery, the fact being well known in physics by then, that if you bounced a very high frequency electromagnetic beam off, say, a piece of metal, the metal would distort the beam. It would kind of echo, and you'd get distortions. If it was just a door three miles away made of wood, it wouldn't have that effect, but if it was metal, it would. So that's a different principle. You're not listening; you're actually sending out a beam of something and then trying to detect the echo. And from that you get radar. How do you create such a beam? How do [00:38:00] you switch it off very fast so you can listen for an echo electronically? How do you direct the beam, et cetera, et cetera? How do you construct the whole thing? How can you get a very high energy beam, because it needed to be very high energy? These are all problems that had to be solved. So what I began to see was the same pattern governing invention. It usually begins with an outstanding problem: how do we detect enemy bombers that might come from the east, from the continent, if we need to? How do we produce a lot of cars more efficiently? And then finding some principle to do that, meaning the idea of using some phenomenon. In the case of ear trumpets, it was acoustic phenomena, which could be greatly amplified for somebody's ear if you directed them into a big [00:39:00] concrete ear, right?
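The echo principle described here comes down to time-of-flight ranging: the pulse travels out and back at the speed of light, so the round-trip delay gives the range. A minimal sketch; the 30 km figure is illustrative, not from the interview:

```python
# Radar ranging: a pulse travels to the target and back at the speed of
# light, so range = c * round_trip_delay / 2.
C = 299_792_458  # speed of light in m/s

def echo_range_m(round_trip_delay_s: float) -> float:
    """Distance to the target implied by an echo's round-trip delay."""
    return C * round_trip_delay_s / 2

# An aircraft 30 km away echoes back after roughly 200 microseconds,
# which is why the transmitter must be switched off extremely fast.
delay_s = 2 * 30_000 / C
print(f"echo after {delay_s * 1e6:.0f} microseconds")
```

The same arithmetic explains the sub-problem mentioned next: a nearby echo returns within microseconds, so the transmitter has to be off by then or the echo is drowned out.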
A different way is to put out high frequency radio beams and listen for an echo. Once you have the principle, it turns out there are sub problems that go with it. In the case of radar: how do you switch the beam off? Things are traveling at the speed of light, so you have to switch it off fast enough that the echo isn't drowned out by the original signal. So then you're into another layer of solving another problem. And an invention, usually... well, I could talk about some other ways to look at it, but my way of looking at an invention is that there is nearly always a strong social need. What do we do about COVID? The time frame [00:40:00] is February, March 2020. Okay, we can do a vaccine. The vaccine might work on a different principle, maybe messenger RNA rather than the standard sort of vaccines. And so you find a different principle, but even getting that to work brings its own sub problems. And then, with a bit of luck and hard work, usually over several months or years, you solve the sub problems. You manage to put all that in material terms, not just conceptual ones, make it into some physical thing that works, and you have an invention. And so, to double click on that, couldn't you argue that the solutions to those sub problems are also in themselves inventions, and so it's just inventions all the way down? [00:41:00] Great point there. I hadn't thought of that. Possibly: if the sub solutions need to use a new principle themselves, then you'd have to invent how that might work. But very often they're standing by. Let me give you an example. I hope this isn't... I don't want to be too technical here. Please go ahead. Okay, here we go then. It's 1972, here at Xerox PARC where I'm sitting, and the engineer, Gary Starkweather is his name, a brilliant engineer, trained in lasers and in optics, PhD and master's degrees, really smart guy.
And he's trying to [00:42:00] figure out how to print. If you have an image in a computer, say a photograph, how do you print that? Now at that time, and in fact I can remember that time, there were things called line printers, and they're like huge typewriter systems. There was one central computer. You put in your job, the output was figured out on the computer, and then a central line printer, which is like a big industrial typewriter, clanked away on paper, and somebody tore off the paper and handed it to you through a window. Gary Starkweather wondered how you could print text, but more than that, images, where you weren't using a typewriter. It's very hard with typewriters, and very slow, if you want images. So he [00:43:00] cooked up a principle. He went through several principles, but the one he finished up using was the idea that you could take the information from the computer, say a photograph, use computer processors to send that to a laser, and the laser's beam would be incredibly highly focused. And he realized that if he could use a laser beam to, the jargon is to paint, the image onto the xerographic drum, so that it electrically charged the drum, then particles would stick to the drum in the charged places, and the rest would be xerography, like a copier machine. He was working at Xerox PARC; [00:44:00] this was not a huge leap of the imagination. But there were immense sub-problems in this as well that I want to mention. If you look at it, there are two huge problems. You were trying to get these black dots to write on a xerographic drum, to paint them onto the drum. I hope this isn't obscure. No, this is great, and I'll include some pictures. This is great. All right. So suppose I'm writing or painting a photograph from the computer, through a processor, sent to a laser. The laser has to be able to switch on and off fast.
If it's going to write this on a xerographic drum, and if you work out commercially how fast it would have to operate, Starkweather came to the conclusion he'd have to be able to switch his [00:45:00] laser on and off, black or white, 50 million times a second. Okay, so 50 megahertz. But nobody had thought of modulating, or doing that sort of switching, at that speed. So he had to solve that. That's a major problem. He solved it with circuitry. He got some sort of piezoelectric device, kind of, don't ask, but he got an electronic device that could switch on and off. Then he could send signals to a modulator, and the modulator would switch the laser on and off and make it black or white as needed. And so that was number one. Now that, in your terms, required an invention: he had to think of a new principle to solve that problem. So how do you [00:46:00] print computer images onto paper? That required a new principle. Switching a laser on and off 50 million times a second required a new principle. So those are two inventions. There's a third one, another sub problem. The device, by the way, that he had to do this with was as big as one of these rooms in 1972. If I have the numbers right, a decent laser would cost you about $50,000, and you could have bought a house here for that in 1978. And it would be the size, not of a house, but of a pretty big lab apparatus, not something inside a tiny machine but an enormous apparatus. So how do you take [00:47:00] a laser on the end of some huge apparatus that you're switching on and off 50 million times a second, and scan it back and forth? There's huge inertia; it's an enormous thing. And believe it or not, he solved that not with smoke, but with mirrors.
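The 50-megahertz figure is roughly what back-of-the-envelope arithmetic gives: every dot on the page is one on/off decision. The resolution and throughput below are assumptions for illustration, not Starkweather's actual design parameters:

```python
# Required laser modulation rate = dots per page * pages per second.
# Illustrative numbers, not the actual 1972 specs.
dpi = 500                         # assumed dots per inch
page_w_in, page_h_in = 8.5, 11.0  # US letter page, inches
pages_per_second = 2              # assumed commercial throughput

dots_per_page = (page_w_in * dpi) * (page_h_in * dpi)
switch_rate_hz = dots_per_page * pages_per_second
print(f"about {switch_rate_hz / 1e6:.0f} million switches per second")
```

With these assumed numbers the rate lands near 47 million switches per second, the same order as the 50 MHz quoted in the conversation.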
So actually, instead of moving the laser beam, he arranged for a series of mirrors on a revolving piece of apparatus. Yeah. All he had to do was point the beam at the mirror and switch it on and off very quickly for the image, and the mirror would direct it, kind of like a lighthouse beam, right across the page. And then the next [00:48:00] facet of the mirror, the next little mirror, would come along and do the next line. So how do you do that? Well, that was easier. But then he discovered that the different facets on this mirror would have to line up to some extraordinarily high precision that you could not manufacture them to. So that's another sub problem. To solve that, he used optics. So here's one facet of the mirror, here's the beam, and it directs the beam right across the page, switching it off and on as need be. Then the next facet of the mirror comes round and sweeps the same beam, and you want the lines to line up extraordinarily precisely. You couldn't do it [00:49:00] with manufacturing technology, but you could do it with optics, which just said, okay, if there's a slight discrepancy, we will correct that. He had a degree in optics; he really knew what he was doing with optics in the lab. So using different lenses, different condensing lenses, whatever lenses do, he solved that problem. It took two or three years, and it's interesting to look at the lab notebooks that he made. But for me, let me see if I can summarize this. There is no such thing as Gary Starkweather scratching his head saying, wouldn't it be lovely to be able to print images off the computer and not have to use a big typewriter, and then sitting in his attic by himself for three months and coming up with the solution. Not at all. What he did was envisage a [00:50:00] different principle: writing the image, using a highly focused laser beam, onto the xerographic drum.
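The rotating-mirror arrangement turns a hard motion problem into simple rates: each facet sweeps one scan line, so the line rate is just facet count times revolutions per second. Facet count, motor speed, and resolution below are hypothetical:

```python
# Polygon-mirror scanning: each facet of the spinning mirror paints one
# line across the drum. Facet count and motor speed are hypothetical.
facets = 8
rpm = 15_000

lines_per_second = facets * rpm / 60
dots_per_line = 8.5 * 500          # assumed page width at 500 dpi
dot_clock_hz = lines_per_second * dots_per_line

print(f"{lines_per_second:.0f} scan lines/s, "
      f"dot clock {dot_clock_hz / 1e6:.1f} MHz")
```

Only the small mirror spins; the heavy laser stays put, which is the whole point of the design.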
The rest then is just the copier machine affair. But to do that, you have to switch the laser beam on and off: a problem. So at a lower level he had to invent a way to do that. And he also had to invent a principle for scanning this beam across the xerographic drum, maybe 50 times a second, or maybe a hundred times a second, without moving the entire apparatus. And the principle he came up with for that was mirrors. Yeah. And then I could go down to another level: you have to align your mirrors. So what I discovered, and see if I can put this in a nutshell: [00:51:00] invention isn't doing something supremely creative in your mind. It finishes up that way, it might be very creative, but all invention is basically problem-solving. Yeah. To take something more mundane: imagine I live here in Palo Alto, let's say I work in the financial district in San Francisco, and let's say my car's in the shop getting repaired. How am I going to get to work, or how am I going to get my work done tomorrow? I have no car. The level of principle is to say, okay, I can see an overall concept to do it with. So I might say, all right, if I can get to Caltrain, if I can get to the station, I'll go in on the train. But hang on, how do I get to the station? So that's a sub problem. [00:52:00] Maybe I can get my daughter or my wife to drive me. At the other end I can get an Uber, or I could get a colleague to pick me up, but then I'd have to get up an hour earlier. Or maybe I'll just stay at home and work from home, which is more the solution we would use these days. But how will that work? Because I... et cetera. So invention is not much different from that. In fact, that's the heart of invention.
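The nested structure described here, a problem solved by a principle whose sub-problems must then be solved in turn, can be sketched as a recursion. The problem tree below is a loose caricature of the laser-printer story, with made-up labels:

```python
# Invention as recursive problem-solving: a problem is solvable if some
# candidate principle exists whose sub-problems are all solvable in turn
# (possibly by components already "standing by"). A toy sketch only.

# principle -> sub-problems it raises (empty list = off-the-shelf)
PRINCIPLES = {
    "laser printing": ["switch laser at 50 MHz", "scan beam across drum"],
    "piezoelectric modulator": [],
    "rotating mirror": ["align mirror facets"],
    "corrective optics": [],
}

# problem -> candidate principles that might address it
CANDIDATES = {
    "print computer images": ["laser printing"],
    "switch laser at 50 MHz": ["piezoelectric modulator"],
    "scan beam across drum": ["rotating mirror"],
    "align mirror facets": ["corrective optics"],
}

def solvable(problem: str) -> bool:
    """True if some candidate principle's sub-problems all check out."""
    return any(all(solvable(sub) for sub in PRINCIPLES[p])
               for p in CANDIDATES.get(problem, []))

print(solvable("print computer images"))  # True
```

A problem with no known candidate principle simply comes back unsolvable, which matches the point that the whole chain fails unless every sub-problem yields.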
If we worked out that problem of getting to work when your car is gone, nobody would stand up and say this was brilliant, yet you've gone through exactly the same process as the guy who invented the polymerase chain reaction. Again, I can't recall his name; getting older, I can't help it. But anyway, [00:53:00] what's really important in invention, and I think this goes to your mission if I understand it rightly, is that the people who produce inventions are people who are enormously familiar with what I would call functionalities. Yeah. How do you align beams using optical systems? How do you switch lasers on and off fast? The people who are fluent at invention are always people who know huge amounts about those functionalities. I'm trained as an electrical engineer. You're, what is it? I'm trained as a mechanical engineer, robotics. Oh yeah. Brilliant. So what's really important [00:54:00] in engineering, at least what they teach you apart from all that mathematics, is to know certain functionalities. You can use capacitors and inductors to create electronic oscillations, regular waves. You can straighten out a varying voltage by using induction in the system. You can store energy in capacitors and use that. You can deflect a beam using magnets. And there are hundreds of such things. You can amplify things; you can use feedback to stabilize things. So there are many functionalities, and learning engineering is a bit like becoming fluent in this set of functionalities. It's not unlike learning anything that's semi-[00:55:00]creative. What might that be? Say, learning to do plumbing. Yep. Learning to work as a plumber, or as a true engineer. It is a matter of becoming fluent. You want to connect pipes in plumbing, you want to loosen pipes, you want to unclog things, you want to redo the piping system or pumping system, you want to add a pump. So there are many different things you're dealing with.
You're dealing with flows of liquids, usually, in piping systems and pumping systems and filtration systems. So after maybe three or four years, whatever it would be, of a real apprenticeship in this, not only can you do it, you can do it unthinkingly. You know the exact gauges, you know the pieces, you know the parts, you know where to get the parts, you know how to set them up, and you look at [00:56:00] some problem and say, oh, okay, the real problem here is that the piping diameter is wrong, I'm going to replace it with something a bit larger, and here's how I do that. So being good at invention is no different. People like Starkweather, who I think is still alive, know all about mirrors and optical systems; above all, he knew an awful lot about lasers, and he knew a lot about electronics. He was fluent in all those. If we're not fluent ourselves, we stand back and say, wow, how did he do that? But it's a bit like a poem in French. Let's say I don't speak French: somebody writes a poem in French and it works. How did he [00:57:00] do that? But if I spoke French, I might see. So, okay. Yeah, but I can see, so this actually touches on an extension of your framework that I wanted to run by you. What you were just describing is almost the affordances and constraints of different pieces of technology, and people who invent things being very intimately familiar with the affordances and constraints of different technologies, different systems. And so the question I have, which I think is an open question, is whether there is a way of describing or encoding these affordances and constraints [00:58:00] in a way that makes creating these inventions easier. In the sense that very often what you see is someone who knows a lot about
the affordances in one area, one discipline, and they come over to some other discipline and say, wait a minute, there's an analogy here. You have this constraint over here, there's a sub problem, and I know, from the affordances of the things I'm really familiar with, how to actually solve that sub problem. So through this framework of modularity and constraints and affordances, is it possible to actually make the process easier, or less serendipitous? Yeah, in a couple of ways. One is that I [00:59:00] think quite often you see a pattern where some principle is borrowed from a neighboring discipline. So you were saying that Henry Ford took the idea of a conveyor belt from the meat industry, and by analogy used the same principle in manufacturing cars. But to get that to work in the car industry, the limitations are different. Cars are a lot heavier: a whole side of beef is probably 300 pounds or whatever it would be, but a car could be a ton and a half. So you have to think of different ways. Yeah. And in the meat industry there are two different ways to do conveyor belts. You can have a standard belt, a rubber thing or whatever it would be, just moving along at a certain speed, or you [01:00:00] can have the carcass suspended from an overhead belt working with a chain system; the carcass is cut in half or whatever and suspended, and you can be working on it pretty much vertically, above you. It was that second system that tended to get used for cars. So things don't translate; principles translate from one area to another, and that's a very important mechanism. And so if you wanted to enhance innovation, I think the thing would be to set up some institution, or some way of looking at things, that asks:
there are well-known principles for doing this in industry X; how would I do something equivalent in a different industry? So, for [01:01:00] example, blockchain is basically, let's say, a way of validating transactions made privately between two parties without using an intermediary like a bank. And you could say, well, here's how this works with Bitcoin trading or something, and somebody could come along and say, well, okay, I want to validate art sales using maybe some similar principle, and I don't want to have to go to some central authority and record it there, so maybe I can use blockchain to do fine art sales. In fact, that's happening. So basically you see an enormous amount of analogous transfer of principles from [01:02:00] one field to another. And we tend to talk about inventions being adopted, at least we do in economics. So you could say the art trading system adopts blockchain, but it's not quite that; it's something more subtle. A new principle, or a new fairly general technology, comes out, say blockchain, and then different industries or different sets of activities encounter it. They don't adopt it; they encounter it. Oh, blockchain, okay. Now say I'm in the medical insurance business. I can record transactions this way, and I don't have to involve an intermediary; in particular, I don't have to go through banking systems, and I can do it this way and then [01:03:00] inform insurance companies. So they're encountering it and wondering how they can use this new principle. But when they do, they're not just taking it off the shelf. Yeah. They're actually incorporating it into what they do. So here's an example. GPS comes along quite a while ago, the 1970s I'm sure, in principle using atomic clocks and satellites.
Basically it's a way of recording time exactly, using multiple satellites whose positions are known exactly at the same time, and allowing for tiny effects, even of relativity. You can triangulate and figure out where something is, precisely. Yeah. Now, that just exists. But by the [01:04:00] time different industries, say ocean freight shipping, encounter it, they're not just bolting a little GPS unit on; it's actually built in, and it becomes part of a whole navigational system. Yeah. So what happens in things like that is that some invention, or some new possibility, becomes a component in what's already done. Just as in banking around the 1970s: being able to process customer names, client names, and monetary amounts, you could process that fast with electronic computers, and in those days they were [01:05:00] called data processing units. We don't think of it that way now, but you could process all that, and that changed the banking industry significantly. So by 1973 there was a market in futures in Chicago, where you were dealing with, say, pork belly futures and things like that, because computation had come in. Interesting. So the pattern: there's always an industry that exists using conventional ideas, and a new set of technologies becomes available. But the industry doesn't quite adopt it; it encounters it and combines it with many of its own operations. So banking has been recording people in ledgers, with machinery; it has been facilitating transactions, [01:06:00] maybe on paper. It encounters computation, and now it can do that automatically, using computation. So some hybrid thing is born out of banking and computation, and that goes into the Lego set. And actually, sort of related to that, something I was wondering is: do you think of social technology as technology? Do you think that follows the same patterns?
What do you mean, social technology? I think a very obvious one would be, for example, mortgages, right? Mortgages had to be invented, and they allow people to do things they couldn't do before, but it's not technology in the sense of something built. Yeah, exactly. You can create a mortgage with just you and me and a piece of [01:07:00] paper, right? But it's something that exists between us. Or democracy, right? So I feel like there's one end, things like new legal structures or new financial instruments, that feel very much like technology, and at the other end there are just vague new social norms. Yeah. Great question, and it's something I did have to think about. So things like labor unions, nation states... Yes, exactly. Democracy itself, and in fact communism. All kinds of things get created that don't look like technologies. They don't have the same feel as physical technologies. They're not humming away in some room or other; they're not under the hood of your [01:08:00] car. And things like insurance for widows, and pension systems. There are many of those social technologies, even things like Facebook, platforms for exchanging information. Sometimes, very occasionally, things like that are created by people sitting down scratching their heads. That must have happened to some degree in the 1930s when Roosevelt said there should be a social security system. But that wasn't invented from scratch either. So, just to get at the nitty gritty here, what tends to happen is that some arrangement happens. Somebody, maybe a feudal lord, says: okay, you're my trusted gamekeeper, you can have a [01:09:00] rather nice single house on my estate. You haven't got the money to purchase and build it.
I will lend you the money, and you can repay me as time goes by. And in fact, so many of those things have French names: mortgage, which I think actually refers to something dying, as far as my school French goes, I don't know. A lot of those things came about in the Middle Ages. There are other things, like what happens when somebody dies: probate. Again, these are all things that go back for centuries and centuries. I believe the way they come about is not by deliberate invention. They come about by being a natural response [01:10:00] to something, and then that natural thing is used again and again, it gets a name, and then somebody comes along and says, let's institutionalize this. So I remember reading somewhere about the Middle Ages: there was some guild of traders, and they didn't feel they were being treated fairly. I think this was in London. And so they decided to withhold their services. I don't know what they were supplying; it could have been, you know, carriage transport along the streets or something. And some of these people were called, would it be, valets? Again, very French. So they withheld their services. Now, that wouldn't have been the first time; [01:11:00] it goes back to Egypt, people withholding their services. But that gets into circulation as a meme, as some repeated thing. Yeah. And then somebody says, okay, we're going to form an organization, and our guild is going to take this on board as a usable strategy, and we'll even give it a name: it came to be called going on strike, or striking. And so social invention can take place just by something being the sensible thing to do. The grand lord gives you the money to build your own house, and then you pay that person back over many years [01:12:00] and put that loan to its death, and mortgage it.
So I think what happens in these social inventions is that the sensible thing to do gets a name, gets instituted, and then something's built around it. Well, one could also say that many inventions are also the sensible thing to do, where someone realizes, oh, I can use this material instead of that material, or some small tweak that then enables a new set of capabilities. Well, yeah, but in that case I wouldn't call it really an invention. The vast majority of innovations, 99 point something something percent, are tweaks: [01:13:00] we'll replace this material. Well, why doesn't that count as an invention? If it's a different material, I guess, why doesn't that also count as a new principle? It's like bringing a new principle to the thing. The way I define a principle: a principle is the idea of using some phenomenon. And so you could say there's a sliding scale, if you insist. Up until about 1926 or 1930, aircraft were made of wooden lengths covered with canvas and dope, the dope giving you waterproofing and so on. And then a different way of doing that came along, when they discovered that with better engines you could have heavier aircraft, so you could make the skeleton out of [01:14:00] metal, and then the cladding might be metal as well. And so you had modern metallic aircraft. There's no new principle there, but there is a new material, and if you argue, well, the new material is a different principle, then you're just talking about linguistics. So you would not consider the transition from cloth aircraft to metal aircraft to be an invention? No. Huh. I mean, sure, it might be a big deal, but I don't see it as a major invention. Going from air piston engines to jet engines, that's a different principle entirely. And so I have a fairly high bar for different principles.
But you're not using a different phenomenon — that's my criterion. If you have a very primitive clock [01:15:00] in the 1620s or 1640s that uses a string and a bob on the end of the string, and then you replace the string with a wire, a rigid piece of metal, you're not really using a new phenomenon, but you are using different materials. And much of the story of technology isn't inventions — it's these small but very telling improvements in materials. In fact, jet engines weren't very useful until you got combustion systems where you were putting in aircraft fuel, atomizing it, and setting the whole thing on fire — the early systems failed. When you got better materials, you could make it work. So there's a difference between a primitive technology and [01:16:00] one that's built out of better components. So I would say something like this: if you take what the car looked like in 1905, is it a different thing from using horses? Yeah, because it's automotive — there is an engine, it's built in. So, for my money, it's using a different principle.

What if you changed — what if you took the horse and you put it inside the carriage? Like, what if you built the carriage around the horse — would that be automotive? Or what if I had a horse on a treadmill, and that treadmill was driving the wheels of the vehicle with the horse on it?

Then I think it would be less of an invention. I don't know. I mean, I find it very useful to say [01:17:00] that radar uses a different principle from people listening. You could say, well, people listening are listening for vibrations, and so is radar — just electromagnetic vibrations. What's different, for my money — it's not so much about the word "principle." All technologies are built around phenomena that they're harvesting or harnessing to make use of.
And if you use a different set of phenomena, in a different way, I would call it an invention. So if you go from a water wheel, which is using water and gravity to turn something, and you say, I'm using the steam engine — you [01:18:00] could argue, well, aren't you using a phenomenon in both? In the first thing you're using the weight of water and gravity, and the fact that you can turn something. And in the second thing you're using the different principle of heating something and having it expand. I would say those are different principles. And if you're asking whether there's a different principle, I'd go back to: well, what phenomena are you using? So, yeah, I mean, if you wanted to be part of a philosophy department, you could probably question every damned thing.

Yeah. I'm actually not trying to challenge it from a semantic standpoint. It's really about understanding what's going on. I think there's actually a sort of debate about whether [01:19:00] it's a fractal thing, or whether there are multiple different processes going on as well.

Maybe I'm just too simple, but when I started to look at invention, the state of the art was pathetic. It wasn't very good, because all the papers — all the versions of invention I was reading — all of them had a step where something massively creative happens, and that wasn't very satisfactory. And then there was another set of ideas that were Darwinian: if you have something new, like the railway locomotive, that must have come out of variations somehow happening spontaneously, which might have been sufficiently different to qualify as radically new inventions. That doesn't do it for me either, because, you know, in 1930 you could have varied [01:20:00] radio circuits until you were blue in the face — you'd never get radar. Yeah.
So what a technology is, fundamentally, is the use of some set of phenomena to carry out some purpose. There are multiple phenomena. But I would say — and this is maybe slightly too loose speaking — that the principal phenomenon you're using, the key phenomenon, constitutes the concept or principle behind that technology. So if you have a sailing ship, you could argue, well, you know, it displaces water, it's built not to take on water, it's got a cargo space — but actually, for sailing ships, the key principle is to use the motive power of wind in clever ways to be able to propel a [01:21:00] ship. If you're using steam and take the sails down, you're using, in my opinion, a different principle, a different phenomenon. You're not using the motive power of wind; you're actually using the energy that's in some coal fuel or oil, in clever ways, to move the ship. So I would see those as two different principles. You could say, well, we also changed, whatever, the steering system — does that make it an invention? It makes maybe that part of it an invention. But overall, the story I'm giving is that inventions come along when you see a different principle — a set of phenomena that you want to use for some given purpose — and you manage to solve the problems to put that into reality.

Yeah, I completely agree [01:22:00] with that. The thing that I'm interested in is — again, we go back to that modular view. You sort of have many layers down, and the tinkering, the innovations, are often based on changing the phenomena that are being harnessed, but much farther down the hierarchy of modularity. Like, in sailing ships, you introduce lateen sails, right? You've invented a new sail system — you haven't invented a new kind of ship. Right.
So you've changed the phenomenon — but yeah, I think the distinction you're making is totally on target. When you introduce lateen sails, you have invented a new [01:23:00] sail system. Right. But you haven't invented a new principle of a sailing ship. It's still a sailing ship. So I think you're getting into details that are worth getting into. At the time I was writing this — I'm not trying to be defensive here, I hope, and I'm not trying to be offensive in any way; bear with me, I haven't thought about this for ten years or more. I think what was important — yeah, let's just say this whole thing said innovation happens, and nobody's quite sure what innovation is, but we have a vague idea: it's new stuff that works better. Yes. In the book I wrote, I make a distinction around radically new ways to do something. So it's radically new to propel a ship by a [01:24:00] steam engine, even if you're using paddles, versus by wind flow. Okay. However, not everything's radically new. And if you look at any technology, be it computers or cars — the insides: the actual carburetor system in the 1960s would have been like a perfume sprayer, spraying gasoline and atomizing it, and then setting that alight. Now we might have some sort of turbo-injection system that's working, maybe not with a very different principle, but much more efficiently. So you might have a technology whose insides are changing enormously, but the overall idea of that [01:25:00] technology hasn't changed much. Radar would be a perfect example. So would the computer: the computer has kept changing its inner circuitry, the materials it's using, and those inner circuits have gotten an awful lot faster, and so on. Now, you could take a circuit out, and you could say, well, sometime around 1960, the circuits ceased to be vacuum tubes and became transistors mounted on boards. But then sometime in that decade, they became integrated circuits. Was the integrated circuit an invention? Yeah — at the circuit level. At the computer level, a better component. Yeah.

I hope that — that absolutely helps. I guess, as actually a sort of closing question: is there work that you [01:26:00] hope people will do based on what you've written? Is there a line of work that you want people to be doing — to take the framework that you've laid out and run with it? Because I guess I feel like there's so much more to do. Do you have a sense of what that program would look like? What questions are still unanswered in your mind that you think are really interesting?

I think that's a wonderful question. Off the record, I'm really glad you're here, because it's like visiting where you grew up. I'm the ghost of — of books past. Oh, I don't know. I mean, it's funny — I was interviewed a month or two ago on [01:27:00] this subject. I can send you a link if you want.

Please, yeah. I listen to tons of podcasts.

Anyway, I went back and read the book.

And you're like, wow, I'm really smart.

Well, it had that effect. And then I thought, well, God, you know, it could have been a lot better written. It had all sorts of different things. And the year it was produced, Free Press in New York — actually Simon & Schuster — put it up for a Pulitzer Prize. That really surprised me, because I didn't set out to write something well-written; I just kept clarifying the thing. To come back to your question: my reflection is this. The purpose of my book was to actually look inside technologies.
So [01:28:00] when you open them up — meaning you look at the inside components, how those work — ultimately the parts of a technology are always using some phenomenon: you know, we can ignite gasoline in a cylinder in a car, and that will expand rapidly and produce force. So there are all kinds of phenomena. These were things I wanted to say. And yeah, that book has had a funny effect. It has a very large number of followers, meaning people have read it and it gave them a feel for technology, and they're grateful that somebody came along and gave them a way to look at technology. Yeah. But — let me just say it carefully — I've done other things in research [01:29:00] that have had far more widespread notice than this. And I think the study of technology, as I was saying earlier on, is a bit of a backwater in academic studies. Yeah. It's eclipsed — is that the word? — dazzled by science. So I think it's very hard — if something wonderful happens, we put people on the moon, we come up with artificial intelligence — vaguely, that's supposed to be done by scientists. It's not; it's done by engineers, who are very often highly conversant with both science and mathematics. But as a matter of prestige, a [01:30:00] lot of what should have been theories of technology — where technologies come from — has sort of gone into theories of science. And I would simply point out: no technology, no science. You can't do much science without telescopes, crystallography, X-ray systems, microscopes. So you need all of these technologies to give you modern science. Without those instruments, we'd still have technology, we'd still have science, but it would be at the level of the Greeks, which would
In this conversation, Jason Crawford and I talk about starting a nonprofit organization, changing conceptions of progress, why the answer to "what happened in 1971?" may lie 26 years earlier, at the end of WWII, and more. Jason is the proprietor of Roots of Progress, a blog and educational hub that has recently become a full-fledged nonprofit devoted to the philosophy of progress. Jason's a returning guest to the podcast — we first spoke in 2019, relatively soon after he went full time on the project. I thought it would be interesting to do an update now that Roots of Progress is entering a new stage of its evolution. Links: Roots of Progress; Nonprofit announcement. Transcript:

So what was the impetus to switch from being an independent researcher to actually starting a nonprofit? I'm really interested in that.

Yeah. The basic thing was understanding, or getting a sense of, the level of support that was actually out there for what I was doing. In brief, people wanted to give me money, and one of the best ways to receive and manage funds is to have an actual nonprofit organization. And I realized there was actually enough support to support more than just myself — which is what I'd been doing, you know, as an independent researcher for a year or two. There was actually enough to have some help around me, to basically make me more effective and further the mission. So I've already been able to hire research [00:02:00] assistants. Very soon I'm going to be putting out a wanted ad for a chief of staff — or, you know, sort of an everything assistant — to help with all sorts of operations and project management and things. And so having these folks around me is going to help me do a lot more, and it's going to let me delegate everything that I can possibly delegate and focus on the things that only I can do, which is mostly research and writing.

Nice. And it seems like it would be possible to take money and hire people and do all that without forming a nonprofit.
So what's, in your mind, the thing that makes it worth it?

Well, for one thing, it's a lot easier to receive money when you have an organization that is designated with 501(c)(3) tax status in the United States — that is the status that makes donations tax deductible, whereas donations to other types of nonprofits are not. I had had issues in the past: one organization wanted to [00:03:00] give me a grant as an independent researcher, but they didn't want to give it to an individual — they wanted it to go through a 501(c)(3). So then I had to get another organization to receive the donation for me and then turn around and re-grant it to me. And that was just, you know, complicated overhead. Some organizations didn't want to do that all the time. So it was just much simpler to keep doing this if I had my own organization.

And do you have a broad vision for the organization?

Absolutely, yes. And it is essentially the same as the vision for my work, which I recently articulated in an essay on rootsofprogress.org. We need a new philosophy of progress for the 21st century, and establishing such a philosophy is my personal mission — and is the mission of the organization. To very briefly frame this: the 19th century had a very strong and positive, you know, pro-progress vision of what progress was and what it could do for humanity. And in the [00:04:00] 20th century, that optimism faded into skepticism and fear and distrust. And I think there are ways in which the 19th century philosophy of progress was perhaps naively optimistic — I don't think we should go back to that at all — but I think we need to rescue the idea of progress itself.
Which the 20th century sort of fell out of love with. And we need to find ways to acknowledge and address the very real problems and risks of progress while not losing our fundamental optimism and confidence and will to move forward. We need to recapture that idea of progress and that fundamental belief in our own agency, so that we can go forward in the 21st century with progress — while doing so in a way that is fundamentally safe and benefits all of humanity.

And since you mentioned philosophy, I'm going to ask you a very weird question that's related to something I've been thinking about. [00:05:00] In addition to the fact that I completely agree the philosophy of progress needs to be updated, recreated — it feels like the same thing needs to be done with the idea of classical liberalism. I think both of these philosophies (a) are related, and (b) were created in a world that just has different assumptions than we have today. Have you thought about how those two philosophical updates relate?

Yeah. So first off, just on that question of reinventing classical liberalism, I think you're right. Let me take this as an opportunity to plug a couple of publications that I think are exploring this concept. The first I'll mention is Palladium. I mention it because the founding essay of Palladium, which was written by Jonah Bennett, is I think a good statement of the problem of why classical liberalism is [00:06:00] — or, I think he called it the liberal order, which is maybe a slightly different thing — but, you know, the basic idea of representative democracy, or constitutional republics with representative democracy, and basic ideas of freedom of speech and other human rights and individual rights.
You know, all of that as being the basic world order — Jonah was saying that that is in question now. And there's essentially — okay, I'm going to frame this my own way; I don't know if this is exactly how Jonah would put it — but there's basically now a fight between the abolitionists and the reformists, right? Those who think that the liberal order is fundamentally corrupt and needs to be burned to the ground and replaced, versus those who think it's fundamentally sound but may have problems and therefore needs reform. And I think Jonah is on the reform side, and I'm on the reform side. I think, you know, Western institutions — the institutions of the Enlightenment, let's say — are [00:07:00] fundamentally sound and need reform, rather than just being razed to the ground. This was also a theme towards the end of Enlightenment Now by Steven Pinker: a lot of why he wrote that book was to counter the fundamental narrative of declinism. If you believe that the world is going to hell, then it makes sense to question the fundamental institutions that have brought us here, and it kind of makes sense to have a burn-it-all-to-the-ground mentality, right? Those things go together. Whereas if you believe that we've actually made a lot of progress over the last couple of hundred years, then you say, hey, these institutions are actually serving us very well — and if there are problems with them, let's address those problems with a reformist type of approach, not an abolitionist type of approach. So Jonah Bennett was one of the co-founders of Palladium, and that's an interesting magazine I recommend checking out. Another publication that's addressing some of these concepts is Persuasion, by Yascha Mounk. Yascha was a part of The Atlantic, as I recall.
[00:08:00] And he basically wanted to make a home for people who were maybe left-leaning, or, you know, would call themselves liberals, but did not like the new sort of woke ideology arising on the left, and wanted to carve out a space for free speech and for — I don't know, just a different, non-woke liberalism, let's say. And so Persuasion is a Substack and a community. That's an interesting one. And then the third one I'll mention is called Symposium, and that is done by a friend of mine, Rob Tracinski, who himself would maybe consider himself more right-leaning — or maybe would just call himself more of an individualist, or an independent, or, you know, something else. But I think he maybe appeals more to people who are a little more right-leaning. He also wanted — you know, I think a lot of people, both on the right and the left, are wanting to break away from both wokeism and Trumpism, and find something that's neither of those things. And so we're seeing this interesting moment where people on the right and left are actually [00:09:00] coming together to try to find a third alternative to where those two sides are going. So Symposium is another publication where people are coming together to discuss: what is this idea of liberalism? What does it mean? I think Tracinski said that he wanted Symposium to be the kind of place where Steven Pinker and George Will could come together to discuss what liberalism means — and then he literally had that as a podcast episode, those two people. So anyway, recommend checking it out — and Rob is a very good writer. So Palladium, Persuasion, and Symposium: those are the three I recommend checking out to explore this kind of idea.

Nice. Yeah. And I guess in my head it actually hooks in — it's extremely coupled to progress.
Because I think in a lot of places there's almost this tension between ideas of classical liberalism, like property rights, and things that we would see as progress. Right? It's like, okay, you want to build your [00:10:00] Hyperloop, right? But then you need to build that Hyperloop through a lot of people's property. And there's this fundamental tension there. And — look, I don't have a good answer for that, but I'm just thinking about that, vis-a-vis...

It's true. At the same time, I think it's a very good and healthy and important tension. I agree, because — so, you know, I tend to think that there were at least two big ideas in the Enlightenment, maybe more than two. One of them was reason, science, and the technological progress that hopefully those would lead to. But the other was individualism and liberty. And I think what we saw in the 20th century is that when you have one of those without the other, it leads to disaster. So in particular, the communists of the Soviet Union were [00:11:00] enamored of some concept of progress that they had. It was a concept of progress — they got the science and the industry part, but they didn't get the individualism and the liberty part. And when you do that, what you end up with is a concept of progress that's actually detached from what it ought to be founded on — which is, to me, progress for individual human lives and their happiness and thriving and flourishing. When you detach those things, you end up with an abstract concept of progress — somehow progress for society — that ends up not being progress for any individual.
And that, as I think we saw in the Soviet Union and other places, is a nightmare. It leads to totalitarianism, and it leads to — in the specific case of the Soviet Union — mass famine, not to mention oppression. So one of the big lessons — going back to what I said towards the beginning — is that the 19th century philosophy of progress had, I think, a bit of a naive optimism. And part of [00:12:00] the naivete of that optimism was the hope that all forms of progress would go together, hand in hand — that technological progress and moral and social progress would go together. In fact, towards the end of the 19th century, some people were hopeful that the expansion of industry and the growth of trade between nations would lead to a new era of world peace — the end of war. And the 20th century obviously proved this wrong, right? A devastating, dramatic proof. And my hypothesis right now is that it was the world wars that really shattered the optimism of the 19th century. They really proved that technological progress does not automatically lead to moral progress — and the dropping of the atomic bomb was a horrible exclamation point on this entire lesson, right? The nuclear bomb was obviously a product of modern science, modern technology, and modern industry, and it was the most horrifically destructive [00:13:00] weapon ever. So I think with that, people saw that these things don't automatically go together. And I think the big lesson from that era and from history is that technological progress and moral and social progress are independent things that we have to pursue, each in their own right. Technological progress does not create value for humanity unless it is embedded in the context of good moral and social systems.
You know, that's the lesson of, for instance, the cotton gin and American slavery. It's the lesson of the Soviet agricultural experiments that ended in famine. It's the lesson of the Chinese Great Leap Forward, and so forth. In all of those cases, what was missing was liberty and freedom and individual human rights. So those are things that we must absolutely protect, even as we move technological and industrial progress forward. Technological progress ultimately is [00:14:00] progress for people. And if it's not progress for people — progress for individuals, not just collectives — then it is not progress at all.

I agree with all of that, except the thing I would poke at is: I feel like the 1950s might be a counterpoint to the world wars destroying 20th-century optimism. Or do you think that's just a delayed effect?

I think the 1950s were a holdover. I think these things take a generation to really sink in. And so this is my fundamental answer, at the moment, to "what happened in 1971?" — you know, people ask this question, or 1970, or '73, or whatever date around there. I think the right question to ask is: what happened in 1945 that took 25 years to sink in? And my answer is the world wars. And I think it is around this time that [00:15:00] you really start to see it. Even in the 1950s, if you read intellectuals and academics who were writing about this stuff, you start to read things like: well, you know, we can't just unabashedly promote quote-unquote "progress" anymore. People were starting to question this idea of progress, and so forth. I haven't yet done enough of the intellectual history to be certain that that's really where it begins, but that's the impression I've gotten anecdotally.
And so the hypothesis that's forming in my mind is that that's about when there was a real turning point. Now, to be clear, there were always skeptics of progress. From the very beginning of the Enlightenment, there was an anti-Enlightenment, reactionary, romantic backlash. From the beginnings of the industrial revolution, there were people who didn't like what was happening: Jean-Jacques Rousseau, you know, Mary Shelley, Karl Marx — you name it. But I think what was going on was that, essentially, the progress movement, or whatever — the people who were actually going forward and making scientific and technological progress — they [00:16:00] were doing that. They were winning, and they were winning because people could see the inventions coming. I mean, you know, imagine somebody born around 1870 or so, right? Just think of the things they would have seen happen in their lifetime: the telephone, the automobile and the airplane, the electric light bulb and the electric motor, the first plastics, indoor plumbing, water sanitation, vaccines — and if they lived long enough, antibiotics. Oh, and the Haber-Bosch process, right, and artificial or synthetic fertilizer. So there was just an enormous number of these amazing inventions that they would have seen happen. And so I basically think that the reactionary voices against technology and against progress were just drowned out by all of the cheering for the new inventions. And then my hypothesis is that what happened after World War II is — it wasn't so much that, you [00:17:00] know, the people who believed in progress suddenly stopped believing in it.
But I think what happens in these cases is: the people who believed in progress — their belief was shaken, and they lost some of their confidence, and they became less vocal, and their arguments started feeling a little weaker and having less weight. And conversely, the anti-progress folks — the reactionaries — were suddenly emboldened, and people were listening to them. And so they could come to the fore and say: see, we told you so. We've been telling you this for generations. We always knew this was going to happen. And so there was just a shift in who had the confidence, who was outspoken, and whose arguments people were listening to. And I think when you then have a whole generation of people who grew up in this new [00:18:00] milieu, you get essentially the counterculture of the 1960s, and you get Silent Spring, and you get protests against industry and technology and capitalism and civilization.

Do you think — this is just literally off the cuff — there might also be some kind of hedonic treadmill effect, where, you know, you see some rate of progress, and that starts to be normalized, and then...

It's true. It's true. And it's funny, because well before the world wars — even in the late 1800s and early 1900s — you can find people saying things like, essentially, kids these days don't realize how good they have it. You know, people don't even know the history of progress. I remember — I wrote about this, actually; I have an essay called something like "19th-century progress studies" — there was this guy who, even before the transcontinental railroad was built in the US in the sixties, back in the 1850s or so, [00:19:00] was campaigning for it.
And he wrote this whole big, long pamphlet promoting the idea of a transcontinental railroad, and he was trying to raise private money for it. And one of the things in this long — true to the 19th century, it was this long, wordy document — one part of this whole thing is he starts going into the whole history of transportation, back to like the 17th or 16th century, and the post roads that were established in Britain, and how those improved transportation — but also how, even in that era, people were speaking out against the post roads and opposing them.

Sidebar: have you seen that comic with the cavemen?

Caveman Science Fiction? Yes, I know exactly what you're talking about.

Yeah, I'll put it in the show notes. That one's pretty good.

So I'm blanking on this guy's name now. But he wrote this whole thing, and he basically said that the [00:20:00] story of progress has not even been told, and people don't know how far we've come. And, you know, somebody should really collect all of this history and tell it in an engaging way, so that people know how far we've come. And this is in like the 1850s. So this is before the transcontinental railroad was built, before the light bulb, before the internal combustion engine, before vaccines — you know, everything. That was pretty remarkable. I also remember there was an 1895 or '96 anniversary issue of Scientific American, where they went over 50 years of progress, and there was this bit in the beginning that was just like: yeah, you know, people just take progress for granted these days. And there was another, similar thing in the early 1900s I read, where somebody went out to find one of the inventors who'd improved
the mechanical reaper — I think it was somebody who'd invented an automatic binder for the sheaves of grain — and was saying something like, "People don't even remember the inventors who made the modern world. So [00:21:00] we've got to go find this inventor and interview him and record this for posterity." So you see this kind of "kids these days" attitude all throughout. I think that kind of thing is natural; it's sort of always happening. There's this constant complaint — at pretty much any time in history, you can find people complaining about the decline of morality and how the youth are so different.

The exposed ankles, right?

Exactly. So I think you have to somewhat separate out that sort of thing, which is constant and always with us, from what the intellectual class — the "clerisy," as Deirdre McCloskey likes to call it — was saying about progress, and what the general zeitgeist was. And I think that even though there are some constants — people always take for granted whatever they have, and every new invention is always opposed [00:22:00] and fought and feared — there is an overall zeitgeist that you can see changing from the late 19th century to the mid-20th century. And there are a couple of places where you can really see it. One is in the general attitude of people towards nature, and what mankind's relationship to nature is. In the 19th century, people talked unabashedly and unironically about the conquest of nature. They talked about nature almost as an enemy that we had to fight. And it sort of made sense — nature truly is red in tooth and claw.
It's not a loving mother that holds us in a nurturing embrace. The reality is that nature is frankly indifferent to us, and we have to make our way in the world — let's say both because of, and in spite of, nature. Nature obviously gives us everything that we need for life. But it gives none of it in a [00:23:00] convenient form. Everything that nature gives us is in a highly inconvenient form that we have to put through layers and layers of industrial processing to make into the convenient forms that we consume. David Deutsch makes a similar point at the beginning of The Beginning of Infinity, where he says that the idea of Earth as a biosphere, or the ecosystem as a life-support system, is absurd, because a life-support system is deliberately designed for maximum safety and convenience, whereas nature is nothing of the sort. So there was some justification to this view. But the way that people just unironically talked about conquering nature, mastering nature, taming nature, improving nature — the idea that the man-made, the synthetic, the artificial was simply expected to be better than nature — that is a little mind-blowing today. Plastic is a great example, [00:24:00] because plastic was invented — or arose — in this era when people were favorable to it, but then quickly transitioned into the era where it became one of the hated and demonized inventions.
In the early days — I think it was 1936 — Texas had some sort of state fair with a whole exhibition about plastics, and one woman who saw the exhibition was quoted as saying something like, "Oh, it's just wonderful how everything is synthetic these days." Which is something nobody would say now. Or there was a documentary about plastic called The Fourth Kingdom, and the framing was something like: in addition to the three kingdoms — animal, vegetable, and mineral — man has now added a fourth kingdom, whose boundaries are unlimited. Again, nobody would ever put it that way today.

And sometimes — to come back to the theme of naive optimism — this actually led [00:25:00] to problems. For instance — this still cracks me up — in the late 19th century, there were people who believed that we could improve on nature's distribution of plant and animal species: that nature was deficient in which species lived where, and that we could improve on this by importing species into non-native habitats. You can imagine some of this being for industrial or agricultural purposes, but literally some of it was just for aesthetic purposes. If I'm recalling this correctly, someone wanted to import into America all of the species of birds that were mentioned in Shakespeare. This was purely an aesthetic concern: hey, what if we had all these great songbirds from Britain here in America? Well, it turns out that importing species willy-nilly can create some real problems. By importing a bunch of foreign plants, we got a bunch of invasive pest species. And so this was a real [00:26:00] problem, and ultimately we had to clamp down.
Another example of this that is near to my heart right now, because I just became a dad a couple months ago —

Congratulations!

Thanks. It turns out that a few decades ago, people thought that infant formula was superior to breast milk, and there was this whole generation of kids, apparently, that was just raised on formula. And today — well, it turns out, oops: mother's breast milk has antibodies in it that protect against infection, and maybe some growth hormones, and — we don't even fully know. It's a really complicated biological formula that's been honed through millions or hundreds of millions of years of evolution, however long mammals have been around. So yeah, some of that old philosophy of progress was a little naive. Now, I think that someday we'll be able to make a synthetic [00:27:00] infant sustenance that could be better than what moms produce, and given the amount of trouble that some women have with breastfeeding, I think that will be a boon to them, and will just be part of the further story of technology liberating women. But we're not there yet, so we have to be realistic about where technology is.

So this relationship to nature is, I think, part of where you see the contrast between then and now. A related part is people's concept of growth, and how they regarded growth. Here's another one of these shocking stories that shows you that the past is a foreign country. In 1890, the United States census — which is done every ten years — was done for the first time with machines. We didn't yet have computers, but it was done for the first time with tabulating machines made by the Hollerith tabulating company.
The census had grown large and complicated enough that, if it hadn't been for these machines, they probably wouldn't have been able to get it done on time; it was becoming a huge clerical challenge. Okay, now — [00:28:00] this is an era when up-to-the-minute population estimates just weren't available. You couldn't Google "what's the population of the US" and get today's estimate. The number people had for the population of the US was about ten years old, and they were all curious, wondering: what's the new population, ten years later? And they were gunning for a figure of at least 75 million. The way one history of computing put it, there were many people who felt that the dignity of the Republic could not be sustained on a number of less than 75 million. And then the census comes in, and the real count is something in the 60 millions — it's not even 70 million. And people are not just disappointed; they're incensed, they're angry. And they blame the Hollerith tabulating company for bungling it. They're like, it must have been the machines — the machines screwed this up. [00:29:00]

Yeah, that's right — demand a recount!

Right. They're like, this Hollerith guy totally bungled the census; obviously the number has to be bigger than that. And it's funny, because this is 1890 — fast forward to 1968, and you have Paul and Anne Ehrlich writing The Population Bomb, where overpopulation is the absolute worst problem facing the entire world. And they essentially embraced
coercive population control measures, up to and including forced sterilization, in order to control population, because they saw it as the worst risk facing the planet. I recommend, by the way, Charles Mann's book The Wizard and the Prophet for this and many other related issues. One of the things that book opened my eyes to was how much the 1960s environmentalist movement was super focused on overpopulation as its biggest risk. Today it has shifted away [00:30:00] from that, in part because population growth is actually slowing. Ironically, the population growth rate started to slow right around the late 1960s, when that hysteria was happening. Now the population is actually projected to level off, and maybe decline, within the century. And so now, of course, the environmentalist concern has shifted to resource consumption instead, because per capita resource consumption is growing. But just look at that flip in how we regard growth. Is growth a good thing — something to be proud of as a nation, that our population is growing so fast? Or is it something to be worried about, so that we breathe a sigh of relief when population levels off?

Yeah, I'm getting a very strong thesis–antithesis–synthesis vibe: naive progress is the thesis, the backlash against it is the antithesis, [00:31:00] and now we need to come up with the new synthesis.

Yeah — I mean, I'm not a Hegelian, but I agree there's something to that.

So, to bring it back to The Roots of Progress, the organization: something I've been wondering about is that I feel like a lot of the people
in the progress movement — in the Slack — are, I would say, people like us: people from tech. And I've talked to people who are in academia or in government who are really interested. So I was wondering whether you have thoughts about, now that you're on to the next phase of this, ways to broaden the scope — to bring more kinds of people [00:32:00] under the umbrella, under the tent. How do you think about that? It seems really useful to have as many worlds involved as possible.

Yeah, absolutely. Let me talk about that both long term and short term. Fundamentally, I see this as a very long-term, generational effort. In terms of direct results from my work, I'm looking on the scale of decades. I would refer you to an essay called "Culture Wars Are Long Wars" by Tanner Greer, of the blog The Scholar's Stage, which really lays out why this is: ideas at this fundamental level take effect on a generational timescale. Just as the philosophy of progress took about a generation to flip [00:33:00] — from, I think, 1945 to 1970 — it's going to take another generation to re-establish something deep and new as the new zeitgeist. So how does that happen? Well, I think it starts with a lot of deep and hard and difficult thinking, and writing. The absolutely fundamental thing we need is books. We need a lot of books to be written. I'm writing one now, tentatively titled The Story of Industrial Civilization, that I intend
to lay the foundation for the new philosophy of progress. But there are dozens more books that need to be written — I don't have time in my life to write them all — so I'm hoping that other people will join me in this. And one of the things I'd like to do with the new organization is to help make that possible. So if anybody wants to write a progress book and needs help or support doing it, please get in touch.

Can you give a list of titles you'd love to see?

Yeah, sure. I think we actually need three categories of books — or, more broadly, of content. [00:34:00] One is more histories of progress, like the kind that I do: retellings of the story of progress, making it more accessible and more clear, because I just think the story has never adequately been told. In the book that I'm writing, virtually every chapter could be expanded into a book of its own. I've got a chapter on materials and manufacturing, a chapter on agriculture, a chapter on energy, one on health and medicine. All of these things deserve a book of their own. I also think we could use more analysis of some of the failed promises of progress. What went wrong with nuclear power, for instance? What happened to space travel and space exploration — why did that take off so dramatically and then collapse into a period of stagnation? Similarly for air travel: why is it that we're only now getting back to supersonic air travel? Perhaps even nanotechnology is [00:35:00] in this category, if you believe J. Storrs Hall's take on it in his book Where Is My Flying Car?, where he talks about nanotechnology as something we ought to be much farther along on. So, some of those kinds of analyses of what went wrong. A second category
of books that we really need is ones that take the biggest problems in the world and address them head-on from the pro-progress standpoint. What would it mean to address some of the biggest problems in the world — climate change, global poverty, the environment, war, existential risk from everything from bio-engineered pandemics to artificial intelligence — if you fundamentally believe in human agency, if you believe in science and technology, and if you believe that we can overcome them? It will be difficult — we shouldn't be naive about it — but we can find solutions. What [00:36:00] are the solutions that move humanity forward? How do we address climate change without destroying our standard of living or killing economic growth? So that's a whole category of books that need to be written.

And then the third category, I would say, is visions of the future. What is the kind of future that we could create? What are the exciting things on the horizon that we should be motivated by and working toward? Again, Hall's Where Is My Flying Car? is a great entry here, but we could use a lot more. Some of this probably already exists — I haven't totally surveyed the field — but we absolutely need a book on longevity: what would it mean for us all to conquer aging and disease? Maybe something on how we cure cancer, or how we cure all diseases, which is the mission, for instance, of the Chan Zuckerberg Initiative. We should totally have this for nanotechnology.
I guess some of this already exists in Drexler's work, but I just think we need more positive visions [00:37:00] of the future to inspire people — to inspire the world at large, but especially to inspire the young scientists and engineers and founders who are actually going to go create those things.

Let me plug Project Hieroglyph — have you seen that?

I've heard of it; I haven't read it yet. Why don't you say what it's about?

It's a collection of short, optimistic science fiction stories — a collaboration between, I believe, Arizona State University and Neal Stephenson. The opening story, which I love, is by Stephenson, and it asks: what if we built a mile-high tower that we launch rockets from? Why not? You don't need a space elevator — you just need a really, really tall tower. And we wouldn't actually need to invent new technologies per se — we wouldn't need to discover new scientific principles to do it. It would just take a lot of [00:38:00] engineering and a lot of resources.

Yeah. And there's a similar concept in Hall's book called the space pier, which you can look up — it's also on his website.

That one does require discovering new things, right? Because the space pier depends on being able to build things out of diamond. The space tower just involves a lot of steel. Like, a lot of steel.

So — you've touched a little bit on this already; it's a good segue into what I wanted to ask about.

But beyond that, the basic ideas need to get out in every medium and format, right? So, you know, I also do a lot on Twitter. We need people who are good at every social media channel.
I'm much better at Twitter than I am at Instagram or TikTok, so we need people on those channels as well. We need video, we need podcasts — every format and platform where these ideas can get out there. And then, ultimately, they need to get out through all the institutions of society. We need more journalists who understand the history and the promise of [00:39:00] technology and use that as context for their work. We need more educators, both at the K-12 level and at university, who are going to incorporate this into the curriculum — and I've already gotten started on that by creating a high-school-level course in the history of technology, which is currently being taught through a private high school, the Academy of Thought and Industry. It needs to get out there in documentaries, too. I'm really tempted, as a side project, to do a docu-drama about the life of Norman Borlaug, which is just an amazing life, and a story that everybody should know — he's just an underappreciated hero. I think a lot of these stories of great scientists and inventors could be turned into really excellent, compelling stories, whether as documentaries or as somewhat fictionalized dramas. The Wright brothers would be another great one — I decided that after reading David McCullough's history of them and their invention. So there could be a lot of these. And then I think ultimately it gets into the culture through fiction as well, in all of its [00:40:00] forms — optimistic sci-fi in novels and TV shows and movies and everything.

Yeah. And also, I think, not just science fiction, but fiction about what it's actually like to push things forward. Because I think — I don't know.
It's like most people don't actually know what researchers do. Along those lines, Anton Howes had a good blog post where he was talking about movies that dramatize invention — looking for recommendations, and reviewing movies by the criterion of which ones actually show what it's like to go through the process. And the sad thing about a lot of the popular treatments of this stuff — Anton reviewed, I guess, a recent movie about Marie Curie, and there's a similar one about Edison, The Current War, [00:41:00] starring Benedict Cumberbatch — the problem with a lot of these is that they just focus on human drama: people getting mad at each other and yelling and fighting. They don't focus on the iterative discovery process and the joy of inventing and discovering. So the totally unexpected sleeper hit of Anton's review was this movie — I think it's actually in Hindi — called Pad Man, which is a drama based on the real story of a guy who invented a cheap menstrual pad that could be made with very little capital and be affordable to women in India. He was really trespassing on social and cultural norms and boundaries to do this, and was ostracized by his own community, but he really pursued the process. I saw the movie, and I recommend it as well. It really does a good job of dramatizing the [00:42:00] process of iteration and invention and discovery — the trial and error, and the joy of finding something that actually works. So we need more stuff like that — stuff that actually shows the process and the dedication. You know, it's funny.
One of my favorite writers in Silicon Valley is Eric Ries, who coined the term "lean startup" and wrote a book of the same name. He has this take that whenever you see these stories of business success, there's the opening scene, which is the spark of inspiration — the great idea — and then there's the closing scene, which is basking in the rewards of success. And in between is what he calls the montage: typically just a two-minute montage of people working on stuff — maybe there are some setbacks and some iteration — but it's all glossed over, with some music playing over it. [00:43:00] And Eric's point is that the montage is where all the work happens. It's unglamorous, it's a grind — it's not necessarily fun in and of itself, but it is where the actual work is done. His point, in that context, was that we need to open up the covers of this a little bit — we need to teach people a little more about what it's like in the montage. And I think that's what we need, more broadly, for science and invention.

Okay, here's a pitch for a movie. You know the Pixar movie Inside Out, where they go inside the little girl's head? That, but for the montage. Because the problem with the montage is that a lot of it is sitting and thinking — it's not necessarily communicated well to other people just by talking — but you could have an entire internal drama [00:44:00] of the process as a way to show what's going on.

Yeah, that could work.

So — all of that is the long-term view, right?
That's how I think things happen: a bunch of people — including me, but not only me — need to do a lot of hard thinking and research and writing and speaking, and then these ideas need to get out to the world in every format, medium, platform, channel, and institution. That's how ideas get into the zeitgeist. And then, like I said, there's also the short term. In the short term, I'm going to work on doing this as much as possible. Like I said, I'm writing a book. I'm hoping that when I hire some more help, I'll be able to get my ideas out in more formats, mediums, and channels. And I would like to support other people who want to do these things. So again: if there's any vision that you are inspired to pursue along the lines of anything I've been talking about for the last ten minutes, and there's some way you need help doing it — whether it's money or connections or advice or coaching or [00:45:00] whatever — please get in touch with me at The Roots of Progress; you can find my email on my website. I would love to support these projects. And then another thing I'm going to be doing with the new organization and these resources is continuing to build and strengthen the network — the progress community: finding people who are sympathetic to these ideas, meeting them, getting to know them, and introducing them to each other — getting everybody to look around at everybody else and say, "Ah, you exist. You're there. You're interested in this." That's a great form of connection. And I hope that through that, people will understand: hey, this is more than just me, more than just a small number of people. This is a growing thing.
And also so that people can start making connections and have fruitful collaborations — supporting each other, working together, coaching and mentoring each other, investing in each other, and so forth. So I plan to hold a series of events — in the beginning, probably private events for people in various niches or sub-communities of [00:46:00] the progress community — to get together and talk and meet each other and start to make some plans for how we develop these ideas and get them out there.

That seems like an excellent and optimistic place to close. I really appreciate you laying out the grand plan, and all the work you're doing. As you know, it's super exciting.

Thanks — same to you. It was great to be here and chat again. Thanks for having me back.
In this conversation, Dr. Stephen Dean talks about how he created the 1976 US fusion program plan, how it played out, the history of fusion power in the US, technology program planning and management more broadly, and more. Stephen has been working on making fusion energy a reality for more than five decades. He did research on controlled fusion reactions in the 60s, and in the 70s became a director at the Atomic Energy Commission, which then became the Energy Research and Development Administration, which *then* became the Department of Energy. In 1979 he left government to form the consultancy Fusion Power Associates, where he still works. In 1976, he led the preparation of a report called “Fusion Power by Magnetic Confinement” that laid out a roadmap of the work that would need to be done to turn fusion from a science experiment into a functional energy source. References Fusion Power by Magnetic Confinement Executive Summary Volume 1 Volume 2 Volume 3 Volume 4 Fusion Power Associates The notorious fusion never plot Adam Marblestone on technological roadmapping My hypotheses on program design (which were challenged by this conversation!) Fusion Energy Base (a good website on fusion broadly) ITER Transcript (Machine generated, so please excuse errors) [00:00:00] In this conversation, Dr. Stephen Dean and I talk about how he created the 1976 US fusion program plan, how it played out, the history of fusion power in the US, technology program planning and management more broadly, and even more things. Stephen has been working on making fusion energy a reality for more than five decades. He did research on controlled fusion reactions in the 1960s, and in the seventies he became a director [00:01:00] at the Atomic Energy Commission, which then became the Energy Research and Development Administration, which then became the Department of Energy. In 1979, he left government to form the consultancy Fusion Power Associates, where he still works.
In 1976, he led the preparation of a report called "Fusion Power by Magnetic Confinement" that laid out a roadmap of the work that would need to be done to turn fusion from a science experiment into a functional energy source. And if I can riff about this for a minute — the thing is, unlike what I see as modern roadmaps, it lays out not just the plan of record for getting fusion to be a real energy source, but all the different possible scenarios, in terms of funding and in terms of new technologies that we can't even think of yet being created, and lays everything out in a way that you can actually make decisions off of. [00:02:00] And I think one of the most impressive things is that it has several different what it calls "logics" of funding — different funding levels and different funding curves. And it, unfortunately, accurately predicts that if you fund fusion below a certain level — even if you fund it continually — you'll never get to an actual useful fusion energy source, because you'll never have enough money to build the demonstration machines. So in a way, it predicts the future. This document is super impressive; if you haven't seen it, you should absolutely check it out — there are links in the show notes. And one of the reasons I wanted to talk to Dr. Dean is that this document is one of the pieces of evidence behind my hypothesis that, to some extent, program design and program management for advanced technologies is a bit of a lost art. So I wanted to learn more about how he thought about it and built [00:03:00] it. Without further ado, here's my conversation with Stephen Dean.

To start off, what was the context of creating the fusion plan?
Well, I guess I would have to say that it started a few years earlier, in the sense that in 1972, when I was in the fusion office at the Atomic Energy Commission, the Office of Management and Budget at the White House put out instructions to, I guess, all the agencies that they should prepare an analysis of their programs under a system they called "management by objectives." This was a formalism that had a certain amount of popularity at that time. And I was asked to prepare something on the fusion program as part of the agency doing this for all of its programs. [00:04:00] In doing that, I looked at our program and I laid out a map, basically, that showed the different parts of the program — like a roadmap — and what the timelines might be, what the functions of the facilities would be, when the decisions might be made, and which decisions would feed into what. That was never published in a report, except internally, but the map itself was published and widely distributed. I have it on my wall, and it's in my book. So that was my first venture into doing something that resembled a plan. It was not a detailed plan, but it was an outline of decision points and flow — sort of a flow diagram — but it did connect all the different parts of the [00:05:00] program and identified sub-elements, not in great detail, and budgets were not asked for at that time. So that's how I got this idea and a little experience in the planning area. And then a few years later, we had the gasoline crisis in the US, where there were long lines, and we couldn't get gas, and people were sitting in their cars overnight. And the White House at that time said that we had to become energy independent — of OPEC oil, you know.
And so Bob Hirsch was at that time about to transition from director of the fusion program to an assistant administrator of ERDA; I think it was late '74 or '75. The government, the Congress or the [00:06:00] administration, decided to abolish the Atomic Energy Commission and transition it into something called the Energy Research and Development Administration, or ERDA. And the reason for that was to create an agency whose function was clearly for all of energy, and not just for atomic energy, in order to respond to the energy crisis and to get us off of the dependence on foreign oil imports for vehicles and things. And when that happened, my boss, Bob Hirsch, was appointed an assistant administrator of ERDA for basically all the long-range energy programs, which included fusion. As he was transitioning, he came up with the idea that we should create a detailed long-range plan [00:07:00] for the program. He was obviously becoming a senior manager over many things, and he certainly wasn't going to try to do this himself. He and I were very close; at that point there were three divisions in the fusion program, and I was the director of the largest division, which had all of the main experimental programs. So he asked me to prepare this plan. And if you look at the plan, at the very beginning there's a chart that shows Bob's basic guidance, which was to note that there needed to be a multiplicity of pathways, because no one organization or group or division or program could be in full control.
And in order to have a plan that might have some hope of [00:08:00] lasting, you had to take into account a number of policy variables, he said, and technical variables. What he meant was that because the need for fusion, the intent of the government, and the funding are all controlled by other people in the government, we had to have a number of plans by which the program could be conducted. So he came up with the idea that, well, let's have five plans, which he called logics. He basically created that framework and turned it over to me at the beginning, I guess, of 1975, I think it was, to create this plan. I had already been doing a number of things with the program in terms of the major [00:09:00] experiments that were under my control as director of the confinement systems division, magnetic confinement systems. I was forcing all the people whose budgets I controlled to tell me what they were doing and what they needed to do, and so on. So I had already been working on a lot of these things within my area, but at that point I took over the responsibility of creating the entire plan. I created a small working group within our office, and we added people that we thought were responsible, that could do this for us or give us the details out in the various parts of the program, all elements of the program. We created a team, we launched this, and this was the result. We were determined to look at these five [00:10:00] logics. They ranged from basically a steady level of effort to a maximum level of effort, and we just started creating these things during the first six months of 1976. And this was the result.

Nice. So each of the logics is kind of a wiggly curve.
Did you go in knowing what the shape of the funding curve for each logic would be, or did you just go in with the framework that there would be five logics, and over the course of designing the program you figured out what the actual shape of those curves would be?

Well, we created a rough definition of what each of the logics was supposed to look like, not in detail. For example, [00:11:00] Logic Two says moderately expanding, but the pace of progress would be limited by the availability of funds, and new projects would not be started unless we knew that funds would be available. So we knew that we could not address a lot of problems in parallel, and we had a general idea that this was a program that was not running at the maximum feasible pace. Then for Logic Three we said, well, let's look at one that's a little more aggressive. We laid out in that one that as soon as projects were scientifically justified, they would be in the plan; we would not wait until we knew that the money was available. And we also said that in this scenario we would address a number of things concurrently rather than in [00:12:00] series. So we assumed that the funding was ample; we didn't have a number in mind. At that point, we started laying these things out and asking people: if you had all the money you needed, what could you do? If you didn't have quite enough money, what would you do? And the people that were working on all of these subtopics started responding to us. At the beginning we were mostly laying out what the topics were and what had to be worked on eventually to get to the end point, and how these topics could proceed at different rates and with different amounts of risk, depending upon the budget.
So this was an iterative thing that went back and forth with the community and the areas, and our team kept putting these together until they made some sense.

Got it. And just to step back a second: before [00:13:00] you created this plan, all the activities were happening already. Is that right? There were activities in all these areas that were ongoing?

Yes, that's right. They were at a relatively low level at that stage. In the early seventies, the total fusion budget was $30 million, and by the mid-seventies, because of the energy crisis, we were told, you know, tell us what you want, and we had raised that budget from 30 million to 300 million. So between '72 and '75 the program had been undergoing a very rapid expansion, and we had started a lot of new programs. The program had been built up quite a bit, although all of these programs, because they were new, were still at a fairly early stage of their development. The other thing that drove the [00:14:00] curves was the recognition that getting to a fusion power plant required a couple of identifiable major facility steps. These actually came from that map I mentioned from '72, which said that the experiments we wanted to do in the near term, which were to build something like a physics proof-of-principle experiment, had to be followed by an engineering step, an engineering test reactor, and that had to be followed by a demonstration power plant. Those steps were big facilities, each one much more expensive than the previous one, and each making a much more definitive demonstration of fusion. [00:15:00] And the wiggly curves that you see, not the smooth ones, have these bumps on them. Those bumps reflect the fact that these major experiments were going to cost a lot of money.
And how fast you build them also reflects a different pace to the end point. You know, the faster you build them, the faster you get there, because these major steps really drove the progress and drove the budget.

And do you think, I guess it's hard to think about, but do you think the plan helped anything, in the sense that if instead you had just continued with the program as it started, where I imagine it was much more bottom-up, how do you think the outcome would have been different?

Without the [00:16:00] plan, I don't know what would have happened. I don't think we would have gotten the support that we got in the next few years during the seventies, because the outcome of this was that the plan was published, with all of its detail and all of its budgets. It was published publicly. The Office of Management and Budget tried to stop us from publishing this plan, because they didn't want budgets out there that said, well, if the Congress would give you so much money, then you'd get the job done, because that would tie their hands. You know, they like to be in control of how much money they're going to give to every program, so they don't want the agencies to put out plans with budgets. We had to fight that, and luckily for us, the Energy Research and Development Administration, which was fairly new and [00:17:00] actually only lasted a couple of years before it transitioned to the Department of Energy, had a head, Bob Seamans, who came from NASA. He overruled the Office of Management and Budget. He said, I'm in charge of this and I'm putting the whole plan out. So we published it, and it got picked up over in the Congress by Congressman Mike McCormack and his staff, and they became champions for this plan.
They came up with a legislative agenda, and they got the Senator from Massachusetts and the Senate on board. And by 1980, I think it was in October 1980, Congress had passed the Magnetic Fusion Energy Engineering Act of 1980, which basically adopted our plan for getting to the end point by the year 2000. [00:18:00] So the result of our plan was that Congress picked it up and passed legislation making it national policy, and it was signed by President Carter on October 7th, 1980. We thought at that point that we had a commitment of the United States government, at the presidential level, to implement the plan for getting there by the year 2000. The only problem was that President Carter signed it in October and lost the election for reelection in November. And as you probably know, whenever there's a change of administration, especially if it's a change of party, almost everything that the previous administration decided to do, the new people want to either not do or [00:19:00] completely reevaluate and start over. And that's what happened to this plan in 1981.

Got it. And so, as far as I can tell, the way that it's panned out is that we've followed below Logic One, right?

Oh yeah, it was even less than that. Logic One is the never-get-there logic. But there's one caveat, which is that in the 1980s Ronald Reagan was opposed to all of this energy stuff, until 1985, when he met with Gorbachev. They decided to work together on fusion and build the first major step that was in our plan. We were going to build this engineering device in the 1980s, and he and Gorbachev decided, let's get together and build it together with Europe. [00:20:00] And this became the ITER project, which is under construction in France.
So what the program really did to work around this problem of the budget being so low was to say, okay, we're not on our own track, but we're on a world track and we're all working together. So they're building this multi-tens-of-billions-of-dollars engineering test reactor, and it's taken them a long time to get it going, but it's hopefully going to be finished in a few years. It's hopefully going to turn on, first plasma, by 2025. So we're way behind, but the response to being on this track was to say, we're all in this together. We don't have our own plan to get there, but the world has a plan and we'll get there together. That's how this all evolved.

Got it. And so I guess, if I'm understanding this correctly, [00:21:00] the purpose and the value of this plan was less as a coordination mechanism for the people doing the work, and more as a communication mechanism with people outside the organization, in terms of what the work would entail. Is that accurate?

I can tell you that when I was doing this plan, I was in a senior management position there. I had responsibility for the bulk of the program. I didn't have the basic physics program in the universities, and I didn't have the technology part, but I had all the major experiments in my bailiwick. I was still reporting to Bob Hirsch, and he had all the energy programs in ERDA. It was our intent to manage his program, to implement this plan internally. It did turn out that part of our implementation required us getting the money, and that all went through this energy bill in [00:22:00] Congress. We thought we had the whole thing put together: not only did we eventually have the Congress on board, but we also had a management, and we had 80 staff in the office then. And we were prepared to manage the program to implement this in detail if we got the money.
So it was also the management plan for implementation within ERDA. But of course the other thing that happened in all of this was that ERDA was abolished and became the Department of Energy. I left in 1979, because I thought we were about to implement this plan, and I formed Fusion Power Associates. I got a dozen electric utilities, and a dozen major industries, companies like Westinghouse, to form this organization, to actually bring industry into the implementation phase of this program plan. We were all set to [00:23:00] go. Even in the early eighties, before the whole thing sort of fell apart, I had a dozen electric utilities in Fusion Power Associates. So we had both industry that wanted to do this and the electric utilities on board, and all we really needed was for the new Department of Energy to follow through with the management of this thing and try to get the money. But the money never came through. And the industries in Fusion Power Associates realized in the early eighties that there wasn't going to be any money for industry, because there wasn't any money coming through. And the electric utilities were deregulated by Ronald Reagan, and they abandoned all their R&D departments, which were the ones in our organization that were interested in developing fusion. They were taken over by [00:24:00] business people in the utilities whose main purpose was to make money, and they were not interested in getting involved in brand-new technologies. They were only comfortable with the technologies that they had.

Yeah, that makes a lot of sense. And I guess to go back, you mentioned earlier that this plan was part of a bigger trend of management by objectives. Do you think that management by objectives was effective?
Because I feel like the modern idea is very much that plans like this, you know, multi-decade technical plans, are at best foolish and at worst detrimental. So what do you think about big plans for technology projects more generally?

[00:25:00] Well, let me just say that management by objectives was an OMB guidance in the early seventies, and it soon disappeared from the rule book, if you will, at the OMB. One of the things that happens in Washington every two years is that people change, administrations change, and whatever one group wants to do just goes by the wayside. So by the mid-seventies, when ERDA came about, there was no management-by-objectives formalism still going on in the government. Basically they start all over again with how they're going to try to do these things. And as this all evolved, you know, up to the present: probably more than 10 years ago, 10 or 15 years ago, the OMB said to fusion, you guys are not an energy [00:26:00] program anymore. You are a science program, and we are going to evaluate you and have you managed like a science program. So they stopped even asking us for budgets aimed toward an energy program. They said that we should go to the scientific community, take unsolicited proposals from the community to do good science, and evaluate them under peer review by other scientists. If it was good science, we should fund it, and we should not evaluate these proposals as to whether or not they were getting us to an energy source. So for over a decade now, the fusion program has not had an energy source as its goal, and it hasn't been funded or evaluated within the government as an energy program.
Now, this has all changed in the [00:27:00] last year. They're trying now to put the energy mission back into the mission, but it hasn't actually formally happened at OMB yet.

Got it. And just to pull us back to management by objectives, and more broadly having very concrete plans: do you think it was useful, or do you think it was just sort of a fad?

Well, it's been disappointing for me personally. It's been disappointing that we haven't actually done the plan, right? That's just the point. You spend so much effort laying out how you would do it and how you would make decisions, and you get everybody that's under your purview, out in the community of people that you're funding, all set up to try to achieve these things, and you try to get them the [00:28:00] money, and then it all falls apart. And then somebody tells you, well, we really don't care if you ever get there. That's been the attitude until very recently. So it's very demoralizing, you know, to everybody. Except that the scientific community itself is kind of immune from this to some degree, as long as they get funded for research. As long as the universities are getting money for basic research in this area, and they're training students, and these students can get jobs in the private sector, or start their own companies, or go to work at government laboratories; as long as that is moving along with some reasonable degree of success, with people getting trained and doing work and publishing papers, there's a certain degree of apathy, if you will, or even a certain degree [00:29:00] of satisfaction in the scientific community, since nobody seems to care if fusion ever goes on the grid.

Yeah. Yeah.
And so I guess, counterfactually, if the money had been there... Actually, one thing that I still find really impressive about the plan, although it is disappointing, is that you basically predicted that. You said, here's Logic One; if you're below this line, fusion won't happen. And indeed you were right. That's one of the reasons why I'm so impressed by it: it really did make a very precise prediction, and that prediction came true, although it is disappointing. If you could imagine that, say, the money had come through, do you think this plan would have been useful? In the sense of, how much confidence do you have that you [00:30:00] accounted for all the things that you would need to do over the course of several decades in order to get to fusion as an energy source?

Well, as it says in the early part of the plan, these plans are not meant to be followed blindly in their detail. They are guidance to management, and management has to keep updating them, looking to see how they're doing, keeping an eye out for new discoveries, and revising the plans in detail to see if new things are emerging, or some things are failing, or the money is coming in in such a way that the plan schedule has to be changed. That's why you need a management structure that's in place and following it, but not blindly following it. So I personally believe things would have gone differently if the management structure that we had in the [00:31:00] mid-seventies had been maintained. Back then we had 80 people in the office, and they were all management oriented. Right now I think they probably have about, I don't know, maybe 15 people in the office, because they're running it like a research program. They just take proposals, get them evaluated, and send out money.
So they're not managing in the way that we would have managed with 80 people and the divisions we had divided up, and we revised the management structure from time to time along the way. And I know what we had in mind: we were going to start transitioning the money out into industry, to get these things built and to bring engineering-oriented people more into the program, because even in the mid-seventies the program was dominated by plasma physicists. We were only in the process at that point of starting to bring in engineering [00:32:00] people, and the money was still going to the government's laboratories and their technology people. Oak Ridge, for example, has a big technology laboratory, so there were technology programs being developed in these laboratories, and a little bit of it was going out into industry on a job basis for the labs, but we didn't have a big industry program. You know, one of the things I did just before I left was bring in McDonnell Douglas, a big aerospace company, to build an engineering center at Oak Ridge for fusion. That was sort of the last thing done, and when this whole thing folded in the early eighties, McDonnell Douglas was basically told to shut down, and they went away. They were eventually bought out by Boeing. So we had started a transition where part of the implementation of this plan was to bring industry in, to bring [00:33:00] that talent in. We had a bunch of people, for example, in Fusion Power Associates at the beginning who were the architect-engineers that were building nuclear power plants. You know, those were the people that we needed to implement the plan, but they were not quite in the program yet by 1980.
And when the money didn't come through, they all just disappeared from any plan that the government had, because the government in the eighties was only interested in trying to make sure their scientists survived.

Yeah. And I guess you don't really see plans like this today, it feels like. So I get the sense that creating plans like this, and more generally competent technology management, is a bit of a lost art. Do you think that's true, or am I missing something? [00:34:00]

Well, I don't know if it's true or not across the board; they must be out there somewhere. I think when you look at big construction projects, the people that do those projects know how to manage. They know how to cost things out, they know the importance of keeping things on schedule, and they know how important it is to have pieces of the schedule coming in at the right time so that the whole project comes together. We tried to lay that out so that it could be done for fusion, but I don't see it being done in the Department of Energy, and I don't know about other agencies. I have the feeling that maybe the Defense Department does it a little better on weapons systems and aircraft systems and fighter systems, with some of the big aerospace companies. My observation from afar of the Department of Defense is that [00:35:00] they do it the right way, but they're not on top of the cost and schedule, and they do get taken to the cleaners by these companies. But somehow or other they do get the job done, even if it's costing more than it should and taking longer.

Yeah. That's the thing: there's been this wider observation that since the 1970s, complex projects like this take longer and have dramatic cost and time overruns.
And there's this trend of that happening more and more. So I wonder what it is about the world that's changed. Do you have any hypotheses there?

Well, you know, I'm not sure it was ever that good in the first place, because when I was there [00:36:00] in the seventies and we were laying out our plans, we thought we knew how to do it and do it right. But at the same time, within the Atomic Energy Commission, there was a nuclear fission program called the breeder reactor program, and it was a mess. And yet the industries out there, like Westinghouse and General Electric, were actually building nuclear power plants in those days, and they were building nuclear reactors for submarines in those days. So those programs were actually working, but at the department they were working on advanced reactors and they weren't getting them done. They eventually had to shut down the breeder reactor program because it just didn't seem to be working. So I'm not sure the government, at least the part that I knew, ever did that well. You know, when Admiral Rickover wanted to put a nuclear reactor in a submarine, [00:37:00] the Navy wanted to fire him, and the agency wanted him to put this program into their national laboratories. He had to fight them tooth and nail, through his friends in Congress, to get put in charge of the program and be allowed to put this out to General Electric and Westinghouse. He had to fight them, and this was back in the sixties. So I'm not sure that the government itself was ever very efficient at any of these things. Now, I have to say that NASA seems to have a good reputation, and if it's true, I attribute it to the fact that Kennedy went public and made it a national priority to get there by the end of the decade, and he demanded that they do it.
And he had the backing of the Congress, and he set up a whole new agency focused on [00:38:00] just that, and they got there. So I have to say that was a success story, and it remains a success story today, with the evolution of a commercial industry coming out of all of that quite a few decades later. Nevertheless, they seem to have done a good job. I've never been in NASA, so I can only see it from afar; I'm sure there are some problems within it, but somehow or other it proved that we could get it done. And going back further, to the Manhattan Project for the atomic bomb: it was clear that there was a commitment from President Truman, I guess it was, or maybe it was Roosevelt, to do it, and the Army was set up to take charge of it. They put a general in charge, and they went to Los Alamos, and they got the laboratories to work on the problem that was at hand, to get it done in a short amount of time. When you have that kind [00:39:00] of leadership and management, it seems like it can be done. But it all depends on management, and it's rare in government, and I would say it's rare even outside of government as well.

And so I guess the upshot of this for me, and correct me if this is wrong, is that you feel like it's much more about the individuals in charge than it is about the process of planning and roadmapping techniques.

Yeah, absolutely. I can't tell you how many plans have been made since the one that you were looking at that have gathered dust on shelves. Almost every other year, the program launches a new plan. It finishes the plan, everybody says whether they like it or they don't, and it's not implemented. A couple weeks [00:40:00] later, they'll turn it over to the National Academies to evaluate, or propose a new plan.
I can't tell you, there are countless plans in fusion gathering dust on shelves over the past 40 years. I mean, it's the managers, the people that want to implement the plans, that supervise the plan. As long as they're there, they'll implement the plan, but as soon as they're gone, somebody else comes in and maybe makes a new plan, or makes no plans at all, you know, and just tries to keep things alive.

And what would you think about this: I feel like the modern ethos is that planning isn't that useful, that you should just go and start doing stuff. So I guess if we think of a counterfactual world where you [00:41:00] have consistent management, but they don't have a plan, how do you think that would go?

I'm not quite sure what you said, but let me give you an example with this big international project, ITER, in France. It was started by Ronald Reagan in 1985, but it didn't really get launched as a serious construction project until 2006. And it very rapidly became something that was getting behind schedule and over budget, and it was completely out of control until about 10 years ago. They had a management review and said, we've got to get control of this project. They brought in the guy that's now the director, Bernard Bigot, and he took charge of this. Now he's got the thing reorganized: countries [00:42:00] from all over the world are on a schedule to deliver this piece of equipment or that piece of equipment at a certain time, he's got them all being delivered in a sequence, and he's having them put together in a sequence. He's got a great management plan, and he's been keeping the thing on schedule now for the last five years. I have great confidence he's going to get the job done, but it all started with putting somebody like him in charge who knew he had
to have a detailed plan for everybody working together, and who totally took charge. Before that, every country that had part of the job was in control of its own piece, and there was no control if they got behind. Sometimes the director in France didn't even know until it was too late to get it back on schedule, and he didn't control the money anyway; each country controlled its own money. So, you know, I think it all comes down to management, and then the management [00:43:00] makes the plan.

Yeah. And one thing that I do think is worth noting is that there's also a philosophy of management that says management shouldn't actually be imposing a plan on people. It should be very bottom-up, right? Instead of planning, since you don't know what's going to happen, you should just let ideas bubble up from the bottom and let people work on what they think is the best thing to work on.

Well, you know, managers are managers of people, and they oversee people. In a company, there's somebody at the top, and there's somebody under him, and underneath them. In big companies there are thousands of people doing their bit. So a manager doesn't just say, hey, we're going to get this done by tomorrow or next week. He supervises all these people, and these [00:44:00] people feed him the information and help create the plan. They all have to be on board and supervised properly all the way down the line, through a management chain. So it's not like one person does the whole plan by himself, or with a couple of people in his office. He supervises the preparation of a plan with the community. I had, you know, dozens of people around the country who helped prepare this plan. I helped them piece it together.
And, you know, I helped organize the structure of the whole thing, but it was an ongoing interaction that went from the bottom up, and then guidance from the top down — it was back and forth through the whole process.

Got it. So you could almost think of the plan as a coordination mechanism, in a way.

Absolutely. Because the managers can't actually do the work. [00:45:00]

Yeah. And they probably can't know enough detail to say accurately —

They don't know the level of detail. If there's a problem, for example, they can say, okay, let's fix that problem, and they go back to the people that know about it and tell them, okay, you guys go out and find out how you're going to fix this problem and come back and tell me how you're going to do it. But then the manager has to approve it. You know, if he thinks it hasn't been done right, he will go back to them until they get it right.

And I guess another interesting thing about the plan is that at some point someone was willing to make a prediction a decade or more out. And that's an attitude I see people as being very hesitant about now — making predictions on that timescale, or at least with that [00:46:00] amount of precision, right? People make very hand-wavy predictions now. Do you think there's been some kind of attitude shift around making predictions like that?

Well, it's changing in the last year or so. There's been a lot of planning activity going on, and you'll see time schedules in all of these — right now there's a whole bunch of companies that are all saying, by 2030 or 2040 or 2050, and so on and so forth. And there's sort of a goal that's been proposed to have fusion on the grid by 2050, in order to participate in the climate change solutions.
So there's a lot of thinking about this, and there's a lot of people putting out what they think is a reasonable timeframe that is achievable. And it's interesting that these timetables are all one, two, or three decades out, which is almost [00:47:00] the timescale that we had. So it's not uncommon to think that almost anything that's thought to be technically feasible can be done in 10, 20, or 30 years, depending on how difficult it is. It's pretty easy for people to think that something can be done on those kinds of timescales, and then start backfilling the details to see how it can be done and what it costs.

Yeah. I think the thing that strikes me as different between the predictions I see now and what you worked on is that the fusion plan's predictions were very precise. It wasn't like, oh, we'll get this thing working by this time. It was like, okay, we need to show this experiment, this experiment, and this experiment. And there were also very clear intermediate results and different pathways — all of which I don't [00:48:00] see in modern predictions, where it feels like it's: step one, start project; step two, question mark, question mark, question mark; step three, 30 years later, have this amazing result.

Well, you see, our timescale — to look to around the year 2000 — didn't come out of whole cloth. It was set by the fact that we were in a physics phase and we had just authorized the construction of a physics demonstration called the Tokamak Fusion Test Reactor at Princeton. In 1975 we had already launched construction of that, and we knew that to get to a power plant we had to make two major steps: one was an engineering facility, and the next was a demonstration power plant.
And the time to construct those things is kind of known — it takes [00:49:00] five years to build them and five years to run them. So each step was a 10-year step, and that gets you to a 20-year timetable. So really, the time to build those two facilities and operate them set the timescale of 20 years, more or less — give or take a few years depending on how fast the money came in and so on. So, you know, we had a reason that that 20-year timeframe was set, and we couldn't get there any faster because we couldn't go direct to a power plant.

Right. And I guess, two questions: one, how do you think about the difference between an engineering project and a physics project? And two, how did you know that you couldn't go direct to a power plant?

Well, if you [00:50:00] look at all the pieces of a power plant, you'll know that there's an awful lot of stuff in there that is not needed for a physics experiment. A physics experiment makes a fusion plasma, and it has a whole bunch of diagnostics on it, and you're not sure what it's going to do. So you have to allow for surprises, and then you have to do theory and computation to see if you understand what's going on. And all of that requires people who understand the physics. For a power plant, you have to actually have confidence that the plasma you're making is going to sustain fusion for a long period of time and produce heat that can then be converted into electricity. And that means the power plant doesn't have room for a lot of diagnostics, for doing experiments to try to figure out [00:51:00] what's happening. You have to have high confidence that when it turns on, it's going to run, and not have to be shut down every day or every week to be fixed. Right?
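The timetable logic Dean describes — sequential facilities, each with a known build time and run time — can be sketched in a few lines. The step names and durations below are just the round numbers from the conversation, not figures taken from the actual program plan:

```python
# Back-of-the-envelope sketch: two sequential facilities, each needing
# roughly five years to build and five years to operate before the next
# step can be committed. Names and durations are illustrative only.
steps = [
    # (facility, years to build, years to operate)
    ("engineering test facility", 5, 5),
    ("demonstration power plant", 5, 5),
]

total_years = sum(build + run for _, build, run in steps)
print(f"Two sequential 10-year steps imply roughly a {total_years}-year timetable")
```

The point of the arithmetic is that the 20-year horizon falls out of the construction and operation times of the intermediate facilities, rather than being picked as a round number first.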
So all those things require technology and engineering development. There may be a thousand major components — or hundreds, if you combine them in the right way — that go into a power plant with certain functions. And each of these has to be developed by engineers at a company; it has to be run and tested for long periods of time to see where it breaks and how to fix it, and how long that takes. All of these things have to be demonstrated before you put it all together. Otherwise, when you put it all together in a power plant, it's too late, because you can't just take the power plant apart again and start over. So the engineering and technology has a whole separate track of development that requires [00:52:00] testing and development of codes — codes of manufacture; materials have to have codes for how long they'll last in this environment, when they'll fail. There's a whole skill set around time to failure and time to repair that engineers work with and physicists don't. For physicists, if it breaks, it breaks — they just fix it, because it's a small piece; they put pieces in and it takes them maybe a few weeks. But a major piece of a power plant — it might take you a year to take that piece out, repair it, and put a new piece in. And meanwhile, you're not making any money selling electricity. An electric utility will not buy a power plant like that until someone has shown that every piece works, and works all together — it can't break, you know, in a week. [00:53:00]

Yeah, interesting. So in a sense, engineering work has a lot more to do with robustness than physics does.

Once you know the physics, it's an engineering problem — like commercial aviation.

Okay. Yeah. Though I guess in my mind there's still a lot of research work to be done in engineering problems, even if it is just an engineering problem.
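The "time to failure and time to repair" bookkeeping Dean alludes to is standard reliability engineering: from mean time to failure (MTTF) and mean time to repair (MTTR) you can compute the fraction of time a component is actually up. A minimal sketch, with entirely invented numbers, of why a year-long repair is so much worse for a power plant than a few-week fix is for a lab experiment:

```python
# Hedged illustration of MTTF/MTTR bookkeeping. All numbers are made up.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the component is up, assuming alternating
    run/repair cycles (steady-state availability = MTTF / (MTTF + MTTR))."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A small experimental piece: fails fairly often, but is swapped out in weeks.
lab_piece = availability(mttf_hours=2_000, mttr_hours=300)

# A major power-plant component: rare failures, but a repair can take a year.
plant_piece = availability(mttf_hours=40_000, mttr_hours=8_760)

print(f"lab piece up {lab_piece:.0%} of the time; plant piece up {plant_piece:.0%}")
```

Even though the plant component fails twenty times less often, its availability can come out worse, because a single repair takes the plant offline — and off the market — for so long.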
There's a melding of physics in it — that's what they call applied physics, and there's basic physics, and there's technology, and then there's engineering. All of these have slightly different slants and slightly different communities. And that's one of the functions of management: to work on a timeframe, and with money, to meld these things in the proper sequence to get where you need to go. That's why a program like fusion has to evolve from [00:54:00] totally physicists, to a mix of physicists and technology people, to a mixture of engineers, to commercial companies that do costs and schedules and all of this stuff. This all has to be supervised by management.

Got it. And a sort of nitty-gritty thing I'm interested in is, how did you think about budgets and how much things would cost? Because I feel like there are no good canonical resources about how to think about how much research programs cost.

Well, the way we did it was we divided it into systems and subsystems, and we went to the people that were working in each area and asked them to go into more depth — that's what's in our other volumes. So we had teams of people in all these areas, and [00:55:00] then we used people from industry and from utilities who had done similar things. We looked at the cost of nuclear power plants — that was a big part of our thinking, because we knew the fusion plant had to compete. So, you know, the skill set was all out there, technology-wise, for the power plants, because a fusion plant is almost like a nuclear power plant, except the fuel is different in the center. I mean, it doesn't look the same, but it has all the same pieces to get the power out. So there were a lot of skills out there that we were able to draw from, and we did the best we could. We can't claim the numbers were exact.
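The estimating procedure Dean describes — subsystem teams estimate their own areas, then management rolls the numbers up and adds contingency so nobody lowballs or highballs the total — amounts to a simple aggregation. Every figure below is invented purely for illustration:

```python
# Illustrative cost roll-up in the style Dean describes. The subsystem
# names, dollar amounts, and contingency rate are all hypothetical.
subsystem_estimates_musd = {
    "magnets": 400,
    "vacuum vessel": 250,
    "heating systems": 180,
    "balance of plant": 300,
}
contingency = 0.25  # a 25% allowance on top of the base estimate

base = sum(subsystem_estimates_musd.values())
total = base * (1 + contingency)
print(f"base estimate ${base}M; with contingency ${total:,.1f}M")
```

The contingency line is doing the work Dean mentions: the plan commits to a band the estimates have to fit within, not to numbers set in stone.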
And we put some contingencies in there. You know, we didn't let them lowball or highball us, because the estimates had to fit into the different Logics as to how much money might be available, and stuff like that. And we didn't say that these numbers were set in [00:56:00] stone, that they were absolute.

Yes. Yeah. And how did you think about places where there's just deep uncertainty — in terms of a physics problem where you would actually need some kind of discovery in order to get the thing to work? Because it seems like there could be a situation where, you know, you could make that discovery next year, or it could take you 10 years to figure it out.

Well, if you look at, say, the Logic III reference option, on page 12 of the blue-colored volume, you will see that there are a variety of paths. The tokamak was the lead path, and we laid out a reference plan for it to get there by a certain date. But underneath that, there's a path for alternate concepts, and there were decision points. And there's even one [00:57:00] at the bottom that says "other" — things that were in very early stages of proof of principle, but we knew they might come to fruition. We laid out a timeframe, hoping that we would fund those so that they could be evaluated. And if those things came to fruition, then they would transition to a next step. So that was all taken into account, as to the decision points at which some of these things might happen. And of course, if something really radical were to come along, along one of these other paths — it's listed; you can see it, if you have it in front of you.
But under "other," you'll see a decision point in 1985, where we were going to try to bring some of those things to a decision. [00:58:00] If it looked like a positive one, we would proceed to what we called a prototype engineering power reactor. And so it would take the place of the one up above, called the tokamak EPR, that would have already been under construction if we kept following the tokamak path. But if this other one came along, it would start on its own track to compete — in 1985 it would pick up on its own track, and then it would come in later. And at that point, if that became the favorite path — or maybe there'd even be three paths — you know, we didn't say there could only be one winner. So you could eventually wind up with several: the earliest ones might come online around the year 2000, and some of these other ones maybe in 2005 or 2006, if they were better, and they'd be [00:59:00] options for the utilities if they were better.

Got it. Yeah, this is so cool. One of the really big takeaways that keeps coming through is the consistency of management — and not so much the plan itself, but the existence of a plan. And I think that's what you see not happening now. And I guess, pulling us to today: do you have a sense of which things happening in fusion now you think are most promising?

Well, you know, I don't want to get out on a limb to pick winners and losers, because Fusion Power Associates really is a home for all of these people, and I encourage them all. There are a few people out there that we will not let into Fusion Power Associates, because their claims are almost crazy and I wouldn't want to be associated with them. But they are few and far between. [01:00:00] Fortunately, most of the alternates that are out there — these little companies — have been formed by good fusion
people who have fallen on bad times, because the government started putting all its money into tokamaks and stopped funding their alternate ideas. So these people branched out and got support on their own. And I know some of these people; they're good people, and their ideas deserve to be pursued. But the truth is that most all of these are at what we used to call the proof-of-principle stage on their physics. They are not fully-thought-through power plants, and their physics is not fully developed — or at least not far enough along to know how probable their success is. They should be pursued. There was room in the program for these, because improvements always come along in any technology — the first thing that comes out is not going to be the best [01:01:00] thing 20 years after it. So I encourage all these things, if they're credible people.

And right now there are a couple of things in the tokamak area. The tokamak mainline is the conventional tokamak, represented by ITER, but there are variations. There's one — Commonwealth Fusion Systems, spinning out of MIT — that's almost the exact same concept as the mainline tokamak, except they're using new high-field superconductors, which make the machine smaller, and which also allow them to disassemble and repair it faster than the conventional tokamaks, because the magnets come apart in a different way. And the exhaust system they've designed is more efficient, so that may help with some of the materials problems that the conventional tokamak has. So it does look like a much-improved tokamak, and they're getting money and they're trying. You know, [01:02:00] they've got a facility that they've committed to in Massachusetts, and they're trying to build, step by step, a physics demonstration followed by an electricity generator. And so I have great hopes for them, if they can get money — they're privately funded.
They're not getting hardly any government money at all now. I think the government's helping them a little bit with some support work in the labs, but basically it's a private-sector venture, and I think it's one of the most promising.

And there's another variation of the tokamak called the spherical tokamak. The British are going gangbusters on that. They've got one in operation; they've got a company that's also built one; and they've got a site for building a next-step one — a site where they hope to build the actual electricity generator. So that variation of the tokamak is also looking very promising, and the British are way out in front on it — although [01:03:00] that idea first came from Princeton. Princeton actually had built one of those, and has another one coming into operation in a couple of years that would support that line. So there are a couple of variations along the tokamak line that are looking very good.

All the other things you hear about are at a somewhat earlier stage of development. They're all doing good work. TAE — Tri Alpha Energy, in California — is probably the most radical of them all, but they are the farthest along of these alternates, and they've had success along the way. They've built two or three generations of machine, and they're trying to get money for a really major step that would demonstrate most everything they want to demonstrate before going on to a real power-producing machine. So, you know, I have hope for them too. There's another company in Canada called General Fusion that perhaps is a little bit farther behind, but they're working with the British, [01:04:00] too. So that's a promising area, and I have hope that it will evolve.
This actually made me think of a question, which is — as you alluded to, all the fusion development now is being done by these separate private companies, which stands in contrast to the fusion plan, where implicitly everything was at least managed by a central management team. What do you think about those two different approaches toward getting to a technology: the let-a-thousand-flowers-bloom approach of private companies, versus a much broader program?

Well, I think in the last maybe five years or so, times have changed in that regard. You know, in the seventies, and up until very recently, it was [01:05:00] only governments that seemed to be able to afford to do this, because of the timescale and the cost. So if fusion was to come to pass, the government had to step up, or the international governments had to step up and work together. It seemed like the only way to get there, because of the cost. Now it seems that things have come along far enough, especially in the tokamak area, that some private companies are coming up with what they think are ways to fund what they need to demonstrate, because their ideas, at the moment at least, rely on relatively inexpensive facilities. Now, they are going to run up against a funding problem if they're successful in the near term. You know, they're getting hundreds of millions of [01:06:00] dollars, some of them, from private investors, and they're building some things, and hopefully they'll be successful — but these will not be power plants. And so they will have to be so successful that they will be able to get much, much larger amounts of money. They may have to be bought out by a Westinghouse or something in order to become real power plant manufacturers.
These are not industries yet. Even though they have what they call an industry association, they are small companies — they may be big by some companies' standards, but they are not really money-making companies and they don't have their own money. So they have to continue to get money from investors. And even if getting a hundred million or two hundred million dollars from some billionaire or venture capital company is doable these days, getting a billion for the next step is a much different [01:07:00] problem, because there isn't going to be a real fusion demonstration plant built for less than a couple of billion dollars. And private money doesn't come that easily at that level, unless the thing that's being built is going to make money back fast.

Stephen Dean, thanks for being part of Idea Machines.
Eli Dourado on how the sausage of technology policy is made, the relationship between total factor productivity and technological progress, airships, and more. Eli is an economist, regulatory hacker, and senior research fellow at the Center for Growth and Opportunity at Utah State University. In the past, he was the head of global policy at Boom Supersonic, where he navigated the thicket of regulations on supersonic flight. Before that, he directed the technology policy program at the Mercatus Center at George Mason University. Eli's Website Eli on Twitter Transcript [00:00:00] In this conversation, Eli Dourado and I talk about how the sausage of technology policy is made, the relationship between total factor productivity and technological progress, airships, and more. Eli is an economist, regulatory hacker, and senior research fellow at the Center for Growth and Opportunity at Utah State University. In the past, he was the head of global policy at Boom Supersonic, [00:01:00] where he navigated the thicket of regulations on supersonic flight. Before that, he directed the technology policy program at the Mercatus Center at George Mason University. I wanted to talk to Eli because it feels like there's a gap between the people who understand how technology works and the people who understand how the government works, and Eli is one of those rare folks who understands both. So without further ado, my conversation with Eli Dourado.

To jump directly into it: when you're on a policy team, what do you actually do?

Well, that depends on which policy team you're on. So, in my career — do you mean in sort of the public policy, research center, think tank kind of space, or in a company? Because I've done both.

Oh, I didn't even realize those are different things. So, I guess, let's start with [00:02:00] Boom. You're on a policy team at a technology company, and —
Yeah. So when I started at Boom, we had a problem, which was that we needed to know what landing-and-takeoff noise standard we could design to — how loud the airplane could be, and how quiet it had to be — and there's a big trade-off on aircraft performance depending on that. When I joined up with Boom, the FAA had what's called a policy statement — which is, you know, some degree of binding, but not really — that they had published back in 2008, saying: we don't have standards for supersonic airplanes, but when we do create them, during the subsonic portion of flight we anticipate the subsonic standards will apply. [00:03:00] And landing and takeoff, which is the big thing we were concerned about, is all subsonic. So the FAA's going-in position was that the subsonic standards applied to Boom. I joined up in early 2017, and my job was basically: let's figure out a way for that not to be the case. It was basically: look at the whole space of actors and try to figure out a way for that not to be true. And that's what I did. I started talking with Congress, with the FAA; I started figuring out what levers we could push, what angles we could work with, to make sure we got to a different place — a different answer in the end.

So it's basically this completely bespoke process of [00:04:00] even trying to figure out what the constraints you're under are.

Exactly. Right. So there were a bunch of different aspects of that question, right?
There is statute — congressional laws that had a bearing on the answer to that question, going back to the 1970s and before. There was the FAA policy statement. There was, of course, the FAA team, which you had to develop relationships with and work with. You have the industry association that we were a member of, which had different companies: in addition to Boom, there were a bunch of other companies — Aerion, which is no longer operating; Gulfstream, which no longer has a supersonic program (actually, they never quite admitted to having one until it was announced as dead); GE and Rolls-Royce. So you had all these companies coming together, [00:05:00] under the watchful eye of Boeing, of course, also. And the industry association had to have a position on things. And then you had the international aspect of it: there's a UN agency called ICAO that coordinates aviation standards among all the different countries, and you had the European regulators, who did not like this idea that there were American startups doing supersonics — because the European companies weren't going to do it. So they wanted to squash everything, and they were like, no, the subsonic standards totally apply. So that's the environment I came into, and I was like, okay, I've got to build a team, figure out an approach here, and try to make it not be the case that the subsonic standards apply. So, you know, basically we tried a bunch of things at first.
We tried to get our industry association all geared up for, okay, we've got to fight this — and they didn't want to do that. [00:06:00] The other companies didn't want to do that. We tried a bunch of different angles. What we ended up doing: we got Congress excited about it, and there was a draft bill with some very forward-leaning supersonic language that we worked with Congress on. It never passed in exactly that form, but it passed later in the 2018 FAA reauthorization. And then the thing that actually ended up working was an idea I had in late 2017, which was: well, the subsonic standard changes at the end of this year — the end of 2017. So I was like, let's apply for type certification this year. We were nowhere close to an airplane, right? And I was like, well, screw it, we're going to apply in 2017 — and I had to get the execs to sign off on that. But we did. [00:07:00] By the end of December 2017, we applied. I of course talked to my FAA colleagues and told them, hey, we're going to apply, just so you know. They're like, well, that raises a whole bunch of questions. And that got them working down this path, where they were like, well, under Part 36 of the FAA rules, you only have five years to keep that noise standard if you apply today, and you're probably not going to be done in five years. And I was like, that's true, we're probably not going to be done in five years — but we think Part 36 doesn't apply to us at all, the way it's written.
And then they went back and looked at it, and they were like, oh — Part 36 doesn't apply to them. They're right. You know, Eli's the first person in the history of supersonics to read Part 36 very closely. So then they went back and talked to their lawyers, and I think they came up with a new position, a new legal interpretation [00:08:00] — basically a memo that was published saying, okay, the subsonic standards don't apply, and we don't have standards; we can start making some standards, and if we don't have one at any time for any particular applicant, we can make one for that applicant. It's called a rule of particular applicability. Once we got that, in early 2018, that kind of solved the problem — at least the domestic part. It didn't solve the international part, from Europe and so on. So, I mean, if you think about what you do on a policy team: you figure out how to solve the problem you were hired to fix, and you try things until something works. That's part of the answer.

Yeah. I really appreciate you going into that level of detail, because the affordances of these things seem incredibly opaque. And just [00:09:00] for context, the subsonic standards are the standards that set a very low noise bar — they're very stringent.

I mean, the modern standards are pretty stringent. It used to be that you basically couldn't stand on a runway and have a conversation while a plane's taking off. These days — I mean, it's gotten very, very impressive.
But, you know, the modern planes have gotten that way because they have high bypass ratios — the engines have big fans that move a lot of air around the engine core, not through it. And that's just not workable when you're trying to push that big fan through the air at Mach 2.2, which is what we were doing — now it's 1.7 at Boom. But anyway, that just doesn't work as a solution, so that's why it had to be different.

Right. And did you say it's Part 36? [00:10:00]

Title 14 of the Code of Federal Regulations, Part 36. Yes. That's the part that specifies all the takeoff and landing noise certification rules for basically all kinds of aircraft.

Got it. And there's particular wording in that part that did not apply —

That didn't apply, as it was written, in 2018. I think they've now changed some of the definitions. They went through a rulemaking to cover some supersonic planes — although, interestingly, still not Boom's plane. It covers planes basically between Mach 1.4 and Mach 1.8 and below a certain weight limit. So basically biz jets — small, low-Mach business jets would be covered under the new rule. But as part of that — [00:11:00] I forget the details, but they might have changed the definitions so that for Boom, at least, the five-year time limit and stuff like that might apply.

Got it. Okay. So at a company, the policy team is really going after a specific problem the company has, and figuring out a way to address it.

I mean, that was how that one was.
I think there are different companies, right? There are companies that are playing more defense than offense. I'm thinking of a company like Facebook, where, like, the First Amendment applies — they have all the legal permission to operate as much as they need to, and they're mostly just putting out fires: people wanting to regulate them as a utility and things like that. So it's more of a defensive mode in those companies, I think. But yeah, it's going to [00:12:00] vary from company to company, depending on what it is you need to do. And you just have to be aware of all the different tools: you can go to Congress and get them to do something; you might be able to get the executive branch to do an executive order; or you might be able to get a new rulemaking or new guidance. There's a whole host of different tools in the toolkit, and you've got to be able to think about the different ways you can use them to solve your problems.

And actually — this is perhaps getting a little ahead of ourselves, but speaking of those tools — what, in your mind, is the theory of change behind writing policy papers? Among many people, you see policy papers being written, and then policy happens, but there's this big question-mark black box in between those two things.

I think there are definitely different theories. So before I started at Boom, when I was at the Mercatus Center, Sam Hammond and I [00:13:00] wrote a paper on supersonics, and that one, I think, actually was really influential.
We published it a month before the 2016 election, when we thought Donald Trump was going to lose, and we titled it, sort of as a joke, "Make America Boom Again." The slogan was perfect. And then, lo and behold, Trump gets elected, and when his administration got constituted in January 2017, that paper circulated, and people said, okay, this makes sense, we need to be very forward-leaning on supersonics. Now, we still haven't changed the law that we said was most important in that paper: we said we need to repeal the overland ban and replace it with some kind of permissive noise standard that lets the industry get going on overland flight. But I think it was influential in the sense that it was reference material [00:14:00] that a lot of different policymakers could look at quickly and say, okay, there are some good ideas behind this, and we need to support this broadly. It came from a reputable outlet, and it had all the information they needed to move the idea forward independently.

Got it. So really it's a lot of tossing things out there and hoping they get to a person who can make a decision?

Well, ideally you're not just hoping. Ideally you're reaching out to those people, establishing relationships with the right people, and getting your ideas taken seriously by everybody that matters in your field.

And, again coming from someone who's completely naive to this world: how do you figure out who the right person is?

Well, I think it depends on what you need to do.
If you need to repeal an act of Congress, you've got to go to Congress. That's one example. But a lot of times the right person is not just one person. There's also a move where you're really trying to go after elites in society, however you define that term. If you can get a consensus among elites that supersonic flight should be allowed over land, or that the government should invest deeply in geothermal energy, or that we need a program for ornithopters, whatever it is, [00:16:00] it's pretty likely to happen. Elites still control the stuff that nobody else cares about; if everyone else cares about an issue, then they'll get their way.

One pushback to that, which I actually wanted to ask you about: there's this view that in a lot of cases regulation encodes a trade-off into a calcified bureaucracy and then seals it off. An example would be nuclear regulation: you could argue that, rather than being about health and wellbeing or the environment, it actually encodes a trade-off where, in order to absolutely prevent any nuclear proliferation at all, we basically make it so that you can't build new nuclear things. What do you think about that view of technology [00:17:00] regulation?
I think nuclear would be one of the hardest regulations to change. You're taking an entire agency, the Nuclear Regulatory Commission, and saying we have to completely change the way it operates. If I were at one of these fission startups, my job as the policy lead would be to completely change the way this entire agency operates. That seems really hard; that's really challenging, and frankly I'm not optimistic about their success. So on the more research-y, nonprofit side of policy that I do now, a lot of what I'm looking for is areas where it isn't hopeless: where you can work, and where you only need a small change that makes a big difference. You're trying to find those [00:18:00] leveraged policy issues. That's how I think about it, and it's issue selection, which you have the luxury of in the nonprofit world in a way you don't necessarily in the for-profit world. I think that's really important, and it's what separates good policy entrepreneurs from bad ones: that awareness of issue selection, of small changes that make a big difference.

Let's dig into that. How do you look for that leverage? What signals to you that you could actually make a big difference by changing a small thing?

Supersonics is a great example; that's one I chose to work on for several years. Imagine you could get rid of the overland ban.
It's one line in the Code of Federal Regulations that bans supersonic flight over land. Repeal it and you [00:19:00] would unlock massive amounts of aerospace engineering development in a completely new regime of flight that no one else is working in. You'd get rapid movement down the learning curve; you'd get engines developed specifically for that use case; you'd get variable geometry and everything else developed for airliners; and you'd make a big difference in the future of the industry and in the state of the art for flight. Even if you couldn't change it internationally, changing it just in the U.S. would matter: the U.S. is big enough that LA to New York and the other domestic routes, plus all the transoceanic markets that Boom is going for now, would combine to something like double the market size for those planes. You'd get a lot more investment, so it would be [00:20:00] a huge improvement. That's a highly leveraged one.

One I'm working on a lot more lately, as I'm sure you've seen, is geothermal. There I think there's no real policy blocker, but the thing I've been focused on is permitting. There's a huge overlap between the prime geothermal locations and federal lands, so in a lot of cases you need the federal government to give you a lease, and you need their approval to drill the well.
That approval brings in environmental review and so on, and conveniently, the oil and gas industry has gotten itself exempted from a lot of those environmental review requirements. My argument is that geothermal wells are basically the same as oil and gas wells, so if oil and gas wells are exempted, geothermal wells should be too. That would speed up the approval time from something like two years to something like two weeks. [00:21:00] Just that speedup on federal lands, without changing anything on private or state lands, could bring forward the timetable for the geothermal industry as a whole by a few years. One small change. And if you think about the social value of that, it's many billions of dollars. So if I spend a year of my time working on that and get it changed, my ROI for society for that one year is many billions of dollars, which is a pretty good way to spend my time.

There are other things too. I'm really interested in enhanced weathering: using olivine to capture CO2. I think it's a neglected thing; policymakers just don't know about it. If I could [00:22:00] educate them and get buy-in for some sort of pilot program, or whatever the right answer is (and I'm not sure what it is exactly), then potentially you capture many gigatons of CO2 for ten to twenty dollars a ton.
That's pretty cheap, and we'd solve a lot of other climate problems. Maybe the cost of dealing with climate change would go down by something like an order of magnitude. So again, pretty highly leveraged. Those are some examples of why I've chosen to work on certain areas, and I'm not saying they're the only ones by any means; what makes a good policy entrepreneur is figuring out what those are.

To push on that a little more: is there something people could do to [00:23:00] find more of those leverage points? I guess there are maybe two approaches. One would be to take an area of interest and comb through the laws, looking for point changes that would unlock things. Or is there a way to look for potential point changes agnostic of the specific area?

That's a great question. I've been trying to talk to people about how to systematize this, which I think is the question you're asking, and I've been thinking about what my own system is, such as it exists. I think the right answer is to come at it from the perspective of the entrepreneur. If you think about it from the perspective of a company that is trying to do some particular thing, or a company you wish existed, what [00:24:00] would they run into? What is the actual policy obstacle that they face? I think that's the most constructive way to do it.
To give you an example of a different approach: a bunch of our friends are working on the Endless Frontier Act, which is a complete rethinking of the entire science and technology funding system. That's a different approach, and we probably need some people working in that modality as well. But for me at least, it's more effective to work bottom-up: here's this thing I want to exist in the world, here's the specific narrow problem it would face if someone tried to do it, let me work on that as much as possible.

I think another thing that's really important is that the policy analyst should try to learn as much [00:25:00] as possible at a technical level about the technology and how it works: the physics of it, the chemistry of it, whatever it is. A lot of policy folks don't. They say, I'm going to deal with the legal stuff, and I'll go to the engineers if I have a question, but I don't really want to learn it. I think that's not helpful; you want to get into the weeds as much as possible. At Boom, I sat people down all the time and said, I need you to explain this to me because I don't understand it. I had tons of conversations with the engineering team, and with people who weren't on the engineering team but understood things better than me, and over time it got to the point where I understood the airplane design trade-offs pretty well.
And then when I'm talking to a congressional staffer or [00:26:00] someone at a federal agency, I can explain it to them in a way they can understand. So: think from the bottom up, put yourself in the position of the entrepreneur working on the problem, and don't be afraid to dig into the technical weeds. Those are the things I would encourage other people working in policy to experiment with, and I think they would make them more successful.

On that note, another thing I wanted to ask: do you have any opinions on how to get more technical people into government and policy, and, vice versa, how to help government and policy people actually understand technical constraints? I find that very often, and I had this instinct too, technical people say, I don't understand policy, so I'm just going to avoid [00:27:00] anything that touches government. That seems suboptimal.

Yeah, it's something I think about a lot, and that we're thinking about a lot at the CGO, actually: when we train up young policy analysts, how do we get them to engage with the technical material? We're exploring ideas for how to do this. Could we bring in young policy analysts and mentor them, teach them how to self-teach some of the technical stuff, to work through it? Or conversely, as you say, we could take some technical people and teach them the ropes of policy, if that's what they want to do.
And give them that toolkit as well. I think the overlap is really powerful: if you can get someone who's interested in playing in both spaces, that is really effective. [00:28:00] The question is, who are the people who want to do it? It's not really a career track, exactly. But if we found a bunch of people who wanted to be in that Venn diagram overlap, we would definitely be interested in training them up.

One thought there is actually what we're doing right now: making the policy process more legible. Silicon Valley has done a very good job of making people see "this is how you change the world: by starting a tech company," whether that's true or not. But it's very unclear and fuzzy how one changes the world by helping with policy, so just making that legible seems very important.

I think the other thing is that in Silicon Valley, investors and entrepreneurs are too afraid of [00:29:00] what they would call policy risk. It varies case by case how much of a risk it actually is. My view when I was at Boom was: look, there's no way the FAA is not going to let us certify the plane. They will run us through the wringer, it'll be expensive, we'll have to do all kinds of tests, but we are not going to get to a point where we have a plane ready to fly and it's not certifiable because of something like noise.
So there is not that much policy risk in a lot of things. I wouldn't feel the same way about, say, a nuclear fission startup. But I wish investors were a little more savvy about what is a smart policy risk to take, [00:30:00] and about which policy risks can be worked and which can't.

Again, I think it's one of those things where we need more ways for people to actually understand, to grok those things. I guess the last thing on the regulation front: are there historical examples of a very broad deregulation that enabled technology? It feels like regulation is very much a ratchet where we keep regulating more and more things, and every once in a while you get a little loosening, like in the FAA case. But is there ever a situation where there's a really big opening up?

There are a few cases, and aviation is a perfect example, actually. I don't know if you've read the book Hard Landing, but it's excellent, and I recommend it if you're interested in this at [00:31:00] all. It's basically a history of the aviation industry up through what they call deregulation, which happened in the late 1970s. Up until that point, starting from I don't remember when, there was a thing called the Civil Aeronautics Board that basically regulated routes and fares. If you were an airline, you got to fly the routes the government told you you could fly, and you had to charge the fares they told you you could charge. You couldn't give discounts or anything like that.
You had to charge exactly that fare. So what did you have to compete on? Not very much: you competed on in-flight service and things like that. Before the deregulatory era you had very lavish in-flight meals, super expensive tickets, and not a lot of [00:32:00] convenient route choice. Then in the late 1970s, under Jimmy Carter, with Ted Kennedy as one of the big proponents, they got rid of the Civil Aeronautics Board. They got rid of an agency. That deregulated the routes, the city pairs, the times, and the fares airlines could charge. So now you can buy a ticket to Orlando or Charlotte or wherever for 200 bucks or less, and that's all thanks to deregulation. It's not exactly an enabling of technology, which was your initial question, but it allowed the industry to move forward and become a whole lot more efficient.

And one could imagine something similar for technology regulations?

Yeah. Getting rid of an entire agency is pretty rare. [00:33:00] But a lot of people think regulation is a one-way ratchet, and that's not totally true: there have been times in the past where we got rid of a whole lot of regulation.

And related to that: do you have any good arguments against the position that we need regulation to keep us safe, besides "there is such a thing as too much safety"?
I wish there were a more satisfying answer than "sometimes we have to take risks."

Right. From an economics perspective, there's not really a good argument for regulating safety, because you would think the customer could make their own choice about how much risk they want in their life. So it's a little awkward from that point of view. But we're never going to get a situation where the government [00:34:00] doesn't regulate safety in a lot of things; the reality is that the public wants the government to regulate safety, and so it will. Still, there is a difference between kinds of safety regulation. One example I think about a lot is the way planes are regulated versus the way cars are regulated. With planes, the FAA type-certifies every registered model of plane that is produced, and each aircraft has to get an airworthiness certificate when you register it. That's an example of what's called pre-market approval: before you go on the market, you have to be certified. Drugs work the same way. With cars it's a little different. You have car safety standards that NHTSA promulgates and enforces, but the way that is [00:35:00] enforced is that the car companies know they have to design to these standards, and NHTSA monitors the marketplace: they sample cars and test them, or if they observe a lot of accidents, they can go back to the car company.
They can tell the company: you have to do a recall on this car, and fix all these things we found that aren't up to snuff. That's an example of post-market surveillance. Both are safety regulation, but they have huge structural differences in how they operate, in terms of how much of a barrier there is to getting to market. Pre-market approval means you're front-loading all the costs: you're delaying, you're making it hard for your investors to recoup any returns before you even see whether the whole thing is going to work, and so on. Whereas in the post-market surveillance model, you're incentivizing good behavior but [00:36:00] not verifying it upfront, which is costly; you let it play out in the marketplace for a while, and if you detect a certain degree of unsafety, you make the company fix it. I think that structural difference is really important, and I would like to see more of the post-market surveillance model. You could imagine it even for drugs. Instead of upfront clinical trials, we could say: okay, we see that this makes sense as a potential treatment for this condition, and you have to test it on people one way or the other, whether that's clinical trial subjects or patients who have the condition. We'll allow you to use it, but we're going to monitor carefully what the side effects are in those early applications of the drug, and if it turns out to be unsafe, we're going to pull it. That would be a different way of doing it. You can imagine we could do that.
But that's [00:37:00] just not where we are, and I think it's hard for people who have bought into the current system to think about how we would get there, or why we would ever do that. It does seem much more tractable to say: okay, we're still going to regulate, but we're going to do it in a different way.

I really like that, and I hadn't thought about it very much. I'm going to completely change gears here and talk about GDP and total factor productivity. Your stated goal is for GDP per capita to reach $200,000 by 2050. For the listener's context, I looked up some numbers: current global GDP per capita is around $11,000, so we're talking about more than an order of magnitude increase. The highest right now is Monaco at around $190,000.

So I'm thinking [00:38:00] specifically: I want to get to $200,000, and I want to get everybody there eventually, but by 2050 I think we could get the U.S. there. The U.S. is at $63,000 right now, so we've got to roughly triple it.

And the interesting thing, the thing I'd like you to justify, is why high GDP is what we should be shooting for. The U.S. looks lower than places like Ireland and Switzerland, but I would argue that, in terms of what's actually going on there, I would rather be in the U.S. than in Ireland or Switzerland, even though they have higher GDP per capita.

Ireland is a special case: they have a bunch of favorable tax laws, so a lot of profits get booked there. I think that's what's going on. So I would say GDP is not a perfect metric.
[00:39:00] But I think the degree to which it's imperfect is often overstated; it's pretty good. Even so, I like TFP better. I use GDP per capita because people are more familiar with it, but what I actually think in terms of is TFP. Total factor productivity is just: how much output can you get from a given amount of inputs? If my society has a certain number of plumbers, a certain amount of lumber, a certain amount of every other input, what can I make out of them? What is the total value of all the goods I can produce from all the resources going in? You want that number to be as high as possible: you want to produce as much as possible given your inputs. That's the idea of TFP. [00:40:00]

And just to dig into that: how do you measure inputs? Outputs are just, basically, everybody's receipts, right?

There's a very simple model that people use, called the Solow model. The idea is that you have GDP, which is just a number, a dollar value (real GDP is what you're concerned with), and then you have how much labor you have and how much capital you have. You take logs of these, run a linear regression, and the residual term in that regression is your number for (log) total factor productivity. That's how you do it, as a very rough estimate. Sometimes people add in things like human capital levels.
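The log-and-regress procedure described here can be sketched in a few lines. This is a minimal illustration, not a real TFP estimate; all the data series are made up.

```python
# A minimal sketch of the Solow-residual calculation described in the
# conversation: regress log output on log labor and log capital, and
# take the residual as (log) total factor productivity.
# All series below are hypothetical, purely for illustration.
import numpy as np

Y = np.array([100.0, 104.0, 109.0, 115.0, 122.0])  # real GDP
L = np.array([50.0, 50.5, 51.2, 51.5, 52.4])       # labor input
K = np.array([200.0, 208.0, 213.0, 224.0, 230.0])  # capital stock

# Fit log Y = a + b*log L + c*log K by ordinary least squares
X = np.column_stack([np.ones(len(Y)), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)

# The residual is the part of output growth not explained by
# measured labor and capital: the (log) TFP series
log_tfp = np.log(Y) - X @ coef
print(np.round(log_tfp, 4))
```

A rising residual series over time would be read as TFP growth: more output from the same measured inputs.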
For example, if we brought in a bunch of less-educated [00:41:00] immigrants, measured naively, labor productivity would go down; but if you include a human capital term in the regression to reflect education levels, then ideally it wouldn't. So that's how you do it: you take labor, capital, and output, figure out the relationship between them, and see whether you're getting more output than you used to from a given amount of labor and capital. That's not true in every country; there are actually countries where output per input has gone down over time. Brazil, where I was born, peaked in total factor productivity in the year of my birth, 1980, so it takes about 50 percent more resources today to produce the same amount of output, in real terms. And Venezuela is a basket case; they produce way less. So I think TFP is a [00:42:00] good concept for thinking about two things bound up together: one is technology, and the other is the quality of institutions. Those are the two things where, if you improve them, your output from a given basket of inputs is going to be higher.

That's compelling. I buy into the school of thought that institutions are a kind of social technology. And to prime my intuition, and other people's, about TFP: are there examples in history of technologies that very clearly increased it? Where you can see: thing invented, TFP shoots up?

The guy who's written the most about this is Robert Gordon.
What he would actually argue is: thing invented, then a few decades pass while the economy integrates it and figures it out, then a big increase in TFP and GDP. He had a paper, and eventually a book, on the five great inventions: things like the internal combustion engine, sanitation and plumbing, chemistry and pharmaceuticals, electricity; the fifth escapes me right now. He basically argued that we had these five great inventions in the late 1800s, it took a few decades for them to get rolling, and then from 1920 to 1970 you had this big spasm of growth, with TFP growing two percent a year. And he would argue that today that's unrepeatable, because we don't have inventions like those. All we really have, according to him, is progress in IT. So we have one great invention, [00:44:00] and it really still hasn't shown up in the productivity statistics. It may still be coming, but he would argue we've eaten all the low-hanging fruit, there are no more great inventions to be had, and we just have to settle for something like half a percent a year of TFP growth from here on out.

As I understand it, you disagree, and I certainly share your biases. You recently posted a great article about possible technologies that could come down the pike. Through the framing of TFP: of all the things you're excited about, which do you think would have the biggest impact, and what is the mechanism by which that would happen?
I mean, I think the thing that's closest to us, where we are now, is probably big energy [00:45:00] price reductions. So I'm really bullish on geothermal. I think ten years from now it's totally possible that we would have a geothermal boom the way we had a shale boom in energy in the last ten years, and then we'll be talking about, oh man, energy is getting so cheap. Energy is something that infuses every production process in the entire country, so it's difficult to explain exactly how it moves TFP; it just moves everything. If we get energy costs down by half or something like that, then it makes a lot of things twice as productive, or maybe not exactly twice, but a lot more productive. So that's one example. But then other things, like longevity. Let's say we fix lifespan extension and compress morbidity: we make it so that people [00:46:00] don't get sick as much. Well, that manifests as lower real demand for healthcare services. You don't even go see a doctor until you're 90, because you're still healthy. Do healthcare services show up in GDP? They do. But here's where you have to distinguish between real and nominal GDP. In real GDP, with proper accounting, we would get the same or better levels of health with fewer dollars spent on it. So we'd be more productive in that sense. We might spend less on health services, but we would also employ fewer people in those sectors. Right, right.
And the smart people who work in the healthcare sector right now would all get to do other things. They would become researchers or [00:47:00] other kinds of technicians, or whatever, and they would produce things in their new roles. So if all of a sudden we did not need as many x-ray techs, and all those x-ray techs are out doing new things, that's like getting the x-ray techs for free. Another way of saying it is that the same output we used to get, we're now getting for free, and we're taking those same people and getting them to produce even more on top of it. So when you think about real GDP, jobs are costs, right? You don't want jobs; you actually want to reduce as much as possible the need to spend money on things. And that's how you actually increase productivity and, ultimately, real living standards and real GDP. And do we actually measure real GDP? Is that possible, or is it sort of a theoretical concept? No, [00:48:00] again, it's kind of like TFP: we infer it. We estimate nominal GDP based on how people are spending their money and how quickly they're spending it and so on. But even that, it's not like we're counting every receipt in the economy and tabulating them; it's still an estimate. So we're estimating nominal GDP, and we're also estimating the price level changes. Then you adjust the nominal GDP estimate by the price level change, and that's your real GDP number. Got it. Okay, cool. I really appreciate this, because I see all these terms being thrown around and I'm like, what is actually the difference here?
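The estimation chain he walks through, a nominal GDP estimate adjusted by an estimated price level, can be written out directly. A minimal sketch with made-up numbers:

```python
def real_gdp(nominal_gdp, price_index, base_index=100.0):
    """Deflate a nominal GDP estimate by the estimated price level
    to express it in base-year dollars."""
    return nominal_gdp * base_index / price_index

# Nominal GDP doubles, but prices rose 60%: real output grew only 25%.
year1 = real_gdp(nominal_gdp=10_000, price_index=100.0)  # 10000.0
year2 = real_gdp(nominal_gdp=20_000, price_index=160.0)  # 12500.0
print(year2 / year1)  # 1.25
```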
Like, what's going on. And last question on TFP: can you imagine something that would be really amazing for the world but would not show up in TFP? Just as a thought experiment. I think stuff that improves the quality of your leisure [00:49:00] time is unpaid, right? Or that you almost get for free. So let's say somebody designs an open-source video game or something like that, and everybody loves it and gets super high quality leisure time out of it. There's no money changing hands, but utility is going up. So you would think that would improve living standards without showing up in measured GDP at all. That's the kind of stuff you've got to have in the back of your mind, the kind of thing that could throw off your analysis. Okay. And this is actually what some people claim about the value of the internet: the internet has increased welfare to some extent. It's like, okay, yes, to some extent, but it's not like a whole 1% of growth a year; it doesn't account for the reduction in TFP that we've seen. Yeah. [00:50:00] Yeah, that makes a lot of sense. Changing gears again: make the case for airships. Airships, yeah. So there are basically two modes you can move cargo on today. You can put it on a 747 freighter, let's say, and get it to the destination the next day, and it costs a lot of money. Or you can put it on a container ship, and it's basically free, but it takes a few weeks or even months to get to your destination. And, you know, what if there was something in between?
What if there was something that would take, say, four or five days anywhere in the world, but at a fifth of the cost of an airplane? That's a sweet spot for cargo anywhere in the world. And with airships, there's an interesting thing: they actually get more efficient the bigger they get. [00:51:00] And this is, I think, the mistake everybody has made when designing airships. They say, okay, we're going to design this cargo airship to take 10 tons to remote places. Well, no, you should be designing it to carry 500 tons, because there's a square-cube rule. If you increase the length by a certain factor, the volume increases by that factor to the third power, and the cross-sectional area increases by that factor squared. And so your lift-to-drag ratio is getting better, because your lift is associated with the volume and your drag is associated with the cross-sectional area. So you're getting more efficient the bigger you get. And so I think if you designed an airship to carry about 500 tons at a time, that's four 747 loads [00:52:00] at a time, and you target goods with a value-to-weight ratio in the middle of the spectrum (not computers or really high-value items or electronics, but things like machinery or cars or parts for factories and stuff like that), you could make a nice little business and provide a completely new mode of cargo transport. I think that would also be revolutionary for people in landlocked countries.
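The square-cube argument is easy to check: scaling linear dimensions by a factor s multiplies volume (and so lift) by s cubed and cross-sectional area (and so drag) by s squared, so this crude lift-to-drag proxy improves linearly with s. A sketch:

```python
def scale_airship(length_factor):
    """Scale an airship's linear dimensions by `length_factor` and return
    (volume_factor, area_factor, lift_to_drag_factor)."""
    volume = length_factor ** 3  # lift scales with enclosed (gas) volume
    area = length_factor ** 2    # drag scales with cross-sectional area
    return volume, area, volume / area

# Doubling the length: 8x the lift, 4x the drag, 2x the lift-to-drag ratio.
print(scale_airship(2.0))  # (8.0, 4.0, 2.0)
```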
You know, I spent, gosh, like a week in Rwanda about ten years ago, just sort of studying the country. And one of the things we noticed was that to access a port in Tanzania, which is 700 miles away or something like that, you have to put the goods on rail, and the rail [00:53:00] gauge changes several times between there and the port. And every time the rail gauge changes, you would have to pay a bribe to somebody just to do their job and move it. So that adds up to a lot of inefficiency. It's really cheap to get your container to the port on the coastline, but to get it the last 700 miles is really expensive. Well, what if you could just get around that by carrying something in an airship? If you designed the airship for this transcontinental, or intercontinental, ocean shipping market, it would also work pretty well for that landlocked market. And you could bring more than just machinery to a country like Rwanda that way. And then I think there's also a high-value remote services market, and this is the one people are going after in sort of a standalone sense, with smaller ships that carry 10 or 20, or maybe even 60, tons. It's like, okay, [00:54:00] yeah, you could serve that market, but even better if you design for the 500-ton model. So anyway, my view is that this is a missing product we should have. It's over hundred-year-old technology, and we have way better materials today than we had in the era of the last airships. Yeah.
Think about the rigid airships of the past: they used aluminum for their internal trusses, and carbon fiber pultrusions would give something like a sixfold strength-to-weight improvement. And let's say you double the safety factors; okay, so your weight still goes down by a factor of three for your whole structure. You could also do it autonomously today. You don't have to have labs and heads and galleys and all that stuff, and you don't have to have bunks. On a manned airship you'd have to have multiple crews, because it's a five-day journey, or at least some of them would be. So do it completely autonomously. [00:55:00] And then another question is: could you use hydrogen as a lifting gas? There's a bunch of different arguments for why maybe you could, but if it's unmanned, even the safety regulator would have to say, well, okay, this might burn up, but there's nobody on board, so maybe it's okay. So anyway, I think there's definitely something really interesting there in terms of new vehicles that would enable a new mode of transportation, at least for cargo. And you've also written that it's less a technology question and more a question of a company that's willing to go all in on logistics. And the way that I see it, the problem is that there's not a super lucrative niche market to go after. I think it could be super lucrative, and the big market is super lucrative, right? Let's say [00:56:00] you can get 5% of the cargo of the container market. Not the bulk cargo; forget the bulk cargo, don't do that.
And don't go for the stuff that's already on air freight (you might get some of that anyway), but just the stuff that's containerized today. If you could get 5% of that, I think that would be 4,000 airships. And if you're the first to market, you have a monopoly on that segment of the market, and you could charge a decent markup. I think in revenue you could make like 150 to 200 billion a year, something like that. And then say you get half of that in profit, operating profit at least. It's not a small market. So the core problem that I see, and it's worth calling out, is that you need to come out of the [00:57:00] gates at a certain scale, which would make it very hard to ramp smoothly. It doesn't work with a small airship: you can't do a half-size airship and expect to be competitive, or a small company even. You have to come out of the gates with a big fleet. Maybe your first five airships target the remote market, where there might be a higher willingness to pay; I think that could be a thing you do. But you want to ramp production and just churn out hundreds of airships a year. That's what you want to do. It's worth calling out that there's this gap here: there could be this amazing new thing, but it just doesn't fit the way that companies start now. Yep. Cool. And so in this last part, I want to do some rapid questions; take as [00:58:00] long or as little time as you want to answer them. Why is your love of vertical farming
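The revenue figure can be sanity-checked with a back-of-envelope model. The fleet size and payload come from the conversation; the trip frequency and freight rate are assumptions I'm supplying for illustration, not figures from the interview:

```python
# Back-of-envelope check on the revenue claim. Only the 4,000-airship
# fleet and 500-ton payload come from the conversation; the rest are
# assumed round numbers for illustration.
fleet = 4_000          # airships serving ~5% of containerized cargo
payload_tons = 500     # four 747 freighter loads per trip
trips_per_year = 60    # assumed: ~5-day journeys plus turnaround time
rate_per_ton = 1_250   # assumed: roughly a fifth of long-haul air freight

revenue = fleet * payload_tons * trips_per_year * rate_per_ton
print(f"${revenue / 1e9:.0f}B / year")  # $150B / year
```

Under these assumed utilization and pricing numbers, the model lands at the low end of the 150 to 200 billion dollar range mentioned above.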
Irrational? I am by no means a farming expert, right? So I see this sort of technology and I'm like, this is awesome, but I know next to nothing about it. So it's not an informed, well-considered love; it's just that I think this would be super cool if we moved into vertical farms. That's about the extent of it. I'd say it's potentially rational, but it's not well grounded. Okay. Why are there so few attempts at world dominance? Oh man, I wrote a blog post on this a long time ago and I don't remember the answer. I don't know. I think it's a puzzle, right? You see these people who become globally famous and super influential, and they just sort of peter out; they become self-satisfied with whatever they've [00:59:00] accomplished. But there are some really talented people out there, and you would expect some of them to apply themselves to this problem. I feel like the power and influence of extremely wealthy, powerful people is shockingly small compared to what I would expect. I dunno, I feel like Jeff Bezos actually has a lot of trouble making the things that he wants to happen in the world happen. And I find that certainly true with Blue Origin. Yeah. Or just anything: you see all of these people who we think of as rich and powerful, and they want things to happen in the world, and those things don't seem to happen very often. And that puzzles me. I'd say it does raise the question of whether there are people who actually are having a massive influence whom we don't know about. Right, [01:00:00] the gray eminence. Yeah.
The person behind the scenes who's really, really influential. Yeah. Sort of within your field, defined broadly or however you want: who do you pay attention to that many people may not be aware of? Oh, thank you. Okay, but in all seriousness, who do I pay attention to? I don't know. I'm blessed to have people who just contact me out of the blue and tell me things. So I have a couple of friends, one of whom I worked with for many years, who still texts me interesting things all the time. Sort of private conversations that could be public conversations if they were more public people, but they just choose to be totally behind the scenes, to be gray eminences, let's say. [01:01:00] That's who I pay attention to a lot of the time. Yeah, that's fair. And I guess finally, what are some (we've talked about some of them) unintuitive blockers for your favorite technologies? Unintuitive blockers. So I've written a lot about NEPA; you may have heard me say a lot about this. This is the National Environmental Policy Act. The theory behind it is: okay, before we decide we're going to build this highway or whatever, we're going to study it and make sure we understand what the environmental impacts are, and if there are negative environmental impacts, we're going to study alternatives as well. And what got me worked up about that was that I was in a very high-level meeting with the FAA, with very senior people.
[01:02:00] And the conversation went to: well, why can't we just change the overland ban? Why can't we do it? And one of the answers, and it's not the complete answer, was: well, we would have to do an environmental review if we were to change the overland ban rule, and we don't have the data to justify it, to even say what the impacts are. What are the environmental impacts of sonic booms on people? And this is why NASA is doing a study: they're developing an airplane, at a cost of many hundreds of millions of dollars, to be a low-boom demonstrator. They're going to fly it over cities and figure out what the human response is, so that we can have that data, so that we can do an environmental impact study. Right. [01:03:00] So, yes. And last year there was a rule change in NEPA, in the implementing regulations, that said that if you don't have data, that's okay; you just have to say you don't have the data in the environmental impact statement. That's supposed to be adequate; NEPA is not a requirement to go and do science projects. So I wonder if that conversation would go differently if we were having it today. But that was the answer at the time: we don't have the data to do this environmental impact analysis.
In this conversation I talk to the amazing Arati Prabhakar about using Solutions R&D to tackle big societal problems, gaps in the innovation ecosystem, DARPA, and more. Arati's career has covered almost every corner of the innovation ecosystem. She's done basically every role at DARPA: she was a program manager, started their Microelectronics Technology Office, and several years later returned to serve as its Director. She was also the director of the National Institute of Standards and Technology and was a venture capitalist at US Venture Partners. Now she's launching Actuate, a non-profit leveraging the ARPA model to go after some of the biggest problems in American society. Links Actuate Website In the Realm of the Barely Feasible - Arati's Article about Actuate and Solutions R&D Arati on Wikipedia Transcript [00:00:00] Welcome to Idea Machines. I'm your host, Ben Reinhardt. This podcast is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see these systems' outputs everywhere, but what's inside the black boxes? With guests, I dig below the surface into crucial but often unspoken questions, to explore themes of how we enable innovations today and how we could do it better tomorrow. In this conversation, I talked to the amazing Arati Prabhakar about using solutions R&D to tackle big societal problems, gaps in the innovation ecosystem, DARPA, and more. Arati's career has covered almost every corner of the innovation ecosystem. She's done almost every job at DARPA, where she was a program manager, started their Microelectronics Technology Office, and several years later returned to serve as its [00:01:00] director. She was also the director of the National Institute of Standards and Technology and a venture capitalist at US Venture Partners.
Now she's launching Actuate, a nonprofit leveraging the ARPA model to go after some of the biggest problems in American society. I hope you enjoy my conversation with Arati Prabhakar. I'd love to start off, and sort of frame this for everybody, with a quote from your article, which everybody should read and which I will link to in the show notes. You say: "Yet we lack a systemic understanding of how to nurture the sort of rich ecosystem we need to confront the societal challenges facing us now. Over 75 years, the federal government has dramatically increased support of research, and universities and national labs have built layers of incentives and deep culture for the research role. Companies have honed their ability to develop products in markets, shifting away from doing their own fundamental research in established industries. American venture capital and entrepreneurship have supercharged the startup pathway for commercialization in some [00:02:00] sectors. But we haven't yet put enough energy into understanding the bigger space where policy, finance, and the market meet to scale component ideas into the kind of deep and wide innovations that can solve big, previously intractable problems in society. These sorts of problems aren't aligned to tangible market opportunities or to the missions of established government R&D organizations today. The philanthropic sector can play a pivotal role by taking the early risk of trying new methods for R&D and developing initial examples that governments and markets can adopt and ramp up. The hypothesis behind Actuate is that solutions R&D can be a starting place for catalyzing the necessary change in the nation's innovation ecosystem." And I think that is Actuate in a nutshell, exactly like that. So can we start with: how do you see solutions R&D as being different from other R&D? And, coupled with that, how is Actuate different from other non-profits?
Yeah, I think [00:03:00] that's one of the important threads in this tapestry that we want to develop. So, solutions R&D. Let's see. I think those of us who live in the world of R&D and innovation are very familiar with basic research. That is about new knowledge, new exploration, but all the incentives, the funding, and the structures are designed to have that end with publishing papers. And then on the other hand, there's the whole machinery that takes a technological advance, or a research advance, and turns it into the changes that we want in society. That could be new products and services, new policies, new practices. And that implementation machinery is the market, companies, policymaking, what individuals choose to do, pilot practices. I think we understand that. And there are places where things just move from basic research over into actual [00:04:00] implementation, but in fact there are a lot of places where that doesn't happen seamlessly. And solutions R&D is this weird thing in the middle that builds on top of a rich foundation of basic research. Its objective is to demonstrate and to prove out completely, radically better ways to solve problems, or to pursue different opportunities, so that they can be implemented at scale. And so it has this hybrid character. On the one hand, it's very directed at specific goals, and in that sense it looks more like product development: marching forward, boom, boom, boom, make things happen, execute, drive to an integrated goal. And on the other hand, it requires a lot of creativity, experimentation, and risk-taking, so it has some of those elements from the research side. So it's this middle [00:05:00] kingdom that I love, because I think it just has enormous leverage.
And a couple of points. Number one, to do it well requires its own types of expertise and practices and culture that are different from either research or implementation. And secondly, I would say that in the current US innovation system, I think it's something of a gap. There are many, many areas where we're not doing it as well as we need to. And then for some of the new problems, which I hope we'll talk about as well, I think it's actually a very interesting lever to boot up the whole system that we're going to need going forward. Yeah. And piggybacking right off of that: you've outlined three major problems that you're tackling initially, climate change, sort of general American [00:06:00] health, and data privacy. I'm actually really interested in what the process was of deciding these are the things we're going to work on. Yeah. Well, this whole Actuate emerged from a thought process, from a lot of BBs rattling around in the boxcar in my head, in the period as I was wrapping up at DARPA at the end of 2016 and going into 2017, when I left. And what I was thinking about was how phenomenally good our innovation machinery is for the problems that we set out to tackle at the end of the Second World War. That agenda was national security and technology for economic growth, and a lot of that was information technology. We set out to tackle health; instead we did biomedicine. We went long on biomedicine, and that left a lot of our serious health problems sitting on the shelf. And a big agenda was funding basic research, and we've executed on that agenda. That's what we are [00:07:00] very, very, very good at. What I couldn't stop thinking about
as I was wrapping up at DARPA was the problems that many of us feel will determine whether we succeed or fail as a society going forward. It's not that those challenges, national security and the rest, have gone away and we should stop. It's just that we have some things that will break us. Arguably, they are in the process of breaking us if we don't deal with them right now. One is access to opportunity for every person in our society. A second is population health at a cost that doesn't break the economy. Another is being able to trust data and information in the information age in which we now live. And the fourth, obviously, is mitigating climate change. And if you think about it, these weren't the top-of-mind issues at the end of the Second World War; we had other problems. Some of these are problems we didn't really know what to do about; some of these are new problems. And [00:08:00] so now here we are in 2021, and if you ask what really matters, those were the four areas that we identified. Number one, they are critical to the success of our society. Number two, we aren't succeeding, and that means we need innovation of all different types. And number three, we're not innovating: we're either innovating at the zero-billion-dollars-a-year level, or we are spending money on R&D but it's not yet turning the tide of the problem. And that's how we ended up focusing on those areas. Got it. And I love digging into the nitty-gritty: what was the process of designing these programs? Right. So just to scope this a little bit, these broad areas that I'm talking about, I think of as
the major societal challenges that we face today. Actuate, which is a tiny, early-stage, seed-stage [00:09:00] nonprofit organization, has the aspiration over time to build portfolios of solutions R&D programs in each of these areas. And so, you made reference to a couple of the specific programs. One is about being able to access many more data sets, to mine their insights by cross-linking across them while rigorously preserving privacy. That's one very specific program, but think of that as just one program in what will eventually be a much broader portfolio in this area of trusting data and information. So part of what we've been doing since we started Actuate in late 2019 was big thinking about our strategy, about the four broad societal challenges that we wanted to work on. And then we've also been doing a lot of work on our [00:10:00] process and methodology; we've defined a couple of specific programs, but that's perhaps more important for scaling the organization. The core idea here, of course, is that our founding team has a lot of different experiences, but we met at DARPA, and our inspiration is really to take what we know from that particular model for solutions R&D, mine the essential insights, and translate them to these very different societal challenges: not national security, but the ones that Actuate is going to focus on. So we've been formulating the four areas, but also thinking through how you get from the question of changing population health outcomes to the programs that could be high-leverage opportunities to do solutions R&D for that objective. Yeah. And so there are sort of two steps: one is going from the broad area to a specific program.
And then there's another, which is designing the [00:11:00] program itself. And I'm interested in what you actually do to design the program. What does that look like? Yeah. The first two programs that we have built out and defined were invented and designed by my co-founder, Wade Shen. He was a DARPA program manager for about five years; that's where we met. His areas are artificial intelligence and data science, and if you work in that area, you can work on any of the world's problems. He worked on an amazing array of different problem areas, as well as programs at DARPA that drove the AI and data science technology itself forward. So, you know, DARPA is a building that at any moment in time has a hundred amazing program managers in it, and Wade was one of the really exceptional people, even in that very elite crowd. And this is how he [00:12:00] thinks about the world. We came together because we share these concerns about these major societal challenges and a passion for bringing this kind of solutions R&D to these problems. And Wade is the kind of guy who can invent these programs; he can just go do it. He knows how to think about it, how to go do the research and talk to people and line up a program that could really be very impactful. So Wade built these two programs, partly because we wanted to understand what that looked like in these areas. But as we go forward, we're going to need a process that engages a community of different people, because over time we're going to want to build our cadre of program leaders who will define, and then execute, the solutions R&D programs. And by definition, they can't all be Wade, right? We need to be able to draw from the talents and insights and the passions
of people who have all kinds of backgrounds: technology backgrounds, deep research backgrounds, lived experience [00:13:00] of these problems. People who really, deeply understand how the systems work that create opportunity or population health, or that take away from those objectives. And so a lot of what we've been doing is figuring that out. Here's the question. If you want to change the future of health in the US, so that instead of spending twice as much per capita on healthcare as other developed nations, and yet having dozens of other countries with longer lifespans and lower infant mortality rates (which is just criminal for the world's richest economy), we have a future where that is radically different; where we don't have a hundred million people who either have diabetes or are at risk of diabetes; where we don't have a public health system that's thoroughly incapable of containing a disease like COVID-19, unlike many other countries around the world; if we want that different future, then that's the landscape. And how do you get from that broad statement of what we want to [00:14:00] what you do about it? I think that process has a top-down part and a bottom-up part. The top-down part is understanding that landscape: how big the problem is, what the nature of the problem is, who's doing what. These are big, complex systems, with many, many different kinds of actors, practices, and cultures that you have to understand. You have to have some notion of how all of those components are operating and interacting. Then you can start thinking about where there are gaps or opportunities, but still at a very strategic, broad level.
And that's about it for top-down, because the model, emulating a lot of the power we found in the way DARPA works, is then to flip it to bottom-up. Then we go find people who are experts [00:15:00] in some aspect of this. Again, they might have deep research expertise, deep knowledge of the specific problems or the way the system works. What you want is people who either know, or are willing to go learn, enough about what the box is, and then are willing to live outside of it and figure out how to recast it in a different way. And then, similar to DARPA, there's a process of nurturing and coaching, but allowing these smart individuals to bubble and brew program concepts, from a couple of bullets on a chart eventually to a full executable program, a process that I think, even for someone who's super good at this, takes six months or a year. So that's what we're just starting to embark on.

Got it. So that's sort of the beginning of programs. I'm also interested in what you hope happens at the end of them. You're in a slightly different position than DARPA, which hopefully has a waiting customer in the DOD.

That's one of the funniest ideas on the planet. I just love it when people say, oh, [00:16:00] well, it's easy because DARPA has DOD waiting for it.

All right, then. Let's talk about that.

Yes, let's talk about that, and then what we do. At DARPA, first of all, think about six decades of history, across generations of that agency, in two halves. About half of what it has done is prototype military systems: things that were just crazy, that the services would never have tried by themselves, but that were very directed at a specific military platform or capability. The other half has been sparking core enabling technologies.
And that was out of a recognition that if you build your new military capabilities out of just the same old ingredients, you're only going to get so far; you need some very disruptive core technologies. So what came out of military systems? Iconically, of course, stealth aircraft. There's a much, much longer list, but that's the [00:17:00] easy one that everyone knows; a lot of people know that story in the national security world. What came out of core enabling technologies? Well, arguably the entire field of advanced materials science, but also ARPANET and the internet, the seeds of artificial intelligence, advanced microelectronics and microsystems, huge numbers of technological revolutions. So if that's what's going on at DARPA, the first thing to point out is that half of it, including some of the most transformative core technologies that have come out of DARPA, did not transition to the world because DOD went and bought a bunch of it. The transition for most of the core enabling technologies is out to industry, to turn into products and services. And we've seen many, many stories of how that works. Often, what it looks like is a project that DARPA funds at a university or company, and then those individuals, beyond DARPA funding, go forward, identify markets, raise capital, build businesses, [00:18:00] build product lines, build industries, change the world. So that's not trivial in itself. But I also want to be clear that even for the half of DARPA that's been about building prototype military systems, by and large DOD is not excited about them at the start. I'll tell you just one story. When I came to DARPA, just before I arrived, we had started a program. A great program manager there had been a Navy officer.
He was serving at DARPA, and he said: wouldn't it be great if the Navy had an autonomous vessel, a ship that could leave the pier and navigate across open oceans for months at a time without a single sailor onboard? Not a remote-control vehicle, but one that just had sparse supervisory control. Radically different tools for the Navy, if something like that existed. And maybe we can actually do that. The Navy got wind of what [00:19:00] DARPA was trying to do, and I observed what they thought, which was: that is a really bad idea. And they tried to shut it down. An important element of DARPA is that the Navy doesn't actually get to tell its people what to do, and my predecessor appropriately said, I don't know if she said thank you, but she definitely said, we're just doing this. By the time I got to DARPA, the Navy had gone from outright hostile to merely deeply skeptical, which is pretty important, because that's the stage where people will tell you all the reasons they don't believe it. They say, well, how is it going to meet COLREGS, the rules of the road for navigating in dense areas? How is it going to last that long at sea, in that harsh marine environment? They had the entire long, difficult list of challenges. So then you know what you've got to do. Fast forward: before I left DARPA, I got to christen the first-ever self-driving ship, Sea Hunter, that we put in the water. By the time of that christening ceremony, we were paired up with the Navy; the Navy had been a partner with us for a while, and I think it is now taking the effort [00:20:00] forward. Now we have a working prototype, and the Navy can say: let me figure out, do I want to use it to hunt sea mines? Is it a cheaper, safer way to trail quiet diesel submarines? There's a lot more that has to happen to really figure out how you take this and move it forward.
So that's a success story, and I think stealth is another great example. These things were not only not embraced, asked for, or welcomed when they were delivered from DARPA; they were often spat upon. But it doesn't matter, because if it's radically better enough and the stars align (a lot of things you can't control), that is how big changes happen. And you have to be able to do those things even when there isn't a customer standing there waiting for it.

I appreciate that. So how does that then translate for you at Actuate?

Yeah. So I think the way to think [00:21:00] about it is: anytime you're setting out to spark a radical transformation, it's not going to happen unless you really think about the entire system of what it's going to take to create the change that you want to see in the world. Let me take one really specific example, one of our programs at Actuate, one of the programs Wade has built. The objective there is to use privacy technologies that are emerging, that are currently being used ad hoc, to build a new architecture and infrastructure that would allow multiple data sets to be provided on an encrypted basis, and then allow researchers or policymakers, anyone who wants to analyze the data and cross-link among those data sets for the insights they hold, to do that entire process while rigorously preserving privacy. And that includes the cleaning and the [00:22:00] linking, all the ugly data science stuff that has to happen before you can actually start seeing the insights. So it's a soup-to-nuts full system. The ambition of that program is to demonstrate something that's
robust enough and flexible enough to handle many different kinds of data and data problems. The future that we want to see is different from today, where research and policy work is sort of a lamppost problem: you do a lot of interesting research with the data you happen to be able to get hold of, or that you happen to have permission to link to other data. But all the really interesting problems cut across data sets. What happens in K through 12 that leads to different kinds of life outcomes? How does that tie to other environmental factors in a kid's neighborhood, or to the way that education and that child are going to end up interacting with the criminal justice system? How do all of those things tie to the progress of the [00:23:00] economy and jobs and the things that lift people up and allow them to pursue opportunity? To answer those kinds of questions, you need 53 different agencies at state, local, and federal levels, and you need private company data. It all exists, but that doesn't mean you can actually get at it and start using it. So we want to see a future where you could answer those kinds of questions. Well, so what's it going to take? The piece that the program will do, when we're able to get it going, is to demonstrate a prototype system that allows radically different kinds of data owners to put their data together, run some real examples, and do applications that are demonstrations of what this new data capability would look like. But that's probably not going to be enough. So here are the other things that need to happen. My dream is there's a future where there's a NIST or other standard for the kinds of
[00:24:00] procedures and processes that would allow the legal counsel of the firm or organization that owns the data to say: okay, if we comply with this regulation, if we meet this certification, I can now sign off and know that I'm protecting the data properly. And I can make that decision tomorrow, not in six months or a year like it usually takes today. Then over time, with a lot of different players and an infrastructure for regulation and certification, you can start to see how you could have the kind of rich data future that we all talk about these days but that actually isn't quite happening yet. So I don't know if that's a useful example, but the general picture is: think about all the entities, all the actors, that are going to have to do something, change their minds, take an action. We're not going to go fund all of that. We're going to fund a piece that would allow them to change their minds. That's really our [00:25:00] objective: a prototype and demonstrations that cause them to say, okay, we can now do something in a different way.

Do you see encouraging them to change their minds as part of the program? There's a spectrum from just demonstrating the prototype and then washing your hands of it, to knocking on their doors for years. And I assume it's somewhere in the middle.

Yeah. There's a lot of leading horses to water, recognizing that you can't make them drink. What I think is really clear from many, many years of experience at DARPA and other places is that you have to be deliberate and thoughtful about who those players are and what would cause them to change their minds, and then do the active work to engage them all along the process. If you don't do those things, the chances are pretty slim.
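The encrypted cross-linking program described above involves far more than any short example can show, but one common ingredient of privacy-preserving record linkage can be sketched as a toy: each data owner replaces raw identifiers with keyed pseudonyms before sharing, so an analyst can join records without ever seeing who anyone is. Everything here is invented for illustration (the records, field names, and the shared key), and real systems use much stronger machinery (secure multiparty computation, differential privacy, certified infrastructure), not just a shared hash key.

```python
import hmac
import hashlib

# Two hypothetical data owners; names and fields are invented for illustration.
education_records = {
    "alice@example.com": {"grad_year": 2010},
    "bob@example.com": {"grad_year": 2012},
}
health_records = {
    "alice@example.com": {"clinic_visits": 3},
    "carol@example.com": {"clinic_visits": 1},
}

# Assumption of this sketch: both owners hold this key; the analyst never sees it.
SHARED_KEY = b"agreed-out-of-band"

def pseudonymize(records, key):
    """Replace each raw identifier with a keyed hash before sharing."""
    return {
        hmac.new(key, person_id.encode(), hashlib.sha256).hexdigest(): row
        for person_id, row in records.items()
    }

edu = pseudonymize(education_records, SHARED_KEY)
health = pseudonymize(health_records, SHARED_KEY)

# The analyst joins on pseudonyms: overlapping records link, identities stay hidden.
linked = {p: {**edu[p], **health[p]} for p in edu.keys() & health.keys()}
print(linked)
```

Only the one person present in both data sets links up; the analyst learns the cross-linked attributes but not the underlying identifier.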
If you do them, you might have a shot. [00:26:00] So as we're designing programs at Actuate, we're being very explicit about that engagement process. It starts with a lot of conversations with people who, most often, say: yeah, sure, you're in fantasy land. If that stuff existed, it'd be awesome, but that's not the reality, and let me tell you what I really need. That's at the beginning. Then during the execution of a program, that's really when it starts going from something that the program leader believes in to something that is starting to be palpably real. So you want to bring in those decision makers whose minds need to be changed. They could be investors, they could be entrepreneurs, they could be policymakers; there are whole different sets of who those adopters need to be, the ones that are going to take it to scale. The places where we can bring them to the table: you continue to call them up and tell them what's going on, but [00:27:00] you also create demonstrations and updates where you bring them to the technology, or you bring the technology to them, and you say: look, did you know this was possible? Look what we can now do. Ideally they get dazzled, and then they say, oh yeah, but here are the next three things that would be a problem. And that tells you what you need for the next phase. So that's a parallel track to the three to five years of technical work that's going on in the program.

That makes a lot of sense. And in terms of the technical work, do you plan on having it be mostly external to the organization, the same way that DARPA does?
I would say there's a very important piece of intellectual work and management and leadership that happens with the program leader and that individual's tiny little team within Actuate, very much like at DARPA. But the vast majority, the overwhelming amount of the funding, goes out to the companies, the [00:28:00] universities, the nonprofits who are doing the different components of R&D, and the testing and demonstrations, and all the people doing all of that work. And that's for a couple of reasons. Number one, these are three-to-five-year programs, and just as a practical matter, we don't want to hire everyone and put them under our roof for that period of time. But the other really important thing is what happens when the program is over. A program starts with a program leader who has a vision; they are calling people to try to do this really difficult new thing. At the end of a program, what you want is that the entire community you've been funding and working with gets the vision. Not only that: they delivered it. They've actually built this thing, and they become the most important [00:29:00] vectors for moving it out into the world and getting it actually implemented, so the world starts changing. So for both of those reasons, up front and at the back end, I think that's one of the powers of the DARPA model: tapping these amazing talents wherever they are. Yeah.
So something I've wondered about with the DARPA model, that I've never been able to find any good information on, is: what do you do when you run into a situation where there are multiple groups that have been working on different pieces, and there's contention over who's going to take it forward? How do you coordinate so that the outcome is the best for the world, which might involve squashing someone's ego or something like that?

I'm shocked, shocked. I would say there are somewhat different answers depending on whether those junctures happen [00:30:00] during a program or after a program. So, let's say you have a program that had different university groups working on, I don't know, some advanced chip for doing machine learning or whatever. And this just happened: there were multiple very good research results, which were then commercialized in different ways by the performers. At that point, it's like, great, let them drive it out. They may compete with each other, they might go after different market segments, but there are multiple shots on goal to commercialize something coming out of a program. And I would characterize that as something that DARPA wouldn't, and I certainly wouldn't, control; it probably doesn't even have much influence. Conversely, if you're in the early stages of a program, that's a lot of what the core management work is for the program manager at DARPA, or the program leader as we're calling them at Actuate. [00:31:00] So let's back up. Number one, you're trying to do something that achieves huge impact. Sad but true, that involves taking risks, because all the low-risk things have already been done. And so the whole art of this
business is how you intelligently take, and then manage and drive down and eliminate, risks. One of the really effective tools in the toolkit for managing risk is to plant a number of different seeds and to deliberately have competitive efforts. One of our programs at Actuate, for example, is built on the idea that we have all kinds of research on what could make better real-time incentives to help people develop healthier habits. When we get that program going, we're going to deliberately have multiple teams working on different kinds of incentive schemes. Then a core [00:32:00] management challenge in a program like that is going to be: you may choose to start four teams, but at some point you're going to want to down-select and go to two. And what is the right point to say, I'm going to put more of my eggs in these baskets? So I think that's integral to the design, and then to the day-to-day or week-to-week management of the program.

And I imagine there might be one more situation, where you're actually building a system and you have different groups working on different components of the system. How do you manage that at the end, where it's like, okay, at the end of the day, we want the system?

Yeah, that's exactly right. But let me make just one small point about DARPA first. DARPA is running 250 or 300 programs at any moment in time. It's a full-blown, huge agency [00:33:00] relative to the scale we're starting from, which is zero right now at Actuate. And in the DARPA portfolio, you will find a range of programs. The self-driving ship program was a systems development program: Gantt charts, milestones, boom, boom, boom.
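The down-select move described above (start several competing approaches, then concentrate resources on the strongest at a decision point) can be sketched as a toy. The team names, milestone scores, and the idea of a single numeric score are all invented for illustration; real program decisions weigh many qualitative factors.

```python
# Hypothetical milestone scores for four competing teams (invented numbers).
milestone_scores = {"team_a": 0.62, "team_b": 0.41, "team_c": 0.78, "team_d": 0.55}

def down_select(scores, keep):
    """Rank approaches by milestone score and keep the top `keep` of them."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:keep]

# Start four, go to two: the manager's "eggs in these baskets" decision.
survivors = down_select(milestone_scores, keep=2)
print(survivors)
```

The interesting management question, as noted in the conversation, is not the ranking itself but choosing when the decision point should fall.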
Right at the very other end of the spectrum might be a much more research-oriented program that's highly exploratory: there's a new physical phenomenon that looks like it could be interesting down the road, but right now you just want vibrant research, with people pursuing the question in lots of different ways. So there are many, many models. Somewhere in the middle is what I would characterize as where Actuate will start. And what we're finding in the kinds of programs we're exploring is, over and over again, the same pattern. Number one, there's a problem for which we think a radically better solution is possible. The reason we think it's possible is not one new research result, but a handful of different research areas that are advancing in interesting ways. But those [00:34:00] advances have not yet really been applied to the right problem or, critically to your point, integrated together into a system that can actually solve the problem. They're just threads, or hopes. And so I think that becomes a classic template for a solutions R&D program, at DARPA or at Actuate. A great way to manage those kinds of programs is to think in terms of different tracks of effort. The first track is to advance the research itself. It's applied research where you're building on these threads and nuggets, but really aiming at the specific new capability that the program's goal is to demonstrate. So track one is applied research. The second track is building prototypes, and that's often a different kind of performer: someone who can integrate the different pieces. You can imagine a process where every three or six months there's a drop from applied research into building prototypes.
And so, [00:35:00] especially for software tools, this is the classic way you would do it: every three to six months, you see what's coming out of applied research that's baked enough to put into the prototype. That becomes a very good way to flow things. Those are tracks one and two. Track three is: now you've got to figure out if this stuff is doing anything. So it's testing and evaluation, trying to show that it works for the application or applications you're going after. And while these are different tracks, they interact, because you're learning what works as you take the integrated prototype forward. An integrated prototype for a tool to help individuals choose healthier habits throughout their days and weeks is going to integrate a whole host of these different advances coming from different areas of AI, including incentives, as I mentioned before. Ideally, every six months or so, as the prototype drops to testing, you start getting real feedback about this combination of [00:36:00] sensing and coaching and personalized incentives: is it working or is it not working? And then you go through these iteration loops. So what the program looks like when it's underway is: you'll see some researchers, at universities or companies; you'll see prototype developers, typically more often companies; and you'll see people who do the tests or the demonstrations. It could be a clinical trial if it's health-related; it could be whatever the form of the prototype or the application is. And throughout the whole thing, the management challenge is: you have a plan, and then reality is going to happen, and it's going to be something different. So how do you keep that whole engine moving forward?

That is an amazing description.
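The three-track cadence just described can be sketched as a toy timeline: research threads mature at different times, each review "drops" whatever is baked into the integrated prototype, and whatever is integrated gets tested for feedback. The thread names and maturity dates are invented for illustration; a real program's tracks interact far less mechanically than this.

```python
# Track 1: applied-research threads, each with an invented months-to-maturity.
research_threads = {"sensing": 2, "incentives": 5, "coaching": 8}

prototype = []     # track 2: what has been integrated so far
test_reports = []  # track 3: feedback gathered at each review

for month in range(0, 19, 3):  # a review every three months over ~18 months
    for thread, ready_at in research_threads.items():
        if ready_at <= month and thread not in prototype:
            prototype.append(thread)  # drop the baked thread into the prototype
    if prototype:
        test_reports.append((month, list(prototype)))  # test the current build

print(prototype)
```

By the end of the toy timeline all three threads have landed in the prototype, and each test report shows what combination was actually evaluated at that review, which is where the iteration-loop feedback comes from.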
I really appreciate you going into those details, because I think that's something people don't think about enough: how to manage those tracks. I want to go back to something you said earlier, which is that the people you want as [00:37:00] performers in the program are the people who can see where the box is and then think outside of it. Do you have any strategies for finding those people and teasing that out of them?

Yeah, I think I said it more in the context of program leaders. And by the way, at DARPA, one of the best ways to find great new program managers, or potentially great new program managers (you don't really know until you give them a shot), is to go through the performer base. At DARPA I found there were always performers who were very, very good at their piece of it, and they loved their piece of it, and you have to have those people. But once in a while you'd see a performer who started seeing the whole picture, who would start being creative about where we could go. When you start seeing that, those are the signs. So I have a set of [00:38:00] criteria that I thought about for DARPA program managers, and it's very similar for Actuate's future program leaders. Number one, it's people who are driven to make a change in the world, which, I mean, is where I live and breathe, but over time it has finally dawned on me that not everyone gets out of bed in the morning to make the future a better place. That's the whole culture and point of the exercise, so you have to find people who are driven to do that.
I'm always looking for domain expertise, because you need to be deeply rooted and deeply smart about something that's relevant to the problem you're going to work on. Almost by definition, you won't be a domain expert on everything it takes, because these are big, complex systems. So the next thing I'm always looking for is the ability to understand the whole, the big picture of the system, and then to navigate seamlessly from forest to trees to bark to cells, and then back up. You have to be able to do that whole thing. That means you may know a lot [00:39:00] about how some aspect of behavioral science works in a very specific context, but I'm also looking for people who can then extrapolate up to how that and other advances might be harnessed to move the world forward. And I would tell you that's one of the hardest characteristics to find, because of course there are lots of people who have domain expertise, but that ability to navigate from systems to details is actually a very precious commodity that I always love when I find it. Overall, I'm looking for people who have their head in the clouds and feet on the ground, because you need to be able to dream, but you actually have to be able to go execute, and in this case execute by managing other people on projects; it's not an individual contributor role. And then the final thing that matters deeply is an ethical core. That's important for how you treat people on [00:40:00] a day-to-day basis, but it's also important because we're talking about really powerful technologies, and we need people who are willing to be explicit and thoughtful about the ethical considerations they'll be weighing.

Yeah, that's great.
I want to change gears a little bit and talk about money. You spent many years in venture capital, so I assume you know both the upsides and the downsides of startups and venture-funded organizations, and you decided to start Actuate as a nonprofit. I'd love to understand the thought process behind that, because there's a line of thinking that if something can be done, it should be done as a company, as a startup. So I'm interested in why you didn't.

I would say that's [00:41:00] simple-minded, and to the extent that's your worldview, I would say the things I think need to be done, that I can make a contribution to, aren't companies. There's not a visible market, so it's not a company today. For some of the things we want to work on, part of getting them out to the world will involve markets, and therefore companies, including startups. But coming back to these major societal challenges we have: none of them is simply going to be solved by companies building new products, services, and profits. I do think that some of the solutions will ultimately include companies having really interesting new market opportunities, but this is the stuff that the market doesn't do. Think about US R&D: we spend about half a trillion dollars a year in the US economy on research and development. [00:42:00] The majority of that, of course, is companies doing product development, but about 140 billion a year is federally funded R&D.
But the areas on which Actuate is focusing are places where there are not market-driven opportunities, and, I think, not yet places where the federal R&D machinery is engaged. So those things need to happen for our ultimate dreams to come true, to make the difference that we want.

And ideally it seems like you'd almost pull both of those levers, the market and federal R&D, in a certain direction, right? That seems like a place you could sit, creating opportunities for them.

Right. I think the biggest pull is when you show them something that changes their minds.

Yeah. And are you funding Actuate as an organization as a whole, or are you funding [00:43:00] each program, on a program-by-program basis?

We're still at a seed stage, just to be really clear, but we spent a lot of time on this strategic question. First of all, let's be really clear that we think philanthropy has an important role to play, because the market and government are, for various reasons, not stepping up to the plate on these topics. That said, there isn't a template for what we're trying to do in the social sector. It's not what philanthropy has done, at least in the last six or eight decades. There are very interesting stories about the Rockefeller Foundation and the Green Revolution and how they funded the research, and if you go back and read how they thought about it, and the methodologies they developed, it looks a lot like solutions R&D. And then those [00:44:00] same human beings, those exact people, went into Vannevar Bush's organizations during the Second World War. And that's the template for solutions R&D: we have an existential crisis, and we have things we can do about it,
It's all hands on deck, integrating everything, building radar and the bomb, right? So anyway, it's been decades since any part of philanthropy, I would say, was really seriously focused on this kind of solutions R&D. So, with that significant caveat, everything we're doing is going to be a big experiment in the social sector. Now, to get to your question: we spent a lot of time thinking about whether we should build a single program and go raise money for it, or whether we should try to do something even harder, which is to raise a fund to do multiple programs and build a portfolio. We've settled on the latter. And the reason for that is simply that, first of all, sometimes when you're doing an impossible thing, it's better to do the more impossible thing that can actually make an impact. I think this comes back to risk management. We talked about risk management within [00:45:00] a program, but how does DARPA have one or two things every single decade that literally change the world? Well, it certainly isn't because all the programs succeed. It's because you have a portfolio, and because it's a very deliberately managed, diversified portfolio: it's diverse in the aspects of national security it's targeting, diverse in the technological levers it's pursuing, diverse in timeframes to impact. So at the end of the day, we concluded that for Actuate to actually make a dent in any of these massive societal challenges, we needed to be able to build a portfolio. Yeah, no, that makes a lot of sense. And so, switching tracks again to talk a little bit about your career, which has included some amazing things: when you became the DARPA director, how [00:46:00] did you know what to do?
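The portfolio logic here can be made concrete with a toy calculation. This is a hypothetical sketch with made-up numbers, neither DARPA's nor Actuate's actual program counts or success rates: even when each individual program is very likely to fail, a large enough diversified portfolio makes a couple of breakthroughs per decade the expected outcome.

```python
# Toy portfolio model with hypothetical numbers: many independent
# high-risk programs, each with a small chance of a breakthrough.
n_programs = 100   # hypothetical programs funded over a decade
p_success = 0.02   # hypothetical per-program breakthrough probability

# Expected number of breakthroughs across the portfolio.
expected = n_programs * p_success

# Probability the decade produces at least one breakthrough,
# assuming the programs succeed or fail independently.
p_at_least_one = 1 - (1 - p_success) ** n_programs

print(f"expected breakthroughs: {expected:.1f}")
print(f"chance of at least one: {p_at_least_one:.0%}")
```

With these placeholder numbers, any single program fails 98% of the time, yet the portfolio expects about two breakthroughs and has roughly an 87% chance of at least one, which is the sense in which the portfolio, not any program, is the bet.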
I'm sorry, this is a silly question, but it seems like such a big role. Yeah, I've been super lucky in the things that I got to do. But the luckiest day, I would say, in my professional life was the day that Dick Reynolds, who ran the Defense Sciences Office at DARPA in the 1980s, asked me, at a workshop I happened to be attending, if I wanted to come to DARPA as a program manager. I was 27, or maybe 26 at the time; anyway, I had only been out of graduate school for about a year. I was in Washington on a congressional fellowship, because I had decided I wanted to do something other than research on the academic track, but I didn't know what that was. On a [00:47:00] lark, I went to Washington for a year, which was critical, because when you leave the path you're supposed to be on, you don't know what's going to happen, but one of the things that can happen is that amazing new possibilities occur. And that's what happened when Dick asked if I wanted to come to DARPA. So at a very early stage of my career, I landed at DARPA, and it wasn't the first place I had ever been. I mean, I had worked two summers at Bell Labs, who put me through graduate school. I'd worked at Lawrence Livermore one summer as a summer student. I'd worked at Texas Tech in the laser lab as an undergraduate. I'd done my graduate work at Caltech, and then I'd been at the Office of Technology Assessment on the congressional fellowship. But I got to DARPA and all of a sudden it just made sense to me, right? Everything that I thought and believed, the way I was culturally oriented, which was: you go find really hard problems, and the contribution we get to make as technologists is that we get to come up with a better way to solve a really hard problem.
And we get to [00:48:00] blow open these doors to new opportunities. It just resonated so deeply. So I spent seven years at DARPA, the last couple of which were spent starting what at the time was the Microelectronics Technology Office, which we spun out of the Defense Sciences Office. And I loved it. It was a crazy ride. I got to do all kinds of things that were very, very meaningful then, and for the 30 years since, it's been such a delight to see so many things come into the world that trace back to some of the early investments that we got to make. And I would tell you that while I loved everything else I got to do after DARPA, and I treasure it, and I needed those experiences, I never really got over being at DARPA. It was my home. It was my place. It was what made sense to me. So when I got the call in 2012 to go back and lead it, [00:49:00] it was just a dream come true. And when I got there: being a program manager, and then being an office director at DARPA, which I had done in the eighties and nineties, and then going back as director, those are three very different jobs, so there was a huge amount of learning and growth at every stage. But they were all aligned to the mission and vision of the organization. I'm just wired the way that DARPA is wired. So I have to say it was the most satisfying job that I've had so far; I'm trying to make Actuate even more so. It was very hard, it was very meaningful, but it felt natural and instinctive in a way that none of my other jobs really did. They were all okay, and I think I was good at some of the other jobs, and horrible at others, for that matter, but DARPA was the place where it just felt natural to me. Yeah.
And so, to build on that, and in closing: do you [00:50:00] think there are any ways to improve on the DARPA model that you're trying to implement going forward? So, we talk about this all the time. First of all, if the work that we're starting at Actuate can have anything like the kind of impact that DARPA has had in any subset of its programs, then I can die happy, right? If we can really make a contribution to these big societal problems, that's going to be deeply meaningful to me. We've talked about some of the things that I think are difficult in the DARPA model. One of them is that the more radical the innovation or advance, the harder it typically is to get anyone to change the way they work in order to adopt it and get the benefits of it. So we're trying to be even more deliberate, in the design of our programs at Actuate, [00:51:00] about how you get decision-makers to change their minds and implement. I mean, I think DARPA does that, but it's something we're trying to put special focus on. I think DARPA has done a huge amount of work, with legislative authorities and good practices, to make it easier to hire people, many of whom normally wouldn't consider public service for many reasons, but especially, of course, low compensation levels. And while DARPA was not fully market-competitive, we were able to move very quickly and had a little bit of salary-cap relief. So, you know, the nonprofit sector is not going to be the place where you make your billions, obviously, but I think being outside of government has that advantage, and it's something we'll definitely take advantage of. And there are things that are simply not appropriate for the government in a market economy to do.
So there are things that you can do for national security, but, unless we have a radical change in our thinking about industrial policy, which by the way might be happening, I can't quite tell, there are ways in which government has not [00:52:00] chosen in the past to work with industry or with finance. Those limitations are not as significant for the work we're doing in the social sector. Nice. Excellent. Well, I want to be really respectful of your time. How can people find out more about what you're doing? And if they think this is interesting, what should they do to help out? Well, thanks so much for talking about this. I love the fact that you care about these issues, and you've done more than anyone I've seen from outside DARPA to really understand the agency, so it's been so much fun talking with you, Ben. I think you're going to provide the link to the Issues in Science and Technology piece. And our website, of course, though it's all brand new. So take a look. We're so early right now, but I'm always looking for people who have a deep passion for these societal challenges, who see new opportunities to do things in a radically better way. [00:53:00] Please reach out to us from our website if it resonates; we'd love to hear from you. Thanks for listening. We're always looking to improve, so we'd love feedback and suggestions. You can get in touch on Twitter at Ben underscore Reinhardt. If you found this podcast intriguing, don't forget to share and discuss it with your friends. Thank you.
In this conversation I talk to Ilan Gur about what it really means for technology to “escape the lab”, the power of context to shape the usefulness of research, the inadequacies of current institutional structures, how Activate helps technology escape the lab *by* changing people’s context, and more. Ilan is the CEO and founder of Activate, a nonprofit that runs a fellowship enabling scientists to spend two years embedded in research institutions to mature technology from a concept to a first product. In the past, he served as a program director at ARPA-E and was a cofounder of Seeo, where he commercialized new high-energy-density battery technology. Links Activate Ilan on Twitter Ilan on My Climate Journey Podcast Transcript In the past, we've talked about how the whole process of turning hardcore scientific research into products that have an impact on people's lives is fairly abstract to people outside of the system. Since you've both walked the path and now help other people do the same, let's ground the conversation: would you go into detail on the actual actions you need to take to go from, say, being a graduate student who just published a paper on a promising battery technology, to an improved battery in a car? That's a great place to start. Let me try and answer that from a few different dimensions. I'll start by answering it just from an anecdote about my personal experience, which I've shared in other places. I basically went into my PhD program because I felt like the field I was studying, materials science, could be the biggest way to make an impact on climate change, by taking new science and turning it into the next generation of all the technologies we need for a sustainable economy. I was working in nanotechnology, and joined kind of the best research group in the world working on how nanomaterials could improve solar cells.
And this is before the enormous solar market that exists today existed. There was a sense at the time that we needed a completely new generation of technology to make solar ubiquitous and cost-effective. And so we had this great mantra about how we were going to print solar cells like newspapers, using these small colloidal semiconductor nanocrystals. And the research was phenomenal. What I like to say is, we wrote a science paper where the first paragraph, like many, talked about how the research was going to change the world. And it wasn't until I randomly got connected with some business school folks at Berkeley, where I was doing my PhD. And it didn't take long: they put me through just a few cycles of digging one level deeper into how solar cells were actually made, how they were sold, and what determined their costs and the cost of the energy they produce. I ended up, over the course of a few weeks, with a spreadsheet that I still have somewhere, which told me that if we hit all of our research targets, everything we thought could change the world, we would end up with a solar cell that, even if you gave it away for free, couldn't compete with the existing state-of-the-art silicon solar cells at the time. And it was a really simple idea: we were making dirt-cheap solar cells, but they probably wouldn't last very long, and we didn't think that was such a big deal; you'd just print some more. And yet, certainly at the time, and it's still true, such a predominant amount of the cost of solar energy came from the balance of systems and installation. I bring up the story because, for me, it was a tipping point. We had so much excitement about our research. It was even published in Forbes, a business magazine. And it just showed how easy it was to think you were doing something productive and successful.
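The spreadsheet's punchline can be reproduced with a back-of-envelope model. All numbers below are illustrative placeholders, not figures from Ilan's actual spreadsheet; the point is structural: when balance-of-system costs dominate and must be re-paid every time a short-lived module is replaced, even a free module loses on lifetime cost per kWh.

```python
# Hypothetical cost-per-kWh comparison (illustrative numbers only).
def lifetime_cost_per_kwh(module_cost, bos_cost, annual_kwh, lifetime_years):
    """Installed cost divided by energy produced over the module's life.
    A short-lived module means re-paying bos_cost at each replacement."""
    return (module_cost + bos_cost) / (annual_kwh * lifetime_years)

# Incumbent silicon: expensive module, 25-year life, one installation.
silicon = lifetime_cost_per_kwh(3000, 7000, annual_kwh=1500, lifetime_years=25)

# Printed cell given away for free, but lasting only 5 years, so the
# balance of system (racking, wiring, labor) is paid five times as often.
printed = lifetime_cost_per_kwh(0, 7000, annual_kwh=1500, lifetime_years=5)

print(f"silicon: ${silicon:.2f}/kWh, free printed cell: ${printed:.2f}/kWh")
```

With these placeholder inputs, the free short-lived module delivers energy at roughly 3.5 times the cost of the incumbent, which is the shape of the conclusion the spreadsheet forced.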
I was in academia, but the reason I was there was to try to do something productive that could turn into a product, right? And even with that intention, I had missed the boat so badly. That was a shock to me. That was the first lesson: institutions matter and incentives matter. What I ended up doing was leaving academia and jumping into an early-stage startup, which was an amazing vehicle for thinking about how this transition happens. The learning there, and this is a lot of what we now try to help people understand in the fellowship we run, was the depth and multitude of elements that determine whether a technology can actually make it from the research stage to a product in the market. First of all, the idea is the easy part, in some regard. But the number of levels deeper you have to go to understand: okay, how is it actually going to be valuable? Who's going to buy it? Why are they going to buy it? How does the whole system get built to make it? It's a multi-dimensional problem where everything needs to line up: the financing, the team you have, the market, and the technology. And for me, one of the biggest things that I've come to realize, and we've talked about this before, is that we've got hundreds of billions of dollars that government spends on ideas and ideation, and hundreds of billions of dollars that the private sector spends to take the early prototypes and the idea of a product and scale it, and we've got really very little that goes into all the really hard stuff of translating one to the other. Yeah. So what I'm going to continue to poke at is: what is that actual stuff?
So the startup that you joined: what was the origin of the technology that you were working on? I assume it came out of a lab somewhere. Yeah, I was involved in two startups. One was after that epiphany moment in my PhD work: I basically threw out the work we were doing, shifted gears, and ended up developing the technology that became the basis for a solar startup doing thin-film, nanocrystal-based solar cells. Basically, realizing that lifetime was so important, we threw out all of the organics we had been working on and focused on, essentially, a new manufacturing approach to make something that looks like a traditional solar cell. So that was a company that I helped establish but then ultimately didn't join. I was meant to be the founding grad-student-turned-CTO, but for a number of reasons I didn't end up jumping into that startup. Instead, through some of the serendipity of being in the Bay Area and Silicon Valley, I ended up on the founding team of a battery startup that came out of another research lab at Berkeley. And this was funded by Vinod Khosla, when Khosla Ventures was just getting started. Yeah, so when we say "coming out of a lab", I think it's actually worth dissecting what that means, because I suspect it means different things to different people. So someone in the lab did some research and figured out, okay, we think we can extend battery lifetimes; or in this case, it was about making higher-energy-density batteries that were still safe and stable, using solid electrolytes. So they publish a paper. I assume they do some experiments and come up with the core process improvement.
It's like, okay, we make batteries this old way, and now we need to make them a different way that will eventually make the battery into something useful. Then what did they need to do? What did you all do? Yeah, the origin story of Seeo is, I think, a great one. The ingredients in this case, and there are some universal things you can pull out of this: you had a couple of graduate students and a professor at Berkeley, Nitash Balsara, basically a polymer expert who starts doing research on how polymers can be applied to batteries. The business-as-usual incentive structures within universities would generally say that for Nitash to be successful in his career, he needs to make some new discoveries, write some great papers, and advance as an academic, right? And he was doing that. In this case, it took a moment where Nitash, who was a dreamer, had a sense of: wait a second, I want this to be useful, and I think this can be useful. He had a zeroth-order idea that there's this problem in batteries where, if you try to use high-energy-density electrodes like lithium metal, they can short across, and lithium metal is flammable and combustible. So there's this idea that you could make a high-energy-density battery; unfortunately, it starts to look more like a bomb than a battery. And, to zeroth order, the polymers he was making could solve that problem: they could be mechanically robust and strong while still being highly conductive for ions. Nitash, to his credit, is audacious enough to say: I think we can start something. And this was a time, too, we have to recognize, when venture capitalists were interested in funding these things at the early stages.
And then it takes someone, in this case Vinod, who is as audacious as it comes, saying: well, I think batteries are going to be a big deal, I think this is a really smart team, and they'll figure it out. So let's start a company here. It turns out, and I don't think anyone will be upset at this point if I say this, that I joined the company not being a battery expert. I was the entrepreneurial scientist who jumped in to help start it. I had a meeting with Vinod, and he said, all right, buy a cell phone and you'll be employee number one; let's just start. There's a whole other story about that. But it wasn't until Khosla Ventures decided to fund it that I actually saw the early diligence they had done on the idea. A world-renowned battery expert, I won't say who, someone I now highly respect, had basically said: this is total BS. There is no way this idea and this technology could solve this problem, for these ten reasons. And what I love about Vinod, and what allows him to really catalyze new things, is that he just ignored it. He said, all right, the experts don't think it's possible. Fine. And he invested anyway, a couple of million dollars, to go start this company. So you have me, a scientist who's motivated to be entrepreneurial but has no experience, and you have a really incredible, genius academic professor out of Berkeley and two of his students who are entirely scientific in their thinking at the time. And now, all of a sudden, we're in a startup and we're meant to go develop a product. On the question of what that actually takes: we just got thrown into the deep end.
But the first thing is for some people to just be audacious and say: there could be value created here, let's take these individuals and get them into a different mode of thinking about their R&D. And this was one of the reasons why I founded Cyclotron Road and then created Activate. There was an entire phase transformation that happened where, all of a sudden, Nitash and Mohit and Hany and I are now in a startup, and our only reason for existence is figuring out how you make a product that could be impactful and get out to the market. And jeez, you know, I mentioned zeroth order before, because at first order, most of the assumptions (not all of them, but most) around why that technology could have been valuable in batteries were wrong. And yet now there was no choice. Nitash was still a professor, but the rest of the team was basically now in a mode of: okay, we've got to figure out how to make something valuable for batteries, ideally starting with the technology that we have. It's funny: I think a lot of your audience is scientific and technical, so what I like to say now is, if you want to move science to products, you need to live for some time in a superposition of those states before you can collapse the wave function and understand what it is that you have. And for me, what was so lucky was that, because Vinod was there to put in that speculative money, for those first 18 months at Seeo we weren't a research project anymore, but we certainly weren't a company. And we had to figure out: okay, which parts of this are still interesting research that Nitash can keep doing in his lab, which he did
and benefited from, and which of these might actually turn into something that could be valuable to the market as a product. I'm not actually sure I'm answering your question. I think you are; I think we're getting to it, and I'm going to tease it out, because I love this. I think it's probably different for every situation, but there are these similarities. So actually, during those 18 months, what did you spend your time on? I assume there was some amount of going and talking to battery companies to figure out their end, and some amount of still doing experiments? Well, first of all, it's worth noting there's so much value in taking some smart folks and putting them in a different mode of working. But with the idea that the way to do that aggressive applied research was to be in a startup, there are a bunch of activation barriers you have to cross. Luckily, thanks to Vinod, the financing wasn't one of them; he made that easy. But then it's like, oh shit, where are we going to do this work? We can't meet up at a Starbucks, open our laptops, and start prototyping. So my job as employee number one, as unsexy as it was, was to buy a cell phone and figure out where we were going to work. We've got to find space; I've got to go call MBraun and negotiate a glovebox order, and see whether I can find some workaround, because frankly, for them to build us a custom glovebox is going to take three to five months, and I want it in six weeks. How are we going to do that? So that's number one; that was kind of just table stakes.
And I think this is the tough part in this transition, especially when you start with venture capital: the team has certain assumptions, those zeroth-order assumptions, about here's how we think this is valuable. Obviously, Vinod wasn't going to fund it without at least some plan: okay, we're going to take your money, and here are the experiments and the development we're going to do. So the development plan was something like: we've got a polymer that can do X, Y, and Z, and we need a polymer that can do A, B, and C. The first part of this effort is going to be the synthesis: we're going to make these better polymers and show that we can get the properties you need for them to be valuable. We're going to show that we can develop a process by which this can actually be a scalable polymer to produce, because otherwise it's going to be way too expensive. And then we're actually going to figure out how to make a battery, so that we can show you can put this into a battery. And what's interesting is, if you look at all three of those things, if we were to go back and look at the early plans and experiments we had, they were pointed in the wrong direction, because we didn't understand what the real problems were. But all you can do is march in the direction of your current assumptions, right? So you set out on building the lab infrastructure and the experiments to go do that, and that's what we did. I remember Vinod was blown away by the rate at which we were able to make progress. And then, alongside that, my job as the person on the team thinking about how the technology met the market: I started going to industry conferences and shadowing people.
On the technical side, people I knew, just walking around with them and asking questions. And I started to realize: oh, in our thinking we said that X innovation was going to make the battery five times better, but once you actually understand how the battery gets made and what the convention is, it turns out that five times better is closer to 1.2 times better. The differences were that big. And so then there's this really hairy and stressful process of: okay, we want to make progress along this dimension, but here we are learning things that suggest our assumptions were wrong, and maybe that vector is not actually as important as we thought. And now you've got to figure out, okay, how do we change our experiments to actually work in a direction that matters, with very limited information? It's an insane and hairy process in so many ways, because it's imperfect information. A lot of it is unknown unknowns, where you're not even seeing the thing that's going to kill you. Meanwhile, you have different people on the team who are all at different levels of their own understanding and perception, maybe at different points on the spectrum, like a dreamer saying: yeah, people are telling us it won't be as good, but radically, it can be. And it's like: yes, theoretically, it can be. I'll give you an example from the Seeo days. Theoretically, lithium metal as an anode in a battery can give you enormous energy densities, because rather than sticking the lithium ions into and holding them within some other material (in a normal battery, the lithium is intercalated into some other compound, so you have to carry the weight and volume of that compound around in your battery
even though it's not playing an active role), lithium metal is pure lithium, so you don't have to carry any of that baggage around. And so you can have a really lightweight, small battery. One of the big epiphanies for us was: yes, if you can essentially have a two-micron-thick piece of lithium in your battery. And the way we were thinking about making the batteries, you'd take a piece of lithium foil, which would make it so easy, and you'd have lithium foil as one of the electrodes that you turn into the battery. So one of the aha moments, as stupid as it was, was: all right, let's go source a five-micron-thick foil of lithium. No. No one makes that. Because, guess what, you can't even handle lithium at that thickness. Not because it would burst into flame; it's just that lithium is not very robust mechanically. Okay, so you can buy a hundred-micron-thick foil of lithium and handle it nicely and easily, and you can buy it really cheap. But you start talking to people about: can I source 10 microns of lithium? And they say, yeah, there's one group in the world that can do that for you, and it's going to cost a thousand to ten thousand times more than the hundred-micron foil. Now, all of a sudden, the entire proposition goes away. And then you're stuck saying: okay, does this kill the idea? Or does it actually mean we've got to figure out a different way to get two-to-five-micron-thick lithium, which is an entirely new development path and expense that we just never thought about? And is it going to work? Is it going to be possible? So there's this multiplexed, divergent, crazy optimization that has to happen in terms of: okay, what do we do next? And at the
end of it, were you actually building batteries? Once you've gone through that optimization and you actually have a process to make better batteries, how does that process end up in a device that's using a battery? Yeah, great question. So in that multi-dimensional development we had to be doing, one of the questions was basically: okay, let's imagine we can make this phenomenal electrolyte, which could enable this phenomenal battery. How are we going to actually prove that the battery is better? You could imagine partnering with existing battery companies to do that, but they have no idea how to handle our stuff, and they don't have the equipment. So no, we can't do that; we've got to figure out how to make batteries ourselves. So now, all of a sudden, you've got an innovation, a materials innovation for a component of the battery, that you can't evaluate on its own anymore, and we have to figure out how to make batteries and produce them. And then we need to think about: are we just going to produce them at the pilot stage and then teach a partner how to produce them, or are we actually going to have to build, in our little startup, the entire manufacturing capability for these entirely new batteries? And this is where that multidimensional optimization comes in. What we like to tell our fellows now is: everything has to align in terms of the way you're going to go build this. I like to think about this in terms of thermodynamics and kinetics. The thermodynamics can be great: we actually do have a magic material that could build a magic battery, and we know it's possible. And then the kinetics can kill you.
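The lithium-foil arithmetic above follows from two physical constants for lithium metal; here is a rough sketch of it, where the cathode loading is a hypothetical illustrative target, not Seeo's actual spec.

```python
# Why anode thickness matters: how thin a lithium foil must be just to
# match a cathode's areal capacity, with zero excess lithium.
LI_SPECIFIC_CAPACITY_MAH_G = 3860   # theoretical capacity of lithium metal
LI_DENSITY_G_CM3 = 0.534            # density of lithium metal

def matched_li_thickness_um(cathode_mah_per_cm2):
    """Lithium thickness (microns) holding the same charge per unit area."""
    volumetric_mah_cm3 = LI_SPECIFIC_CAPACITY_MAH_G * LI_DENSITY_G_CM3
    return cathode_mah_per_cm2 / volumetric_mah_cm3 * 1e4  # cm -> microns

# For a hypothetical ~3 mAh/cm^2 cathode, roughly 15 microns of lithium
# suffices, so a commodity 100-micron foil is mostly dead weight.
print(f"{matched_li_thickness_um(3.0):.1f} microns")
```

This is the shape of the bind described in the interview: the cheap commodity foil is far thicker than the cell needs, while foil thin enough to matter carried a thousand-fold-plus price premium, killing the proposition unless a new way of depositing thin lithium was developed.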
Meaning, for us as a small company to actually build out and figure out how to manufacture these at scale, it might take $200 million of capital to do the development. If at that moment in time the venture capital community doesn't have the appetite to put $200 million into a battery manufacturing company, then that's not going to happen, and that's going to be the reason why that entire vector doesn't make sense. So to navigate how something gets to a product and into the market, there's a lot of strategy, but there's also a lot of dynamic optimization that needs to happen as you learn more and understand your context. And one of the things, you and I have talked about this a little bit, that for me is the biggest challenge in this, especially for hard technologies that take infrastructure and capital and manufacturing: in theory there's a really broad set of ways you can take an idea from research and get it out to market. What I mean by that is, from the university lab, Nitash could have licensed the technology to a big company. Or he could start a company, which he did, and that company could raise venture capital money to go try to become the biggest battery manufacturer in the world and raise a lot of money. Or it could spend some time developing the technology and then license it to a big company, or be acquired by a big company.

Interestingly, the lesson learned for me was this: the only way we were able to get that group of people into the mode where we were working aggressively in that applied R&D way was because Vinod gave us a couple million dollars. But now the entire organization of the startup was founded on Vinod's venture capital money. And Vinod is notorious, in the best possible way, for being an amazing venture capitalist.
In the sense that his view is: if I'm going to invest in you as a VC, my goal is to make you the next multi-billion-dollar company that is the industry leader in this space, and my incentive is to make my bets count. So I would rather do everything we can, adapting everything you've learned, so that it can be the biggest battery company in the world, or fail trying. And there are no off-ramps. What I mean by that is: we recognized, probably two years in, me as CEO, that the idea that we could line up all those stars and create a battery company... I remember learning about the battery manufacturing industry. You looked at the most successful battery manufacturers, and Panasonic's battery manufacturing business unit had something like 5% operating margins. And you basically said: I don't know how to justify a big, massive multi-billion-dollar business on that. It's a shitty business, right? So then we started thinking: do we even have a chance at that? We would have had to go figure out how to manufacture an entirely new battery chemistry. How were we going to do that? And we had developed some really amazing ideas, some really amazing early development and validation, and we had people from big corporates coming to us with a lot of interest, including one that ultimately, through an intermediary, gave us the sense that maybe there's an acquisition here that someone would want to do. And this was early in the company. And the VC board we had basically said: okay, we put $5 million in; you could get acquired for $30 million, and the technology could end up in a company that actually has the ability to manufacture and the distribution channels.
And their view was: no, that's not good. Basically, if you think about it, if I'm a venture capitalist and I funded you out of a $500 million fund, what I need you to do is build a company that's big enough that it returns to me and my investors something on the order of a hundred million, a billion dollars. If you return $30 million after three years, it's an amazing return on investment in pure we-gave-you-X, you-turned-it-into-Y terms. But in absolute terms, I've spent three years with you, and all you're giving back to my fund is 30 million bucks, which doesn't move the needle. You've basically proven to me that I just wasted the last three years on you, because you're not the thing that's going to make the fund successful.

Yeah, it's a really long-winded story, but the point is to suggest that even though, in theory, there are a lot of different ways to get a technology out to market and to scale, capital sources and institutional structures and incentives all act as band-pass filters. They cut you down until your only option for success is in some narrow range. It's just an interesting thing to think about relative to how we encourage more of this.

Well, I really appreciate you going down into the nitty-gritty, because I think it's valuable just to have that out there. People don't usually go into it. And it frames the work that you're doing at Activate and that you did at ARPA-E, because it feels like your whole career has now been testing that hypothesis.

Yes, exactly. It's the hypothesis of my life: how do we play with those constraints?
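The fund math behind the board's "no" can be made concrete with a toy calculation. The fund size, the ownership fraction, and the exit values below are illustrative assumptions built around the numbers in the story ($5 million in, $30 million exit), not actual figures from the company:

```python
# Toy model of why a $30M exit doesn't work for a large venture fund.
# All figures are illustrative assumptions around the numbers in the story.

FUND_SIZE = 500e6   # assumed $500M fund
INVESTED = 5e6      # "we put $5 million in"
EXIT_VALUE = 30e6   # "you could get acquired for $30 million"
OWNERSHIP = 0.5     # assumed fraction of the startup the fund owns

proceeds = EXIT_VALUE * OWNERSHIP
multiple_on_investment = proceeds / INVESTED      # looks great in isolation
fraction_of_fund_returned = proceeds / FUND_SIZE  # but barely moves the fund

print(multiple_on_investment)     # 3.0x on this one check
print(fraction_of_fund_returned)  # 0.03, i.e. 3% of the fund

# For this single bet to return the whole fund at 50% ownership, the exit
# would need to be on the order of a billion dollars:
required_exit = FUND_SIZE / OWNERSHIP
print(required_exit / 1e9)  # 1.0, i.e. a $1B exit
```

The asymmetry is the point: a 3x return on the check is a fine outcome for the founders, but it returns a few percent of the fund, so the fund's incentives push every portfolio company toward the billion-dollar-or-bust path.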
Actually, one question I have about this idea of the phase change, and the superposition. Do you feel like burning the boats, in a way, taking that VC money that then really focused you on the product, do you think that's essential? I have very mixed feelings, because as you pointed out, it focused you, but at the same time it put those constraints on you where you had to go big or go home, and it didn't leave a lot of room for playing. So where do you come down on going all in? Is there a point in time when it's correct to do that?

We've got to think about a few different pieces. I think what you're asking is, one: is it essential to get into a different institutional mindset or structure or incentive structure to do this translational work? The other is: is it essential that you're all in, so to speak, that it's your entire life? I think, with very few exceptions, the first is essential, and the second is important, but maybe not essential. This is the reason why we created Activate, and the reason why we started Cyclotron Road as the precursor experiment. The first thing we have to recognize is that we've lost, big picture, at least in the U.S., but I think this is true globally, a really critical modality of how we do research: a place with amazing people who understand science and engineering at the earliest stages of technology development and who are incentivized to create a product, to create something practical. And I think you know this, right?
You know this story, but the best research in the world used to happen in companies. Whether it was the companies themselves funding it, think about Bell Labs; whether it was really government funding through a monopoly that allowed them to fund it; or whether the government funded it directly, and the government used to fund a lot of research within companies, because guess what, that's where the best research in the world happened. We had people who understood cutting-edge science and yet were in an organizational structure that cared about products. And now, frankly, startups are one of the only places where we really have that in an intense way.

And in startups you don't have as much of that continued institutional knowledge, right? You have a bunch of people who, as you said, are new to that phase and that way of thinking. So could we almost think of Activate as allowing people to be in that superposition longer? Would that be an accurate way of describing it?

That's exactly it, which is why our fellowship takes people who have the motivation to go figure out how their research gets out of the lab. It basically simulates for them what the first couple of years in a Vinod-funded early-stage startup would look like, but without the constraints of that funding. All of a sudden it's: they've got two years, and their entire success in life at that moment is, can I figure out how my research turns into something valuable? And yet there's no supposition that they already have something valuable. That would be putting the cart before the horse.
They're really allowed to explore and figure it out. We've got a fellow who spent time in the program and then said, actually, no, this isn't for me, and he's now a professor at Oxford. We've got other fellows who said this isn't for me and are now running groups at Apple. And then we have a lot, the majority of them, who spend that time and say: a startup is the right vehicle to move this forward. But then we get this additional classification, where some of them say: I want to go build a VC-backed, go-big-or-go-home startup, and I'm going to raise money from Vinod or someone like him. And we have others who have taken the time in that superposition to say: there's something valuable we built here, but the traditional venture capital funding model, the time constants of it, is not going to work well for me at this stage. So I'm going to try to build something a different way, partnering with corporations and selling early products to them. At Activate we focus on hard, physical, natural-science technologies, really aimed at industrial markets. And what I like to say is: if you think about the biggest industrial companies in the world, it's hard to find many that have their origin stories in a financial VC funding landscape. Most of them have had to build slowly. In industrial markets, with harder technology, the incubation period between proof of interest in the marketplace and proof of value is very long, meaning it can take a long time for something that could be valuable
to actually be accepted by the industry, for the industry to say, we actually believe this is valuable, because the technology has been de-risked enough, it's been in the field for a hundred thousand hours or whatever else. So what you find is that to get technologies into those markets, and companies with the reputation for being able to deliver with the reliability you need, it's often the slower-growth companies that just took a lot more time to incubate. And right now we don't have capital structures that allow people to build those kinds of companies. I'll stop; you've got me riled up in different dimensions.

No, this is so good. So the extreme version of that argument would be that the timescale and the return expectations of venture capital, as an institutional structure, just don't align with the necessities of a lot of hard technology. You're welcome to push back on that, but that's the hypothesis.

In other words, it only aligns in these magic convergent moments where the market is way out of equilibrium and wants to move a hundred times faster. And there are great counterpoints to all of this. Let me think about the counterpoint. Look at the automotive market. It wasn't until the automotive market basically realized, oh, internal combustion vehicles may be dead, our entire business and capabilities may be dead, that all of a sudden they were willing to make big investments and acquisitions and take things on. And I think, from the financial lens, the argument would be: okay, well, don't just push the technology.
Don't push on a rope, meaning don't innovate in electric vehicles before anyone actually has any appetite. But, as you know, then the world would never change. The story I love on this is Dick Swanson at SunPower. Dick was the founder of SunPower, which was one of the first and biggest solar companies in the world when the solar market took off, and still is. Dick left his job as a tenured professor at Stanford in the mid-seventies, as the world expert in the field, to go start a solar energy company. He felt like the university wasn't the right mode for him. And at the time, the idea of building a solar company that put solar cells on your roof... the cost of solar energy with a silicon solar cell was somewhere between 50 and 100 times what would make economic sense. This is in the mid-seventies. Dick, against all odds, built a small team with government grants and early investors and was able to spend 15 to 20 years building a suite of technology around solar, getting the data points around validation and everything else. And I imagine a lot of VCs and others who interacted with him at the time just thought: this guy's nuts; he's going to spend his whole life building and tinkering; this is not a real business. And then in the mid-nineties, all of a sudden, Japan and Germany decided, let's make solar real, and they put some real incentives and subsidies in place. All of a sudden there was this market, and it just so happened that Dick and his company, the way he tells it, had gotten to the point where he understood how to make the cells work, how to make them cheap, how to manufacture them.
What he didn't understand was how to make a million cells a day, a million wafers a day. And it just so happened that at that moment, as the market was starting to turn on, he met T.J. Rodgers, who was running Cypress Semiconductor and said: well, we know how to manufacture things at scale. We don't do a million wafers a day, but that's an interesting challenge. And now, all of a sudden, there's an opportunity to take everything Dick has done and create a massive business and a massive new industry.

And so, is there a way to encourage people to be as ridiculously, naively... whatever you would call Dick in the first part of that journey?

Tenacious? I don't know. But what I know is there are a lot of people, and we've now seen this in the fellowship program, and you've probably seen this too: a lot of scientists and engineers who are willing to commit decades of their life to develop a field, to develop something that can make that impact. And what I know is that if Dick had committed those two decades at Stanford, he would have learned 0.01% of the things he needed to learn to figure out how that technology was going to become productive and valuable. No dig on Stanford; it's just that academia doesn't incentivize actually going out and building the same thing over and over again. I've spent a lot of time thinking about this, and frankly, where I end up is: you can't blame VCs or Wall Street. Frankly, for the earliest investors in Dick's company, it's like the famous line about compound interest: those investors are not going to make their money, no matter how successful the company becomes, if the success starts 20 years later.
So the only way to encourage that type of work at that stage is to start to think about Dick's company, in its early stages, as a research lab, as a really interesting applied research lab. And what I'm hoping, and what I've been really working toward, is: how do we get government to realize that startups, a network of startups, a constellation of startups, however you want to think about it, should be the most powerful way to do applied research in today's world? There are a lot of problems with how you do that, a lot of challenges in how you think about government funding startups as research labs, but I think that's a really compelling direction.

From a policy perspective, what do you think some of the biggest challenges are?

I'm trying to think of who pointed this out to me. I think it was Nathan Kundtz. I don't know if you've met Nathan, but he started a company called Kymeta out of Intellectual Ventures, which, interestingly enough, if you argue there are not enough modalities for doing applied research in this country, Intellectual Ventures is one of those strange modalities and experiments. But I think he's the one who pointed out what has stuck in my head as the biggest challenge, which is: when you fund research, R&D funding can be used irresponsibly, either because you're going to develop an evil technology, or because you're going to squander the money, buying yourself a Ferrari on the side.

Or buying expensive equipment when it's not necessary, right? I think that's the most insidious one, where it's not even clearly fraud, but: do you really need that million-dollar instrument?

Yes.
And I think, as Nathan pointed out, when the government funds things, if the government sends a check to Virginia Tech to do research, there are a lot of guardrails and bounds that make it very hard for that money to be spent in a really irresponsible way, or for the project to be ethically misled. It happens, but it's hard. One of the benefits of startups is that they're much less tightly bound and have a lot more dynamism in how they're led. The incentives are different, but it also means you have fewer control mechanisms. That's one challenge that's stuck with me. You can think about it as something worth figuring out how to manage, or you can think of it as: there's a risk-reward to everything you do.

One of the interesting things I've found, I don't know if you and I have talked about it, is that when we think about science, whether it's government or any of the actors who fund science and engineering and research, people are willing to take insane amounts of technical risk and scientific risk, but there's very little willingness to take institutional risk, or modality risk. I had an interaction with a large private foundation that funds a lot of research, and I noticed that for most, if not all, of their programs, you need to be a large university to apply. And when I asked them why that is, given there's now a lot of interesting research happening elsewhere, even in their applied programs, part of the answer was: well, no one ever lost their job funding a Stanford professor, or Stanford, to do research. Whereas, God forbid, I accidentally fund a Theranos to do research; that's death.
So that's another one we've got to figure out how to get around.

I call that asymmetric career risk, where nobody ever gets fired for funding the safe thing, the mean result. But the things we're talking about depend on outlier results, and the problem with outlier results is that they can be outliers on the good side or the bad side. If you fund an outlier on the good side, you're the hero; but if you fund an outlier on the bad side, you get fired. And unlike a VC, who gets to participate in the upside of a positive outlier, someone in the government or another large funding organization doesn't. Sure, people will say, yeah, you funded it, but they won't get much participation in the upside if it really pays off. So the expected career value for a funder at the government or a large foundation...

Yeah, exactly. And all day long that makes sense if you look at it through the downside risks. But think about all the upside opportunity we're giving up. I'll give you the example here, which is Saul Griffith and Otherlab. Saul is a good friend, and I think he's one of the most brilliant people on the planet in science and technology and engineering. The amount of upstream swimming Saul has had to do for Otherlab to exist... The idea that you have to be a one-in-a-hundred-million type of person to be able to end up doing something like Otherlab, which is essentially a different modality for doing R&D, in a very different way than Activate. It sits somewhere between a startup and a research lab.
And I think the question we need to be asking ourselves is: how do we make it possible so that not just Saul could go run Otherlab, but so that the hundred or thousand top scientists and engineers who have the same motivations as Saul could think about that as a career path for themselves and something successful to do? Saul takes a lot of risks. Otherlab is in some ways an insane proposition, and yet it allows him to do the work he does. And frankly, it's got to be one of the most productive applied research labs in the world on a per-dollar basis.

So the big thing I would propose, and this is going to be deeply unsatisfying, but I'd love to get your take on it, is that the missing ingredient is trust. I realize that sounds very woo-woo, but I mean it very concretely. If you look at history and the people who do these crazy things, what ends up happening is that it comes down to one person trusting another person: look, I trust you to go spend this money responsibly. And then they take the guardrails off. I was talking to Donald Braben the other week. He ran a program called Venture Research for BP in the eighties, where they funded crazy scientific research, so less on the applied side, literally scientific research that couldn't get funded elsewhere. And the thing that struck me was that he sometimes spent up to a year getting to know the scientist who was applying for funding. I think what happened there is that eventually it got to the point where he just trusted them as a person. And so I was wondering...

Well, think about Bob Taylor at ARPA, right?
His whole mode was: let me find the smartest people, let me spend enough time with them that I can understand which of them does the best job of calling bullshit on the rest of them, and then let me go give them money.

Right. I think one of the challenges with trust, and with some of those modes, is that right now one of the really important questions in science and research is how you think more inclusively, and how you make sure you're not just reinforcing biases in terms of what it means to be good and excellent. That's one of the things you really have to struggle with: trust is a very efficient mode, but people build trust quickly based on their biases. I just point that out because it's absolutely worth being part of the thinking; it's something we're really grappling with hard. Now, that said, I don't think that prevents you. The two things that I think are special about Activate: one, we're supporting people outside of the normal incentive structures; and two, we fund people who have great ideas and who have the right motivations. Back to the superposition: the whole reason we exist is to give someone a chance to go through that first 18 months where they realize all their assumptions are wrong. So we don't pick them based on their assumptions; we pick them based on other attributes. And I'd say in the last five years we've learned a lot about how to improve our processes to do that with less bias.
And one thing we can't control: unfortunately, one of the limitations of the entrepreneurial mode is that, to be successful at gaining resources to support your project, you're not dependent on a really clear pipeline within your university or whatever else; you're dependent on pulling resources from a bunch of parts of the world and the ecosystem, and those places all have biases. So if I think of our women fellows and our fellows of color: if you blind-tested their strength as individuals and ideas at the beginning of the fellowship against how much progress they make in two years, no question, there's a deficit there. And the work we need to do is not just to avoid being biased in picking those people, because you're worried they're going to fail or whatever it is, but to work really aggressively to make sure we counter all the other biases. I feel like that's starting to happen.

And on the note of the two years of the fellowship: this would be very hard to do a counterfactual on, so it's just based on your feeling, but do you feel like, on the margin, that's the correct length, and the correct amount of money you give them? What's the marginal value of keeping people in that superposition longer, or giving them more resources while they're in it?

That's a great question. It's super tricky. When we penciled out the model that's now Activate and Cyclotron Road, on a napkin, there was a question of, all right... As you know, my initial model for this was very different, and the model we have was really bound by constraints on how much funding we had.
So we looked at the amount of funding we had and said: oh, we could probably support people for two years. And I walked around and talked to folks, and what was interesting is that anyone on the entrepreneurial, VC side of the spectrum said: two years is way too cushy; that's ridiculous; you're just going to get people into a really relaxed state. And anyone on the science and academic research side realized the only way you're going to get funding for this is through a grant or something, and funding cycles happen on the order of years. So the entrepreneurs all said make it one year, the academics all said it has to be at least three years, and we landed on two. I've actually found it to be pretty appropriate. What we do is really unique in the sense that we believe there's value in the science and engineering expert, the inventor, actually understanding the connection to how what they're doing could be valuable. Because they can then be a multi-dimensional player and connect the dots in innovative ways they wouldn't otherwise, from the applied research perspective. The other mode is you take the inventor-scientist and you hybridize: you pair them with someone who's thinking more about the practical market, and you find a way to make that work. But for us, there's this sense that we can create this rare breed of super-scientists who are thinking applied and are still cutting-edge experts. There aren't many people who have that capacity, but if you create one, and we've seen this time and time again, those people can be really powerful drivers. So a big part of the two years is that we found there's so much nuance in it.
The things we've been talking about here, really understanding how the capital markets work, why VCs fund things a certain way, how manufacturing works, how to think about techno-economics: that mental transition takes time. What we find is that we tell people things during the two-year fellowship, and every year we bring a new group in at the beginning. There are a lot of ways the fellowship works, but one is that every week we get the cohort together and expose them to ideas from founders and others: founder stories, here's how venture capital works, et cetera. We do that in the first year, and then we often repeat some of that material for the new cohort that comes in the next year. And the second-year fellows will sit in on a session and say: oh shit, you told me all this stuff before, but I just wasn't in a place where I could even understand which bucket of my brain to put it into. Now that I've been hit with this stuff so many times, I'm starting to understand it. So that's one. And the other is: I wish we were a program that could find the talent we have and then, once they're in the program, if they're doing well six months in, give them a $2 million or $5 million grant to keep working on it. But we don't have that luxury; I don't control the purse strings. All we're giving our fellows is the institutional umbrella, the support, and the runway. So the other reason two years ends up being important is, given how speculative what they're working on is, the amount of time it takes to get a grant proposal together, into a funding agency, and funded,
So you have cash in the bank to do the work, or, on the venture side or the corporate side, to have an engagement with a corporation that's going to get you funded, or to develop a pitch that's strong enough that VCs bite. Any of those things happen on a time constant of roughly a year. Yeah. And so the idea is that in the first year of our fellowship, you know, fellows are basically shifting their mindset and building the foundation for where that funding and those resources are going to come from, so that in the second year of the fellowship they're actually able to hit the ground running, in the best cases. You know, in some cases that first year it doesn't work out, like what they're working on doesn't resonate, or they miss the window on the grant, whatever it is. But it seems to work. And this is sort of, like, something that I struggle with, and you know much more about it than me: this idea of a sort of push versus pull on the people coming to the program. And what I mean by that is, there's one school of thought that says that people need to be intensely, intrinsically motivated; they need to be, like, banging down your door to join the program. And then there's another school of thought that there are people who don't even know that they should be banging down your door, and that you need to go out there and sort of forcefully open their eyes, and then they will be amazing. Where do you sort of fall on that spectrum when you're thinking about where the best fellows come from? This is a really hard, it's a really good question. And there are a few different things that come to mind for me. One is, what's the risk of a program like ours? You know, one could argue that being an entrepreneur is not something you should just fall into.
Like, you will not succeed as an entrepreneur unless you woke up and said, oh, there is no other thing I could imagine doing than this startup, you know. Right. And an argument has been made, that I think is reasonable, which is to say: if you give people a nice path to start thinking about themselves as an entrepreneur, you're basically setting them up for failure, because those people probably shouldn't be doing it. Like, the Darwinian selection that occurs, in terms of whether someone actually is willing, in the middle of their PhD program, to say, you know what, I'm just going to put everything aside, I'm going to figure out how to go raise venture capital money and find the person who's going to help me do that, like, that's an important selection, because the other people just shouldn't be entrepreneurs. It's one thing that keeps me up at night, which is, you know, what we found is actually quite the opposite. You know, because it's so hard, it's almost stupid for someone who's got a PhD in science to take that path, right? Like, let's just think about a few pieces of the traditional entrepreneurial story in the software space. What's the calculus you're doing here? First of all, you might be 20 years old and decide to basically go do this, right? Meaning you don't have a family; you're not in your mid-thirties, right, where you have to actually figure out what your life is going to look like. So it's a different calculus already from the get-go. Then it's like, well, what does it take to go get learning cycles? Okay, I'm going to stop going to class, and I'm going to meet up with some friends, and I'm going to start prototyping. And I could argue that I'll probably get some really satisfying learning cycles, like, on the order of months. Right. Yeah. So that's number two.
And then number three, and by the way, I'm going to find other people to do it with; they're going to be my co-founders. Like, I don't have a lot of vested interest in these ideas; they're brand new, I'm just in the early creative phase. So now contrast that with someone who, let's just say, has spent the last five to ten years becoming a cutting-edge expert in materials science. They've developed an idea that they've been working on as a research program for probably five years, and that is now the basis of something that they think could be valuable. Okay, so now what's their calculus? Okay, let's see: I want to just go be an entrepreneur. First of all, I'm later in life; we already covered that. Yeah. So my analysis is different. And the other people I need to do this with me are probably also later along in life; they're more expensive, they have other constraints. So that makes it harder. Second of all, if I decide to step away, I stop going to my grad school classes and do this, well, how am I going to start doing it? Where am I getting my learning cycles? I need to raise enough money to get a venture capitalist to fund me. Then I've got to go negotiate an order of glove boxes and wait the six months, like, potentially it's a year of my life before I'm actually in a startup. Yeah. Which is a big deal and a big opportunity cost. And then, thirdly, I have a whole different amount of vested interest in what I'm working on; I've already spent five years on this idea. So the thought that we should imagine that people are just going to jump into these things at the same rates or paces as in other areas of entrepreneurship, I think, is sort of ridiculous. And frankly, even if I think about... So you say, oh, you're giving them a fellowship, that's really cushy.
Like, I look at it like this: the people who are coming into our fellowship are people who could get professor jobs at any university in the country, a lot of them. So for me, it's not cushy; they're basically deciding to do something that on the face of it is totally stupid, which is walk away from that path and go into this fellowship where who knows what's next. And my read is, if we have amazing individuals who are willing to take that step, the least we could do is provide that bit of a cushion. I've totally forgotten the question here. I think we're really attacking it, which is this idea of: do you need to filter people by the ones that are willing to bang down your door, or do you need to go out and find the best people and open their eyes to what they should be doing? Yeah, from our experience, the ones that bang down our door are in fact the most entrepreneurial of our fellows, and they are the ones that make the most progress and have sort of the highest likelihood of success. What's interesting for me is the other ones, the folks who come into our program who didn't bang down our door, who basically looked at it and said, yeah, I've always wondered, you know, maybe, I don't know, I want to be a professor. And the example that I would give is Raymond Weitekamp, who came out of Bob Grubbs's lab, a Nobel laureate, at Caltech. You know, it wasn't until after Raymond got into our first cohort that he basically said, you know what, this is kind of a hedge for me; I figured, because you're connected to the national lab, I'll keep publishing, and I'll kind of probe this, but I can still go be a professor. And what we found in that case, or what he found in that case, was that he had no idea what the other world would look like, and he found himself, like, so excited and motivated by a more entrepreneurial path.
And my take is, okay, you know, the way I talk about this: Ray had stumbled, in a way, in his PhD onto taking olefin metathesis chemistry, which can make some of the strongest, most corrosion-resistant, toughest polymers in the world, and he found a way to make the synthesis of those polymers light-activated, so that you could turn that into a 3D printing technique. And I think he would agree that had he not come and done the fellowship, those ideas would still be in the world of publications. Instead, you know, he's selling products as 3D-printed resins in the world. And more importantly for me, Raymond would have been a professor somewhere, still kind of not having been exposed to this other mindset. And now, I think, the fact that he still has that, you know, he could still go back and be a professor. I think about folks like, you know, a number of folks from the great industrial labs, from Bell Labs or IBM Research, that are now in universities, right? Like, they can pop back if they want to, but they
In this conversation I talk to Luke Constable about the complicated tapestry of finance, funding projects, incentives, organizational and legal structures, social technologies, and more. Luke is the founder of the hedge fund Lembas Capital and publishes a widely-read newsletter full of fascinating deep dives. He’s also trained as a lawyer and historian, so he looks at the world with a fairly unique set of lenses. Disclaimer: nothing Luke says is an offer to buy or sell a security or to make an investment. Links Luke on Twitter Lembas Capital Theory of Investment Value (John Burr Williams) 1,000 True Fans (Kevin Kelly) Quantum Country Patreon Lembas Capital’s Open Questions The Empire of Value (André Orléan) Who Gets What and Why (Alvin Roth) The Mystery of Capital (Hernando de Soto) I, Pencil (Leonard Read) The Crime of Reason (Robert Laughlin) Andrew Lo’s papers Transcript 0:01:05 BR: So if technology creates a lot of wealth, why does it feel like most people in finance are hesitant to invest in technology? 0:01:19 Luke Constable: So that's an interesting place to start. I think you have to understand, no one invests in technology. If you think about investors, investors invest in businesses that use technology, and so that's probably the first frame I would use. Investors aren't hesitant to invest in technology; investors never invest in technology. What investors do is they invest in these products that are going to generate cash flow streams, and so that's sort of the first thing. And then the second thing is, a lot of the technologies that you and I think about, they seem obvious at a macro scale, where you take a high-level view and you say, "Well, it would be so much better if we took a blank sheet of paper and said, 'We should do X.'"
0:02:10 LC: For instance, you could make an argument about housing technology in San Francisco, and you could say, “All of these houses built in SF, they're old Victorians, they don't really have washing machines and laundry machines, you could probably change the structural engineering, probably build them higher”. And if you look at them and said, "Oh, I have a better prefab housing technology," or "I have a better way to do it," you'd miss the point, which is just because you've invented the physics, and this is the other thing, you actually have to sell it into a market. You have to work within the market, and so that's usually where I see a lot of the interesting technical products fall down. 0:02:53 BR: So the thing that I want to poke at in the assertion that people invest in businesses is that people invest in things that are not businesses as well, people invest in gold, in currencies and other, I guess, assets would be the high level thing, and so I guess the question is why isn't technology itself an asset, and there's probably a very obvious answer to this, I just... 0:03:25 LC: Sure, so let's take a step back and talk about the various asset classes, there's sort of a couple of ways to break them down. 0:03:32 BR: Okay. 0:03:33 LC: One way people do this is they'll say there are real assets, these are things like real estate, some people put commodities in there, and then there are sort of these yield assets, these are debt that is putting out a cash flow stream, and then you have equities, and there's some argument that cryptocurrency is sort of its own asset class, and then currencies might be their own asset class too. And what you'll quickly find is these things kind of blend together. A lot of them are different ways of financing sort of the same project. And then you have the ones that are just traded for their own sake. 
So there's sort of two questions you're asking, the first is, why isn't "technology" the same as like gold or silver or real estate, for instance? And so there's a use value to all of those commodities, and that's why they have value, and that actually is a cash flow stream, we actually do use gold, we do use silver, and that's how that works. 0:04:43 LC: But if you think about what's valuable, there's sort of something that's value... And I should have started with this. When you think about what value is, there's value in exchange and then there's value in use. So the value in exchange ones, these are often, you could argue, cryptocurrency or a lot of currencies, gold is actually usually thought of as a medium of exchange, that actually is valuable for cash flow purposes just probably not in the ways that you think. So what happens with these currencies and these stores of value is they sort of become Schelling points where I just know there are enough people transacting in that thing that I can find the liquidity, I can actually go convert to cash, and I can go basically get that cash when I need it. That actually is a cash flow need. It's just not often thought of that way. 0:05:40 LC: Now, liquidity is really valuable because you might be invested in the best business of all time, and it might have a very, very, very high net present value and be doing a lot of good for the world. But if you take a step back and say, "Wait a second, I have to pay off student loans," or "I have to pay off my mortgage," or "I just want some cash to go on vacation" or whatever you want to do with it, you look at this and say, "Gosh, I do need some liquidity," and that's what those other sort of trading assets are for. 0:06:10 BR: So basically, technology contributes to the use value of an equity asset, is that the right way to think about it? 0:06:22 LC: I don't think of technology that separate from... 
It's sort of so baked into the environment that it's just difficult to disentangle. Technology, lazily put, is just ways of doing things hopefully more efficiently than we're already doing them. And so if you think about why certain assets become tradable, either they're creating these cash flow streams, or there is some value in exchange. I mean, the way that I often frame investing for the people who I invest for is there's sort of two sets of flows that determine an asset's price. There is underlying asset's cash flows and then there are the capital flows of all the investors. So you have sellers for some reason, maybe they have liquidity needs, maybe they can't hold an asset for a regulatory reason or a legal reason, and then you have buyers who come in, because they're interested in that asset, and it could be because they think it's an interesting thing to invest in, it could be because the regulators told them that they have to buy it, it could be... You laugh, but this is actually... 0:07:32 BR: What sort of things do regulators mandate that people buy? 0:07:37 LC: Sure, so if you go look at banks and sovereign debt, well, actually banks and all debt. So you have the bank regulators set risk weightings on various types of debt, which is sort of a nice way of saying, there are all of these different cash flow streams, and the regulators are saying to you that certain cash flow streams are riskier or less risky. And shockingly, they often argue that their sovereign debt is less risky than some other cash flow streams. 0:08:13 BR: I'm shocked. 0:08:14 LC: In practice, that may or may not be true. It's a weird thing to think about, but, in some cases, a multi-national corporation might actually be a better credit than a country. 
But that's not how these things work, and so what happens is a bank regulator will sometimes go to a bank and say, "The risk weighting on the sovereign debt is far lower than the risk weighting on this corporate debt,” which effectively is pushing the bank to go buy a certain type of debt, which then goes and funds all of those projects. So then coming back to all of this, if you think about investing in sort of these two sets of flows, like that underlying asset's cash flows and then the capital flows of all the investors, you basically, in practical terms, want to think about markets in terms of what's driving someone's action. 0:09:05 LC: And when you think about that, that's when market prices start to make sense. They won't make sense to you if you think that you're just going to sit down and solve an analytical equation where you just sort of put in a few inputs, you make a few estimates and then the price gets spit out. It's much more of a socially constructed thing. 0:09:25 BR: And going back to your point about liquidity, it feels like there's this... I don't know how to describe it, like sort of a weird effect where it feels like there's a consensus that investing in... I won't say technology, I'll say investing in a business that is proposing to build a technology with a very long-term time scale, there's consensus that that will eventually create something... Will eventually create a lot of value, but then at the same time, because of these liquidity constraints, very few people are doing that, and that's the argument for why people are not making those investments, but it seems like that would be a point where you could arbitrage. It seems like there should be some people who are willing to not get cash flow for a couple of decades, and they would be able to reap the rewards of making these sorts of investments, but you don't see that, so I assume that those people are smarter than I am. And so the question is, why don't you see people doing that? 
0:10:50 LC: So you actually do see people doing this literally all the time, but it's not for the sexy technology concepts that you are thinking of. So go look into the public markets right now. You'll see a handful of software businesses that are trading at very high multiples to sales. So the idea is that you sort of have this trade-off: you could get free cash flow after taxes right now, or effectively more free cash flow down the line from some company that's growing quickly, and so what you do is you pay some price based on that free cash flow multiple. What happens when the free cash flow is really, really far down the line is we don't even use the free cash flow number, we actually just use the sales number. And sales is obviously much higher than just free cash flow, 'cause free cash flow is after all of your expenses and taxes. So when you go look, and you see some company that's trading at 15 or 20 or 25 times sales, the stock market is betting on that business being around and generating free cash flow over a 25 or 30-year period. That's the only way that math works. In practice, the reason the stock gets priced that way has something to do with those cash flows and also a lot to do with the capital flow landscape, but that is what's happening. 0:12:15 LC: These companies are getting funded on a 30-year time scale, and so the right question shouldn't be, "Why aren't good projects getting funded?" They actually are. The right question is, "Why aren't other good projects getting funded?" And so I think it comes down to what is legible to institutional finance. And so you might look out into the world and say, "There are trillions of dollars of capital... " I mean, there's just oceans of money out there, and it seems like someone could raise billions of dollars to go trade a building with someone else, or something else that seems like it isn't actually moving the world forward, in this sort of simplistic take.
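Luke's claim that a 25x price-to-sales multiple only "works" if the business produces free cash flow for decades can be sanity-checked with a toy model. This is not anything from the conversation: the free-cash-flow margin, growth rates, and discount rate below are purely illustrative assumptions.

```python
def breakeven_year(ps_multiple, fcf_margin, growth, discount, max_years=200):
    """Year in which cumulative discounted free cash flow first covers the
    purchase price. Sales are normalized to 1.0, so the price paid is just
    the price-to-sales multiple. All inputs are illustrative assumptions."""
    price = ps_multiple * 1.0
    fcf = fcf_margin * 1.0              # first-year free cash flow
    pv_total = 0.0
    for year in range(1, max_years + 1):
        pv_total += fcf / (1 + discount) ** year
        if pv_total >= price:
            return year
        fcf *= 1 + growth
    return None                          # never pays back within max_years

# A hypothetical company bought at 25x sales, 20% FCF margin, 10% discount rate:
fast = breakeven_year(25, 0.20, growth=0.20, discount=0.10)  # strong grower
slow = breakeven_year(25, 0.20, growth=0.15, discount=0.10)  # slower grower
print(fast, slow)  # prints: 30 45
```

Under these made-up assumptions, the buyer breaks even in year 30 with 20% annual growth and not until year 45 with 15% growth, which is roughly the 25-to-30-year horizon Luke describes; with no growth at all the multiple never pays back.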
But why can't we take that billion dollars and put it towards some technology, something that might be obvious in your opinion toward moving the world forward? 0:13:15 LC: So the first thing is you have to understand what matters is, in practice, even though it looks like there are trillions of dollars of capital out there, risk-adjusted or uncertainty-adjusted, there's actually very little capital available. And the right way to think about it is to say, what type of product are the capital allocators buying? And so this isn't, again, a place where we have an analytical equation and you just pop your numbers into the equation and you say, "Well, the return to society would be X percent higher if we invested in this type of technology that will have a payoff in 25 years." The right way to look at it is to have empathy with the person who is in this capital allocator's seat, in this investor's seat... 0:14:08 BR: I.e you. 0:14:08 LC: Well, me or anyone else. But again, I'm not trying to paint myself upfront, there's the intellectual side of capital allocation, and then there's the reality that a lot of people are using an element of gambling in this. But it's to understand what they're buying. And so the reason people are comfortable investing in that real estate or investing in an enterprise software company is someone has come up with a set of metrics that has convinced the market that those cash flow streams are durable, that they will exist and be predictable 20 or 30 years out. And so what you've done is you've created this yield product, and what you've really done is you've created a sense of certainty. And I think what people don't like is uncertainty, they really want to essentially have something that they don't have to do too much intellectual work to understand and that they feel like they can trust. And so the problem is actually sort of one of search costs. 
0:15:20 BR: A really dumb question is, what does it mean for something to be risk or uncertainty adjusted? Because you said that there's trillions of dollars out there, but there's actually not that much when they're risk or uncertainty adjusted, and is that basically just saying that capital allocators don't have the incentive to spend most of that money on anything that they perceive to be risky or uncertain? 0:15:50 LC: Not exactly. 0:15:51 BR: Okay. 0:15:52 LC: It's two things. So first, in terms of how most people think about risk, the way that you might think about this before you start really looking at it is you'd think, well, we're just trying to sort of predict the future, the future is relatively predictable, and we can make some educated guesses about probabilistically what is going to happen, and then we can sort of model out those payoffs, those defaults, and sort of go from there. And so sort of the canonical text in finance for equity valuation is called The Theory of Investment Value, written by a guy named John Burr Williams after the Great Depression. I can send you links after this. He was basically trying to sort of scientifically estimate the value of all free cash flows. You may have heard of this concept of discounted free cash flows? 0:16:48 BR: Yeah. 0:16:48 LC: He's arguably the person who invented it or at least codified it. In practice, though, you quickly find it is unbelievably difficult to figure out and to actually estimate the cash flows of something, even four, five or six years out. The world just changes really quickly, competitive positions tend to change really quickly, and so you actually could come up with this range of outcomes, but they become somewhat uncertain. So you take that as sort of the investing reality, and now let's look at sort of the funding reality.
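The discounted-free-cash-flow idea Luke attributes to John Burr Williams is simple to state: an asset's value is the sum of its expected future cash flows, each shrunk by a discount factor for how far away it is. A minimal sketch (all numbers below are made-up illustrations, not market data) also shows his point about fragility: a modest change in the growth assumption moves the answer a lot.

```python
def present_value(cash_flows, discount_rate):
    """Williams-style DCF: sum each future yearly cash flow,
    discounted back to today. Year 1 is the first list entry."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Two ten-year forecasts that look similar year to year...
steady  = [100 * 1.05 ** t for t in range(10)]  # 5% annual growth
shakier = [100 * 1.02 ** t for t in range(10)]  # 2% annual growth

pv_a = present_value(steady, 0.08)   # roughly 818
pv_b = present_value(shakier, 0.08)  # roughly 726
```

Forecasts differing by only three points of assumed growth end up roughly 13% apart in value over just ten years, and the gap widens fast with longer horizons, which is why estimating cash flows "even four, five or six years out" is so hard.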
A lot of the people who fund investment funds or who are making investments, they have cash flow needs. They have sort of real cash flow needs, and then they have sort of intellectually forced cash flow needs. The real cash flow needs are, look, we have to fund our endowment, we pay X percent out per year so that the college can function, so that the hospital can function. 0:17:53 LC: And then the intellectual cash flow needs are, look, here are the risk models that we use, and when we see the prices of our investments fall 8%, we consider that as fundamental information that our investments aren't performing well, and so we need to sell out. And so they actually don't just need cash flow to look good, they need the pricing information in the market to look good. So we're talking about arbitrages. This is probably one of the biggest arbitrages that exists in the market, but it's unbelievably difficult to capture. So let me give you an example, imagine that you had a row of 10 houses in a neighborhood and they were all... Let's just say for these purposes, valued at $100. So let's say one of the neighbors, they are in a rush and they need to sell their house because they got a good job offer somewhere else, so she sells her house for $97 because she'll just get whatever she can get. And then another neighbor gets a similar job offer, and she sells her house for $95, and suddenly some other neighbor along the street looks around and says, "Oh no, prices are falling on our houses, everything else is getting sold off, we need to sell." And so they might sell just because they're scared, because they think there's sort of fundamental information in those transactions, in saying, "Okay, the market price has fallen." 0:19:22 LC: So you've seen the marked prices fall from a $100 down to $95. The problem is the market shows the prices of transactions, they don't necessarily tell you the fundamental value behind those transactions. 
So as a result, you being a portfolio manager, say you're invested in houses, you might have a view and say, I think that those houses that sold off, those were forced sellers. That doesn't mean that the price of the assets has actually fallen; these prices will come back up. Someone else might say, "No, no, that's pretty arrogant of you. The market has spoken, and job opportunities have changed, and people are going to leave the neighborhood." Now, it's really difficult to capture that sort of arbitrage, and arbitrage isn't even the right word, but capture that valuation spread, because it actually comes down to who is right, and that ends up being a grounded matter of opinion, but effectively a matter of opinion. 0:20:32 LC: You can do a lot of diligence, and then you can maybe figure out if you're generally more right or generally more wrong. Ideally, you get really, really good at sourcing information on the asset class that you're investing in, and then you go around looking for these situations where the market has sold assets off, but you recognize that it's sort of incorrect in doing it. But for the big portfolio managers, again, there's an information search cost. Every single time one of their fund managers underperforms, the fund manager is, of course, going to come back and say, "No, no, it's temporary. We're right, the strategy will come back. Don't pull your money." 0:21:12 LC: And so the difficult thing for the allocators to funds is they sort of have to diligence the fund managers, who are then diligencing the investments. And so you can see that as you sort of go down this line of information being passed from person to person, the search costs just rise. What it really comes down to is basically trust, where the investor is investing in a company or in some operator, and then the allocator is investing with the investment fund.
And all along those links in the chain, it's so expensive from an information perspective to figure out who's being honest and who isn't, that trust is actually the fastest way to figure out what is a good investment and what isn't. 0:22:06 BR: Yeah. Correct me if I'm wrong, but then I sort of extrapolate that to the thought that it's actually very hard to build up trust in someone who's proposing to make, say, a 25-year bet, because you would need 25 years to build that trust, right? 0:22:31 LC: Sure, and this is actually the problem. And so if you look at it, most fund cycles for the investment funds themselves, they typically have about a three-year window to prove themselves. So if they can't show marked prices rising within two to three years, or they can't show cash flows coming out in those two to three years, it's in practice really difficult for that fund manager to go raise more money from an allocator. The best allocators, they really get it. But in practice, most people are sort of looking at each other trying to understand what we all think is valuable and what we don't, and people are actually pretty good at it. But if you're not seeing results within three years, it's difficult to go raise the next VC fund, the next private equity fund, or just to raise more money for whatever your next fund vehicle is. And so what happens in practice is, people don't go spend their time investing in projects that are going to take a really, really long time and won't get marked. So what that means is, for an entrepreneur or for someone who's trying to get funding for something, getting that asset mark is unbelievably important, because that's what lets the great investors go invest in you. 0:24:00 LC: So it's really important for the VC-backed company to get that Series B or Series C or Series D done.
That single mark in time is hugely important, because everyone can sort of concentrate on that, take it as a market price, even if it's not a perfect market price, and then write that in their books, measure it, sort of trust it to some degree, and everyone can sort of coordinate around that, because you have a market clearing price there. And so if you think about it, just on the equity side of it, every founder's equity actually is a product in and of itself. I always find this interesting because I think most people don't think of it this way. 0:24:42 BR: I don't. 0:24:45 LC: But when you start a company, you're actually... You're selling two products. The first is sort of your individual product. This is the thing that you think you're starting. And the second is your company itself. And so your company can turn into a product where you sell your debt or you sell your equity or you sell some other sort of financing scheme, but that's a product, too. And the way that product is priced is, in the private markets, you have one-off auctions where you sort of game the auctions as much as you can to get the highest price. This is what everyone does in their seed, Series A, B, C... Well, not so much in seed, but in A, B, C, you basically create auctions where you try to get all of the partner meetings on Monday morning to be talking about you, put all of your meetings into a week, and then you get everyone to bid all at the same time, and then you maybe don't go to the highest bidder, but you go with some mix of the highest bidder plus the people that you want to work with. 0:25:35 LC: Then the public markets are actually a totally different mechanism, it's a different distribution method, where it's a continuous auction, where there are bids and asks continuously, at all times. And so you can't actually create these small little one-off auctions where you can rig the price up, because the bids and asks just keep coming. But the benefit is, if you know how to...
If you do well in that channel, you then have a lot of liquidity and you can usually get a higher price and arguably more capital. It's not actually even clear that you need to do that, but that's sort of the argument. And so I think if you start thinking about it that way, you can start to recognize, "Alright, that's why some projects are getting funded and some aren't." It's because the projects that are getting funded, they are products that work well in that market, and they are actually products; it's not just a throwaway phrase. 0:26:37 LC: I was chatting with someone about this earlier. I think it's probably good to take the emotion out of whatever project you're working on and think about this for unemotional things. So one of my friends is trying to get a research project funded, sort of like an arts VR research project funded. And we're talking about this and she's like, "Oh, now I get it. I should think about this like soap." So imagine you are a soap manufacturer, and you have made the best soap in the world. You think it's better than any other soap. You wouldn't expect to sell that just because you've created it. You'd think, "Okay, how am I going to get it out there? Am I going to get it onto Amazon? Am I going to start a store on Shopify? Am I going to go to the people at Costco or Walmart and cut a deal with them so it's distributed?" Because I might have the best soap in the world, but some mediocre soap that gets into the Costco channel and then works with those constraints, they are going to sell more than I am. That product is going to do better. And if you care about people using your product and you're sort of not just cash flow-driven, but you actually care about the impact, you really, really need to think about that distribution channel and how you're going to get it out there.
0:27:50 LC: What you quickly find is that often the constraints that people place on their products, it's not that they don't realize they're making their products worse, it's that they want those products to get distributed and they think the tradeoffs are worth it. And so the really interesting new products, they recognize that, "Oh, there's some constraint or there's some tradeoff that a lot of other people made with their existing product lines, and I don't have to do that," because the way you distribute it has changed, or some assumption that they've made, they actually don't have to make that tradeoff. And I use something like soap because it's boring and unemotional to at least most of us, but it's almost definitely true with research funding. And so you and I talk about this a lot, but I mean, if I were trying to go raise money for research, it would depend what I was trying to do, but I think there are probably new distribution channels out there, so I mentioned with small scale... Sorry, you were saying? 0:28:50 BR: Oh, there's just three different directions that are really exciting to go with this. 0:28:56 LC: Oh, please. 0:29:00 BR: Yeah, so I think what I'm going to do is I'm going to lay... Actually, I will lay out the places that I think are all tied into this that are all really interesting, and you let me know how you want to weave through them. So one is actually this... So this point about a project as a product is a little bit mind-blowing, and I think it's tied to an earlier point that you made that I wanted to dig into, about what it means to be legible to institutions. And if I am understanding correctly, the marking of valuations is one of the ways in which... At least, in the startup world, venture capitalists make themselves, their firm, as a product legible to other institutions. And so Shopify comes along, and you can now distribute your soap through an online store that you never could.
What would be the project funding equivalent of that new distribution channel? 0:30:17 LC: So I absolutely don't think that this is that new, but it seems to have come somewhat in vogue, and I think it's just patronage. And so if I were trying to go do research where I was trying to make, say, call it $100,000 a year or something along those lines, basically enough that you could live a really good life, afford rent in any city and sort of have basically time to yourself, I think the obvious way to do it is to try to build an online following. And this is not a new idea. Kevin Kelly wrote that old essay, I think it was 1000 True Fans, where he said, “Look, with 1,000 True Fans paying you $10 a month, that's enough.” I think a mutual friend, Andy Matuschak, who has Quantum Country, has done a great job with his Patreon. I think it would be really, really, really difficult to do this. But I would think a lot about what really causes someone to say, "I'll pay $5 a month to go read this newsletter, or to go basically fund some research I find interesting." And this distribution mechanism didn't really exist before, and so I actually think in some ways, we're still pretty early on. And all I would do is think, "Alright, I need to get 2000 people to sign up all over the world." The Internet rewards niche behavior, and so how do I get into the community of people who find this just sort of interesting and sort of entertaining, and I would think a lot about how I could create something around there. 0:32:01 LC: For the larger amounts, I would actually do the opposite. So for the larger amounts, I would go become friends with everyone in the funding world. So they have incentives too. And what you'd want to think through is normally... I guess I'll put it this way, and I was chatting with my friend about this. Normally, the way that the great researchers I know think, they're almost... They're quite dogmatic, to be honest.
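As a sanity check on the patronage arithmetic above, a quick sketch; the subscriber counts and prices are the ones quoted in the conversation, not real data:

```python
# Back-of-the-envelope patronage math, using the figures quoted above.

def annual_income(subscribers, monthly_price):
    """Yearly revenue from a flat subscriber base at a fixed monthly price."""
    return subscribers * monthly_price * 12

# Kevin Kelly's 1,000 true fans at $10/month:
print(annual_income(1000, 10))   # 120000
# The 2,000 people at $5/month mentioned above land in the same place:
print(annual_income(2000, 5))    # 120000
```

Either configuration clears the $100,000-a-year figure mentioned above, before platform fees and taxes.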
They say, "Okay, my project is the best project. This really will advance the field." But in practice, what might make it easier to sell the project is to understand what gets the person funding the project promoted? What makes the funder feel good? 0:32:40 LC: What will get to that next level of funding for the person above them too? And then if you're able to map that out, you can represent it in a way that basically works for everyone. And she was actually pushing back on me and saying, "Look, I don't want to lie. I don't want to represent my project that way. That seems sort of fake or it seems like a veneer." But the truth is that the project that she has in her head only exists in her head and doesn't exist in anyone else's head that way. And if she doesn't communicate it in a way that actually makes sense to them, then it's not going to get anywhere. 0:33:21 LC: So I think the really frustrating thing to come out of this is that basically everyone's in sales in some way, shape or form, and I think a lot of people don't want to be in sales or think that it is a sort of a difficult thing to go do. And so as a result, they just sort of shy away from it. And so this is, again, why I think the distribution analogies really, really can work well, because it sort of takes the emotional weight out of it. And then if you look at this and say, "Oh, this isn't the best grant maker in the world, this is just Costco, and I'm just trying to get into the new line," I think it can feel a lot less heavy. And you can maybe treat it, and maybe the field might open up to you a little bit more. 0:34:05 BR: Okay. I guess, the tension I see there is building up trust with the people who are the capital allocators, almost feels like the opposite of figuring out a different way of making yourself legible to an institution. Institutions are obviously made up of people, so these aren't two separate things.
But I think that there's something to the fact that you need trust when you're doing something that is not institutionally legible. So it's like you don't actually have trust with a lot of the companies that are publicly traded that you invest in, but they are... They've packaged themselves in a way that is sort of institutionally legible if that's... And I think this might actually be a good point to really... What do you mean by something being institutionally legible? What does that mean? 0:35:20 LC: It's a vague handwavy way of saying you just need to be recognizable to the people who are buying your product, and you just have to understand, in practice, how those relationships work. And once you understand the practicalities of whatever market you're working in, then you'll be able to understand how to craft a product for the people who actually want it. And, again, I think the difficult thing here, this is not intellectually that challenging, it's much more of an ego thing where we have to put aside what we think are the best products that everyone should be buying or what everyone should be doing. So if you think about it, since we're talking very abstractly here, what capitalism really rewards is, and actually this is true of all non-violent selection, it rewards behavior change. And so what we're really saying is how do you get someone to sort of change that behavior. And when you think about it that way, what's legible in your head, if someone else hasn't learned all the same things you have, they're going to end up using some sort of abstraction, some sort of shortcut. 0:36:41 LC: And that's sort of what I mean by saying intellectually... Or sorry, institutionally legible, is you understand the abstractions they use, you understand basically the mental models they're using to try to understand what's going on, and then you are able to fit your product into that. So I can give you a couple of examples and findings that are... 0:37:02 BR: Yeah, please. 
0:37:04 LC: So I don't know how deep into accounting you are, but there is a metric that's really commonly used called EBITDA. And effectively, it is a free cash flow proxy metric. And it was invented by some people in the cable industry who wanted to raise a lot of money to go roll out cable systems all across the US. And they wanted to be able to quickly raise debt to go buy these sort of small cable operators and then put them all together. And with this metric that they invented, all of these other investors suddenly had a Schelling point. Suddenly, all of these investors had a new unit of measurement to look at this type of business. And because they accepted it, they were willing to go fund those purchases. Suddenly, a whole wave of those purchases were done, and basically a whole wave of these projects were financed because someone figured out a way to make that institutionally legible. 0:38:11 LC: And a similar thing has happened in the last 10 or 15 years with what we call enterprise SaaS companies, where we now have a new set of metrics that weren't really in use 20 years ago. These are metrics, I'm not sure if you're familiar with them, these are metrics like... 0:38:26 BR: The CAC. 0:38:27 LC: Gross churn... CAC, gross churn, net dollar retention. And if I went to someone today and I said, "Oh, I'm investing in a business that has an average customer lifetime of six years, an LTV to CAC of 4:1, it has 98% gross retention and 127% net dollar retention, and I think those numbers are going to persist for the next four or five years," that is something that I almost wouldn't even have to explain what the product is. If something met those metrics and truly met those metrics, it's a company that would get a huge valuation in today's markets. And it's again, it's because it's now institutionally legible. Someone has basically convinced the world of that. So then the question should probably be, why do these things get institutionally legible?
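A minimal sketch of how the metrics just listed are computed. The inputs below are made up to reproduce the numbers in the conversation, and the formulas are the commonly used definitions; exact conventions vary from firm to firm:

```python
# Illustrative SaaS metrics with hypothetical inputs.

def ltv_to_cac(annual_revenue, gross_margin, lifetime_years, cac):
    """Lifetime value (in gross-margin dollars) per dollar of acquisition cost."""
    ltv = annual_revenue * gross_margin * lifetime_years
    return ltv / cac

def net_dollar_retention(start_arr, expansion, churned):
    """This year's revenue from last year's cohort, over last year's revenue."""
    return (start_arr + expansion - churned) / start_arr

# A hypothetical company resembling the one described above:
print(ltv_to_cac(annual_revenue=10_000, gross_margin=0.75,
                 lifetime_years=6, cac=11_250))                  # 4.0
print(net_dollar_retention(start_arr=1_000_000,
                           expansion=290_000, churned=20_000))   # 1.27
```

With $20,000 churned on $1,000,000 of starting revenue, gross retention in this toy example is 98%, matching the figure quoted above.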
And what I find is that, we're actually re-using the same math over and over again and finding new situations where we didn't realize that math applied. And so usually what's happening is, we're finding relationships that are really durable, that are really, really, really resilient. 0:39:40 LC: So I have this little questions page on my website, and the first one is, "What is the next durable customer relationship that we haven't really seen yet?" So what happens is, once the market recognizes that there is a durable customer relationship, you can build that into your models. These models actually should come from how we model these bonds that last 20 or 30 years. If you can fit the customer relationship into that model, suddenly, all of the bond investors and sort of the bond valuation metrics that we used as proxies, they drift into the financing world. And people say, "Oh, this is also a durable relationship, so we should go fund it." And coming back to your first question to say, how do some of these huge technology projects get off the ground, it's because someone has convinced a set of investors somewhere that there is this long, durable, and that's important, resilient set of cash flow streams 20 or 25 years out, and then we discount that forward, so that's how that works. 0:40:45 BR: Oh, man. Okay, so to riff on that and to go back to your analogy to products and distribution channels, what basically... You could almost think of it as someone coming in being really good at sales and arguably like marketing, and basically changing taste and creating a new product category where people didn't know they wanted gluten-free things, and then they go and they create that new marketing category, and now customer tastes change, would that be... 0:41:29 LC: And it's funny you use the word taste, that is...
It's both fundamental reality, in a true Bayesian sense where we're updating our priors correctly, as if we had all knowledge, that does matter. But then taste does matter too, that's exactly right. There's another book I'd recommend called "The Empire of Value" by a guy named Andre Orlean, who is this really interesting French economist. And so in this book, he makes this argument that prices are completely socially constructed, and it's like you're saying, it's taste. As a side note, it's totally unclear to me why all of the people who are coming up with the socially constructed value theories are all these French people. It makes one wonder what's in the water in Paris. But similarly, it's to say, actually, I think, and everyone else thinks, and we're all sort of self-referentially thinking, therefore, the thing exists, the price exists, the value exists. 0:42:32 BR: Yeah, yeah, that makes sense. 0:42:33 LC: It exists as this organizing principle, which everyone else then cites as a real reference and then it takes on a momentum of its own. 0:42:44 BR: And what... And so, I guess, do institutional structures like C-corps and LLCs, do those relate to institutional legibility? In my head, they do, but I might be going a step too far. 0:43:04 LC: Yes, they do, but I want to backtrack in terms of what you're saying. 0:43:12 BR: Yeah, do it. 0:43:14 LC: So what they do is they basically... The legal structure sets the landscape for markets. I should completely confess my own bias here. I am massively, massively pro-markets. I think virtually no other social mechanism that we know of has raised so many people out of poverty. But as much as I love markets, I recognize that it's not sort of this shallow teenager's love of markets where I overdosed on Ayn Rand. It's more along the lines of... 0:43:45 BR: Be nice to the little libertarians. 0:43:49 LC: No, I was once one when I was 14 too, I get it.
And I think the problem is, you have to understand markets are these amazing and emergent phenomena that pop up basically naturally everywhere people trade with each other. But efficiently functioning markets are actually very, very expensive public goods to maintain. And that means that you're depending on the bias of all the regulators to try to make the best guess they can to create and maintain these liquid markets to make sure that people are transacting fairly. To give you another book recommendation, there is an economist named Alvin Roth, who wrote a book called "Who Gets What and Why," and a lot of his students went on to go work at Uber and Airbnb to sort of create these marketplaces. And if you look at it, they're actually quite intentional about how they're sort of creating the markets. So now, let's take one step further back and say, “Alright, all of the countries are creating markets themselves, too, and they're creating the balance of these markets.” 0:44:54 LC: So as you know, I'm a lawyer and was a history major and sort of loved looking into this stuff. I would argue that one of the least appreciated social technologies of the last few centuries is the concept of limited liability. And so it used to be, before we had easy access to creating limited liability organizations, if you started a business and it went bankrupt, you personally went bankrupt. Maybe you were thrown in jail, maybe your family went bankrupt, and so you couldn't go that far out onto the risk curve. And so, socially, if you were thinking about this from sort of an agent-based modeling perspective, if you could basically increase the variance of what agents could do, if you could basically socialize some of the risk, then you let people take a little bit more risk. Maybe it doesn't work out as well for a few people, but socially, you get to that higher hill in the hill-climbing analogy. And so you're asking about how C corps work and LLCs work.
Do you want me to just run through the history really quickly? 0:46:01 BR: Well, I guess more what I'm poking at is just talking about how, at the end of the day, these aren't laws of nature, the structure of organizations and... 0:46:14 LC: Not at all. So why do we have Delaware C corps? Coming back to limited liability, in the late 1800s, New Jersey created a charter that let anyone go get a corporation. And then after that, New Jersey passed a set of laws that are colloquially known as the “Seven Sisters,” and these were terrible laws in the view of all the businesses who were registered there, so they were looking for other places to register. Delaware saw this as an opportunity, so around 1900, Delaware lowers their taxes, lowers their registration fees, and they bring a lot of corporate registrations in. And then they set up their court systems so that they specialized in registrations, at which point Delaware becomes the de facto place. You get a runaway phenomenon, then all of the good corporate lawyers want to go practice in Delaware or they want to be corporate judges in Delaware, and all of the interesting cases go to Delaware. And it's literally gotten to the point where everyone in the US references Delaware corporate law, and non-US companies will create charters saying they'll defer to Delaware corporate law, and countries who are still forming their legal systems will effectively copy and paste a lot of Delaware corporate law. And so coming back to your point, it's not a law of nature. These are people doing the best they can to optimize the landscape, and that's how it works. 0:47:47 BR: And so my thought would be that that does relate to institutional legibility, because if I went to someone and said, "I'm using a B corp structure," they'd be like, "What the heck is that? I'm not touching that with a 10-foot pole."
But if I say that I am using a Delaware C corp, then that is a legible abstraction, so I guess that would be my argument for why institutional structures matter. 0:48:24 LC: They do, and I think what it comes down to is you have all these degrees of freedom when you're starting any organization or any project, and you just want to think about where you want to innovate and where you don't want to innovate. So you look at US business organizations, I should say this, since I'm a barred attorney, this is not legal advice. There are basically four options. You default into being a partnership where you actually have unlimited liability. You can be a limited liability company, which is done state-by-state. You could be an S corp, which is a tax status of LLCs, or you can be a C corp, which is the one that you're talking about. 0:49:03 LC: And what you go see when you run through all of these things is, well, there might be a better way to do this, but for the company that I'm starting or the project I'm starting... So the fund that I run, we have a Delaware LLC. I could argue to you that there are things we could do that would actually be better for the investors and better for the whole strategy, but you then look at this and say, "Hmm, it's just not worth the marginal effort given the payoff of actually trying to overcome that sort of legibility hurdle." And so I think what ends up happening is you end up getting these innovations around the edges where someone says, "Okay, here is one use case that's a little bit better, and we'll keep everything else the same except for that," and then the new standard arises. I don't think it ends up being worth saying, "I want to create a new legal structure and a new product and do physics research all at the same time," just because there's not enough time in the day. 0:50:11 BR: Yeah, I guess it just...
It makes me wonder, because it feels like these legal structures do impose certain constraints, it just makes me wonder out in the landscape on a completely different optimization mountain what other constraints could be imaginable. 0:50:40 LC: So probably the most difficult cost to measure out there is opportunity cost, because it's so difficult to say, what could things be if we organized everything differently? And one hopes that when you have 50 states, that's how federalism works in the US, one hopes you get people experimenting with regulation, and you can get maybe a new project started off the ground somewhere else, if not in the state that you live in, and then of course, with more countries, you can maybe go overseas and do it too. And it's interesting, you brought up Spotify a little bit earlier, it's unclear to me that Spotify could have gotten started in the United States, given the state of music laws at the time. But then what happens is all of these European customers start using the product, and that has an institutional legibility of itself, and people say, "Oh, okay, I can see it's working in that country, it will probably work here," and I wasn't involved in the record label negotiations, but I assume that's basically what they were looking at. And then you look and say, "Oh, okay, then the laws can change." 0:51:52 LC: The other thing that I just want to point out is that when a law is set, that's a much more fluid thing than I think most people realize who haven't spent a lot of time looking at this. So in practice, a lot of times, there are sort of these gray areas of the law, and I'm not saying people should go break the law. But there's a gray area of the law where the products that you're working with don't really fit into the regulation, or customer demand is just so massive that the regulators will actually change their mind once they see that demand. Now how far you want to push that boundary is really up to you. 
There are arguments that Uber or Airbnb were illegal when they were first started. There are arguments that they're illegal right now. I don't think so, and I think they did the right thing, and I think the world's a better place for giving everyone the options. But it's also really, really important to realize that there are these constraints, but when you read a law, it's not a law of physics. And the other thing that you have to understand is laws are executed by regulators, so understanding why they are enforced or what they actually want to enforce is also really, really important. 0:53:09 BR: Yeah, and do you think there's... So to your point about there being different regulations in different places, do you think that it's then problematic that you see so much copying of Delaware law and sort of copy-pasting that around the world? 'Cause wouldn't that then sort of make everything... Wouldn't that be a very strong attractor? 0:53:37 LC: I think what ends up happening is it's a good enough baseline. So I can't remember what the book is called right now, but there is another famous economist named Hernando de Soto who wrote about just the importance of property rights and how if you are able to sort of import the property rights regimes from the US into a lot of different countries that don't have them right now, it would be a huge driver... 0:54:00 BR: It wouldn't necessarily work. 0:54:01 LC: And so I don't think we live in a world where we figured everything out so perfectly that all we need to do are these sort of minor experiments. I think we live in a pretty uneven world where if we just had relatively good legal functioning across the world, not just in terms of the laws that are written down, but sort of culturally how they're practiced, we could make life a lot better off for a lot of people.
So it does make a lot of sense to me that if you and I were trying to start up a corporate law and corporate practice in some small country somewhere that was just starting to figure it out, or just decided they really wanted to change their system, I think we would go look at best practices. I think that's normal. It's unclear to me though that we are actually doing enough experimentation on the regulatory side, it's just really, really hard to say how much because it's just sort of this abstract opportunity cost question. 0:55:03 BR: Yeah, it's... And I guess these are sort of the same thing where I think of it as it's very hard to talk about counterfactuals, and actually, to riff off of the point about opportunity costs, my impression about... Of one of the reasons that large long-term projects don't get funded is because the opportunity cost is so high in that if I see that the stock market is increasing at a... It's like the number in my head is 5% of... I think of stock market 5%, I'm not... Is that roughly... 0:55:47 LC: I think nominally, the numbers, depending on the timeframe you look at, are along the range of 8%-10%. 0:55:56 BR: Oh, wow, okay. 0:55:57 LC: But there are actually a lot of people who right now think that 5% is what you're going to see for the next 10 years. 0:56:03 BR: Okay, well, let's... 0:56:05 LC: Anyway, doesn't really matter. Let's say 5%. 0:56:06 BR: Yeah, exactly. So in order to make the argument for something like the opportunity cost of investing in an illiquid thing is the compounding returns that you would get from 5% growth in the stock market, plus the amount that... Like the liquidity that you're giving up, which is, as you pointed out, a really big deal. And so it's... And then put uncertainty on top of that, so it's not even a guaranteed in the future compounding... Like you need to be... So it just... It seems like it's a fairly straightforward... 
It's actually a very, very large opportunity cost to propose an alternative investment to just the stock market. 0:57:07 LC: So I think it is and it isn't. First of all, I think you framed all of that correctly, that everything is subject to an opportunity cost. And so, of course, when I'm looking at whatever investments I'm making, and you are too, or deciding where to spend your time, you're going to look at your other alternatives and then choose. I don't think that necessarily should mean that it's impossible to go find a project worth working on. I think what it means is you just need to really, really understand what you're building, so that you understand why it's really valuable. And you have to go after sort of basically big projects or you have to have really, really fast experimentation, so you can just try out a lot of things and say, "Okay, maybe the opportunity cost is high over five to seven or eight years or 10 years, but I am going to try 2,000 different types of Shopify stores programmatically, I'm going to figure out which ones work, and then I'll have the revenue stream that I want once I've tested out and pulled out the best 25, and then go on from there." 0:58:16 LC: So I do think that it's definitely doable, you just have to recognize the opportunity cost. But you're right, there is an opportunity cost. I just think you shouldn't sell yourself short. I think implicit in what you're saying is that the world is relatively efficient, and because the world's relatively efficient, how on earth could I earn more than 5%? But I have to say, I look around everywhere and see a lot of products that, they were built on the constraints of the past distribution channels, they were built on the constraints of the past production approaches, or they were built on social relationships that have broken for whatever reason. 0:59:04 LC: So you look at this and say, huh, I think there's probably a better way to do it.
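The opportunity-cost comparison in this exchange can be put in rough numbers. A sketch, assuming the 5% annual market return used in the conversation plus an assumed 3% extra required return for giving up liquidity (both figures illustrative, not claims about actual markets):

```python
# Compounding sketch: what an illiquid project has to beat.

def compounded(initial, annual_return, years):
    """Value of an investment after compounding at a constant annual return."""
    return initial * (1 + annual_return) ** years

market_alt = compounded(100_000, 0.05, 10)   # the liquid stock-market alternative
illiquidity_premium = 0.03                   # assumed extra return demanded for lock-up
hurdle = compounded(100_000, 0.05 + illiquidity_premium, 10)

print(round(market_alt))   # 162889
print(round(hurdle))       # 215892
```

Over ten years the illiquid project has to return well over twice the starting capital just to match the liquid alternative plus the premium, which is the sense in which the opportunity cost is "very, very large."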
And if I'm right, and if I really, really focus on figuring out what's wrong and how we can do this better, you're going to find that the returns you earn are massively more than the stock market. I just think you have to be really focused and intentional in how you're doing it, and I think you have to spend a lot of time understanding the people behind the process. If you ever... I'm trying to think. Have you ever read that essay "I, Pencil" by Leonard Read? There's this idea that if you look at any sort of product in front of you, so you look at a pencil, an uncountable number of relationships went into building that product. So for the pencil, someone had to chop the wood, someone had to mine the metal, someone had to refine it, someone had to put it all together, someone had to paint it, someone had to build the eraser, and someone had to invent all of that and patent all of that, and start all of those companies and then figure out how to market it, and then figure out how that distribution channel worked, and then figure out how consumer tastes were changing, and just look through all of that. 1:00:11 LC: There are so many relationships there, and if you think about it, there's just... There's no... It's extremely unlikely that we've reached the global maximum for almost any product, because you only need one of those relationships not to have been done perfectly, not to have been optimized, to have an opportunity to do things better. And then you look at the constraints that they used to have 80 years ago versus what we have now... Software has changed so much in the last 15 or 20 years, the Internet has changed the world a lot in the last 20 to 30 years. You look at this and say, there probably are better ways to organize these things or to sort of optimize things. And I think that's true... 
I'm looking around my apartment now, when you look at, I don't know, a glass, or you look at a countertop, or you look at any art or any hardware, I actually think this is true for almost the most mundane object in your life. And actually I find... Once you start getting into the details of all of these mundane objects, it's not mundane, it's totally... 1:01:19 BR: My concern is actually the opposite, where I think that there are tons and tons of dollar bills on the ground, but the payoff you need to convince someone of becomes inordinately larger, the better the stock market is doing, it feels like, because of the opportunity cost. 1:01:45 LC: So yes and no. If you look, for reasons that are separate from this conversation, at demographics and the way that capital is structured, interest rates are low and look like they're going to stay low for a while, which means the required return for a project is going to keep falling. So yes, when the stock market is doing really well... Imagine the stock market were returning 40% a year, it would be harder and harder to get new projects funded because people would just put their money in the stock market. But as those returns fall from 8% to 5%, or you used to be able to get 6% or 8% a year in a 10-year bond and now you're getting 2 to 2.5% a year, you actually are more and more willing to go out onto that risk curve and sort of fund something new. So I actually don't think the problem is as much opportunity cost, especially today. Socially, venture capital is so popular that I don't think the problem is opportunity cost. I actually think the problem is alpha. And so if you think about what alpha is in the finance world, it's basically, you're looking for an information advantage, and it's going back to cash flows and capital flows.
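The rate mechanism described above can be sketched with a simple present-value calculation: the same durable, long-dated cash-flow stream is worth far more when the required return falls. All numbers here are illustrative:

```python
# Present value of a durable cash-flow stream at two required returns.

def present_value(cash_flows, discount_rate):
    """Sum of each year's cash flow discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

durable = [100] * 25   # $100 a year for 25 years, e.g. a sticky customer base

print(round(present_value(durable, 0.08)))   # 1067: 8% required return
print(round(present_value(durable, 0.02)))   # 1952: 2% required return
```

The same stream is worth nearly twice as much at 2% as at 8%, which is the sense in which falling rates pull funders out onto the risk curve toward long-dated projects.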
1:03:07 LC: You're looking for an information advantage on what's going on with those cash flows, with the product, the customer sort of thing, or what's going on with capital flows. So your alpha could be, you understand there's going to be a forced seller here or a forced buyer there, and then you bridge the liquidity into that market. And to throw one more book out there, the best book I know of to think about information sourcing is a book by a Nobel Prize-winning physicist named Robert Laughlin, it's a book called "The Crime of Reason." Have I ever mentioned this one to you? 1:03:39 BR: No. 1:03:39 LC: So it's really interesting. Frankly, it's a shocking book when you really process it. He basically argues that all economically valuable information is kept secret. And so you think that you really understand a lot about the world, but you actually understand, say, 98% about some topic, but that last 2% that really matters to get the project off the ground, to get the product built, to actually get funded, that's really kept secret. So the reason I think this is interesting is we've turned an opportunity cost problem of, "Well, there's really nothing I can do about it, I hope I come up with a good idea," to an information sourcing problem. So the way I think about this is I say, okay, there are really two places that you find information in the world. It can either be recorded or it can be in someone's head. So recorded could be written in natural language, or in numbers in a database. And I often find, unless we're talking about you going and coming up with some new fundamental algorithm, all you really need to be doing is collecting all of that data and joining tables. It's not actually that complicated from an intellectual perspective, but it's really about finding those tables and joining them. And then on the side of, oh, it's in someone's head, it just ends up being about building relationships with people.
1:05:01 LC: And to your point about there being lots of dollar bills on the sidewalk, there are, but it's almost like they're invisible, so you need to go find the information to really understand, oh, that's a real one, that's a fake one. And it just ends up being a shoe leather exercise where you say, "Okay, I'm just going to go reach out to a lot of people, become friends with a lot of people, talk to them about their work, really try to understand what they're going through, and then I'll recognize what they want and what they don't want, and then I'll find effectively that alpha." And I think that's probably a more useful way to think about it than opportunity cost, because it's more empowering once you think about it that way. 1:05:38 BR: I like that. To change tracks a little drastically, but to get to a point that I think is really important to talk about... So you invest primarily in public equities, right? 1:05:52 LC: Mostly public, but public and private companies, yeah. 1:05:55 BR: Yeah. So there's basically an argument that short-term thinking on the part of public investors has pushed public companies to slash R&D costs and basically caused the fabled death of corporate labs. I think it is pretty clear that corporate labs don't have the sort of world-changing output that they used to. However, I'm agnostic about the cause and still trying to figure that out. What do you think about that argument? 1:06:42 LC: So, I think it's complicated. I also think I'm not sure, but I can think it through with you. 1:06:50 BR: Yeah, let's think through it. 1:06:51 LC: Sure, so if you look at the valuations in the public markets today, they are very high by any historical measure. And so high valuations do not imply short-termism. They imply that the market is placing very, very high prices on companies.
Now, it just turns out that a lot of that has to do with the way capital flows work today, not just with cash flows. And so what's going on effectively is we changed the retirement system in 2005. We default decided to put a lot of people's money into index funds. Index funds just blindly buy a set of stocks as capital flows into them, and so we've had more and more retirement flows, so you see all of these stocks get bid up. That has been a huge reason for valuations going up. But anyway, you look at this and say, alright, so just on a project basis, companies are actually getting huge valuations. Now, quarter to quarter, companies face unbelievable pressure to make a mark that Wall Street thinks is good or bad. And what ends up happening is people are definitely optimizing over the quarters, because for the research analysts, it's so difficult to see inside the companies that these are the metrics they use to measure what's going on. 1:08:13 LC: So it's sort of a mixed bag. We are getting really high valuations, but there is still a lot of quarter to quarter pressure. But at the same time, I mean, I look at this and say... I think it's actually closer to the journalism and editorial arguments, where it used to be that these newspapers were monopolies, and then separately, or sort of for social reasons, they were also safeguarding these unbelievable journalists, and it was this huge benefit to society. The reason it worked was the newspapers were monopolies, so they really didn't face competition, and then culturally it became normal for them to support journalists. And then it was like a social competition: "Who is going to win the Pulitzer Prize this year? Who's funding the best journalists?" If you go look at the big corporate R&D labs, you find that it was a set of funders that were basically semi-government entities.
They were such great monopolies, and culturally, the people who were running those companies also wanted the R&D labs, maybe out of a sense of patriotism, maybe out of some other sense, but I think that's sort of how they came to be. And when those monopolies were broken up, they basically weren't able to keep funding the R&D labs. 1:09:40 LC: I do think that some of today's monopolies and oligopolies, these are the Facebooks and the Googles and the Microsofts of the world, are able to fund big R&D labs, and we could argue about whether it's the same as Bell Labs or PARC... But they're definitely trying, they have been inspired by those old examples, and my friends who work there, I do think, are quite brilliant. So the labs that you're talking about, and that I've read that you've written about, I think were basically this really nice side effect of monopolies that also had the culture for doing it. But at the same time, not every monopoly, in fact almost every monopoly, isn't going to have that cultural imperative. And then on the flipside, let's look at the ones that aren't monopolies, and this is again partially a narrative problem and partially a reality problem. People haven't come up with a good metric for outsiders to know that research projects are going to do well long-term, so that outsiders feel comfortable funding them. 1:10:48 LC: So an example is that over the last 15 years, you can go look at pharmaceutical companies, and you'll see that their R&D budgets are getting cut. And what happened was a lot of investors were looking at the returns on that R&D over a three-year basis and a five-year basis, and they were saying, "Look, we're not seeing any returns here, it really doesn't make sense for you to be spending money." And of course, people trot out the worst examples when they're making arguments, but there was a set of pharmaceutical companies that maybe was abusing the R&D line.
Maybe they were basically not really doing great research, and they were paying themselves a lot of money to not do great research. And some hard-charging Wall Street hedge funds came in and really, really pressured those companies to stop spending on R&D. Now, you'd say socially, that's a terrible outcome. We could say, "Look, maybe the R&D is a public good, not a private good, so we need some way to incentivize that, and we can have that conversation." I think it's possibly solvable if we come up with a new set of metrics that everyone actually believes. 1:12:07 BR: Yeah, so this goes back to the legibility point. 1:12:09 LC: It does. So you and I have spoken about this one privately before, but there's a professor at MIT named Andrew Lo who proposed that you bundle cancer research projects together, or any pharmaceutical projects together. Say you take 100 or 200 of these projects, you bundle them together, you give each of them, say, a couple million dollars, and then you bundle all the payoffs together. And so the idea is that, hopefully, that's institutionally legible enough that someone would be willing to fund it, because they think, "Okay, there's actually a good chance that of these 200 projects, one or two of them will hit, and then you'll have this unbelievably valuable drug that will really be good for the world, and maybe that's a good way to push us out on the risk curve." I haven't seen this type of thinking really take hold, because we're still very much in that project-based, milestone-based financing approach where it's like, okay, you have the metrics that make sense for your series A, for your series B and C and D. 1:13:18 LC: And there's also an argument that maybe the smartest biotech investors and pharma investors are already cherry-picking the best companies, the best projects.
So maybe you'll sort of have this adverse selection where maybe of the top 200 projects, this would have worked, but if the best five are just going to go off on their own, you're just not going to get the good ones. And this is again, sort of that information sourcing problem.
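Andrew Lo's bundling idea, as LC describes it, is easy to sanity-check with a toy Monte Carlo. The hit probability and payoff below are illustrative assumptions, not Lo's actual figures; the point is just that a 200-project bundle has a legible, mostly-positive payoff distribution even though each individual project almost certainly fails.

```python
# Toy Monte Carlo of the bundled-projects ("megafund") idea described
# above. All parameters are invented for illustration.
import random

random.seed(0)

N_PROJECTS = 200   # projects in the bundle
COST = 2.0         # $2M of funding per project, per the conversation
P_HIT = 0.01       # assumed chance any one project succeeds
PAYOFF = 2000.0    # assumed $2B payoff for a single hit

def fund_one_trial() -> float:
    """Net payoff ($M) of funding the whole bundle once."""
    hits = sum(1 for _ in range(N_PROJECTS) if random.random() < P_HIT)
    return hits * PAYOFF - N_PROJECTS * COST

trials = [fund_one_trial() for _ in range(10_000)]
prob_loss = sum(1 for t in trials if t < 0) / len(trials)
mean_return = sum(trials) / len(trials)
print(f"mean net payoff: ${mean_return:.0f}M, chance of loss: {prob_loss:.1%}")
```

Under these made-up parameters the bundle loses money only when zero of the 200 projects hit, which happens about 0.99^200, roughly 13%, of the time. That aggregate statistic is exactly the kind of institutionally legible number the conversation says individual moonshot projects lack.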
In this conversation I talk to Donald Braben about his venture research initiative, peer review, and enabling the 21st century equivalents of Max Planck. Donald has been a staunch advocate of reforming how we fund and evaluate research for decades. From 1980 to 1990 he ran BP’s venture research program, where he had a chance to put his ideas into practice. Considering the fact that the program cost two million pounds per year and enabled research that both led to at least one Nobel prize and a centi-million dollar company, I would say the program was a success. Despite that, it was shut down in 1990. Most of our conversation centers heavily around his book “Scientific Freedom” which I suspect you would enjoy if you’re listening to this podcast. Links Scientific Freedom Transcript audio_only [00:00:00] In this conversation I talk to Donald Braben about his venture research initiative, peer review, and enabling the 21st century equivalents of Max Planck. Donald has been a staunch advocate of reforming how we fund and evaluate research for decades. From 1980 to 1990, he ran BP's venture research program, where he had a chance to put his ideas into practice. [00:01:00] Considering the fact that the program cost about 2 million pounds per year and enabled research that both led to at least one Nobel prize and a centi-million dollar company, I would say the program was a success. Despite that, it was shut down in 1990. Most of our conversation centers heavily around his book, Scientific Freedom, which just came out from Stripe Press, and which I suspect you would enjoy if you're listening to this podcast. So here's my conversation with Donald Braben.
Would you explain, in your own words, the concept of the Planck Club? Well, it's just my name for the outstanding scientists of the 20th century, you know, starting with Max Planck, who looked at thermodynamics, and it took him 20 years to reach his conclusion that matter was quantized. You know, and he developed quantum mechanics. That was followed by Einstein and Rutherford and a [00:02:00] whole host of scientists. And in order to be succinct, I've called these 500 or so scientists who dominated the 20th century the Planck Club, so I don't have to keep saying "Einstein, Rutherford" every second. It's an easy shorthand. Right. And so, there's a raging debate about whether the existence of the Planck Club was due to the time and place, the things that could be discovered in physics in the first half of the 20th century, versus a more structural argument. Where do you come down on that? [00:03:00] Well, I guess it's tied to the question, are you asking, will there be a 21st century Planck Club? Do you think it's possible? Right now? No, it's not, because peer review forbids it. In the early parts of the 20th century, scientists did not necessarily have to deal with peer review, that is, the opinions of their few expert colleagues. They just got a university position, which was as difficult then as it is now to get. But once you got a university position, in the first part, up to about 1970, then, providing your requirements were modest, you didn't [00:04:00] need, you know, huge amounts of money, say.
You could do anything you wanted, and you didn't have to worry about your peers' opinions. I mean, you did in your department, when people were saying, oh, he's mad, you know, he's looking at this, that, and the other, but you could get on with it. You didn't have to pay too much attention to what they were saying. But now, in the 21st century, consensus dominates everything, and it is a serious, serious problem. Yeah. So what keeps me going is that I seriously believe it is possible for there to be a Planck Club in the 21st century. It is possible, but right now it won't happen. I mean, there's been reams written on peer review, an absolutely huge [00:05:00] literature. But most of it seems to have been written by people who at least favor the status quo. And so they conclude that peer review is great, except perhaps for multidisciplinary research, which might cause problems. This is the establishment view. And so they take steps to try to ease the progress of multidisciplinary research, but still using peer review. Now, multidisciplinary research is absolutely essential to venture research, because what every venture researcher is doing is to look at the universe and the world we inhabit in a new way. So that's bound to create new disciplines, new thought processes. And so when the conventional funding agencies say there's a problem with multidisciplinary research, they're saying there's a problem with venture research. Yeah. And so therefore we won't have a Planck Club until that problem is [00:06:00] solved. And I proposed the solution in the book, of course. Yeah, exactly.
And so with the book, I actually think of it as a really well done, eloquent, almost policy proposal. I feel like you could actually take the book and hand it to a policymaker and say, do this. But clearly nobody's done that, right? Did you ever do that? Did you actually go to government agencies, or even billionaires? The amount of money that you're talking about is almost shockingly small. What are people's responses? Why not do this? Patrick Collison is the only billionaire who has responded. I've met about, I don't know, half a dozen billionaires, and they all want to do things [00:07:00] their way, you know, which is fair enough. They all want to see the universe through their own eyes. They are not capable of opening their eyes and listening to what scientists really want to do. And to get at what scientists really want to do, you can't just ask them straight off. You've got to talk to them for a long time before they will reveal what they really want to do. And then only a few of them will be capable of being a potential member of the Planck Club of the 21st century. But it's a wonderful process. It's exciting. And I don't know why... well, I think I do, actually, why the conventional authorities do not do this. And I believe the reason is more or less as follows: for 20, 30 years following the expansion of the universities in about 1970, for political reasons, [00:08:00] not at all for scientific reasons, there was a huge expansion in the universities and in the number of academics. I really mean huge: factors of two, three, four, or something like that, depending on the country.
And so therefore the old system, where freedom for everyone was more or less guaranteed, which is what I would advocate, freedom for everyone as a right... So what we have done now is to develop absolute selection rules for selecting venture researchers. And that's taken some time to develop, but they work well, and they open up the world to complete new ways of looking at it. Yeah, look, I mean, the track record seems very good, right? You [00:09:00] enabled research that would not have happened otherwise and led to Nobel prizes. I don't see what stronger evidence one could present that your method works. And so... yeah. Well, over the years, you see, the scientists who work for the funding agencies have advised politicians on the ways to ration research without affecting it. And they have come up with the method of peer review, which is now de rigueur, you know? It's absolutely essential [00:10:00] to every funding agency in the world. I've not come across one that does not use it, apart from our own operation, of course; we don't use it, we find ways around it. And the conventional wisdom is that there are no ways around it. Peer review is regarded as the only way to ensure research excellence. People keep saying that it's the only way, but we have demonstrated, with the BP venture research unit and at UCL, that there is another way. And I guess, is the response from people that you propose this to simply that they don't believe it can work because it isn't peer reviewed? Is that the main contention? Any ideas now must survive peer review, and venture research of course would not.
And so therefore what we're saying is not admissible. Now, a few people, like the 50 or so of my supporters, very senior supporters, regard what we [00:11:00] are doing as essential, but their voice is still tiny compared with, you know, the millions of researchers and the funding agencies. Now, the funding agencies have kept on saying, they have advised politicians over the years, that the only way to ensure that the scientific enterprise is healthy is to adhere to peer review. They cannot now say, ah yes, Braben points out there's a serious flaw. They cannot do that. And so they do not acknowledge that I exist, or that the problem exists. So, just because they have doubled down so hard on peer review being the acid test for research quality, they're [00:12:00] lashed to that mast. Okay. And so I know that in the NSF, I think actually shortly after your book originally came out, so in 2008, 2009, I read about an initiative to try to do more of what I think they termed transformational research. That was the NSF initiative. It was pioneered by Nina Fedoroff, who is another great... is one of my supporters. And she was, I think, the chairman of the science board or something like that, which controls the NSF. And so she set up a special task force to look at, mainly, what I was trying to do. And she invited me to go to Washington on three occasions, and we sat in this huge room at the National Science [00:13:00] Foundation headquarters. And we had three two- or three-day meetings on venture research, and they concluded that it was the only way to go. And so that's what they recommended to the NSF.
But what did the NSF do? They decided that they would accept Nina Fedoroff's recommendations, but that they should be administered by each of the divisions separately. Well, that means they don't do anything that they wouldn't do normally. And so, I guess one thing, I'm not sure if you mentioned it in the book: what do you think about HHMI Janelia and [00:14:00] the effort that they make? Because it is much closer to your recommendations. Howard Hughes, you mean? Yeah, Howard Hughes Medical Institute, and specifically their Janelia campus, where my understanding is that they give people whole, free funding for five years and really just let them explore what they want to explore. But they, I think, insist on them going to the central laboratories. Yeah. That is a problem. How so? Because scientists all have roots, and they've all ended up where they have, you know, wherever it is, and that's where they prefer to work. And so in venture research, that's why we allowed them to work in their old environment, but now with total freedom. And they'd radically transformed, you know, a little segment of what was done there, but they transformed it, and they would have transformed even more had we been allowed, in 1990, if BP had allowed venture research to continue. There [00:15:00] would be more than 14 major breakthroughs, because in 1990, when BP closed us down, these people could no longer rely on venture research support: the essential feedback that we gave them, the meetings that we arranged, you know, of all the venture researchers, which we had to work out how the hell to do, because, you know, the scientists and engineers all came together. Yeah. And I don't know that that's been done since. But anyway, right.
We were no longer allowed to provide that support. And so therefore they were on their own and exposed to the full rigors of peer review, applying for funds before they were ready for it. Yeah. The successful venture research people, you know, like the one [00:16:00] with his ionic liquids, they jumped over the line into mainstream science and became part of the mainstream. Yeah, and the same with [name unclear] and all the other people who were successful. But there were a few groups, you know, who were left high and dry, and they had to manage, they had to cut their cloth according to the funding. Yeah. Do you keep track of people who today would make good venture researchers? Do people still send you letters and say, I want to do this crazy thing? No, I'm afraid I can't do that, because I would be raising their hopes way beyond what I'm able to provide. At UCL we've done that: we've met one person, we supported one person, Nick Lane, whose work has been prodigiously successful. He could not get support. He couldn't get support from anybody [00:17:00] before we backed him. And so I persuaded the university to come up with 150,000 pounds over three years, which is a trivial amount of money. Totally. And since that time, he's more or less stepped over the line and he's now become mainstream. And since then he's raised 5 million pounds. 5 million compared to the 150,000. So that's profitable, you know, as far as the university is concerned. But even so, even with UCL, it still hasn't caught on. Yeah.
And so, I guess I also have a question about the people who might be really good researchers but don't even make it to the point where [00:18:00] they would be able to raise venture research money. There's also the fact that, in venture research, you were entirely supporting people, except for I believe one case, who were already in academia, right? They'd already gone through all the hoops of getting a PhD and getting some sort of position. And so do you have a sense of how many possibly amazing people get weeded out even before that point? Oh, I mean, to be a venture researcher, you've got to have a university position, I would guess. Or, I mean, as with the only engineer we supported: he was working for a company, and we enabled him to leave. And I took great care to inquire of him, because he would have to give up his job; you know, an industrial company couldn't support him if he was working for another company. And so I had to be sure that [00:19:00] he really was serious about this. And so we arranged a university appointment for him at the nearest university to where he lived, which was Surrey, just down the road, so to speak. But even that created problems: he was never really accepted by the university hierarchy. And why do you think the university association is important, as opposed to someone just doing research? Like, what if they built a lab in their basement, or were doing mostly theory, and they just sort of... they've done that?
You know, people, like the guy at the shop, I mean, if they'd done that, then of course we'd listen to them. But they must be reasonably proficient in... I mean, they're coming with a proposal to do something, right? And to do that you've got to [00:20:00] have done something else. You've got to prepare the ground, so to speak. Yeah. So getting a university appointment today is no more difficult than it was, say, in 1970. You've still got to go through a degree, a PhD maybe, and then convince the university that you're worthy of appointment. But then, as I said before, you automatically qualified for this modest amount of funding, at least in Britain. You automatically qualified for that. But now you qualify for nothing. Once you're appointed by the university, you then start this game of trying to convince funding agencies to support you. And if you don't, you're dead. Yeah, you don't get anywhere. You've got no tenure. You just disappear. It's an unforgivable system, and it's extremely [00:21:00] inefficient. Yes. Do you think... I guess the question is, is efficiency even the thing that's worth shooting for? It seems like it's always going to be inherently inefficient, because of the uncertainty. I guess I always worry when efficiency comes up as a metric around research, because then you start having to calculate, okay, how much value is this, what is our return on investment, how efficient is that? Do you think that's the right way to think about it? Well, it's certainly not a bad way.
But minds are closed, you know. I've been in touch with so many people over the years. I've been at this now for 20 years, since BP terminated my contract, so to speak, and I've tried, every single minute [00:22:00] of those 20 years, to find new ways of doing this. I mean, it does sound a bit... what I do has a large element of the crank about it, but I'm so convinced of the value of this venture research and its contribution to humanity, so to speak, I'm so convinced that it will make an enormous contribution, that I keep on going. Yeah. I mean, I have no money. I'm not paid to do this. And the first person that I've met, of the many very rich people I've come across, has been Patrick Collison, who offered to publish my book at a fraction of the price Wiley were charging for it. Why do you charge $75 for a paperback? He's charging less than $20 for a hardback. Yeah. Well, I think he realizes that it's important for people to [00:23:00] actually read it. That's good. Just before he met me, he took part in a blog or something like that; it's on YouTube. And he said he was very impressed when he met me, and that I changed the way he looked at the world. You know, and he gave an hour-long speech to these fellow billionaires, but no one's come forward. No one said, you know, here's what you wanted. Yeah. Well, hopefully, between the book being out and, you know, me trying to recommend it to everybody I know, we'll start to get it more into people's heads.
Do you have any good stories about people who applied and didn't make it? [00:24:00] Because I always assumed that the line between brilliant ideas and completely crackpot ideas is very, very thin. So did you get any really ridiculous applications? It's not that thin, I don't think. There were lots of people who came to me and said, you know, thermodynamics is bunk. And one thing that they really hate is to be asked... you say, okay, I agree that it is. What do you want to do now? That completely floors them. So people with a crackpot idea are automatically disqualified, because they never return. They never say what they want to do, or if they do, you keep on repeating the same question and they eventually give up. Now with venture research, we may not say yes at the first meeting. It [00:25:00] might be five, six, seven, eight meetings. With Dudley Herschbach, Nobel laureate, it took more than a year. Because, you know, I met him very shortly after he got his Nobel Prize. He came to a meeting that I took part in, a meeting of the American Physical Society in New York. I gave a talk, and I noticed this guy in the front row scribbling away, and he came to me afterwards. He said, I think I've died and gone to heaven. This was Dudley Herschbach, three days after winning the Nobel Prize. And he had an idea which no one would listen to. And we backed him. But I can't think of people who came with crazy ideas and went on to, you know, fizzle out and die, if you like. And I want to dig into that, because I feel like you're saying something very important.
[00:26:00] So, to repeat that back to you: it sounds like the people who are not crackpots are able to go to a level of precision about action that people who don't know what they're talking about are not able to. Would that be accurate? That's exactly right. And I'm asking very much because I'm trying to do similar things, and I'm very much in the same position, where it's not always in my exact field. And so, what is it that you can do to tease out the difference between a good crazy idea and a crazy crazy idea? Well, all venture research is of course outside my own field, all of it, because, you know, it was a [00:27:00] long time ago that I was a practicing scientist. And so everything is outside my competence, so to speak, which is another reason why the mainstream funding agencies tend to disregard what I say: it doesn't pass peer review, of course. Yeah. So we are accustomed to being uncomfortable talking to people, but we try to engage them in conversation that reveals what they really want to do, and try to assess what they want to do compared with the state of knowledge in that field right now, you know, whether they want to transform it. And you don't have to be a fellow expert to understand that. I mean, I am an expert now, but in general things, science, you know, or engineering: the broadest possible approach to these subjects. And I can tell, and everyone else can tell, who was involved with this process, because when I came to BP, [00:28:00] they gave me two or three very talented people, more or less people like yourself, you know, high flyers, young, in their early 20s and 30s. And they joined me for two or three or four years. And they were mainly chemists.
Chemistry, yes — I think there was only one physicist that BP provided me with. And it didn't matter, because they were all very talented people. You didn't have to say anything twice to them, and they took to it like a duck to water. When we were sitting around our little table and people were coming in with their ideas, we never had to discuss whether someone was what we called a venture researcher. It was immediately obvious to everyone around the table [00:29:00] that he was someone who wanted to transform thought processes in that particular field, that they would do something important. And once we made up our minds, we backed them, and they could then do absolutely anything they wanted — we were not bound by the proposal they wrote to us. The proposal was mainly the means by which we selected them. And most people did stick to it, of course, but some of them didn't. Ben: So I'm really interested in the landscape of untapped potential in research. Venture research goes after people who already have an idea, know what they want to do, and need money and support to do it. And I wonder if there's another [00:30:00] class of people who could do amazing things — who have the right skills and mindset to do it — but have either not even thought of the things yet, or have suppressed them in order to fit into the peer-reviewed research box. And they could be unlocked by either putting them in contact with other researchers, or by shifting their focus. Do you have a sense of whether those people actually exist, or am I just making that up?
Donald: Well, I'm not quite clear what you're saying. Most of the venture researchers we came across knew precisely what they wanted to do — well, eventually — and they would eventually admit to us what they really wanted to do. But there must be mutual trust. Trust has been lost entirely in funding. [00:31:00] The funding agencies don't trust the researchers. But I found that trust is absolutely essential in this: they must trust you and we must trust them, because, as I've said, once we backed someone, they could go in any direction they wanted. You can only do that if there's mutual trust. And if they come across problems, as inevitably they will, then they come to us and say, look, we've got this problem, can you help us solve it? And we would take up a problem with a university or something like that, and we helped them along. This is why, when venture research was closed down in 1990, this service was no longer available. And so the fourteen people who made major breakthroughs subsequently — and that's a minimum — I think the number would be much greater if we'd been able to provide this support right through to today. Yeah. [00:32:00] Ben: Can we dig into how you build trust with researchers? You have a lot of conversations with them — can you unpack that? Donald: We have the same problem in everyday life. You meet someone and you talk to them — not about research; you don't have to think about research. Do you trust somebody? How do you reveal your trust? How do you express it? How do they express it?
It has to come by a multitude of routes, just like in everyday life: you have to make up your mind that you will trust somebody, eventually, and go along with most things that they say. So it's no different from that. And in fact I've found that the people who've been most receptive to what I've been saying have been non-scientists. When I used to work in the [00:33:00] cabinet office, the secretary of the cabinet, Burke Trend, instantly saw what we were trying to do, and I had an ally there. Ben: Yeah. And I guess — if there were people in the government who were excited about this, what stopped him from being able to push it through? Donald: That was before I went to BP and before I'd worked out the ideas of venture research. But I talked to him, and I realized I could trust him — which is very unusual for a very senior person. So we didn't discuss venture research then, because I hadn't yet taken up the cudgel, so to speak. On April the 8th, 1980, I went to BP. Ben: Got it. And, sort of [00:34:00] switching gears a little bit: something I loved in the book — it gave me a completely different way of seeing the world — is your takedown of the idea of high-risk research. Donald: Well, there's no such thing. Venture research was the lowest possible risk you can imagine, because I was convinced that the venture researchers would do something of value. You wouldn't be able to predict what it was, but they would do something of value, and all we had to do was keep talking to them. Yeah.
Ben: Unfortunately we can't show the graph, but the concept that, with cutting-edge research, there's uncertainty about what the [00:35:00] underlying probability distributions even look like — and that the researcher actually knows that distribution so much better than anybody else — is just a much, much better way of thinking about it. Because I think people really do think of high-risk research as this portfolio approach to the world: okay, what's our expected value? And this goes back to why it's so hard to talk about it in those terms: we don't know the underlying distributions. Right? Donald: Well, why should a funding agency support high-risk research, when what they're really saying is that we expect you to fail? That's what high risk means: we expect you to fail. So why should either we researchers or the funding agency do that? Our approach, which goes for the lowest possible risk and the highest possible gain, is much more sensible. Now, not everyone, of course, can be a venture [00:36:00] researcher, but I'm convinced that every serious-minded researcher, at least once in a scientific career, will come across an idea that would transform his local field. But he doesn't share it — he doesn't reveal it to peer review — because he hasn't yet proved it. Peer review only works ad hoc, after the event, so to speak: when you've actually shown that it will work, then they can say whether it's good or not. Ben: Yeah. And I guess — I don't know if you've been paying attention to the meta-discussion around scientific stagnation,
but there's the argument that we've picked all the low-hanging fruit of the physical world. I get the [00:37:00] impression that you don't agree with that, and I'm always looking for good counter-arguments. Donald: I don't agree with that, because we're looking for people who will grow new types of fruit. Or, if you take the continental view, a field is a bit like a country. When people like Einstein discover that country, it leads to a wave of new research, but eventually the field becomes played out and it gets more and more difficult to make a new discovery. But what if somebody comes along and says, there's a new continent there and I want to explore it — and they can convince you that it exists? So the low-hanging-fruit thing works very well for particular types of fruit. But what if you come up with a new kind of fruit? [00:38:00] Ben: That actually ties back to the peer-review argument, which is that peer review is very bad at allowing new fields — and therefore new continents, new orchards — to exist. Donald: We are all endowed with a creative spirit. It's this fundamental creative spirit that we all have — and scientists perhaps to a greater degree than others, though I'm not convinced of that. And you cannot expect that their completely individual view of the world will immediately achieve consensus. You can't do it right away. Major scientific discoveries have not been greeted with acclaim. Einstein's discovery — when he wrote his famous paper on relativity — was called, in the Times newspaper, an affront to common sense. That's what the Times [00:39:00] said — as if it were an affront to common sense. Yeah.
Ben: To be fair, it still sort of is, right? You know — the universe is curved — that doesn't jibe with my common sense, I'll be honest. Donald: The universe is a very, very big place. Ben: Yes, and a very weird place, right? When you start really looking at it. Donald: Yeah. And, you know, the gravitational constant — the Hubble constant, rather — well, there's some dispute about what it is, but it's a tiny amount per million years, a tiny difference from what we see now. It would be difficult to detect unless you had said, [00:40:00] you've got to look for this — and that is what people are doing. Ben: Yeah. Something I wonder — and I'm not sure how this jibes with venture research — is whether people have become so specialized, in theory or experimentation or engineering or development, that — well, part of where these new fields come from is people interacting with people they wouldn't normally interact with. So, beyond the group meetings of the venture research community, did you find yourself [00:41:00] pushing people to interact in ways they wouldn't have otherwise? Donald: No, we never pushed people that way. Any new interactions that came from our meetings were derived exclusively from the scientists. I mean, we might make one or two suggestions about a group. Well, I did — we did, in fact, with the ant people, who were working in the field of distributed intelligence.
Now, I happened to know there was a unit at the University of Edinburgh doing this very work — distributed computing — and there were one or two experts there. So I went to them and said, do you have somebody who might be interested in joining the group? And they did — a man called Tuft — and he went down, and it was a very productive exchange. But that was very rare. I mean, it happened, but usually it went the other way. Ben: Got it. Donald: Maybe — you know, I've forgotten; it's a long time ago. Ben: I'm just always interested in improving my mental models of the question that everybody has: [00:42:00] where do ideas come from? How important is cross-pollination to creating new areas? Actually, I'm really interested in what the day-to-day of running BP's venture research was like. In my head I imagine you flying around the world, meeting scientists — have you ever watched the Avengers? I imagine you like Nick Fury, going around to different superheroes and saying, all right, we're going to form a team. What was that like? Donald: Well, I'll tell you, it was a very difficult problem, because when I first arrived I was, you know, a single person in a single room. And the research director — the [00:43:00] guy who was responsible for BP's main research activities — spent about two billion pounds a year and had two thousand people working for him. He thought I was mostly harmless. But as the decade wore on, it became obvious — because I always tried to involve senior BP directors in what we were doing; I always invited them to our conferences, for example.
And even the chairman came down, and other senior directors came down, and they could see for themselves that what I was talking about was not bullshit — it was really serious. And so the idea became embedded in a few directors' minds that Braben was getting a bigger bang per buck than Cadogan, the research director, with the two billion a year he was [00:44:00] spending. And this fed back onto the research director's approach to me: he increasingly saw me as a threat rather than as an opportunity. And eventually, in 1990, he won, and we were closed down. I got a phone call on March the 8th, 1990, from New Zealand. Bob, who had given me all the freedom I wanted, had retired in 1989 and was succeeded by Basil Butler — and Basil Butler, well, I won't say what I think of him. He phoned me from New Zealand, and his first words were: hello, Don, venture research has been closed down; BP can no longer afford the drain on its resources. BP was spending then [00:45:00] five million a year, when managing directors didn't get out of bed unless it was at least a billion. Ben: Yeah. It's funny how people can get very attached to even comparatively small amounts of money. Donald: Well, people tend to see value in cost, you see. And so if a university adopts venture research — which I hope they will — well, there's no glory in spending almost nothing a year, even if you have an arrangement for looking for these people. Even at UCL, you know, it might be 150,000 in ten years. But that's about the right number: we're talking about maybe 500 such people in a century over the whole world, so any single university is likely to have one, or maybe two, in a decade.
[00:46:00] But if the arrangements were set up so that people could come forward with their ideas to talk with senior people in the university — people who had given up their own research, as I have, and who take a vicarious pleasure in their careers and their discoveries — and if even a few universities were able to do this, it would solve the problem. And that's what we're working on right now. As soon as I get my book — the fifty copies Stripe is sending me, due this week, I think — I'll be sending it out to various people. I'm not very optimistic, I'm afraid. Ben: Okay. Well, I am — foolishly, perhaps, I realize. But my theory is that if you're not optimistic, then you're sort of doomed to failure: the [00:47:00] non-optimism reinforces itself, right? If you're pessimistic, it will make itself come true. So we need to be optimistic. And I guess, with the universities today — if the money were coming in to researchers, universities don't have any problem with people being there, doing work, as long as the funding is coming from somewhere else with no strings attached, right? Donald: Yep. Every university — I mean, UCL has a budget of a billion pounds; it's a big university. So we're asking for a tiny amount of money, and even that is an overestimate, because most years the expenditure will be zero. [00:48:00] It's only to be able to call, occasionally, on something like 150 to 200 thousand pounds a year — 200,000 pounds a year for three years, sorry.
It would be no big commitment for them to enter into, especially if you could entertain the hope that in a few years the scientists would make the transition into the mainstream and then attract external funding — just as I've done. Ben: So I know that a couple of universities do give new professors a year or two of funding. Donald: That's right. People taking new jobs are at their maximum creativity then, so for universities it's a very good investment. But an academic now has to look [00:49:00] forward to what will happen when this funding ends. Will he be well placed? He's got to engineer his position to be well placed to attract funds. And so a year or two is not enough to do venture research. It works, but only to a very limited extent. Ben: Yeah, and the incentives cascade backwards, right? You're looking ahead and thinking, okay, I'm going to need to get grants, and in order to get those grants I'll need to have done peer-reviewed research, so I'd better get started on that now. Yeah, that makes a lot of sense. So I guess, in closing: besides simply reading the book and thinking about venture research, what is something you think people should be thinking about [00:50:00] that they're not thinking about enough? Donald: Well, I don't think you can put it like that. If you're a venture researcher, or a budding venture researcher, then you'll have an idea, and you'll always want to be returning to it. The creation of venture research, therefore, is a happy coincidence — a meeting of similar minds, if you like. And I provided the opportunity to those forty people we backed over the ten years to do their thing.
But it was a partnership between us and them. I was taking a risk with BP, having to solve the problems, and I had to do all the other things you say — like travel the world — because I didn't believe we could advertise in journals and say we wanted good ideas. I had to go to universities and give talks about venture research, about what people were doing [00:51:00] and the state of science now, and then invite proposals and sit and listen to what people came up with. At each university, in an afternoon, I might get 20 or 30 proposals — most of them just seeing us as a new source of money. And that was always a problem we had, even with venture researchers: convincing them that even though we were the BP venture research unit, we were not interested in getting oil out of the ground. That could have helped the research director, but he wouldn't trust us on what he was trying to do. Our strategy was completely different from the research director's, so we were not in direct competition — but he did see me as direct competition, because the senior directors were saying, the bang per buck that Braben gets is higher than yours. I didn't say that; they did. Ben: And actually, what was the thinking behind not putting [00:52:00] out a broad call for applications, but instead going and giving a talk and then talking to people? My gut says that makes a lot of sense, but I'm not quite sure I can pick apart why. Donald: Well, venture researchers — Nobel prize winners — are a very special breed. They do not respond to opportunity; they create their own opportunity. And they are convinced that their particular view of the world will eventually be proved right. And hopefully,
well, for the few we managed to help, we enabled them to do that. Other people just have to cut their cloth according to the funds that are available, and keep doing that. So I think that people's view of the world is created within themselves, within their thought processes — [00:53:00] their thought processes and their creative spirit, this thing which we all have. People like Einstein — when he looked at the world, he did it without any feedback from anyone. He didn't read the literature; he hardly cited any publications. In his annus mirabilis, his three papers — Max Planck, as the editor of the journal Einstein submitted them to, had to decide whether to publish them or subject them to the usual referees. He didn't; he just published them. And they attracted a lot of criticism — as I said, the Times called relativity an affront to common sense. But the other two papers, on the photoelectric effect and Brownian motion — they were major pieces of work too, and he did all that [00:54:00] without talking to anybody, apart from mathematical advice and things like that, which he got from various people from time to time. Ben: Yeah. So, to pull that back to the strategy for teasing out the high-quality applicants: the hypothesis would be that they wouldn't even be reading the journal where you'd be advertising that you want applications, and you sort of have to really go and get in their face. Donald: Yes. You have to create the environment that allows them to write to you, or to contact you, or to give you a phone call and say, I want to do this, you know?
I remember I got a phone call once from a lunatic who said, I have a way of launching satellites which is much cheaper than anyone has ever had. And I said, oh, what do you want to do? He said, I want to build a building a hundred miles high, [00:55:00] and then throw these objects out of the window, and they would be in orbit immediately. And I asked him, what do you think are the limits on building? What would the foundations be for a hundred-mile-high building? And there was just silence, because he realized that the Earth's crust would not support it. What's the highest structure in the world? Mount Everest, at about five miles? So how are you going to construct something a hundred miles high? It doesn't matter — it's just an anecdote of a phone call I got. But that's what constituted an initial approach to us. That's all we asked for: that the person would ring, or write a paragraph or whatever, saying, this is what I want to do. Ben: And you would then take them from there. Yeah. [00:56:00] I like that a lot, because it's almost the opposite of the approach you see at many other places. Look at how DARPA or Bell Labs did it — it's almost the opposite: they would only ever go out and say, we want you to come and do some awesome stuff. So that push versus pull in getting people into the organization is an interesting dynamic. Donald: Well, I spoke to somebody in the cabinet office recently about that, and he knew about it.
He'd learned about the publication of the book, and he said the British government was thinking of creating a DARPA in Britain. I didn't think it would be a very good idea — not for venture research; it would be good for other things. But DARPA, then, is a bit like venture capital: they know what they want to [00:57:00] do. And no venture researcher would be able to point to specific benefits flowing from their work right at the beginning — they would not be able to do that. And so they would be disqualified from applying. Ben: Yeah, I think that's actually a really important distinction you just made. When people say research is broken, my hypothesis is that they're describing at least two completely distinct problems: the problem of making more technology, and the problem of discovering more new areas of knowledge. And venture research is very much targeted at the latter, distinctly from the former. Donald: New fruits and new continents — that's what we're concentrating on. [00:58:00] Completely new continents, completely new fruits. The low-hanging fruit will come from that fruit. That's what we're trying to do. Ben: I think one of the biggest takeaways from this conversation for me, that I just want to double click on, is Donald's assertion that the line between genius and crackpot is not as thin as I used to believe — and that it may be possible to tell the difference by really paying attention to how precise someone's ideas are. I'm still sort of processing that, but it's an important thing for us to think about. [00:59:00]
A conversation with Adam Marblestone about his new project - Focused Research Organizations. Focused Research Organizations (FROs) are a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in this white paper that Adam released with Sam Rodriques. Links FRO Whitepaper Adam on Twitter Adam's Website Transcript [00:00:00] Ben: In this conversation, I talk to Adam Marblestone about focused research organizations. What are focused research organizations, you may ask? It's a good question, because as of this recording, they don't exist yet. They're a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in the white paper that Adam released recently with Sam Rodriques — I'll put it in the show notes. [00:01:00] Just a housekeeping note: we say FROs a lot, and that's just the abbreviation for focused research organizations. Just to start off, in case listeners have committed the grave error of not yet reading the white paper, could you explain what an FRO is? Adam: Sure. FRO stands for focused research organization. The idea is really fundamentally very simple, and maybe we'll get into, on this chat, why it sounds so trivial and yet isn't completely trivial in our current system of research structures. An FRO is simply a special-purpose organization to pursue a defined problem over a finite period of time — irrespective of any financial gain, unlike in a startup, and separate from any existing academic structure or existing national lab or things [00:02:00] like that. It's just a special-purpose organization to solve a research and development problem. Ben: Got it. And you go into much more depth in the paper, so I encourage everybody to go read that. I'm actually also really interested in the backstory that led to this initiative. Adam: Yeah.
It's kind of a long story, I think, for each of us — and I'd be curious about your backstory of how you got involved in thinking about this as well. But I can tell you that in my personal experience, I had been spending a number of years working on neuroscience and technologies related to neuroscience. And the brain is a particularly hard technology problem in a number of ways, where I think I ran up against our existing research structures — in addition to just my own abilities and [00:03:00] everything — but I think I ran up against some structural issues too in dealing with the brain. So basically, one thing we want to do is make a map of the brain, and to do that in a scalable, high-speed way. Ben: What does it mean to have a map of the brain? What would I see if I was looking at this map? Adam: Well, we could take the example of a mouse brain, for instance. There are a few things you want to know. You want to know how the individual neurons are connected to each other — often through synapses, but also through some other types of connections called gap junctions. And there are many different kinds of synapses, and many different kinds of neurons. There's also this incredibly multi-scale nature of the problem, where a neuron's axon — the wire that it sends out — can shrink down to like a hundred nanometers in [00:04:00] thickness or less, but can also go over maybe a centimeter long; or if you're talking about the neurons that go down your spinal cord, they could be a meter long. So it's incredibly multi-scale. Even irrespective of other problems like brain-computer interfacing or real-time communication and so on, it poses really severe technological challenges just to make the neurons visible and distinguishable.
And to do it in a way where you can use microscopy to image at high speed while still preserving all of the information that you need — like which molecules are where, and which neuron are we even looking at right now. So I think there are a few different ways to approach that technologically. One — the more mature technology — is the electron microscopy approach, where basically you look at just the membranes of the neurons: at any given pixel, in black or white [00:05:00] or grayscale, is there a membrane present here or not? And then you have to stitch together images across this very large volume. But because you're just able to see which pixels have membrane or not, you have to image at very fine resolution to be able to stitch that together later into a 3D reconstruction, and you're potentially missing some information about where the molecules are. And then there are some other, less mature technologies that use optical microscopes, along with other technologies like DNA-based barcoding or protein-based barcoding to label the neurons. Lots of fancy approaches. But no matter how you do this, this is not the kind of problem that I think can be addressed by a small group of students and postdocs working in an academic lab — and we can go a little bit into why. Ben: Yeah, why not? They can certainly make big contributions to being able to do this. Adam: But I think ultimately, if we're talking about something like mapping a mouse brain, it's not [00:06:00] going to be single-investigator science. Well, it depends on how you think about it. One way to think about it is: if you're just talking about scaling up — quote-unquote "just" scaling up — the existing technologies, which in itself entails a lot of challenges, there's a lot of work that isn't academically novel, necessarily.
It's things like improving the reliability with which you can cut the brain into tiny slices, or making sure those slices can be loaded onto the microscope in an automated, fast way. Those are more engineering problems, technology or process optimization problems. That's one issue. And so, why couldn't that just be done in the lab? Isn't that what grad students are for? You know, pipetting things and doing graduate work. So why couldn't that be done in the lab? That's not why [00:07:00] they're ultimately there. Although, you know, I was a grad student and did a lot of pipetting too. But ultimately, grad students are there in order to distinguish themselves as scientists, publish their own papers, and really generate a unique academic brand for their work. Got it. So there are problems that are lower-hanging fruit for generating that type of academic brand, but that don't necessarily fit into the systems engineering problem of putting together a connectome mapping system. There's also the fact that grad students in neuroscience may not be professional-grade engineers who, for example, know how to deal with the data handling or computation here, where you would need to be paying people much higher salaries to actually do the kind of industrial-grade [00:08:00] data piping and many other aspects. But I think the fundamental thing that I realized, and that Sam Rodriques, my coauthor on this white paper, also realized, particularly through working on problems that are as hard as connectomics and as multifaceted, is that it's a system-building problem.
I think that's the key: there are certain classes of problems that are hard to address in academia because they're system-building problems, in the sense that maybe you need five or six different activities to be happening simultaneously, and if any one of them doesn't follow through completely, you don't have something that's novel and exciting unless you have all the pieces put together. So I don't have something individually exciting on my own as a paper unless you, and also three other people, separately do very expert-level work, which is itself not academically that interesting. Now, having the connectome is academically [00:09:00] interesting, to say the least. But not only my incentives but also everybody else's incentives are to maybe spend, say, 60% of their time doing academically novel things for their thesis, and only 40% of their time on building the connectome system. Then the probability of the whole thing fitting together drops, and everyone can perceive that. So basically, the incentives don't align well for what you would think of as team science or team engineering or systems engineering. Yeah. And so, I think everybody knows that I'm actually very much in favor of this thing, so I'm going to play devil's advocate to tease out what I think are important things to think about. So one counterargument would be: well, what about projects like CERN, right? That [00:10:00] is a government-led project that requires a lot of systems engineering, and there's probably a lot of work that is not academically interesting, and yet it happens. So there are clearly proofs of concept. So why don't we just have more things like CERN for the brain? Yeah.
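The incentive argument here has a simple quantitative core: if a systems project needs several workstreams to all succeed, the whole-project success probability is roughly the product of the per-stream probabilities, so modest per-stream shortfalls compound quickly. A minimal sketch, with illustrative numbers that are not from the conversation:

```python
# Illustrative: joint success probability of a systems project whose
# n independent workstreams must ALL deliver. Numbers are hypothetical.

def joint_success(p_stream: float, n_streams: int) -> float:
    """Probability that every one of n independent workstreams succeeds."""
    return p_stream ** n_streams

# Focused team: each of 6 workstreams is someone's full-time job.
focused = joint_success(0.9, 6)      # ~0.53

# Academic incentives: each stream gets ~40% attention, so per-stream
# success might drop to, say, 0.5.
fragmented = joint_success(0.5, 6)   # ~0.016

print(f"focused:    {focused:.2f}")
print(f"fragmented: {fragmented:.3f}")
```

Even with generous assumptions, halving per-stream reliability takes the six-piece system from a coin flip to a long shot, which is the "everyone can perceive that" dynamic described above.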
And I think this gets very much into why we want to talk about a category of focused research organizations, and also a certain scale, which we can get into. So CERN is actually in many ways a great example of this; obviously that kind of team science and team engineering is incredible. And there are many others, like LIGO, or large observatories, or the Human Genome Project. These are great examples. I think the problem there is simply that these are multibillion-dollar initiatives that really take decades of sustained [00:11:00] government involvement to make happen. Once they get going, once that flywheel starts spinning, then you have it. And so that is nonacademic research, and the physics and astronomy communities, I think, have more of a track record and pipeline for it overall, perhaps because it's easier in the physical sciences than in some of these emerging areas of biology, or next-gen fabrication, or other areas where there's less of a grounded set of principles. So for CERN, everybody in physics can basically agree: you need to get to a certain energy scale, right? None of the theoretical physicists who work on higher-energy systems are going to be able to really experimentally validate what they're doing without a particle accelerator of a certain level. None of the astronomers are going to be able to do deep-space astronomy without a space telescope. And so you can agree, community-wide, that this is something worth doing. And I think there's a lot of incredible innovation that happens in those projects. But with focused research organizations, we're thinking about a scale [00:12:00] that's sort of medium science, as opposed to small science, which is, you know, academic, one or a few labs working together, or big science; the Human Genome Project was $3 billion,
for example, scoped at about $1 per base pair. I don't know what it actually came out to, but the human genome has 3 billion base pairs, so that was a good number. FROs are supposed to be medium-scale. So maybe similar to the size of a DARPA project, which is maybe between, say, $25 million and a hundred or $150 million for a project over a finite period of time. And the idea is also that they can be catalytic. So there's a goal you could deliver over some time period (it doesn't have to be five years, it could be seven), some definable goal over a definable time period, which is then also catalytic. So in some ways it would be more equivalent, to stay with the genome project example, to what happened after the genome project, where the [00:13:00] cost of genome sequencing was brought down through new technologies by basically a millionfold or so, as George Church likes to say: inventing new technologies and bringing them to a level of readiness where they can then be used catalytically. Whereas CERN, you know, is just a big experiment that really has to keep going, right? It's also sort of a research facility. There are also permanent institutes, which are certainly a model that can do team science, and in the brain-mapping space many of the largest-scale connectomes in particular have come either from Janelia or from the Allen Institute for Brain Science, which are both permanent institutes that are nonacademic or semi-academic. But that's also a different thing, in the sense that it takes a lot of activation energy to create an institute, and then that becomes a permanent career path rather than focusing solely on what's the shortest path to [00:14:00] some innovation.
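The scoping figures mentioned here can be checked with back-of-the-envelope arithmetic; this sketch just verifies the numbers as stated in the conversation (rounded, illustrative):

```python
# Back-of-the-envelope check of the Human Genome Project figures
# mentioned above (rounded; illustrative, not authoritative).

base_pairs = 3_000_000_000   # ~3 billion base pairs in the human genome
budget_usd = 3_000_000_000   # ~$3B total project budget

cost_per_bp = budget_usd / base_pairs
print(f"implied scoping: ${cost_per_bp:.2f} per base pair")   # $1.00

# A subsequent "millionfold" drop in sequencing cost, taken at face
# value against the original budget, implies roughly:
post_project_genome_cost = budget_usd / 1_000_000
print(f"post-drop whole-genome cost: ${post_project_genome_cost:,.0f}")  # $3,000
```

The $1-per-base-pair scoping falls out directly, and a millionfold drop lands in the low-thousands-of-dollars-per-genome range, consistent with the catalytic effect described.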
So the flip side of the permanence is, I guess, how are you going to convince people to do this temporary thing? I think someone asked on Twitter: if it's being run by the government, these people are probably going to get government salaries. So you're getting a government salary without the one upside of a government job, which is the security. So what is the incentive for people to come do this? Yeah, and I think it depends on whether it's government or philanthropic. Philanthropic FROs are also definitely an option, and maybe in many ways more flexible, because the government has to contract in a certain way and compete out contracts in a certain way; they can't just decide on the exact set of people to do something, for example. So the government side has [00:15:00] both a huge opportunity, in the sense that I think this is a very good match for a number of things the government really would care about, and the government has the money and resources to do this, but philanthropic funding is also one we should consider. But in any case, there are questions about who will do FROs, and why. And I think the basic answer comes down to this: it's not a matter of cushiness or career certainty. These are for problems that are not doable any other way. This is actually, in many ways, the definition: you're only going to do this if it is the only way to do it, and if it's incredibly important. So it really is a medium-scale moonshot; you would have to be extremely passionate about it. That being said, there are reasons, I think in a proximate sense, why one might want to do it, both in terms of initiating one and in terms of being part of one. [00:16:00] So one is simply that you can do science
that is for a fundamental purpose, purely driven by your passion to solve a problem, and yet can potentially have a number of the affordances of industry, such as industry-competitive salaries. For the government, we'd have to ask what the government can do, but in a philanthropic setting you could do it. Another aspect that I think a lot of scientists find frustrating in the academic system is precisely that they have to spend so much work differentiating themselves and doing something completely separate from what their friends are doing, in order to pay the bills, basically. So if you don't eventually go and get your own, you know, tenure-track job or so on and so forth, the career paths available in academia are much, much fewer, and often not super well compensated. And [00:17:00] so there are a number of groups of people that I've seen in, if you want, critical-mass labs or environments where they're working together, actually, despite perhaps the incentive to differentiate. They're working as a group of three or four together, and they would like to stay that way, but they can't stay that way forever. And so it's also an opportunity, if you have a group of people that wants to solve a problem, to create something a little bit like a SEAL team. When I was a kid (I'm not generally a very militaristic person) I was very obsessed with the Navy SEALs. But anyway, I think the SEAL team, a very tight-knit kind of special-forces operation that works together on one project, is something that a lot of scientists and engineers want, and the problem is just that they don't have a structure in which they can do that. Yeah. So then finally, I think that, although in many cases it's maybe essentially built into the structure, FROs make sense.
We can [00:18:00] talk about this as nonprofit organizations. These are the kinds of projects where you would be getting a relatively small team together to basically create a new industry, and if you're in the right place at the right time, then after an FRO is over, you would be in the ideal place to start the next startup, in an area where it previously hasn't been possible to do startups because the horizons for a venture investment would have been too long to make it happen from the beginning. Well, that's actually a great transition to a place that I'm still not certain about, which is: what happens after an FRO? Because you said that it's an explicitly temporary organization. How do you make sure that it actually achieves its goal? Because you can see so many of these projects that sound really great, and they go in and could possibly do good work, and then somehow it all just sort of diffuses. [00:19:00] So have you thought about how to make sure the work lives on? Well, this is a tricky thing, as we've discussed in a number of settings. I'd like to maybe throw that question back to you after I answer it, because I think you have interesting thoughts about it too. But in short, it's a tricky thing. The FRO is entirely goal-focused. There's no expectation that it would continue by default, simply because it's a great group of people or because it's been doing interesting work. It is designed to fulfill a certain goal, and it should be designed from the beginning to have a plan for the transition. It could be a nonprofit organization where it is explicitly intended that at the end, assuming success, one or more startups could be created.
One or more datasets could be released, and then a [00:20:00] much less expensive and intensive nonprofit structure could be there to host the data and provide it to the world. It could be something where the government would use it as a sort of prototyping phase for something that could then become a larger project or be incorporated into a larger moonshot project. So I think you explicitly want a finite duration to it, and then also an explicit, upfront deployment or transition plan being central to it, much more so than any publication or anything. Of course, at the same time, there is the pitfall that when you have a milestone-driven or goal-focused organization, the funder may try to micromanage it and say: well, actually, not only do I care about you meeting this goal, but I also really care that by month six you've got exactly this, with this instrument and this throughput, and I'm not going to let you buy this other piece of equipment unless [00:21:00] you show me that, you know? And that's a problem that I think we sometimes see with externalized research models, like DARPA/ARPA models, that try to achieve coordination and goal-drivenness among otherwise somewhat uncoordinated entities, like the contractors and universities that are working on programs; they achieve that coordination by managing the process. With an FRO, I think it will be closer to a Series A investment in a startup: you are reporting back to your investors, and at some level they care about the process, and maybe they're on your board, but ultimately the CEO gets to decide how to spend the money, and it's extremely flexible in getting to the goal. Yeah. Yeah.
Figuring out how to avoid [00:22:00] micromanagement seems like it's going to be really tricky, because once you get to that amount of money... Have you thought about how? Well, I'll give you the cruxy thing, which is this: I think there's a huge amount of trust that needs to happen. And what I constantly wonder about is whether there's this fundamental tension between the fact that, especially with government money, we really do want it to be transparent and well spent, but at the same time, in order to do these knowledge-frontier projects, sometimes you need to do things that are a little weird, or that seem like a waste of money at the time if you're not intimately connected. And so there's this sort of tension [00:23:00] between accountability and doing the things that need to get done. I agree with that, and FROs are going to have to navigate it. Yeah, I agree with that. And I think it relates to a number of themes that you've touched on and that we've discussed, which have to do with the changing overall research landscape: in what situations can that trust actually occur? At Bell Labs, I think there was a lot of trust throughout that system. And as you have more externalized research, conflicting incentives, and so on, it's hard to obtain that trust. Startups, of course, can align it financially to a large degree. I think there are things that we want to avoid. So one of the reasons I think these need to be scoped as deliverables-driven, roadmapped, systematic projects over finite periods of time is to avoid individual [00:24:00] personalities, interests, and conflicting politics ending up fragmenting that resource into a million pieces.
So I think this is a problem you see a lot with billion-dollar-scale projects, major international and national initiatives. If you say, "I want this to solve neuroscience, and here's $10 billion," everybody has a different opinion about what "solve neuroscience" means, and there are also lots of different conflicting personalities and leaders. So I think for an FRO there needs to be an initial phase where there's an objective process of technology roadmapping, where people transparently understand what the competing technologies are, what the approaches are, what the risks are, and you understand it, and you also closely understand the people involved. But importantly, the people doing that roadmapping and catalyzing the initial formation of the [00:25:00] FRO need to have a somewhat objective perspective. It's not just "fund my lab." You want to have vision, but you need to subject it to a relatively objective process, which is hard, because you also don't want it to be a committee-driven consensus process. You want it to be active in a systematic-analysis sense, but not in an everyone-agrees-and-likes-it-emotionally sense. And so that's a hard thing. But you need to establish that trust upfront with the funder, and that's a hard process, one that gets harder to do as a large government program. I think DARPA does it pretty well with their program managers: a program manager will come in and pitch DARPA on the idea of the program, and there will be a lot of analysis behind it, but then once they're going, that program manager has tremendous discretion and trust in how they actually run that [00:26:00] project.
And so I think you need something like a program-manager-driven process to initiate the FRO and figure out: is there appropriate leadership, are the goals right, and are the deliverables reasonable? Yeah. At least the way it's presented in the paper, it feels a little bit chicken-and-egg, in that with DARPA, DARPA is a permanent organization that brings in program managers, and those program managers then go start programs. Whereas looking at FROs, it seems like there's this chicken-and-egg problem: you need someone spearheading it, but it seems like it will be very hard to get someone who's qualified to spearhead it before you have funding, and yet you need someone spearheading it in order to get that [00:27:00] funding. Yeah. How are you thinking about cracking that? That's sort of the motivation for my behavior over the next year or two: I'm trying to go out and search for them. A little bit of it is from my own creativity, but a lot of it is going out and talking to people, trying to understand what the best ideas here would be and who the networks of human beings behind those ideas are, and trying to make a kind of prioritized set of FROs. Now, this kind of thing would have to be done again, I think, to some degree, if there were a larger umbrella program that someone else wanted to do. But I'm trying both to get a set of exemplary and representative ideas and people together, and to help those people get funding. You know, I think there can be a staged process. I agree that, in the absence of a funder showing [00:28:00] really strong interest, getting people to commit to really be involved is difficult, because it is a big change to people's normal progression through life to do something like that. But just like with startups, to the extent that you can identify someone who
spiritually just really wants to do this and will kind of do anything to do it, the sort of founder type, and also teams that want to behave like that, that's obviously powerful. And also ideas where there's a kind of inevitability, where based on scientific roadmapping it just has to happen: there's no way, you know, for neuroscience to progress unless we get better connectomics. And I think we can go through many other fields where, because of the structures we've had available and just the difficulty of problems now, arguably FROs are needed in order to make progress on things people really care about. So I think you can get engagement at the level of discussion and starting to nucleate [00:29:00] people. But there is a bit of a chicken-and-egg problem, in the sense that it's not so much "here's an FRO, would you please fund me?" It's "we need to go and figure out where there might be FROs to be had, and then who is interested in those problems," as well, to fund and support those things. So, yeah. So I guess to recap the process as I see it: you're going out and really trying to identify possible people and possible ideas, then going to funders to get some tentative interest, like, okay, which of these things might you be interested in if I could get it to go further? And then you'll circle back to [00:30:00] the people who might be interested and say, okay, I have a funder who's potentially interested, can we refine the idea? And then you will drive that loop, hopefully, to getting an FRO funded. That's right. And there's further chicken-and-egg to it that has to be solved, in the sense that when you go to funders and you say, you know, I have an idea for an FRO, we also need to explain what an FRO is, right?
In a way that both engages people in creating these futuristic models, which many people want to do, while also having some specificity about what we're looking for and what we think is possible. And the same on the side of the scientists and engineers and entrepreneurs all over the world, who certainly have the ideas, but most of those ideas have been optimized to hit the needs of existing structures. So we are trying, I think, to broker between those, and [00:31:00] then start prototyping a few. But the immediate thing, I think, is to make what Tom Kalil has referred to as a catalog, a Sears catalog, of moonshots. So we're trying to make a catalog of moonshots that fit the FRO category. That sounds like the perfect name for this podcast, by the way: cataloging moonshots. You're kind of cataloging moonshots and ways to get moonshots. Yeah, absolutely. Yeah. And so I guess another thing that I've seen concerns people who see something as inevitable and really want to get it done. In the current environment (we're recording in October 2020), there's [00:32:00] this perception that capital is really cheap. You know, there are a lot of venture capitalists, and they're pretty aggressive about funding. And one could make the argument that if it really is going to be inevitable, and it really is going to start a new industry, then that is exactly where venture capital funding should come in. And I do see this a lot, where people have this thing that they really want to see exist, and they come out of the lab and start a company; that's extremely common. So I guess, what would you say to someone who you see doing this that you think maybe should do an FRO instead?
Yeah, that's a great question. I mean, it's a complicated question, and obviously VC-backed innovation is one of, if not the, key [00:33:00] things driving technology right now. So I'm in no way saying that FROs are somehow superior to startups in any generalized way. I think that things that can be startups, and are good as startups, should be startups, and if you have an idea that could be good for a startup, generally speaking, I think you should go do it. But there are a few considerations. So I think you can divide it into cases where VCs know it's not a good idea for a startup and therefore won't talk to you, and cases where VCs don't always know whether it's good for a startup, or whether there's a way you could do it as a startup but it would involve some compromise that is actually better not to make, even for the long-term economic prospects of an area. So things that can happen: maybe you have something that's basically meant to be a kind of platform technology, where you [00:34:00] need to develop a tool or a platform in order to explore a whole, very wide space of potential applications. Maybe you have something like a new method of microscopy, or a new way to measure proteins in the cell, or things like that, that you could target at one very particular, if you want product-market-fit, application, where you would be able to make the most money and get the most traction soonest. Sometimes people call this the Tesla Roadster equivalent: you want to get as quickly as you can to the Tesla Roadster.
And I think, generally, what people are doing with that kind of model, where you take people that have science to offer and you say, what's the closest, fastest path to a Tesla Roadster that lets you get revenue, start being financially sustainable, and start building a team to go further: generally that's really good, and generally we need more scientists learning how to do that and being supported to do it. But [00:35:00] sometimes you have things that really are meant to be either generalized platforms or public goods, public data or knowledge to underlie an entire field, and if you try to take the shortest path to the Roadster, you would end up not producing that platform. You would end up producing something specialized to compete in that lowest-hanging-fruit regime, and in doing so you would forgo the more general, larger thing. And, you know, Alan Kay has a set of quotes that Bret Victor linked on his website, and I think Alan Kay actually meant something very different when he said this, but he refers to the dynamics of the trillions rather than the billions, right? And this is something where, and we can talk about this more, I'd be curious about your thoughts, but take something like the transistor. You could try to do the transistor as a startup, and maybe at the time the best application for transistors would have been [00:36:00] radios. I don't think it was radios, actually; I think it was guiding rockets. Yeah. So you could have had a transistors-for-rockets company and then tried to branch out into becoming Intel. But really, given the structures we had then, the transistor was allowed to be a more broadly explored platform, and that progressed in a way where we got the trillions version.
And I worry sometimes that even some startups that have been funded, at least at a seed-round kind of stage, and that are claiming they want to develop a general platform, are going to struggle a little bit later, when investors see that they would need to spend way more money to build that thing than the natural shortest path to a Roadster; in other words, the Roadster is somehow illusory. Yeah. Yeah, this [00:37:00] is a regime that I'm really interested in. And just on the transistor example, I've looked at it. So the history is that it was developed at Bell Labs; in order to prevent AT&T from being broken up, Bell Labs had to liberally license a bunch of their innovations, including the transistor. William Shockley went off and started Shockley Semiconductor, the traitorous eight then left and started Fairchild and later Intel. I believe that's roughly the right history. But the really interesting thing about it is to ask the questions: one, what would have happened if Bell Labs had kept an exclusive license to the transistor, and two, what would have happened if they had exclusively licensed it to Shockley Semiconductor? And I would argue that in both of those situations you don't [00:38:00] end up having the world we have today. Because for Bell Labs, it probably goes down a path where it's not part of the core product, and so they just do some vaguely interesting things with it but are never incentivized to invent, say, the planar process or anything interesting. Yeah. And then, at the same time, the interesting thing is that Shockley is more akin to doing a startup, right? So it's like, what if they had an exclusive license to it?
And what I would argue is that that also would have killed it, because they had notoriously bad management. The only reason the traitorous eight could go and start Fairchild was because it was [00:39:00] an open license. So this is a very long way of asking the question: if FROs are going to have a huge impact, it seems like they should default to really being open about what they create, from IP to data. But at the same time, that raises this incentive problem where people who think they are working on something incredibly valuable should want to do a startup instead. And similarly, even if that weren't a thing, funders would want to privatize as much of the output of an FRO as possible, which maybe is necessary in order to get the funding to make it happen. So I guess, how are you thinking about that tension? That was very long-winded. Yeah. [00:40:00] Well, there's a lot there, I think, to loop back to. So, right, there's this idea that we've talked a bit about as default openness: things that can be open for maximum impact should be open. There are some exceptions to that, and it also has to do partly with how you're scoping the problem. So rather than having an FRO that develops drugs, let's say: drugs really need to be patentable, right, in order to get through clinical trials, and we're talking about much more money there than the FRO funding, which covers the initial discovery of a target or something. To actually bring that to humans, you need the ability to get exclusive IP for the downstream investors and pharma companies that would get involved. So there are some things that need to be patented in order to have their impact.
But in general, I think you want FRO problems to steer themselves to things where they can indeed be maximally open. Maybe you provide [00:41:00] a system that can underlie the discovery of whole new classes of drugs and so on, but you're not so much focused on the drugs themselves. Now, that being said: if I invest in an FRO and I've enabled this thing, it kind of would make sense that maybe three of the 15 people in the FRO will then go and start a company afterwards that capitalizes on this and actually develops those drugs, or takes it to the next stage. And gosh, it would really make sense, if I had funded an FRO, for those people to give me a sort of right of first refusal — a good deal on investing in that startup, for example. So I think there are indirect, network-based, or potentially even legal, structure-based ways to incentivize the investors. It's admittedly a weaker incentive financially than [00:42:00] the full capture of something. But then, I think this gets back to the previous discussion, which is sort of the trillions rather than the billions. So if you have something where maybe there are ten different applications of it, in ten different fields — you know, maybe we have a better way to measure proteins, and based on this better way to measure proteins we can do things in oncology, and in Alzheimer's, and in a bunch of different directions, in diagnostics and pandemic surveillance — so many fields that one startup that could capture all of that value would be hard even to design, just as it would have been hard to design a sort of Transistor Incorporated. Right. Right. Given that, I think there's a lot of reason
to do an FRO and then explore the space of applications — use it as a means to explore a full space in which you'll then get [00:43:00] ten startups. So if I'm the investor, I might like to be involved in all ten of the new industries, right? And the way to do that would be to create a platform with which I can explore. But then I have a longer time horizon, because first I have to build the thing, then I have to explore the application space, and only then do I get to invest in specific verticals, right? Yeah. I think there are two sort of tricky questions that I wonder about with that. One: you mentioned there are 15 people in an FRO, and three of them go off to start a startup. What about those other 12 people? I assume they might be a little bit frustrated if that happens. Yeah, because they did help generate that value. It sort of gets into questions of kicking back value generated by research in general. But yeah, it could be all 15 people. You know, we saw something [00:44:00] similar with OpenAI, in a way — converting into a for-profit, or at least a big arm of it being the for-profit, and keeping all the people. So you can imagine just blanket converting. But yeah, I think it's in the nature of it that these are supposed to be things that open up such wide spaces that there's enough for everyone, and no one person, no one startup, would completely capture it. And I think that's true for connectomics too, for example. So if you had really high-throughput connectomics — just to keep going on this example — that's a good example. Whether it's exactly the first FRO or not, depending on the details —
I think that's a totally separate issue. But connectomics: there are potentially applications for AI — how the neural circuits work, what's fundamental about brain architecture and intelligence — although there's a range of uncertainty about exactly what that's going to be, so it's hard to [00:45:00] know until you see the data. There are also potentially applications for something like drug screening, where you could put a bunch of different CRISPR molecules or drug perturbations on a brain, look at what each one does to the synapses, look at that in a brain-region-specific way, and have ultra-high-throughput connectome-based drug screening. Neither of those is something you can start a startup on until you have connectomics working. But so anyway, maybe three people would start an AI company — maybe those would be the very risk-tolerant ones — and three would start a CRISPR drug company, and three would just do fundamental neuroscience with it, take those capabilities back into the university system or so on, and start using them. Yeah. And the other thing, related to creating value with it: there's a little discomfort that even I have [00:46:00] with, say, philanthropic or government funding going to fund a thing that proceeds to make a couple of people very wealthy. And there are very much arguments on both sides, right — it'll generate a lot of good for the world, and so on. So I guess, what would you say? As, like — if I were a very wealthy philanthropist, it's like, I'm just giving away money so that these people can... Yeah, the company question is a complicated thing, right?
How many further rich people did the Rockefeller Foundation end up generating by investing in the basics of molecular biology and things like that? I mean, I think in some ways where the government does want to end up is widely distributed benefit. And I think everything that should be an FRO should have widely distributed benefits. It shouldn't just [00:47:00] be a kind of startup that just enriches one person. It should be something that really contributes very broadly to economic growth and understanding of the universe and all that. But it's almost inevitable, I think, that if you create a new industry, there are going to be some more rich, successful people in that industry. And they're probably going to be some of the people who were involved early, who thought about it the longest, and who waited for the right time to really enter it. Yeah, that's a really good point. I guess then the question would be: what's the sniff test you use to think about whether something would have broadly distributed benefits? That's a great question. Because connectomics seems fairly clear-cut, or generating a massive data set that you then open up feels very [00:48:00] clear-cut. We've talked before about how FROs could scale up a process or build a proof of concept of a technology, and it seems less clear-cut how you can be sure those will have broadly distributed benefits if they succeed. Yeah. I mean, there are a few different frames on it, but I think one is that FROs could develop technologies that allow you to really reduce the cost of having some downstream set of capabilities. So, just to give you an example, right?
What if we had much lower-cost gene therapies available? So sometimes when drug prices are high, it's basically recouping very large R&D costs, and then there's competition and profit and everything involved. You know, there was the Martin Shkreli situation — [00:49:00] I don't remember the details, but there was some instance in which a financially controlling entity arbitrarily bumped the price of a particular drug way up, and he was regarded as an evil person, and maybe that's right. But anyway, there are some places, I think, within the biomedical system where you can genuinely reduce costs for everyone. It's not simply that I make this drug and capture a bunch of value on this drug when it really should be available to everyone — there's a genuine possibility to reduce costs. So if I could reduce the cost of the actual manufacturing of the viruses that you use for gene therapy, that's a process innovation that could drop the cost of gene therapy by orders of magnitude. If you could figure out what's going on in the aging process, and what the real levers are — single biological interventions that would prevent multiple age-related diseases — that [00:50:00] would massively drop the cost, right? So those are things where maybe in some ways it would even be threatening to some of the pharma companies that work on specific age-related diseases, because you're going to have something that replaces them. But these are broad productivity improvements. And I think economists and people very broadly agree that science and technology innovations, for the most part —
although sometimes they can be used in a way that only benefits a very small number of people — generally speaking, there's a lot you can do with technology where the benefit will be extremely broadly shared, right? Yeah. Yeah, I mean, I do actually agree with that. I'm just trying to represent as much skepticism as possible. Definitely — I know you agree with that. And actually, another thing that I have no idea about, which I'm really interested in: as you're going and creating this [00:51:00] moonshot catalog, how do you tell the difference between people who have these really big ideas and are hardcore legit — but maybe a little bit crazy — and people who are just crackpots? Yeah, well, I don't claim to be able to do it in every field. And I think there's a reason why I'm not trying to do a quantum gravity FRO — both because I think that's maybe better matched to funding individual, totally open-ended, brilliant people for a 30- or 40-year period to just do whatever they want, right, rather than directed research, but also because there's a class of problem that I think requires a sort of Einsteinian breakthrough, and FROs are not perfect for that. In terms of finding people: I find that there's a lot of pent-up need for this — that's my preliminary feeling. There's a [00:52:00] question of prioritizing which are the most important, but there's a huge number of process innovations or system-building innovations that are needed across many, many fields. And you don't necessarily need things that even sound that crazy. There are some that just make sense — that are very simple.
You know: "Here in our lab we have this measurement technology, but we can only get the throughput of one cell every few weeks, and if we could build the system, we could get a throughput of a hundred thousand cells every month," or something. So there are some that are pretty obvious, or where there's an obvious inefficiency in how things are structured. Like, every company and lab that's modeling fusion reactors — and within the fusion reactor, each individual component of it, like the neutrons in the wall versus the plasma in the core — those are basically modeled with different codes, many of which are many [00:53:00] decades old. So there's an obvious opportunity to make, say, a CAD software for fusion. It's not actually crazy; it's really just basic stuff. In some cases, I think there are ones where we'll need more roadmapping and more bringing people together to really workshop the idea — to have people who are more expert than me critique each other and see what's really going on in the field. And I also rely on a lot of outside experts. If someone comes with an idea for energy, I'm talking to people like former ARPA-E program managers, who know more of the questions. So I think we can do a certain amount of due diligence on ideas. And then there are some that are really far out. You know, we both have an interest in atomically precise manufacturing, and that's one where I think we don't know the path forward. So that's maybe a pre-FRO — something where you [00:54:00] need a roadmapping approach, but it's maybe not quite ready to immediately become an FRO.
Yeah, you've hit on a really interesting point, which is that when we think of moonshots, it's generally this big, exciting thing, but perhaps some of the most valuable ones will actually sound incredibly boring, while the things they unlock will be extremely exciting. Yeah, I think that's true. And you have to distinguish what counts as boring. I think there's some decoupling between exactly how much innovation is required, exactly how important something is, and also just how much brute force is required. In general, our system might underweight the importance of brute force and somewhat overweight the importance of creative, individual breakthrough thinking. At the same time, there are problems where I think we are bottlenecked by thinking — really how to do something. Not just the [00:55:00] connectome of a brain, but how do you actually do an activity map of an entire brain? You actually need to get a bunch of physicists together to really figure that out; there's a level of thinking required that is very non-obvious. Similarly, for truly next-gen fabrication, you really need the technology roadmapping approach, and that's a little different from an FRO. And in some cases, as we discussed in the past, there's a continuum between DARPA-type programs — programs that start within the existing systems and try to catalyze the emergence of ideas and discoveries — and FROs, which are a bit more cut-and-dried, and in some cases you could even think of them as boring, but just very important. How do we prevent FROs from becoming a political football? Because you see this all the time, where a senator will say, "I'll sponsor this bill as long as we mandate that 50% of the work has to happen in my particular state or [00:56:00] district."
And I imagine that would be counterproductive to the goals of an FRO. So do you have any sense of how to get around that? It's probably much easier in a philanthropic setting than in government. Although overall I'm sort of optimistic that, if the goals are made very clear — the goal is disruptive, multiplicative improvements in scientific fields; that's the primary goal — and it's managed well, then it doesn't become about individual people's academic politics, and it doesn't become about congressional districts or all sorts of other things. I think there's a certain amount of complexity there. But the other thing is: I think there are really amazing things to be done in all sorts of places, by all sorts of people who are not necessarily identified as the biggest egos or in the largest cities — although certainly there are hubs that [00:57:00] matter. Yeah. Cool. I think those are all the actual questions I have. Is there anything you want to talk about that we have not touched on? Yeah, that's a good question. I mean, how does this fit into the things that you're thinking about, in terms of your overall analysis of the research system? What does this leave unsolved, even if we can get some big philanthropic and government donors? Yeah. So there are two things that I see it not covering. The first, which you've touched on, is that there are some problems that still don't fit into academia but are not quite at the point where they're ready to be an FRO. They need the mindset of the FRO without the cut-and-driedness [00:58:00] where you need the confidence to plunk down $50 million. So we need what I would see as a sustainable way of
getting to the point of FRO-type projects. And as you know, I'm spending a lot of time on that. The other thing I've realized is that when we have these discussions about "research is broken," I think we're actually talking about two really separate phenomena. What we've been talking about — FROs — is really sitting in the valley of death, helping bridge that. But at the same time, there's what I would call the "Einstein wouldn't get any funding" problem, which is, as you alluded to, that some of the [00:59:00] problems with research that we talk about are just about the conformity and specialization of really idea-based, exploratory, completely uncertain research. And that's also really important. But what we don't do is separate those two things out and say: these both fall under the category of research, but are in fact extremely different processes that require very different solutions. Yeah. Actually, let me — since you mentioned that, and since we are here together on the podcast — I agree with that, and I have some things to say about it as well. So I think FROs indeed only address, or are designed to address, this issue of system building: problems that have a sort of catalytic nature and are at a particular pre-commercial stage, right? So in some ways, [01:00:00] even though I'm so excited about FROs and how much they can unlock — because I think this is one of two or three categories that has been underemphasized by current systems, or that current systems have struggled with — there are these others.
So: supporting the next Einstein — people who may be cognitively, socially, or in any number of other ways just different and weird, and not good at writing grants, not good at competing, maybe not even good at graduating undergrad or at running a lab, but who are brilliant. Because the system has now proliferated in terms of the number of scientists, it's very competitive, and there's a lot of need to filter people based on credentials. So there are people who don't fit perfectly with credentials, or with the sort of monoculture of who is able to get NSF grants, go through the university system, [01:01:00] get the PhD, and all the rest. Alexey Guzey has this nice blog post — it's oriented toward biomedicine — saying basically that in order to get through the system, you need to do 10 or 15 things simultaneously well, and also be lucky. And maybe we want to be looking for some people who are only able to do three of those things, but are orders of magnitude better at them than others. Then there are people who have even done well at those things, but still don't have the funding or the sustained ability to pursue their own individual ideas over decades — even if they do get tenure or something — because the grant system is based on peer review and is filtering out really new ideas. For whatever reason. There's also the broader issue that Michael Nielsen has talked about, which is the idea that too much funding is centralized in a single organizational model. In particular, the NIH grant is kind of hegemonic as a structure and as a peer review mechanism. Then I think we need more [01:02:00] DARPA stuff. We probably need more DARPA-like agencies for other problems, even though I've said that I think FROs can solve some problems that DARPA will struggle with.
Likewise, DARPA will solve problems that FROs may struggle with — particularly if there's very widely distributed expertise across the world that you need to bring together in some transient, interesting way, for work that's a little more discovery-oriented than FROs, and less deliverable-oriented or team-oriented. And then there are even bigger things we need — like, we need to be able to create a Bell Labs for energy, or something even bigger than an FRO. So yeah, I think the thing that you're getting at, which is simple but underdone, is actually analyzing what the activity is and how best to support it. Yep. Which is, instead of just saying, [01:03:00] "ah, there's some research, let's give some money to the research and then magical things will happen," actually saying: okay, how does this work? And then, what can we do for these specific situations? Yes. I think, as you've identified, on the one hand there's the tendency to micromanage research — research has to do this, with this equipment, on this timescale, entirely subject to milestones. And on the other hand: research is this magical thing, we have no idea, just let scientists peer review each other, give as much money to it as we can, and see what happens. I think neither of those is a good design philosophy, right? Yeah. Yeah. And I think it involves people thinking — which is uncomfortable — thinking and learning about how things work, and then understanding how they could be different. It's a system. Kevin Esvelt said it well: in some ways it's been designed, but really our scientific system is something that has to a large degree evolved. No one has designed it.
It's not something that was designed to be optimal; it's an emergent property of many different people's incentives. And if we actually try to apply more design thinking, I think that can be good, as long as we're not overconfident in saying that there's one model for everyone. Yeah. I think that the trick to fixing emergent systems is basically to do little experiments poking at them. And that's very much what I see getting FROs going as. You're not saying, "oh, we should dismantle the NSF and have it all be FROs." It's: okay, let's do a couple of these, see what happens. That's right. I think it's inherently a small perturbation. And I [01:05:00] think DARPA, by the way, is a similar thing. You wouldn't need DARPA if everything else was already efficient; given that things are not perfectly efficient, DARPA has this niche that it fills. I think similarly with FROs: they can only exist if you also have a huge university system and you also have companies — it doesn't make sense otherwise. It's a perturbation, but it's a perturbation in which you unlock a pretty big pressure stream behind it when you open it up. So. Excellent. Well, I think that's actually a great place to close. I guess the last question would be: if people are interested in FROs, especially funding or running one, what is the best way for them to reach you? Well, they can talk to me or they can talk to you. My email is prominently listed on my website. Twitter is great.
And yeah, I'm really interested in people who have a kind of specificity [01:06:00] about what they want — "here's what I would do," very specifically — but I'm also interested in talking to people who see problems with the current systems and want to do something, and who want to learn about other highly specific FRO ideas that others might have, and how to enable those.
Michael Filler and Matthew Realff discuss Fundamental Manufacturing Process innovations. We explore what they are, dig into historical examples, and consider how we might enable more of them to happen. Michael and Matthew are both professors at Georgia Tech, and Michael also hosts an excellent podcast about nanotechnology called Nanovation. Our conversation centers around their paper Fundamental Manufacturing Process Innovation Changes the World. If you're in front of a screen while you're listening to this, you might want to pull up the paper to look at the pictures. Key Takeaways Sometimes you need to go down to go back up The interplay between processes and paradigms is fascinating We need to spend more time hanging out in the valley of death Links Fundamental Manufacturing Process Innovation Changes the World (Medium) (SSRN) Michael on Twitter Matthew Realff's Website Michael Filler's Website Nanovation Podcast Topics - The need for the innovator to be near the process - Continuous to discrete shifts - Defining paradigms outlines what progress looks like - Easy to pay attention to artifacts, hard to pay attention to processes - Hard to recreate processes - The 1000x rule of process innovations - Quality vs price improvements - Process innovation as a discipline - Need to take a performance hit to switch paradigms - How to enable more fundamental manufacturing process innovations Transcript [00:00:00] In this conversation, I talk to Michael Filler and Matthew Realff about fundamental manufacturing process innovations. We explore what they are, dig into historical examples, and consider how we might enable more of them to happen. Michael and Matthew are both professors at Georgia Tech, and Michael also hosts an excellent podcast about nanotechnology called Nanovation. Our conversation centered around their paper, Fundamental [00:01:00] Manufacturing Process
Innovation Changes the World, which I've linked in the show notes and highly recommend. The fact that they posted it on Medium, in addition to more traditional venues, gives you a hint that they think a bit outside the normal academic box. However, I actually recommend the PDF version on SSRN, which is not behind a paywall, if only because it has great pictures for each process that I found super helpful. If you're in front of a screen while you're listening to this, I suspect that having them handy might enhance the conversation. And here we go. The place that I'd love to start — to get everybody used to both of your voices and to assign a personality to each of you — is for each of you to say a bit about yourselves. And the key bit that I'd love you to include is something you believe that many people in your discipline would sort [00:02:00] of cock an eyebrow at, because clearly, by publishing this piece on Medium, you've identified yourselves as not run-of-the-mill professors. Oh boy. Okay, so we're going to start juicy, real juicy. So I guess I'll go, since I'm speaking — this is Mike Filler. Great to be here. I've been a professor of chemical engineering at Georgia Tech for a little over 10 years now. My research group works on nanoscale materials and device synthesis and scale-up, for, say, electronics applications. Yeah, I mean, this article, which we'll talk about, emerged from — can I say a frustration? — that I had around electronics; that's really where it started for me, at least. We have all this focus on new materials or new device physics or new circuits — and I know your listeners are probably thinking about neuromorphic computing or quantum computing, and these are all very cool things — but it seemed to me [00:03:00] that we were entirely missing the process piece: how do we build computers and circuitry?
And so that's where this started for me: realizing that if we're not dealing with the process piece, we're missing a huge chunk of it. And I think one of the things people miss is that we're working within the context of something developed 50 or 60 years ago, in many cases, and it's really hidden from a lot of people. So that was where I came at this. Great. All right. So, yeah, I'm also a professor of chemical and biomolecular engineering at Georgia Tech. My background is actually in process systems engineering. And if you go back to the late 1960s and early 1970s — frankly, before I was much more than in shorts — there was a real push towards the role of process systems engineering in [00:04:00] chemical engineering. It really arose with the advent of computing and the way that computing could be used to help in chemical engineering. And then slowly, over time, the role of process systems engineering has become, I think, marginalized within the chemical engineering community. It's moved much more toward what I call science and engineering science, away from the process systems piece. And so, as Mike would berate me with his travails over what he was trying to do with nano integration and nanotechnology, I realized that he was describing a lot of the same frustrations I felt with the way that process systems engineering was being marginalized and pushed to the edges of chemical engineering — with the focus more on fundamental discoveries rather than on how we actually translate those fundamental discoveries into functioning processes that then lead to outcomes that affect society. So for me, it [00:05:00] was a combination of talking to Mike and my own frustrations around how my own field was somewhat marginalized within the context of chemical engineering. Got it.
And, sort of to anchor everybody and start us off: could you just explain what a fundamental manufacturing process innovation is? So the way we think of fundamental manufacturing process innovation is actually rethinking how the steps in a process are organized and connected together. That has become the paradigm we have set for fundamental manufacturing process innovation, and these innovations come in different categories that enable us to put these processes together. One example is factoring: taking something that has been done together in one process step and separating it into two different steps that occur maybe at different [00:06:00] times or in different places. And by so doing, we enable a tremendous change in the way that process operates. So it's really about the strategy for organizing and executing the manufacturing steps, and using a set of schemas to understand how, over history, we have been able to do that. Do you want to add to that, Mike? Yeah, I want to take a step back outside of manufacturing. One of the examples we give at the outset of the piece is not in manufacturing but in shopping — something that every single person listening to this can wrap their mind around, I think. And I still love the example, because I miss it every single day — this is all pre-COVID thinking, of course. The idea is that, say, a hundred years ago, in a lot of Western societies, you would go to, let's call it, the general store. You'd walk in, go up to the counter, and if you had a list, you'd hand the list to the purveyor, and they would go [00:07:00] into the back rows of shelves, pull off what was on your list, and bring it out to you; you'd pay for it and go on your merry way. And then, several decades ago, this started to change — probably half a century; I'm not exactly sure of the timing —
Exactly sure. The timing, but, to, to a model, where instead of a single shop keeper, having to interface with many individual, shoppers, it was now many shoppers who did the traversing of those aisles themselves, right? This is at least in Western society is what we are familiar with today as the grocery store or the target or the Walmart. And what you do is you. Trade one thing for another in doing that right. Instead of, the person, the, the purveyor, getting things for you, which from a customer's perspective is very nice. Right? you, you, you no longer have that, right. You're being told. Okay. He used to, yeah, he or she used to get it for you now. You're going to go and traverse the ALS yourself. But you do get something in return as the [00:08:00] shopper. And that is a lower costs because now one store at the same time can be, open to many, many people stopping shopping simultaneously. So, selection goes up, costs go down and there's a benefit for the customer, and the shopkeeper. So this is an example of a process innovation it's the it's still shopping, but it, it takes the old process paradigm and inserts a new one. Excellent. And so you, in your paper, you illustrate eight major historical, fundamental process innovations. And I would love to sort of frame the conversation by walking through them so that, a just because they're great history and B, so that everybody can sort of be anchored on the very concrete, examples while at the same time, I'll, I'll sort of poke at, The, the more sort of abstract questions and ideas around this. so the, the first, [00:09:00] the first one you talked about is the shift from the new Komen to the watt steam production process. So like, what was that? And, and why was that important? it was important because, what it did was it changed fundamentally how we could make power. 
So the Newcomen engine had the condensation of steam in the same vessel where the vacuum was being pulled, to enable the pulling of water up from the coal mines in Britain (it turns out it's actually tin mines, rather than coal mines, where this was first developed). And what Watt did was to factor (that's one of the fundamental process schemas, factoring) the two pieces, so that the vacuum pulling and the condensation happened in different vessels. As a result, he was able to increase the efficiency of the steam engine by an order of magnitude, and through other innovations that then followed from that, the steam engine became [00:10:00] significantly more efficient. Now, what did that do? Well, the first thing it did was mean that you could pump water out of deeper mines, so you could actually get coal out of deeper mines and increase coal production significantly. The other thing it did, of course, was mean that for the same amount of power, the engine could get quite a bit smaller. In fact, it could get small enough that it could actually move itself on rails. And so what that also enabled was Stephenson, and essentially the invention of railways. Without the steam engine, you wouldn't have railways; with railways, now suddenly you can bring the coal, which you've enabled yourself to dig out of deeper mines, to manufacturing centers. So there's a whole follow-on set of innovations, and in fact a complete reorganization (it's called the Industrial Revolution) that is based on these kinds of process innovations, and this was one of the most central ones. The idea of factoring into [00:11:00] two steps led to much greater efficiency in the way a steam engine could be used. And there are actually two pieces that I think are fascinating about that.
One is this phenomenon that you see over and over again, what I would call continuous efficiency increases: there was a fairly steady increase in efficiency, but then, as you point out, it eventually got efficient enough that it could power a rail car, and that all of a sudden made this discrete difference in what the process was actually capable of. I feel like you see this in so many of the examples that you give, and I just love that. And then the other piece that I believe is the case is that Watt was Newcomen's apprentice, right? I'm not sure about that, actually. I mean, I think he was familiar with Newcomen's [00:12:00] work, but I don't know if he was actually his apprentice in that particular context. And the reason I ask is: do you think that Watt would have been able to create this process innovation if he hadn't been actively working with the Newcomen engine in the first place? No, I think the answer is that without that... you had to have a starting point, and I think he understood, once he saw the starting point, that there was a way in which he could make this more efficient. The other thing about the efficiency, and the scaling of efficiency, is that what we see in a lot of these fundamental process innovations is that there is a step change, but not only that: it shows how, after that fundamental process innovation has happened, there can be this continuous increase. So it unlocks an enormous potential to suddenly change the game in terms of efficiency. So, [00:13:00] the point being that, say, the original engine was maybe less than 3%, maybe 1 to 2%, efficient.
And what Watt did with the sort of next version was increase that by an order of magnitude, and then suddenly, with that innovation, through better manufacturing, higher-pressure vessels, et cetera, you could go to an even higher level of efficiency. Not only that, but it drove the development of the discipline of thermodynamics. Now you had to analyze the engines on their efficiencies and understand what could lead to greater efficiency in the future. And so an entire scientific discipline was built on top of the innovations that were occurring in heat engines. Yeah. Well, I think there's an important point here in the efficiency discussion, and Matthew and I have chatted about this a fair amount: you have the efficiency piece, and as you're pointing out, Ben, it's really critical to get above some threshold with a lot of these, but efficiency is bounded, zero to a hundred percent, [00:14:00] right? And then you have the whole cost-and-throughput piece, and as we show in the piece, you have many orders of magnitude of possible gains on that side of the equation. Some of it goes hand in hand with efficiency, but I sometimes think there's an overemphasis on efficiency. You've got to get through the threshold, and then recognize that the driving down of costs or increasing of throughput can happen a million-x, as, for example, the planar process for integrated circuits shows: it's more than a million-x decrease in cost over time. Yeah. And this idea that you point out, of a process innovation defining a paradigm that then sets the pace for things, is a theme that we'll keep poking at as we go through everything else. And before we move on, I guess the last piece, [00:15:00] sort of going back to Watt's familiarity with the process in the first place...
And tying it back to today: I guess, what's your take on the familiarity that the people working on possible process innovations have with the processes now? I probably phrased that a little bit weird, but my concern is that there's more of a separation between the people we expect to do the innovating and the people who are working on the processes. So yeah, this is a really critical point. What we have done in the modern innovation enterprise is split so-called fundamental research from applied research, and these examples, many of the ones that we give, are really squarely between the two: they need both [00:16:00] to function. And so this is, for this kind of innovation, I think a real issue with the current way things are set up, because it requires some knowledge of the science that's emerging, it requires some knowledge of engineering, and it's a matter of integrating these things. It's not so much what the prevailing view of the world is, which is that fundamental innovation gets developed and leads to some specific technology; it happens between the two. And so that is a theme, I think, in these innovations, and it's something that is harder to do today. We could talk for a long time about why it's harder to do, but it's harder to do today. Cool. Well, we'll circle back on that as we get closer to the present. Can I say one more thing? This is such a good example: everyone knows the Watt engine, and we were very careful to call it... what do we call it? We call it the [00:17:00] Watt process, right? The Watt process for energy generation, or something like that.
But yeah, we focus on the process, and I think this is one of the reasons why these kinds of manufacturing innovations are missed all the time: you focus on the engine, the physical thing that carries out the process, and you're missing that, oh, actually, what Watt did was factor these two steps. It's still a machine, like Newcomen's machine, but in the end, what made it so powerful was the underlying process that it carried out. And I think that is one of the reasons why these innovations are missed in manufacturing versus other areas where process is talked about much more frequently. So I wanted to make sure... well, actually, as long as we're on that topic, I want to call out the sort of obsession with novelty in academia, where, [00:18:00] like, it's really important to call out the process innovation, because if you look at it just as steam power, then you could sit there and ask, what's novel? Watt generated power from steam; Newcomen generated power from steam. And so we need to really pay attention to what's going on on the inside, and how that's really different, even though on the outside it does not look that different. For sure. And I think the point we arrived at there is, when we went back into deep history and asked ourselves, well, what do we call the ages of the past? We call them things like the Iron Age. We don't call it the smelting age. Right. Right, right. We could call it by the process, but we don't; we call it by the thing that was made. You know, we talk about flints, and we talk about flint arrows; we don't talk about the ways in [00:19:00] which those flints were shaped into arrowheads, the flaking and so on.
But essentially those kinds of processes, which we don't even know in many cases how to reproduce... we lost that knowledge for many, many years, in fact centuries. The one example we use in the paper is Roman concrete. You know, we were able to look at Roman buildings, but we were not able to reproduce them, because we had lost the recipe. We lost the recipe for making concrete with the dissipation of the Roman empire, and so, in fact, we couldn't reproduce these buildings: we could look at them, but we couldn't reproduce them, because we had lost the process. Well, I think that's so key to point out, because it's almost similar to the streetlight effect, where it's so much easier to look at and point out and talk about the artifact, but it's not as legible what work went into making it. And even [00:20:00] now, when literally everybody's writing everything down, there are still so many little things that go into these processes that are sort of illegible. And I think it's easy to forget that and think, oh, well, you know, someone wrote it up, therefore we know everything that can be known about it. Yeah. History is kind of similar, right? We look back on history and we don't see the generator of the history. So it's often very hard to get a true handle on what it was that led to certain phenomena. We look back and we start to come up with theories, and maybe sometimes they're right, sometimes they're wrong. In some areas we have ways of knowing, and in other areas we have no way of knowing, because what happened is lost to time. Sorry, this is kind of very similar in terms of the fleeting nature of processes. Yeah.
And the fact that it's not easy, I think, should be borne out [00:21:00] by anybody who's ever tried to read the materials-and-methods sections of academic papers, because you will discover that very rarely do the researchers actually document the materials and methods in sufficient detail to reproduce them. There's something that they do in the lab that they just forget to write down that's actually absolutely critical to making the process work. You'll just discover: oh yeah, we soaked it in methanol for 60 minutes; oh, I'm sorry, we left that out. It's easy to leave out these steps that turn out to be crucial, but they're not the final artifact that's being exhibited in the paper. Yeah. There's this discussion today in science about irreproducibility, and we have this reproduction crisis, and okay, maybe we can be doing a better job, but I think a lot of it is, as Matthew's describing, stuff that is not obvious to you, as the experimenter, doing the experiment. Even if you wrote [00:22:00] down absolutely everything you thought you did, there are things you didn't even realize you were doing that were central to the process, and they get lost. And that, to me, is likely the main source of a lot of these issues. Yeah. I wonder what would happen if we actually had a system where you just videoed literally everything that someone did in a process, and captured every keystroke on their computer. I wonder whether it would just be completely unintelligible, or whether there'd be something useful that came out of it. Just for the sake of time, let's move on to the second of eight.
So the second process you talk about is the Fourdrinier process for continuous papermaking, which I did not know anything about before I read this. So what was that, and why was it important? So, here it's a lot like what Gutenberg did with the press. But [00:23:00] paper, prior to this innovation, was pressed as single sheets and dried as single sheets: basically a fully integrated process on one sheet of paper. What continuous papermaking did was take each of those steps and separate them into individual components. So that's a factoring schema, as we describe in the paper: you first throw down the slurry of pulp, then there's a section where you let the water drain, you consolidate the pulp down into something that's like a sheet, then you push that sheet through rollers, and then you dry it. But each of those steps is different, right? The pulp deposition, the rolling, and the drying are now separated in space and time, whereas before they were more or less in the same space. And that factoring allows you to scale up the production rate of paper by orders and orders of magnitude. And so we talk a lot about Gutenberg's press being central to mass literacy, and it clearly [00:24:00] was. But (and we're not the first people to point this out; Tim Harford, who I like a lot and who writes for the Financial Times and in his own books, has talked about this) you needed to have the continuous paper manufacturing piece so that you could get those books to so many more people. It was really both of those together that led to that. The other point I was going to make is that it also revealed that, as soon as we were able to produce paper at large rates, we needed some sort of raw material that could also be produced at large rates.
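The factoring schema described above is, in computing terms, pipelining: the integrated process works on one sheet at a time, while the factored process lets deposition, draining, rolling, and drying run concurrently on different sheets. A minimal sketch of why that raises throughput; the stage times here are invented purely for illustration, not figures from the paper:

```python
# Toy model of batch vs. factored (pipelined) papermaking.
# Stage times are made-up illustrative values.

stage_times = {"deposit": 2, "drain": 3, "roll": 1, "dry": 4}  # minutes
n_sheets = 100

# Integrated process: each sheet occupies the whole line before the next starts.
batch_total = n_sheets * sum(stage_times.values())

# Factored process (buffered pipeline): once the first sheet fills the line,
# a finished sheet emerges every `bottleneck` minutes.
bottleneck = max(stage_times.values())
pipelined_total = sum(stage_times.values()) + (n_sheets - 1) * bottleneck

print(batch_total)      # 1000
print(pipelined_total)  # 406
```

The gain here is modest because the toy has only four stages; the real Fourdrinier machine compounds this with fully continuous, rather than sheet-by-sheet, operation, which is where the orders-of-magnitude scale-up comes from.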
And so this idea that you were going to continue to use rags as the input suddenly became difficult, and people had to scout around for other forms of fiber that you could use. That's really what led to the creation of the pulping industry, which takes a tree (which, on the face of it, doesn't exactly look like paper) and turns it into something that you can make paper out of. [00:25:00] So again, it's this upstream and downstream: the downstream effect is societal mass literacy; the upstream effect is the creation of an entire industry around turning trees into pulp. Some people might disagree with doing that, but the bottom line is that's what enabled those two pieces to be driven: the creation of papermaking in the middle of that. Yeah. And did you have a sense of how people were thinking about papermaking before Fourdrinier came up with the process? That is, did they realize that it should be possible to make paper more efficiently, or was it just, that's the way it was? Because I feel like with so many of these process innovations, [00:26:00] people just sort of accept whatever process we have: oh, that's the way it is; maybe we can make it a little better, until something new comes along. One of the things we were careful to do in the piece (and I'll be honest, because we're not historians) is to try to stay away a little bit from the driving forces and what people were thinking, and really focus on the mechanisms. That's one of the things I've really enjoyed learning from people in the progress studies community, that emerging community; in general, I find they really know a lot about history, and that's great.
And we really wanted to make sure we could pay attention to mechanism at the actual innovation level. So I guess I'm saying that as a long-winded answer to say: I don't know how they thought about it. But I think there's been a shift over time. You know, Matthew was showing me something from Scientific American recently. [00:27:00] What was their anniversary, Matthew? Their 175th. I can't remember what that is in Latin, but it's a very long and complicated word. If I popped up, I could go get my issue; they have it in there. It's quite a complicated word, that's all I remember. And they have an article in there talking about the shift in how people spoke about science and engineering. A hundred years ago, this kind of engineering-and-process language was far more common, and then around the time of World War II it shifted to be more about science and the emphasis on science, at least as far as that magazine goes. But I think the magazine is probably fairly representative of the endeavor as a whole. So, yeah, that's kind of fascinating. You're asking, did they appreciate [00:28:00] whether the process could be better? My gut feeling is that in the 1800s they maybe appreciated that it could be better. Did they have an appreciation for how much better? That's probably dubious. I think with most of these, if you went back and asked the original innovator, did you know you were setting us on a pathway or trajectory that led to the world as we know it today, they'd probably say, wow, no, I did not expect that. I was just trying to make an extra buck. Yeah.
But I think it's actually almost a powerful admonition to people: keep in mind the different schemas that you lay out, and just walk around the world asking, could this apply here? It almost gives you a bit of humility, that these could always happen. Something that's emerging for us from doing this (and we're continuing to work on next pieces) is basically a [00:29:00] thousand-x heuristic: you have a technology today, and you ask yourself, can I do it a thousand times cheaper, or a thousand times faster, with the way we do it today? If the answer is yes, okay, great, and you're really confident. If the answer is no, it may be time for a process innovation. To us, a thousand-x is sufficiently beyond someone giving you the cop-out answer of, well, of course we've made progress in the last ten years and I expect more progress. A thousand-x is quite a bit faster, quite a bit higher throughput. So I think that's a good metric for anyone working on any technology. And I think COVID is a great example. With what we've been experiencing in the last however many months (it feels like two years), we needed rapid vaccine [00:30:00] manufacturing, we needed rapid testing, basically a thousand times faster, and we didn't really have that capability in hand. People have done tremendous work in the intervening months to get us a lot closer; I know Matthew has done some work on this. But when the whole thing started, we hadn't really thought about it so much yet: how could we speed this up a thousand-x? So for us, it's a pretty good heuristic. I like that a lot. That is a very powerful heuristic, and it's also aggressively ambitious, which really does speak to me. Cool.
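One way to make the thousand-x heuristic concrete is to ask how long purely incremental improvement would take to deliver a thousand-fold gain. A minimal sketch, where the annual improvement rates are my own illustrative assumptions rather than numbers from the conversation:

```python
import math

def years_to_reach(target_factor, annual_gain):
    """Years of compounded improvement needed to reach target_factor,
    at a fractional gain per year (0.10 means 10% better each year)."""
    return math.log(target_factor) / math.log(1.0 + annual_gain)

# Even healthy incremental progress takes generations to reach 1000x:
print(round(years_to_reach(1000, 0.10)))  # 72 (years at 10%/year)
print(round(years_to_reach(1000, 0.30)))  # 26 (years at 30%/year)
```

If the compounding math puts the target decades away at any plausible incremental rate, that, in the guests' framing, is the signal that a process innovation, rather than more tuning, may be required.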
So let's talk about the Bessemer process for steel manufacturing, the imagery of which is really cool. Everybody listening, go check out the pictures. So what is that, and why was it important? I think it was important because what it led to, obviously, was a better steel, and steel that you could [00:31:00] make, as Mike has pointed out, significantly faster than with the existing processes. And what it came down to was a recognition that, to remove the impurities from the steel, you could blow air through the steel, and that would cause a reaction that would cause the steel to heat up. Whereas if you think about blowing air generally, if you blow on things, it makes them colder. So this idea that you would blow air through something to make it hotter... blowing air is something you do with bellows, and it had been thought about in terms of bellows, but literally blowing the air through the steel was not something that had been done. And combined with that was this idea that, by removing all the impurities, making essentially something that was pure, and then dosing impurities back in after you've purified, you had control over the composition, instead of attempting to stop right at the moment when you had exactly the [00:32:00] right amount of carbon, for example, in the steel. That was another powerful idea that came about. So the Bessemer process really had a profound impact, both in terms of how much steel you could make in a given amount of time, because it increased the rate through this heating, and in terms of the control of quality, through this very counterintuitive idea of removing all the impurities and then adding something back in order to get to the final product that you wanted.
That led to much stronger steels than had been capable of being produced previously, and much higher quality control too. That was a key piece of it. And actually, on that point, you note that the Bessemer process led to a three-orders-of-magnitude increase in steel production. And this is something I always wonder about with these process innovations that both make things cheaper and [00:33:00] increase quality: do you have a sense of whether the orders-of-magnitude increase was primarily due to moving down the supply-demand curve, where, because the steel was cheaper, people would consume more of it, or was it primarily driven by new applications of the higher-quality steel? Obviously it was both, but it's interesting to think about which of those ends up dominating. I think the high quality, in this case, was a very critical factor in the equation, partly because one of the things that opened up was the idea of making steel rails, as opposed to the iron rails that had been made before, and steel rails were able to bear significantly more weight. Because they could bear more weight, suddenly you could increase the distances and volumes over which trade could happen. This was one of the reasons why, for example, you could spread [00:34:00] all the way across the United States: you could connect the resource-rich West to the population-rich East with a much more powerful communications network, driven by the steel rails that you were able to produce. So I think a lot of it was bound up with this idea that suddenly this new application came about, much as with the steam engine.
When you were able to move the steam engine along with its fuel, you could actually start that whole process going. So again, it's this knock-on effect. To follow up on that and make the connection for everyone: the efficiency threshold we talked about with Watt is very similar to the strength threshold Matthew's talking about with steel. You cross that threshold to a new material, a new strength threshold, but then it was really this driving up of production and driving down of costs by orders of magnitude. And yeah, we got [00:35:00] higher strength, but you're not going to change the strength of something by a million times. Right. So again, it's these two columns: the efficiency-or-performance column, and the manufacturing-scale column. Right. And going on to the next process in our list, the catalytic cracking process, you have that same juxtaposition. By factoring the catalyst regeneration from the production of the fuel, you enabled yourself to have a continuous process, which enabled you to increase the throughput, in terms of the barrels of oil that you could bring through the process, very significantly. But also, this innovation was happening at a time when aviation in war was a significant factor, and the quality of the fuel that you produced [00:36:00] out of the catalytic cracking process was higher than the quality of fuel you produced just by distilling off a certain fraction of the crude oil. So what you were able to do, essentially, was have a higher-performance aircraft engine, quite significant in terms of its power-to-weight ratio and what it could deliver. And so that gave Allied aircraft a significant boost in performance, by having this fuel available to them.
And again, it provided a significant driving force to scale up the process, which went up by a factor of at least a thousand over the course of two or three years. Yeah. These numbers still sound sort of crazy, because it feels like so many things focus on getting, you know, 10% more efficiency, whereas truly getting to a thousand-x is mind-boggling. So, I believe this was the case for catalytic cracking, and I know it's the case for many process innovations, [00:37:00] that at first the innovation actually makes the process less efficient, while you're figuring out how to get everything working, and then, once you do, it makes the whole thing skyrocket. So I guess the question is: do you have a sense of how people got out of these local equilibria? Because, you know, if you went to someone and said, hey, I want to do this less efficiently so that eventually it will become more efficient... how did these things even get through? I'm not sure I have any great answers except perseverance. I think a lot of this comes down to the inventor recognizing, from their experience, from their early work on the innovation, that there is potential there, even if right now it's not quite there. You know, [00:38:00] Bessemer was the same thing: he first licensed the patent to people, and they couldn't reproduce what he did. The full separation of impurities came later, so that people could reproduce it. So it was a reproducibility problem in the beginning, not so much a strength problem. And, yeah, I don't know. I think a lot of this just comes down to the person saying, I see it, just like any of today's visionaries we talk about in the innovation space, and then just hammering on it.
Yeah, right. I mean, there are counterfactuals, right? Sorry, Matthew. We don't know about the ones where the person didn't hammer on it and it never came to fruition, so it's hard to know. Right: I'm going to string together a few thousand laptop batteries and stick them on the bottom of a car, and that is going to create a company called Tesla. So the answer is, A, it's very hard to predict, obviously, and B, a lot of it is about [00:39:00] perseverance. Certainly Elon Musk will talk at length about the fact that he thinks his key quality is perseverance, and that that's very important in this context. Or: I'm going to have a rocket that goes up into the air and then eventually pirouettes and lands on a platform floating in the middle of the sea. These are innovations where certainly the individual involved plays a pretty significant part in the perseverance necessary to get it to that stage. But it's also important to recognize that it's not perseverance along the existing trajectory: it's stepping aside, trying to establish a brand-new trajectory, and pushing on that. I think sometimes those two are missed a lot; when you use the word perseverance, people miss that it's also this stepping outside of the existing trajectory. Yeah. I'm particularly interested in whether we can create meta-innovations in sort of [00:40:00] roadmapping out what that stepping aside looks like. So instead of just saying, okay, we're going to go this other way, really saying: we'll go this other way, and this is what it will take to get this to do that thousand-x, to hopefully make it easier for these individuals, too.
Just to convince other people that they're not crazy, when they maybe don't have a couple of million dollars to go off and blow up rockets on an island. Yeah, I think it's hard to figure out. I mean, look at the bottleneck that emerged — the one Matthew was talking about — in continuous paper manufacturing. I'm pretty sure that when they started developing that process, they didn't expect that to be the next roadblock. [00:41:00] But it was. So again, this comes back to the perseverance thing. I think you can try to outline this stuff, but there are going to be roadblocks. And you probably should outline it — this is not just serendipitous; there's a certain kind of force that comes with these things, where people push on the innovations — but you have to recognize that new bottlenecks will emerge and not let them discourage you. You can think of them as motivating new science and engineering, and that's how I view a lot of this stuff. That's what I would say, Matthew. Yeah, and actually, on the note of unexpected bottlenecks, I think that's another key point: so much science and engineering comes out of trying to implement things and then running into bottlenecks you couldn't even anticipate, rather than trying to imagine everything through in advance. Cool. So, for the sake of time, let's talk about the planar process for integrated circuitry, which arguably has been the driving force of at least the second half of the 20th century. [00:42:00] Yeah, and I think it's often missed: we talk about the integrated circuit and information technology and miss the fact that there's this process underlying it that has enabled us to interconnect — in certain settings — hundreds of billions of transistors now.
And so in the early days, everything was discrete — just like everything else, everything was modular, discrete components. Yeah, transistors were all sold as single units — "yeah, I'll take three." And people had the idea of interconnecting them: we were building computers, and we recognized how hard it was to take these modular components, with the technology of the time, and integrate them. The other thing happening at the same time was some science — and actually, this is one of the cool things about the planar process, that there was science going on — where there was a recognition that embedding these electronic devices entirely inside a single-crystal silicon wafer gave you much better performance. [00:43:00] So it was the realization that you could jam these things into the top surface of a wafer. There was also surface passivation, for those familiar with the process, which was key to making the devices good once they were embedded. But once they were inside the wafer, the top surface remained flat. The technology before that was what they used to call mesa technology, where the transistors were built up on top, like the mesas in Utah or Arizona. Putting them into the wafer left the top surface flat and much easier to interconnect, using the then-developing technique of photolithography. And it went from there. So that was the key innovation: this extreme parallelization, basically, of embedding not just a single transistor but thousands and then millions and billions of transistors. And I want to also point out [00:44:00] the trajectory that set us on, as described by Moore's law — this idea that we decrease the size and increase the number at a rate that gives us Moore's law — and that potentially that's slowing down.
That's another one of the features of process innovations: in many cases they eventually run out of steam. And I think we're starting to see this with the planar process. It's had a tremendous runway, but we're getting to the point where its underlying assumptions — they're not going to go away, but we may benefit from an alternative way of building circuitry. Yeah, the effects of these processes tend, as you point out, to follow S-curves. And when you start to hit the top of that S-curve, that's when you need to think about these fundamental process innovations. I think we've been at the top of the S-curve for a long time in processing — I mean, the prediction of the [00:45:00] "end of Moore's law," and I say that in quotes, has been around for decades, and we've always been able to get around it. That's impressive; it's a testament to the scientists and engineers who work in the industry. But you can only get so small. Yeah, and there's an interesting thing here about biases, too: the planar process biased us toward miniaturization. One of the central tenets of the planar process is perfection at every step. Once you put transistors into the solid wafer, you can't pull them out very easily — really, you can't at all if they're defective. You're now in a world where every transistor, up to the tens of billions we're talking about, had better be really close to perfect. And what that drives you toward — what it incentivizes — is not changing too much about the process, and finding a trajectory that still lets you increase performance. And that trajectory was just shrinking things: don't change the materials too much, don't change the [00:46:00] processes by a large amount, just shrink stuff. And that was very synergistic, right?
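The "perfection at every step" constraint can be made concrete with a toy yield calculation (my own illustration, not from the episode; real fabs also use redundancy and defect tolerance, which this ignores): if every one of N transistors must work, and each is good independently with probability p, the whole chip works with probability p to the power N.

```python
# Hypothetical yield sketch: a chip works only if all N transistors work.
def chip_yield(p_good: float, n_transistors: int) -> float:
    """Probability an entire chip works, assuming independent defects."""
    return p_good ** n_transistors

# Even a one-in-a-billion defect rate per transistor caps yield near 1/e
# (~37%) on a billion-transistor chip, since (1 - 1/n)**n -> 1/e.
y = chip_yield(1 - 1e-9, 1_000_000_000)
print(round(y, 2))  # ~0.37
```

This is why per-device reliability has to be astronomically close to perfect at billions of transistors, and why the process was pushed toward conservative, shrink-only changes.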
That's Moore's law, and it's a tremendous success, but it did incentivize us down that pathway. It's a bias that the process innovation set up — and other innovations could set us up to go in a different direction. Yeah, the counterfactuals are fascinating. And another thing that I think is really interesting about the planar process — and it happens in other places too — is that Hoerni, who came up with it, happened to have had experience with printing, if I remember correctly. So you tend to see these situations where someone with experience in a completely different discipline just happens to be interacting with the process and says, "oh hey, perhaps this thing from this other discipline can be applied here." And I wonder — do you have ideas for better ways to get that to [00:47:00] happen? Well, I do, which is to create a specific discipline around this. So if I'm going to take a very strong position here, I would say we need a discipline of process studies, where we lead young minds — because ours are too inflexible at this point — across these different kinds of examples and allow them to see the connections between the processes in different technological domains. And maybe — although that's not a guaranteed pedagogical outcome — there will be the opportunity for them to then connect these ideas in some other manufacturing domain, or even across, for example, service domains. I do see that there is a general principle around process innovation in manufacturing — potentially founded on the schema that we've outlined — that could enable people to see these [00:48:00] connections and start to use ideas from one process discipline in another. And manufacturing-style process innovation, as we've said, appears in services too.
And it could appear in other manufacturing domains as well. So I would advocate for a discipline built around these ideas, so that we could lead people to make this kind of discovery more efficient. Mike, would you reframe that? No, I agree. The discipline Matthew's talking about — I would liken it a lot to the role mathematics plays. Mathematics is its own discipline; it's separate, but all of the engineering and sciences use it. And so this is kind of similar. And we were very careful to pick out process innovations that span the gamut. I think it's hard to argue that any of the eight we picked [00:49:00] were not really impactful, but they really span a whole variety of disciplines, showing that process innovation really is everywhere — we just don't recognize it as something as pervasive as mathematics. And I don't want to be heard as saying we're as important as mathematics — mathematics has been around a long time — but it's something akin to that. I think the one place where it's different, and would need to be adjusted somehow, is that there isn't a ton — I mean, there are some, but there isn't a whole lot — of feedback loops between math and all the other disciplines that math enables. Occasionally you'll see a mathematical problem that's been inspired by a more applied problem, whereas I imagine in some kind of process-innovation discipline you really do need to have these feedback [00:50:00] loops between the discipline and the affected disciplines, and setting up those feedback loops seems important and harder. Yeah, discipline is hard. Yes, absolutely. And I think with mathematics we may have been doing it for so long that we don't see it.
I think, if you think about astronomy, for example — astronomy uses mathematics; falling objects were one of the inspirations for a lot of mathematics. And so sometimes, I think, the problems in mathematics have become so embedded with their applications, in some sense, that we don't see that we needed to create that feedback loop. Geometry, for example, is another one. Whereas in process, I agree with you: despite our having used [00:51:00] processes since time immemorial, we haven't really set that up as a formal means of analyzing the way we do things. It's, if you like, the science of the way we do things, and that's what we need to think about and actually put out there. I'm going to argue against myself: there are tons of examples of math being inspired by applications. Look at information theory — the whole reason we have information theory is that they wanted to see how much information they could cram into a single copper wire. So I will actually rescind that. Really? Yes, I think so. And I think the other thing there is: look at how impactful it was. What is the impactful mathematics? Almost by definition, it's the sort of thing where — information theory was abstracted away from the original application, and [00:52:00] it has now come back to influence a whole range of applications beyond it. And that's the value. I think it's the same with process innovation: if we could abstract away and find the core of it as a discipline, that could then come back and influence a whole range of the ways that we do things. Yeah.
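The copper-wire question mentioned above was answered by Shannon's channel-capacity formula, C = B · log2(1 + S/N): the maximum error-free bit rate over a noisy channel of bandwidth B and signal-to-noise ratio S/N. A minimal sketch (the illustrative numbers are mine, not from the conversation):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: max error-free bits/s over an AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 3 kHz voice-grade line at 30 dB SNR (a linear SNR of 1000):
c = shannon_capacity(3000.0, 1000.0)
print(round(c))  # ~29902 bits/s
```

The abstraction is exactly the point made above: a result motivated by one copper wire now bounds every communication channel, from modems to Wi-Fi.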
And so — I do want to be respectful of both of your time — what I will do is encourage people listening to go read the paper and discover the last three fundamental process innovations. And the way I'd love to close is: beyond reading this paper and thinking about a new discipline, what are ways to get more fundamental process innovations? Well, I think we, at least in some [00:53:00] part of our innovation sequence, need to recognize that there are things that happen within the valley of death. We talk a lot about the valley of death as something to cross. First of all, the valley of death is very man-made, because we've split fundamental science from applied science — and process is an example where that split is a really bad thing. And instead of crossing it, we should look at it as somewhere we want to go into and hang out in. Yeah, right. I think this is one of the issues with the discourse: it's all framed as something bad, versus — no, it's actually where we need to be for certain innovations. Think about the Nobel Prize from this last week, for CRISPR. In my mind, that is squarely a discovery — a fundamental discovery — and it will be translated; that's the conventional view of things. But we are not doing ourselves any favors by [00:54:00] letting the scale tip too much toward the fundamental side. We should at least rebalance a little and force ourselves down into that valley — just hang out. Yeah, love it. Matthew, what do you think? Yes, I think stepping away from some of the things we take for granted, like electronics manufacturing, and considering Mike's question — what would make this a thousand x better in some dimension? — is really the way that we can make progress.
And again, your point was very well taken: sometimes when we get better at something, we're going to get worse at something else. And it could be that we'll have to accept circuitry that doesn't behave as well, or as fast, as it did previously — but in exchange we may have gained in some other dimension. So again, it's about taking the blinkers off and not saying, okay, these particular metrics [00:55:00] always have to be improving, but thinking about how, through processes, we might take some other metric and make it significantly better than it was previously. And then hang out and see what happens, as Mike said — because by doing so, we may in fact lead ourselves to improve other areas as well. And that could then lead to the kinds of scalings we saw with making steel, making paper, or making energy. That's what we really need to think about. Here are my key takeaways. Sometimes you need to go down to go back up. The interplay between processes and paradigms is absolutely fascinating, and we don't talk about it enough. And finally, we need to spend more time hanging out in the valley of death. [00:56:00]
A conversation with Professor Andrew Odlyzko about the forces that have driven the paradigm changes we’ve seen across the research world in the past several decades. Andrew is a professor at the University of Minnesota and worked at Bell Labs before that. The conversation centers on his paper “The Decline of Unfettered Research,” which was written in 1995 but feels even more timely today. Key Takeaway The decline of unfettered research is part of a complex web of causes - from incentives, to expectations, to specialization and demographic trends. The sobering consequence is that any single explanation is probably wrong and any single intervention probably won’t be able to shift the system. Links The Decline of Unfettered Research Andrew's Website A Twitter thread of my thoughts before this podcast Transcript (automated, and thus mistake-filled) [00:00:00] In this conversation, I talk to Professor Andrew Odlyzko about the forces that have driven the paradigm changes we've seen across the research world in the past several decades. Andrew is a professor at the University of Minnesota and worked at Bell Labs before that. Our conversation centers on his paper, "The Decline of Unfettered Research," which was written in 1995 but feels even more timely today. I've linked to it in the show notes, and [00:01:00] also to a Twitter thread that I wrote to get down my own thoughts. I highly recommend you check out one of them, either now or after listening to this conversation. I realize it might be a little weird to be talking about a paper you wrote 25 years ago, but when I read it, it sort of blew my mind, because all of it seemed so true today. So first: do you think the core thesis of that paper still holds up? How would you amend it if you had to write it again today? Oh, absolutely. I'm convinced that the basic thesis is correct.
And the last quarter century has provided much more evidence to support it. Basically, if I were writing it today, I would simply draw on the experience of those 25 years. Yeah, okay, cool. So I sort of wanted to [00:02:00] establish the baseline that asking questions about it is still super relevant. So, for the listeners, would you go through what you think unfettered research means? Many people have heard of basic or curiosity-driven research, but I think the distinction is actually really important. Well, yes. Basically, unfettered research is essentially curiosity-driven research — very closely related, with maybe some shades of difference. The idea is that you find the best people you can, the most promising researchers, and give them practically complete freedom: give them resources, and complete freedom to pursue the most interesting problems they see. And that is something many people still think of as the main mode of operations, and still regard [00:03:00] as the best type of research. But it has definitely been fading. Yeah. So would you make the argument — what is the most powerful argument that unfettered research is actually not the best kind of research? Well, why is it not the best kind? Again, this is not so much an issue of what's best in some global-optimization sense. My essay wasn't really addressed to that; it was about the forces that were influencing the conduct of science and technology research. And I'm not quite saying that what was happening was ideal. I said: here are the reasons, and given the society we live in and the institutions — the general framework — here is what's happening and why it's happening. Yeah.
[00:04:00] Now, in particular — yes, an argument coming out of my discussion was that this unfettered research was becoming a much smaller fraction of the total, and that this was actually quite justified. Even so, to a large extent unfettered research did dominate for a certain period of time. That era was ending: it was likely to be consigned to a few small niches, involving only a small number of people, and much more of the work was going to be oriented toward particular projects. Yeah. The thing that I really like about the term "unfettered research," and that I feel draws a distinction between it and "curiosity-driven," is that fettered versus unfettered [00:05:00] feels like it refers to external constraints on a researcher, whereas curiosity-driven versus not curiosity-driven is about motivation — curiosity being the internal motivation of a researcher. My whole framework is around incentives — what are the incentives on researchers? — and fettered versus unfettered really touches on that. Yes. Personally, I don't draw a very sharp distinction between the two; I think you can get into very fine gradations, and I'm not sure they're necessarily all that meaningful. The thing is, when we're talking about curiosity-driven or unfettered research, people are never totally acting in isolation based on their own curiosity. They always react to opportunities; they react to what they hear from other people. And very often they are striving for recognition — [00:06:00] invitations to Stockholm to receive a Nobel Prize, and so on.
That's something many people in the relevant disciplines certainly keep in mind. So there are always some constraints coming from a particular group. In that sense, I treat these terms as almost synonymous. Yeah, that makes a lot of sense. And so the upshot of "The Decline of Unfettered Research" for me was kind of mind-blowing, and it makes so much sense when you put it this way: research has become a commodity. I'm not sure how much you've been paying attention to what I'd call the stagnation literature — there's been a lot written around the idea of scientific stagnation — and I realized that at the core of it is this assumption about [00:07:00] research being a commodity. You look at these economic models and it's just: okay, we need more researchers to produce more research, as if it's this undifferentiated thing. So in your mind, what are the implications of research becoming a commodity? Right — let me maybe push back a little. I'm not sure "commodity" is quite the right term. I think we can relate it to something that has been documented and discussed very extensively in areas such as sports, or maybe music: what happens is that the field becomes very competitive. Schools crank out people, selecting them for the ability to perform at a certain level, schooling them, and then letting them go out on the stage and compete. And what you find, for [00:08:00] example in sport, is that the gap between the top athletes — say, the gold medal winner and the silver medal winner — has been narrowing. Performance has been increasing in practically all areas of sports: people jump higher, throw farther, run faster. And yet that seems to be leveling off in many cases.
People studying human physiology argue, with some quantitative models, that we're approaching the limits of what's possible for the human body — unless we go to some other planet or other environments. So you have these people, and you still have the best ones among them — Usain Bolt, the sprinter winning repeatedly, is a good example. So it's not quite [00:09:00] correct to say that hundred-meter sprinters are a commodity. There is definitely differentiation there, and there is reason to encourage them to compete, to train, and to do better and better. On the other hand, you come to a situation where using anyone but the top runners makes less and less of a difference to the performance you observe. And I think something similar is happening with research. So I think that presupposes something I'd love your take on, which is that there are natural limits to human physiology — that's pretty clear — but there's not as clearly a limit to technological ability, or to the amount we can know about how the universe works. [00:10:00] So this feels almost philosophical, but the analogy to sports would presuppose some natural limit on the amount of science and technology that we could do. Do you think that's the case? Well, yes — there definitely is a difference with general research in science: we don't have these very obvious, reasonably well-defined limits. On the other hand, what we're coming up against is the fact that these fields are still becoming more and more competitive. The sciences keep growing, and the number of subfields is growing.
The volume of information that's available is growing, which also means that any single individual can master a [00:11:00] smaller and smaller fraction of the total. So in some sense you could say that human society is becoming much more knowledgeable, while each individual is becoming less knowledgeable — knows less and less about the world — and we depend much more on the information we get from others. There's this extensive concern right now about the post-truth world and all these filter bubbles being created, and in a way it's almost inevitable, because how do you actually know anything? Surveys show that maybe 10% of people believe the earth is flat, and that all those theories and all those pictures from space are fake — creations of people with video-editing tools. And, well, most people can [00:12:00] live quite well with that mental model of the world, as long as they're not in charge of plotting rocket trajectories or airplane trajectories. Same thing with vaccinations. How do you know vaccination is good? I assume you're not an anti-vaxxer. I believe that vaccinations are pretty good. But how would you prove to me that vaccinations work? Again, there's a whole long chain of reasoning and data that has to be put together to really come to the conclusion that vaccinations work. Sometimes I ask my students whether they can prove that the earth is round. Now, you, coming from Caltech, may remember enough physics to be able to come up with a convincing argument. Most people can't. Okay — the earth is round; it's consistent with everything; sounds fine. So the result is [00:13:00] that we have large groups of people working very hard in what is in many cases a very competitive environment, and many projects require extensive collaborations.
And this has been documented in quantitative terms. In some of my presentation decks I had a slide showing the degree of collaboration among mathematicians — and similar graphs could be drawn for other disciplines; many of them moved toward more collaborative forms ahead of mathematics, which moved slower. In mathematics, around 1940 — I forget the exact numbers now — about 95% of papers were single-authored. By the year 2000, 60 years later, it was down to under 50%. Wow. And by now — I haven't [00:14:00] gotten the latest numbers — I suspect it's probably well under 40%. So what does that reflect? To a large extent, I suspect — and I think this is consistent with what people who have studied it more carefully found in other disciplines — it's the need to combine different types of expertise: no one person knowing enough to carry out the project alone. That's crazy. And so this paints, for me, a really sobering picture of a world in which, as you need to collaborate on more things, there's more specialization, so you need more people to collaborate, which by its very nature increases coordination costs. It feels like more and more friction in the system — each new endeavor just has more friction involved. [00:15:00] So is the inevitable trajectory just for things to stall out, or is there an escape hatch from this conundrum? Well, I'd say we simply have to deal with it. I don't see any kind of silver bullet; I don't see a big breakthrough. People tout AI, and I'm not downplaying the usefulness of various AI tools, but I still think they are likely to be fairly limited in the truly creative sense.
And so we'll simply have to deal with the fact that things are getting messier and require more effort. Much of the low-hanging fruit has been picked; we'll have to work harder. And there will also be some highly [00:16:00] undesirable features: people going off on tangents, creating their own alternate realities, going astray — building up elaborate alternate realities in which certain artifacts are assembled together into convincing pictures. I think we'll have to deal with that. Yeah. So another piece that's core to the thesis is this increasing sense of competition. Would it be too extreme to say that the game has changed from an absolute game to a relative game in a lot of research — where instead of trying to produce [00:17:00] the best thing, you're just trying to produce something that's better than the other person's? I'm not sure I would put it in those terms. I mean, there was always this element of competition. Simply look at the bitter disputes — Newton versus Leibniz over calculus, for example — and other cases. Sometimes they were resolved amicably, as with Darwin and Wallace on evolution. But again, people often did react to competition: Darwin only got his book into print because he heard that Wallace was coming out with the same work. That's a really good point. So I think the competitive aspect was always there, and it's actually very important for getting people to exert themselves, to do their best. What's probably much more important now than it used to be is the [00:18:00] need for collaboration — the need to assemble a group and work with groups toward some common goals.
And especially at universities, you often see now that the professor is less the investigator and more almost a thought leader or manager: someone who has the ideas and assembles the resources — gets the grants, brings in the graduate students and postdocs who execute the program. And the head of the lab gets his or her name on the publications, not necessarily because that person really was the inspiration for the original ideas. The role is very different from what it used to be, say, a hundred years ago. Yeah. Even a hundred [00:19:00] years ago you saw some of it. Edison was a very good example: a large lab working under his guidance, trying out various things — all the different materials for the light bulb filament and so on. It was clear that Edison was driving it, but lots of people were working on it. But Edison was very unusual for that period; these days, that is how research operates. Yes. And the piece you allude to in your paper is that there's more competition and what I would call less slack. I think of those as two counter-opposing forces: competition is what drives you to some [00:20:00] equilibrium, and then slack is what lets you jump out of local equilibria. And the thing that really drove this home for me was the example you give of the contrast between Xerox, having years and years to do development around their patent and build up additional patents, versus the superconductor research, where multiple groups discovered the same thing within weeks of each other.
I wonder whether that sort of phenomenon is actually playing into the stagnation piece. Like, this is probably not true in and of itself, but is it possible that the reason we don't have room-temperature superconductors is actually because nobody could build up a patent portfolio around them [00:21:00] to the point where it would be profitable for them? And so this competition is actually sort of driving out paradigm shifts? Well, it's hard to say, because here we're talking about real natural barriers. Whether room-temperature superconductors even exist is an open question; we don't know for certain. On the other hand, what you can observe is that there have been a few labs established over the last couple of decades which tried to come up with these moonshots and so on. Well, I mean, Google has this X lab, I think that's what it's called. It hasn't produced very much. Uh, Paul Allen, who with Bill Gates collaborated in creating Microsoft, he had this kind of lab in [00:22:00] Silicon Valley, uh, I forget its name right now. Again, not much has come out of it. Uh, so I think it's simply very difficult to come up with breakthrough ideas. Uh, and I mean, you know, my main area that I can talk about is mathematics itself. Uh, there have been a few really decisive new ideas, breakthroughs, in the last few decades, but I would say very few. And it's the same for areas closer to applications, like cryptography, where I used to work a lot. Uh, I would say much of what has been done over the last couple of decades has been pretty much incremental. There hasn't been all that much in the way of significant breakthroughs. Um, if you look at something like Bitcoin, which has excited the attention of many people, uh, it's a work produced almost a dozen years ago.
On the other hand, all the basic [00:23:00] technologies it uses have been known for at least 30 years. So I think it's more a case that it's really harder to achieve breakthroughs. The low-hanging fruit has been picked; only a few pieces are maybe still hanging around, and maybe occasionally somebody will find them, but not too often. Yeah. I guess I find the low-hanging-fruit explanation sort of unsatisfying, and I'm always trying to at least tease it apart. Because, you know, it's sort of like there are low-hanging fruit until you find a different tree. And I feel like the history of the 20th century is one of just repeatedly finding trees. And so the question sort of becomes less [00:24:00] "have we picked the low-hanging fruit?" and more "why aren't we finding more trees?" And, um, the argument could be that the trees themselves are fruit, and so they're low-hanging, but I just feel the need to keep poking at that. Um, and, uh, another thing that I found really compelling in your paper was this idea of expectations shifting from discrete discoveries to continuous discoveries. Um, that was pretty mind-blowing. And I think it actually has to do with this idea of finding new trees. Um, do you think that, perhaps because of the expectations of continuous improvement, we're less willing to sort of start picking fruit from new trees, to stretch the analogy to the limit? Well, so again, from looking at [00:25:00] everything, I'm kind of inclined to the view that there simply are not that many trees. It is occasional. Things are harder. Um, you just look at many disciplines. What can you do? Even when people have brilliant new ideas?
Um, they often don't go very far, because they have to be integrated with other things and so on. So I've encountered various areas, like networking, cryptography and things like that, where people come up with really great inventions, but they are not implementable for one reason or another. So it's hard for me to see the breakthroughs. Uh, again, something might happen. Uh, look at something like fusion: lots of resources have been devoted to trying to come up with a practical way of doing controlled fusion and getting cheap energy out of it. It hasn't happened. Of course, one could say, well, that's because [00:26:00] maybe all of these governments poured all their resources into a blind alley. On the other hand, there have been a lot of clever people trying to think of other ways of doing things that would bypass that barrier, and none of them have worked out. For a while there was great excitement about cold fusion, but that kind of flamed out very quickly. Some people took cold fusion seriously for a while; um, now I think it's gone. Uh, is there some other way of doing it? Again, nobody has found one, even though there definitely are incentives to do it, and these are people who have the basic technical knowledge, scientific knowledge, to be able to address that question. Yeah. I want to go back to something that you just said, um, and almost sort of argue against you with what I see as [00:27:00] something that you wrote. Uh, so you mentioned that people come up with these brilliant technical solutions, but they're not implementable for one reason or another. And based on my reading of your paper...
Um, one could make an argument that the reason they don't get implemented is not because of some fundamental reason in the world, but because the people who would implement them, the people who would pour the resources into making them implementable and then implementing them, instead expect that the systems they're using right now and the paradigms that they're using will just get better at a continuous rate. And so there's sort of no point in making the effort to switch. Um, would that be a fair reading of the paper? Oh, very much so. Well, not solely in the paper, but in general evidence. I mean, we see it very much in the [00:28:00] computer arena and so on. Uh, we've had now kind of a domination by a few kinds of operating systems. Um, we have the browser taking over on the internet. There's an interesting case: one of my early, kind of non-technical papers was about electronic publishing, written about 1994, and in it I wrote about the tools for access to information, such as the web. And I said, well, we have all these great tools, like Gopher and WAIS, and there are better ones coming along, like this thing called the browser; it sounds like it will be better than previous generations. Well, in some sense I was right, the browser was better, but that was the end. The browser has just evolved to absorb, or to incorporate, all these other things that people wanted to do. And again, many people have commented how, if you were [00:29:00] designing the web from the ground up, you would make various decisions in different ways, but we're pretty much tied to it, and you can only do incremental change. Yeah. So, I mean, for me at least, that leads to another really uncomfortable conclusion: that we're almost suffering the effect of our own success. Right?
Like, because we've had such continuous improvement, um, the discrete improvements get crowded out. Um, yeah. And I feel like that's sort of a different effect from the increased friction from collaboration. Right? And so I guess, do you have... It's not too much different. It has to do with the greater complexity. So, our brains are not growing, okay? [00:30:00] And we're not acquiring that much more knowledge; we're just acquiring ways to incorporate or access greater amounts of knowledge. And so we're devoting more and more of our energy to managing the complexity, trying to figure out how these things interact. It's much less a matter of trying to find some very simple principles: F equals ma. Okay, brilliant, Newton. Uh, well, people try to come up with such simple concepts that would explain a lot of what goes on in the world, and they're failing. That's because of all this complexity. Yeah, no, that's actually a really good point. And that increases the switching costs for new systems. Um, if everything is super interconnected, you can't just sort of [00:31:00] take off one module and stick on another module. Although it might suggest that, in an ideal world, we would pay more attention to making things modular, um, right? And that is at least in some way a way to abstract away complexity. Right? But we do a bad job of that. Yep, we're doing a very, very poor job of it, and yes, uh, it's one of these questions of trade-offs: how much effort do you put into making things modular? And one of the problems is that people have always had this mental image of software as something that can be modified very quickly.
And so therefore there is really no need to worry about interoperability and so on; we'll fix it when we need to. And then generally we don't. Uh, a friend of mine, [00:32:00] several decades ago, had this great saying, this was in the context of telecom communication switches. He said: what's the difference between hardware and software? Hardware you can change. That's because the engineer developing hardware sort of knows that these things are going to be out there and people will have to live with them, and maybe they'll have to be replaced and so on. So they pay much more attention to modularity. You have all these standards which define what kind of connectors you have, what the voltages and phases of the electricity are, and everything else. With software, the attitude is, well, okay, it can all be modified. So you end up with this mess of spaghetti code, okay, that you cannot change. And so you have all of these crucial [00:33:00] systems powering our economy that are all written in COBOL. I've heard. I saw a news article recently that different governments were desperately trying to hire COBOL engineers and paying massive amounts of money, because they can't switch. Um, although you're also seeing a sort of, what I would call, decreased modularity in hardware systems as well. You know, the car engines that you can't service yourself, or the batteries that you can't replace. Um, and obviously there are good reasons for doing that. Um, well, of course, good or maybe not so good. So again, a lot of it has to do with intentional design lock-in. So a manufacturer making it difficult for you to do things: that's largely in order to control the user, and it gets into the economic incentives of the whole system. [00:34:00] Yeah. And, um, I'd actually love to rewind.
And, um, what was the context of you writing this paper in the first place? Because it feels so contemporary that it's shocking that you wrote it more than two decades ago. Yes. Well, that's a very interesting question and very easy to explain. It was written in the context of working at Bell Labs, and Bell Labs was one of these crown jewels, or whatever you want to call it. It was a really wonderful place. Uh, I joined it right after graduating from graduate school. I had spent a summer there before that, but after getting my PhD I joined and worked there for several decades. Most of my career so far has been spent at Bell Labs [00:35:00] and AT&T Labs afterwards. And when I joined, uh, Bell Labs was still dominated by the ideal of unfettered research. It wasn't completely unfettered; actually, that was one of the big strengths of it. Namely, there was a certain kind of pressure to do things, and also, being part of the Bell System, we had contact with the real world, in some sense closer than, say, academics did. But otherwise there was still this ideal of almost unfettered research, with lots of freedom. And that was changing while I was there, over the decade before I wrote the essay on the decline of unfettered research. And so basically I was looking around the whole scene, not just at Bell Labs but at the world of science and technology and so on, and was trying to explain to my colleagues at Bell Labs [00:36:00] why it was that we were experiencing this pressure, which many were very concerned about, you know, fearful, resentful, and other kinds of things. So that's really the context of it. It was really about these traditional large industrial research labs, like Bell Labs, IBM's Watson Research Center, and so on. Uh, and in your essay?
I already alluded to the fact that the same factors would start influencing other types of research, especially academic research and so on, which has indeed been happening. But at that stage I was seeing it at the front lines, as the wave was coming in, and reacting to it. Yeah, that's fascinating. And as more, I guess, fetters were put on researchers at Bell Labs, did [00:37:00] you have any conversations with, um, the people who were starting to put those pressures on the research? Well, not really deep conversations, no. But of course we had general conversations about what was being done, uh, you know, the reorganizations, how the reward structure was changing, and all these other kinds of things. Yeah. How did the reward structure change, if I may ask? Much more attention was being paid to work being done for the company, or interactions that were more closely related to what the company was doing, and much less to pure scientific accomplishments, which might be recognized on the outside. As one example: one of my colleagues, I wrote some papers with him, he was also in my department after I was promoted to department head, for [00:38:00] many years a very distinguished researcher. He was one of the few people who was a member of the National Academy of Sciences, the National Academy of Engineering, and what was then called the Institute of Medicine, now the National Academy of Medicine. Very august bodies. This was all for work he did on the foundations of computerized axial tomography, the CAT scan. He was basically working on some mathematical problems of reconstruction from, uh, x-ray measurements and so on. So he did a lot of this work, and it had practically no, uh...
No connection to the Bell System, you might think. Well, actually, there was a little bit: we tried to develop some of these techniques for electrical tomography, for cables, anomaly [00:39:00] detection, other kinds of things. But basically this was work he did that had a great influence. It improved the health of many people and so on, and it was widely recognized on the outside. But, uh, you know, it was a bit of something that belonged to the earlier era of Bell Labs, not to the final stages of it. Yeah. And so I guess the sort of implication of that, for me, is that he then was not really that rewarded? Uh, basically he was rewarded, because this was the earlier era, you know; he kind of retired when things were really changing and so on. And so, to make a long story short, he did very well. He was widely recognized. He contributed to many other things too, but this computerized tomography work was a major undertaking for him, which took a lot of time. [00:40:00] Yeah. And, um, do you have a sense of how long it took him to do that? Like, how much time did he spend just sort of, uh, seemingly producing nothing? Well, no, it wasn't that he was producing nothing. These things were getting published in general scientific journals, but they were just not relevant to what the Bell System was doing. He did it for a number of years, I'd say five years, something like that. And so, sort of counterfactually, uh, is there a reason why, if he were working today, he would not be able to do that in, like, a university setting? Uh, well, again, it depends. Um, he might be able to do it, except that you need to make sure that you get some funding agency recognizing that this is a [00:41:00] promising area. Uh, I don't know whether, once he started doing it, NSF or other agencies were knowledgeable enough about it and would regard it as promising enough. So, not impossible.
There were some people who had proposed doing this before; there was the issue of how you actually get useful reconstructions from the x-ray imaging that you obtain. And I think that's part of this big change. Now you have many more people looking around trying to find something to do, and, uh, again, if you're persuasive enough and you can convince enough people, you can persuade a private funder, or maybe you can persuade a National Science Foundation director to set up a program to fund a particular type of [00:42:00] research. Yeah. So, I mean, I'm not saying that this new style of research is incapable of producing breakthrough results. But it does seem like there's a different set of constraints on it, right? Like, if you have someone who is, um, sort of antisocial and, uh, bad at sales, basically, the chances of them being able to create this breakthrough are probably lower. That's right. So then they succeed only if they hook up with somebody who's more of a promoter. Okay? And there were examples of that: some people who managed to assemble a group and, kind of, be very conducive to collaboration, facilitate interactions among the members, and direct them in the right direction. And even if those are [00:43:00] very nerdy types and so on, they might still be very effective at coming up with useful products or services. Yeah. And, um, another area of the paper that I felt was very prescient was your focus on the Japanese economy, where you pointed out that the Japanese business structure should be able to enable long-term work, and yet it doesn't. Um, and so...
Uh, that's sort of a counterexample to the argument that, like, oh, we just don't have long enough timescales, and, uh, stockholder pressure is forcing people to work on shorter timescales. Um, do you feel like that is still true, in terms of Japanese [00:44:00] output not having created the breakthroughs that we would expect? No, it has not. Again, part of it may have to do with cultural factors, or how their corporations are structured, and so on. They certainly are excellent; they are still pretty much at the top of technology worldwide in different areas. On the other hand, you look at the records of some people. There was one particular guy, I forget his name now, I think it was the blue laser. That was a breakthrough invention and so on, but he had a very hard time; he was not really properly rewarded by his company, and it was almost a little bit of a skunkworks effort, and he eventually took things to the outside. Indeed. I want to be respectful of your time, so the closing question I ask everybody is just: what should people be thinking about that they're not thinking about enough right now? [00:45:00] What people should be thinking about... Very, very hard question. I don't think I have a simple answer; I'll give a complex answer. Well, my particular concentration right now is groupthink. Uh, so, again, I won't say this will be central for everybody and so on, but I think it's a very important question: the degree to which human society really depends on groupthink, and to what extent it actually leads us astray, where people disregard very obvious evidence in order to adhere to the preferred worldview. And I'm studying financial manias, bubbles, precisely from that standpoint: how is it that people manage to overlook all those very obvious signs? Yeah. Yeah.
[00:46:00] One thing, actually, I'll just ask: what's your take on the argument that there are some, especially infrastructure, things that never would have happened without bubbles? So there's this argument that we actually would never have had the railroad infrastructure, or the telecommunications infrastructure, without bubbles. Yes. I don't think it's true that we would never have had them. They probably would have come more slowly. Uh, I mean, typically these basic technologies had been developed before. What led to the bubbles was the appearance of the technology in a form that could be deployed and that could make money, and that gives rise to excessive optimism and, uh, you know, investment manias. But I just can't think of anything that's due [00:47:00] just to the bubbles by themselves. So you don't think that there's some sort of activation... like, if you think about it like a chemical reaction, there's an activation energy that's actually higher than the sustaining energy of the reaction, and that's provided by a bubble? Yes. Well, okay, so I think there's some of it. There are various ways of thinking about bubbles: they lead to faster deployment of some technologies than would be true otherwise; they open up people's minds to some extent; and to some extent they may also drain off some of the irrational energies which might otherwise be deployed in more destructive ways. Some people might regard this as far-fetched, but there are some ways you could think of bubbles as being conducive to human progress. My key takeaway is that the decline of unfettered research is part of a complex web of causes, from incentives to [00:48:00] expectations, to specialization and demographic trends. The upshot is that any single explanation is probably wrong.
And any single intervention probably won't be able to shift the system.
A conversation with Eleonora Vella about getting the right people in the room, finding research on the cusp of commercializability, and generally how TandemLaunch’s unique system works. Eleonora is a program director at TandemLaunch. TandemLaunch is a startup foundry that builds companies from scratch around university research. This is not an easy task - check out Episode 15 with Errol Arkilic, Episode 19 with Mark Hammond, or Episode 21 with Eli Velazquez if you need convincing. Given the challenges, TandemLaunch’s successes suggest there’s a lot to learn from their processes. Key Takeaways - An underappreciated reason that commercialization is tricky is that it involves a transfer from one skillset to another - The timescales of business and patents seem to have become decoupled Links TandemLaunch Homepage
A conversation with Dr. Anton Howes about the Royal Society of Arts, cultural factors that drive innovation, and many aspects of historical innovation. Anton is a historian of innovation whose work is expansive, but focuses especially on England in the 18th and 19th centuries as a hotbed of technological creativity. He recently released an excellent book that details the history of the Royal Society of Arts, called “Arts and Minds: How the Royal Society of Arts Changed a Nation,” and he publishes a newsletter at Age of Invention. Notes Anton on Twitter: @AntonHowes Arts and Minds: How the Royal Society of Arts Changed a Nation - Anton's Book Age of Invention - Anton's Newsletter The referenced post about Dungeons and Dragons We don't dig too much into the content of the book because Anton talked about it on other podcasts. He gives a good overview in this one. How much did a steam engine cost in today's dollars? These sources suggest it was roughly $100k, but as Anton noted - it's complicated. Transcript (Rough+Experimental) Ben: The place that I'd love to start is: the Society of Arts did something that I feel like people don't discuss very much, which is focusing on inventions that have positive externalities. So you talk a lot about how they would promote inventions that maybe people couldn't make a lot of money off of, that they weren't going to patent. And it's one of the few examples I've seen in history of non-government forces really promoting inventions with positive externalities. And so I was wondering, if you see that, [00:02:00] how could we get more of that today? And were there other things doing similar work at the time, and how has that theme moved forward in time? Anton: Yeah, that's a really interesting question. I'm trying, off the top of my head, to think of any examples of other non-governmental ones. I suspect there's quite a few from that period, though, just for the simple reason that...
I mean, the context in which the Society of Arts emerges, right, is at a time when you have a very capable state, but a state that doesn't do very much. Right? So one of the things you see throughout is actually the society kind of creating the sorts of institutions that states now take upon themselves all the time, promoting positive externalities, as you put it, which is a very good way of putting it. You know, trying to identify inventions that the market itself wouldn't ordinarily provide. Later on, in the mid-19th century, trying to prod the state into providing things [00:03:00] like public examinations, or, you know, providing those things privately before you have a state education system. But I think one of the main reasons for that is that you don't really have that kind of role being taken up by the central state. Right? I mean, the other thing to bear in mind here, of course, is that a lot of governance actually happens at the local level. And so when we talk about the government, we really mean the central government, but actually a lot of stuff is happening, you know, amongst the towns and cities with their chartered privileges, the various boroughs with their own, often quite bizarre, privileges and structures: local authorities, for want of a better word, although they take all sorts of different forms. And I think you do see quite a lot of it; it's just that it wasn't all done by a single organization at the time. So I think that's the main underlying context there. Ben: Yeah. And so I guess, sort of riffing on that, one thing that I was wondering as I read through the book was: why don't we see [00:04:00] more of that sort of non-central-state, positive-externality-promoting work done now? Like, you think of philanthropy, and it doesn't quite have that same flavor anymore.
And I wonder, my bias would be to think that there's almost a crowding out by the centralized state now, that people sort of expect that. And I was wondering, how do you think of it? Perhaps there's some crowding out? Anton: I mean, the interesting thing, right, is that Britain is actually kind of interesting in that it has quite a lot of these bottom-up institutions, whereas across the rest of Europe you actually see quite a few top-down ones. Right? So I discuss in the book that there is actually not one but two French societies of arts. There's even a third one, which still exists, which is a much later one from, I think, the late [00:05:00] 18th, early 19th centuries, part of the kind of catch-up-with-Britain project that Napoleon and others start pursuing. But yeah, you have a lot of these princely institutions, ones that depend on particular figures to be their patrons, to promote them, to, you know, provide a meeting space for them, to provide them with funds, to fund anyone who's doing fellowship of that kind. Whereas in Britain, you seem to get basically the stuff that doesn't get funded by particular patrons, even when they're promised that funding, like the Royal Society, which always hoped it would get some kind of government funds, from Charles II or something, and never does. It obviously gets support in that, you know, he gives them a royal mace that they can have on the table in front of them when they have their discussions, but that's about it. And the Society of Arts, I guess, has to be set up because you have that lack [00:06:00] of state support. I mean, what's interesting is, I guess, in certain contexts you do get state funding of these sorts of institutions.
The Dublin Society becomes the Royal Dublin Society, and that one actually does get state funding as part of the kind of compact to try and get Ireland to catch up with Britain in terms of its economy. Same with Scotland: the Scottish Society of Improvers does eventually get, I guess, morphed into what becomes the Scottish Board of Trustees for Fisheries and Manufactures, probably not the full title. So organizations like that, I guess, become state ones. I mean, the fact that they're quite uncommon, though, is interesting, and I wonder if Britain was just a bit better sometimes at organizing these things and keeping them going. The Dublin Society is [00:07:00] an outlier. So there's the Society of Arts; you see lots of these patriotic societies set up to emulate the Society of Arts across Europe, but very, very few of them persisted. I think by the 1850s they'd pretty much refounded a bunch of them as kind of discussion clubs, and since then, I think the only real one to keep going was the one on Malta, for some bizarre reason. I've kind of forgotten the original question now. Ben: So the original question was just around why there aren't more non-governmental organizations sort of devoted to promoting these positive externalities. That's sort of the big question I have. Anton: So I guess my answer there is partially that it seems as though, if you did have crowding out, it was happening just as much then, or at least had that potential. Right? Because you have these nobles who could be the patrons; you have the King, who could be the patron. Although potentially you're right in that, because British monarchs weren't giving their patronage, you end up with these, actually, ironically, more robust institutions, because they're [00:08:00] much more broad-based and bottom-up, being formed and then surviving.
So perhaps it's the case that, because we just expect the government to do it, and the government's extremely rich and actually does give lots of money for lots of different things, we just say, well, it's easier to just persuade a politician to get some money set aside for a new agency, in much the same way that, you know, today Britain is trying to set up an ARPA, I think just announced a few weeks ago. Because once the idea gets enough currency, as long as you can persuade the powers that be, maybe it's actually quite straightforward to do it. Ben: The reason I ask is actually based on something that Joel Mokyr has pointed out, which is how the federalization of innovation makes it much more robust. I'm sure you've seen the [00:09:00] contrast between, like, the Chinese state, and then how in Europe someone like Copernicus could go from patron to patron until he found someone who would actually support him. And so I always wonder about having multiple sources of innovation and how to have that happen. So that's something that I'm always thinking about. Anton: I guess you could say that that's present, right, on the European level. Certainly the big question then is why you don't get it happening in other fractured states. I think a very neglected part of that thesis, right, is that yes, fractured states are one thing, but the other half of the puzzle there is also having a kind of common culture. Yeah, even if that's [00:10:00] completely kind of invented, right, with Swedes, who presumably are descended from whoever, calling themselves, you know, Albertus Magnus or whatever: people who are certainly not Latin in any kind of ethnic sense claiming a Greek or Latin heritage for themselves. I guess Brits as well, right?
Many of whom are probably Germanic Anglo-Saxons, or pre-Roman Celts or something. You know, John Dee is actually referring to himself as the artist, you know. But that common language, having a lingua franca, first of Latin, then of French, and then I guess more recently of English, and having that common set of assumptions, you know, the Republic of Letters: it wasn't just about the fact that you could, as a stopgap or safety valve, move somewhere that could be a bit more promising. I think it also very much requires that extra step. Whether or not you have had it in other places is debatable, right? I think Mokyr mentions, you know, yes, Korea and Japan are next to China, but they don't [00:11:00] quite have the common culture. So even though some Chinese intellectuals will move to Japan, there they get kind of forgotten, neglected.

It's a really good point, and I appreciate you bringing up that neglected part. And actually, this is a great segue into another thing I wanted to ask you about. So, we're in the middle of the coronavirus, and you've done a lot of work on sort of the virality of innovation itself, and ideas like that. And so it feels like there's a contrast between the Industrial Revolution, where it seemed like people really would see someone innovating on something and then take it on themselves to start doing something similar, and today, where you see something like Elon Musk doing something awesome, but then you don't see that many people replicating that. [00:12:00] Do you have a sense of what's different, or whether I'm basing that on some false assumptions? Does that make sense? Or just generally: how could we have more of that innovation virality?

I mean, I think a lot of people probably are inspired by people like Musk.
The way in which they're inspired, I guess, is debatable. I do think it's important to have invention figureheads, if you like: people who you can aspire to copy when it comes to improvement, when it comes to tinkering, when it comes to invention. I guess one of the problems with a figure like Musk is that he seems unreachable or unobtainable, right? There's a kind of level of connectedness and wealth that seems almost like a starting point before you can even get started on the sorts of projects that [00:13:00] he does or involves himself with. And I think that's potentially harmful. It's an idea I keep coming back to, actually: the myth of the genius inventor is, on the one hand, good, because people aspire to be like them. But on the other hand, it can be quite damaging if it seems as though you have to be born with it, or be lucky enough to just be a genius. And that, I think, is very problematic, because that's not at all what we see, I think, in the 18th century. And it's certainly not what we see in the 19th century, which is this idea, articulated again and again, that you could be anybody, in any station of life, right? The Samuel Smiles Self-Help mantra is: you can be dirt poor, a miner or something in the Northeast, someone like George Stephenson, and yet, through the sheer force of [00:14:00] self-education and adopting that improving mindset, you can do great things. And so one of the reasons I quite like improvement as an idea, versus something broader like innovation or invention, is that it has that sense of marginality, that sense of tinkering, that sense of just doing a little bit to make things a bit better, which can often have very outsized effects. So a problem with a figurehead like Musk, I think, is that it's like: oh my God, where do you even start?
Whereas I think if you can construct a narrative, false or not (and I think that's actually relevant here), where it's simply through a bit of hard work, just tinkering around the edges and then keeping on optimizing until you get something really great, that's much more accessible. And I think it also [00:15:00] happens to be true, right? I think it happens to be true that occasionally certain bundles of improvements with these huge outsize effects can make people extremely rich, extremely famous, and then it kind of spirals from there for certain people. But I think focusing on those initial stories is one of the reasons why the Victorian narratives ended up being so effective, and perhaps even had an actual impact on inspiring more people to go and do that sort of thing.

That's actually something I've constantly been thinking about, which is that the sorts of things that can be tinkered with and improved now feel different than the sorts of things that could be tinkered with and improved in the 19th century. Right? Back then you could actually tinker with what was the cutting-edge technology: you could tinker with [00:16:00] railroad brakes, or you could tinker with sailing apparatus. But it's now harder; you can't really go tinker with a fusion reactor in the same way. Do you think there's something to that contrast?

I think perhaps there is something in that. In the mid-19th century, you've got this focus, I guess, on
trying to make some of these instruments more accessible. The Society of Arts gets involved with things like having cheap microscopes that you can send out to working men's colleges, mechanics' institutes, all over the country, so that people can then use these things and make new discoveries, or at least know how they work. You know, the closest thing we have to that now, I guess, is something like the Raspberry Pi: these very simple things that you can start tinkering away [00:17:00] with. And I guess, in certain respects, if you want as much as possible to make invention (not even necessarily knowledge, but invention) more accessible, you need the materials to become more and more accessible. Having said that, if you think of something like the Raspberry Pi, that is a very complicated piece of machinery that is now available to schoolkids. That would be like, in the 18th century, taking a watch or something, something extremely complicated, and being like: yeah, have a go, take this apart and do what you will. These things have certainly come down in price over time, but they're still not cheap to tinker with. I mean, you mentioned shipping: doing any kind of tinkering with a ship is actually extremely expensive. In the 18th century, that's very much on par with trying to tinker with a jet fighter today, in terms of the relative cost of it.

So, let me push back against that a little bit. [00:18:00] I've never actually built a ship, but it seems like it's a little bit more modular, right? Like, you could tinker with the steering wheel of the ship without necessarily affecting the whole, whereas
it's not really possible with the jet fighter; it's so integrated that I'm not sure how much you could tinker with it. Maybe the instrument panel, but that's it. Or, you could tinker with a sail design, but you can't really tinker with the engine of a jet fighter.

Yeah, interesting. I mean, I guess something like the steam engine is kind of similar there, where most of the time, most of the improvements you make probably involve redesigning the whole. There are a few obvious exceptions to that, but something like the separate condenser [00:19:00] does require actually changing the way it works. The same with marine engines, the kind of much lighter, smaller engines that you can use on boats, because they're trying to make these things as small and as light as possible; and the same with high-pressure engines. I guess, yeah, those do require a big upfront cost. And yet what's astonishing, I guess, is that you still have a lot of people, from all sorts of backgrounds, somehow managing to make their improvements at model scale. Perhaps not at full scale, but then using a model to show the principles, and then getting it built at a much larger scale.

Actually, I'm not sure if you know this off the top of your head, but do you have a sense of how much a steam engine cost in the 19th century, in terms of today's money?

Not off the top of my head.

I'd just be interested in [00:20:00] even the order of magnitude, right? Would it be like 10,000 pounds, or a hundred thousand, or a million?

I mean, it depends how you measure these things a lot of the time as well. If I had the figure to hand it'd be a bit easier.

Yeah, we can link to things later. I'll look it up and stick it in the links. Yeah.
But there are different ways of measuring it as well, right? So the real cost alone doesn't actually tell you very much, because the basket of goods changes so dramatically over time. The labor cost maybe tells you a bit, but then it's relative to the average wage, which is the laborer's wage very often; and if you were middle class in the 18th century, you were actually pretty damn rich, if you were upper class, you were unimaginably wealthy, and if you were not, then you were very, very poor. The levels of inequality at the time were unfathomable today, I think. Even when we talk about inequality increasing, it's really, in comparison, not that bad; [00:21:00] people forget that. It's difficult to appreciate, I think, how things change qualitatively as well as quantitatively. But then you've also got measures like: what is the cost of it relative to the size of the economy? That can also be an interesting way of looking at it. And then you've got different ways of comparing those measures. So it's very difficult to compare money over time. Certainly these are expensive machines. Making even a model is extremely expensive, and requires quite a lot of careful work. But I wonder how much of that at-scale tinkering happens now. It's possible that, in the process of making machinery with interchangeable parts and making it (it's not really custom built, but) as integrated, as you say, as possible, we've made it actually harder to make changes. Perhaps we should be putting more in the way of tweakability into our [00:22:00] designs.

Yeah, I mean, that's a huge thing. You see that with, you know: you can't take the battery out of most Mac laptops anymore; most cars, you can't tinker with the engine anymore.
Because you do get sort of efficiency returns by making things less modular. So I definitely agree with you, and I really appreciate you bringing in the nuance of comparing prices now to prices in the past.

Something else I wanted to ask is this: I feel like you're one of the real historians who engages the most with sort of the technology world. What do you think technology people get wrong when they're thinking historically? What sort of cognitive [00:23:00] errors do you see people making that just make you want to tear your hair out?

What an interesting question. This is where I offend people, I think.

I think you've got to be okay with that, as long as you really believe it.

Hmm, that's an interesting one. I mean, certainly you occasionally see a sort of oversimplification of certain trends. But I don't know if that's common to technology people particularly, or if that's just a general human thing, which you probably see quite a lot. I'd have to think about that one.

Yeah, we can circle back on it. I guess it's just my bias that I think historical thinking is under-[00:24:00]done. Lots of people talk about history, but they don't approach it like historians. And so I would love to inject a little bit more of the way that you think into the world.

So the general thing, I guess, would be that occasionally I'll see the kind of historical work where you effectively see people reading the Wikipedia page and coming up with this very straightforward, almost linear narrative: this invention led to this invention, which led to this invention, or this understanding led to this invention.
And I think what's often missing there is the extent to which a lot of it is just tinkering, and that there are so many more steps along the way that go into this, and dead ends, and ways in which things either failed from a scientific point of view or a technical point of view, [00:25:00] or there's just a lack of understanding at the time, or they fail from a business point of view. I think dead ends happen very easily in the history of technology, and there are a lot of them, and they're probably somewhat unexplored. But on the converse, the other thing I notice a lot is that people often have a bias, I think, towards very technical explanations. A good example of this: I wrote this Substack newsletter blog post about the invention of Dungeons and Dragons.

Yes, I read that one.

It was probably my most popular one so far, even though, bizarrely, it's the one I spent the least time writing. The overall argument, for listeners who may not be aware of it, or probably won't be aware of it, was that you have a lot of inventions that are "ideas behind their time," [00:26:00] which is the phrase the economist Alex Tabarrok uses. I quite like it. Essentially, very low-hanging fruit: things that could have been done very, very early, and for some reason just weren't. And I think the reason is just that very few people in the past tinkered. And even fewer, perhaps, of those who did tinker ever made things public.
So sometimes you get things invented, and people actually fail to reveal them. To "discover," by the way, is to dis-cover, to uncover: it's not just that you found the thing, but that you actually bothered to tell someone about it, and it's through the transmission of that knowledge that the technology of a society as a whole advances. So the idea is that you have a lot of these ideas, or inventions, that could have been done at any point in the past. And my main example, the one that I discussed, was Dungeons and Dragons, right? For those who haven't played: you need [00:27:00] nothing except the people, right? You just basically tell a story, and then, I guess, you need dice. But I actually noticed the other day that they had twenty-sided dice in Egypt, something-thousand BC, whatever; they've found them, very intricately inscribed. So you've got all of the raw materials, and then all you do is have the structured play. And the pushback from this was overwhelmingly: no, but there must be other factors, right? There has to be some kind of constraint. And this is the economist way of thinking (they're trained to), and I think a lot of people in the technology sector think like this as well: that there must be some kind of constraint that needed to be overcome. So a lot of people were saying, well, you did have some things like Kriegsspiel, this Prussian army game, which was kind of similar, going back to the 19th century. There were potentially a few others; I think the Brontë sisters may have come up with a similar form of structured play. [00:28:00] So there was the questioning from that level. But then the other one was what you supposedly needed:
I don't know, the American suburb, or the invention of childhood, so that kids would have the time. And yes, I get that those things may have contributed towards the specific form that D&D took. But it still could have been invented earlier, right? These are weak constraints. And I think a lot of people try very hard to find hard constraints. The same with the famous example of the suitcase with wheels: people were just like, well, first of all, you need good enough floors in the airport; you need a lot of people going to the airport, on international flights, because otherwise what are you going to use this thing for; you need good enough roads for the wheels to work; you need good enough rubber; you probably need ball bearings or something-or-other for this to be technically possible. But the reality is there are absolutely loads of inventions that just didn't require that. [00:29:00] Maybe that's just a bad example, but there are actually loads and loads of other ones as well. Another one I mentioned in that post, which not many people picked up on, was semaphore systems: signaling between ships, or from ship to shore. You need a flag. I mean, a lot of the early ones, when they invent the one that kind of becomes modern semaphore, people are literally just doing it with a white handkerchief, wrapping it around their arms. The homograph, by Lieutenant James Spratt, is the one where they just wrap it around their arms. It almost has a picture like the Vitruvian Man, with the arms in different positions all at once, holding these handkerchiefs, these very long white cloths, or wrapping them around their arms.
The only earlier example I can really think of is the warning system they used in Elizabethan times for when someone was invading England, which [00:30:00] is a bit like the lighting of the beacons in The Lord of the Rings, where they just set up a fire: "it's an attack." There's no real signaling going on there. And another one I noticed just the other day, from the early 17th century, was some kind of signaling system used when fishing off the coast of Cornwall, but it doesn't actually say how intricate that system was. So these are inventions, though, where, given one probably did exist in Cornwall in the 17th century, why isn't it used by the Royal Navy until the late 18th, early 19th century? Or even the kind of physical infrastructure that you see in France beforehand: they have these towers with signaling systems, where they look a bit like windmills, except they don't turn around; they just have these shutters, and different arms of the shutters go up and down for different letters. Why do they only set that up in the seventies and eighties? This would have been useful for the previous 200, 300, 500, a thousand years.

Exactly. Like the Greeks: why didn't the Greeks [00:31:00] signal between ships with something like that?

And when people say, oh, it was invented earlier, well, then the question is, why wasn't it more widely adopted, right? Invention does happen all the time; you do get things reinvented all the time. But there are actually very few hard constraints on those inventions. I think that's just as true today. I mean, one of the really interesting things about a lot of people in today's technology sphere, the industry, and I guess the kind of intellectual sphere around it,
is that if you look at how a lot of them actually make their money, it is often from exploiting extremely simple things that could have been done quite a bit earlier, or which were tried but failed for whatever unrelated reason, or the conditions weren't quite right, or they were just a bit unlucky.

Yeah. Man, okay, there are a couple of places I'd love to go from this. One thing I really want to get your take on, and I think you're [00:32:00] really touching on it here, is that there are two really big schools of thought around history, right? You have the great-man people, and then you have sort of the evolutionary view. I was looking into this, and there's no single anti-great-man theory, but it's sort of: do things come about because singular people really push them through, or is it much more of an "it would have happened anyway" process? I completely realize that it's not a binary thing, but I'd love to hear your mental model of those two poles, and how things actually work.

I think you probably need a bit of both, right? So in a lot of my own work, I guess I'm [00:33:00] methodologically individualist: I like looking at what it is that individuals did and said, and then, from what they did and said, trying to work out what they also thought, or what motivated them, which isn't necessarily the same thing, but you can get at it a bit. At the same time, I think it's worth taking stock of the kinds of forces that are pulling the strings, so to speak, of those individuals, and maybe affecting all of them at once. So I think you need a bit of both. You have to be aware of the overall macro-level arguments.
Was it just that the prices were right in general, which is such a broad sweep of coordination of potentially millions of people, resulting in this single figure, this kind of spontaneously generated, emergent thing? But at the same time, you do need to be aware that people, I think, do have agency. Yes, their context [00:34:00] matters as to how they exercise their agency. But I think one of the things I've learned is: great-man theory, great-person theory, may or may not be quite right, but I think it's bad to throw the baby out with the bathwater and say, well, in the kind of Marxist reading of things, we're just at the mercy of these supranational, global forces over which we have no say whatsoever. The reality is, and I think the Industrial Revolution is a great example of this, you have this broad acceleration, with some of these inventions having global-scale effects on the rest of the world. Things like the steamboat, okay: it's a collective endeavor that leads to the point where you have steamboats. But once steamboats effectively shrink the world, that completely changes the game when it comes to trade patterns, right? Suddenly the whole world can be globally integrated; you can see price convergence across the entire [00:35:00] globe. You see this growing distinction, as people put it in the forties and fifties, between the periphery and the core: an industrialized core of nations sucking in raw materials from the rest of the world, because those raw materials were profitable, and those countries start specializing in those things alone, and perhaps they get deindustrialized, or whatever. Those forces are still ultimately caused by the actions of a few individuals.
So I guess the way to think of it is that we should take individual actions seriously in their context, and not necessarily think of individuals as heroic figures changing the course of the river; but they can definitely change the rate of the flow, the direction in which it flows. They can eat away at the banks a bit more or a bit less. I think there's room for change [00:36:00] there, especially when it comes to network effects, and that very much relies on individual initiative. Right? I think we take for granted that, okay, take a place like Vienna in the early 19th century: it's like, yeah, there's something magic in the air or in the water and people come together. No: you require individuals to be these kinds of social butterflies and bring together particular groups, and through those interactions almost create new ideologies, potentially, right? Where the convergence of different ideas and interests leads to a sort of synthesis. You know, the Royal Society in England in the 1660s is often cited as being a kind of outgrowth of the circle around Samuel Hartlib. He draws together all of these different people, and they become, essentially, an "invisible college." Even though he's not really that involved himself in what then happens, the Hartlib [00:37:00] circle kind of manifests itself as the Royal Society later on. And even though a lot of the Hartlib circle, you could say, were very associated with the Cromwellian regime during the English Civil War, the royalist sympathizers amongst those adjacent to it ended up forming their own society. So I think you need those sorts of figures. People like Hartlib, or someone like Benjamin Franklin, right? He's as much a connector as he is an actor.
Bringing together particular people: sometimes that's just through writing, but often it's through correspondence, through actively meeting, through setting things up. The Society of Arts, which I wrote my book about, would not have happened had it not been for the sheer persistence of a guy like William Shipley. A lot of people had the ideal of an organization like that, but to actually make it happen, you need to actually do the organizing.

So, two things that makes me think of. First, going back to your point about soft [00:38:00] constraints: what would you say to the argument that the softer the constraints, the more important the individual is? If it's something where the world just wasn't ready for it, where a hard constraint changed and then the world could have it, then maybe it just happened to be someone who made the thing. But then you look at Dungeons and Dragons: the inventor, Gary Gygax, maybe he was actually very important, because he was the one to really crystallize the whole thing.

My understanding of that particular example is that there were quite a few people hovering around what he kind of hit upon in his unique way, which strikes me as suggesting that perhaps there were a bunch of soft constraints that got lifted in that particular case. Or at least maybe not constraints, but things that led to the particular [00:39:00] form that it took. I mean, it's definitely a plausible mechanism, right? It sounds like it probably works; I'm just trying to think through an example of whether or not that's the case. I guess the right comparison would be: how quickly do old ideas that had very solid, hard constraints then get adopted, the moment those hard constraints get lifted? That is perhaps the way to think about it. That would be interesting:
actually going through those case studies. And I suspect there are quite a few from the 20th century. I mean, I'm trying to think of something like the steam engine, but the thing about the steam engine is that there actually was a hard constraint: simply not understanding how air works. And once we do have an understanding of the air, it's actually pretty rapid from there. [00:40:00] It's a matter of decades, I would say; once they hit upon that, and once they realize they can do it with steam, it moves very, very quickly. I mean, just today, apparently there's a Spanish claimant to the invention of the steam engine from 1606. I got very worried, so I looked into it, because it would have invalidated my last blog post. But I was safe, it turns out. Because the steam engine as we know it, doing that kind of work, did so from the 18th century onwards, very much through realizing, by understanding the weight of the air, that you can use atmospheric pressure, through the steam condensing, to get the work done. Whereas this much earlier one is very much just using the steam itself to push water up. So you put the water that you're trying to drain into a tank, [00:41:00] which is lower down, and then you push the steam from the boiler through it, so the water spouts out the top through a pipe, which is not at all the same thing, right? The amount of work you can do with that kind of engine is completely different. Yeah, I guess the things to look at would be: actually, I can't think of an example. There are certain forms of engine, I think it's the Stirling engine, which are now being looked at again, because at the time they were come up with, in the 1820s if I remember rightly, they just didn't really have the materials to make it work.
But now that we can, it seems as though there's starting to be a bit of movement around it. The problem is perhaps path dependence: we've invented all these very good engines that do things pretty well, and shifting to a different path will only be worth it if it becomes extremely, extremely expensive to continue producing or using the [00:42:00] existing ones that we have. It's the sort of case, I guess, where once you have those sorts of developments, it does start to rely a lot on relative prices, in terms of the kind of investment that goes into certain things, or the effort that goes into certain things, or, when something is invented, whether or not it succeeds in the market. It definitely relies on those overall historical forces beyond our control, like prices and costs.

Yeah, it's fascinating to think about, and I appreciate you actually thinking about it. So many people have their narrative about the way it works, like it's all evolution, or it's all great people; so actually digging in and thinking about when it is which, I really appreciate.

I want to switch a little bit and talk about risk. A lot of the things that you've discussed blow up when they [00:43:00] fail. And I feel like people today would not use something if it would blow up when it failed. So I'm wondering if you need a societal risk tolerance, of physical danger, in order to be able to do this tinkering with intense technology. Like steamships: they blow up when they fail, and you see all these pictures of steam engines that have exploded, and they kill people.
And so, do you think there's a difference in our level of risk tolerance between now and the 18th and 19th centuries?

Maybe. I don't think so, though. I'm just thinking of all the sorts of things from recent memory: [00:44:00] things like washing machines used to explode, and fridges exploded pretty easily and had that risk associated with them. It's not until certain regulations come into force that the ways they have to be produced conform to certain standards. I mean, that's only a few decades ago. And we're certainly seeing a lot of invention with the rocketry going on right now, which has a very, very real risk of exploding with absolutely no chance of survival.

It's true, but you don't see that many, I would say, civilians or customers getting on them right now.

Perhaps. I mean, certainly when it comes down to the wire, people are willing to take the risk for things like testing a vaccine for the coronavirus, right? What I've noticed is actually a lot of people very bravely putting themselves forward for that sort of thing. I think I read the other day that the children of one of the [00:45:00] scientists working on it at Oxford were very willing guinea pigs for their mom's work. And if things go wrong with a vaccine, things can go very, very wrong: life-changingly, or, I guess, killingly. Even if it doesn't kill you, it could affect the rest of your days. So it seems as though, of course, we've now got all sorts of regulation about the stages in which you test things out, and that's definitely different to what happens in the 18th century, where, you know, Jenner gets his gardener's son and purposely gives him cowpox and then smallpox, to see if he gets it. And he's fine, thank God.
Or, you know, in the 17th century, the early experiments with blood transfusions get pretty widespread, and ultimately it just requires a doctor to persuade their [00:46:00] patient into the procedure. So I suppose in some ways we're more cautious about risk. And even in these early cases, when it comes to the first smallpox inoculations, when they're trying to test them, they choose people who are going to be hanged. So they're not always choosing people who are volunteering without any other constraints around that, or without any other possibilities.

That's actually very reassuring. I have this narrative in my head where we're super risk averse and that's why we can't do anything, so I'd be very happy if that's actually wrong.

I mean, certainly if you look at the number of people who become entrepreneurs and, in terms of just financial risk, basically give everything up and risk going bankrupt, I don't sense any change there. [00:47:00] If anything, given how cheap capital is, money is just available everywhere for whatever idea, no matter how crazy, in a way that it just wasn't in the past. So even if society as a whole is becoming more risk averse in terms of regulation and trying to prevent loss of life, we're able to take much more financial risk, I think, than ever before. Society is now enabling the risk takers in that kind of stuff.

As long as you could possibly make the money, I think, is one of my concerns, I guess.

True, but even then, the business cases aren't exactly rock solid. It's the classic, you know: step one, do this; step two, question mark; step three, make some money.
Yeah, I'll also sell you a hundred dollars for $99 and get [00:48:00] all the users. So, another thing I wanted to ask you about, in terms of cultures of innovation: something I've been struggling with is that, almost by definition, to really innovate on something you need to break a spoken or unspoken rule. So have you seen anything in the relationship between cultures and rule breaking and innovation?

Do you know, this actually maybe also answers your earlier question about something that people mention a lot, which is the narrative that it's a kind of us versus them: we must take on the entrenched interests, and they're going to block us at every turn. Luddites are everywhere.

Yeah, that's the classic Silicon Valley narrative.

Yeah. And maybe in some ways it's [00:49:00] useful, even if it's a myth, in the sense that if you're going to rally people together, what better way than to create an enemy for them to fight? So maybe it's not necessarily a bad thing, and it can be quite motivating in a way that isn't necessarily that harmful, because it's more about out-competing someone than it is about destroying them, necessarily. Although competition as a word could perhaps be a bad thing, because it implies a contest, or not really a contest, but maybe combat. Whereas what's really meant is something more like a sport where whoever's first wins the race, versus boxing, where whoever knocks the other one out is the one who wins. So I think this narrative is very common, and I'm skeptical of it nearly all the time. You do have that kind of opposition to invention, but it's always been there. And I think that kind of opposition is very rarely to invention per [00:50:00] se.
I think it's much more commonly opposition to particular ways in which those inventions affect existing interests. So the Luddites, for example, were smashing particular kinds of machinery that they felt were threatening their jobs; the Captain Swing riots, again, targeted particular kinds of machinery. And to go beyond machinery, think of the anti-enclosure movements, where this is an economic change that is potentially improving the rental yields of the land, in the sense that it's a more efficient use of it, but, depending on the kind of enclosure, it could be kicking laborers off. So replacing fields with sheep might mean something like 40 laborers suddenly replaced with one shepherd. These are things that affect particular interests, in the same way that opposition to Uber isn't really about general opposition to that kind of [00:51:00] technology. It's usually just opposition by taxi drivers who, having invested a lot of money in getting those rents, are like, you know, what the hell, I've invested all that money, and you're telling me it was for nothing and I could have just gone and used this app? Which is understandable, right? It's something that you see throughout history as well. I often see claims like: so-and-so inventor was rejected by the emperor of China, or the emperor of Turkey, or Queen Elizabeth the First, and so they went abroad and took their invention elsewhere. And the moment you actually start to dig into the details, they're either completely apocryphal, or they're much more about the specifics of the invention, not about inventors in general. I very rarely come across cases where people are just anti-novelty.
Because if you're [00:52:00] anti-novelty in one direction, you might actually be very pro-novelty in other ones, right? The kinds of people who might be very unhappy about things that would cause unemployment are probably perfectly happy to have new designs for the silks they're going to wear. I think we over-analyze novelty as a whole, we over-label it, creating a kind of fake "we", in the same way that I dislike discussions of "the scientific revolution", or "individualism", or other big, broad terms that cover huge sweeping things. I find these very difficult concepts to get my head around, because when I actually think, okay, how would I use this myself, I come up against a bit of a problem. Even "industrial revolution": just to define it, you require an essay.

So the upshot is that it's actually much more nuanced and complicated. [00:53:00] Man, this is the historian's buzzkill, right? Which is: you've come up with this great theory. I'm sorry, it's more complicated than that.

Well, I think it's something that happens a lot, and weirdly, I think a lot of historians are actually much more willing to entertain the broad sweeping theories. Certainly a certain sort of historian: those who are brought up in the economic history or Marxist and various other traditions, or the longue durée traditions, certainly have these broad sweeping theories, and they like to tinker with them. But there's also a lot of historians who are much more specific. And I think you need a bit of both. When you do your buzzkilling bit by saying, actually, it's more complicated than that, I think that's best when put in relation to the theory as a whole. It should be telling us about our general mental models of how the world works.
So yeah, my [00:54:00] problem with a lot of these Luddite stories is that they give me this instinctual reaction: I don't know that it was such a big battle in that particular way. Actually, to give you an example I've just been writing about, just before we started the podcast: I've been reading the work of Daniel Defoe, famous for Robinson Crusoe. And Defoe is both pro-improvement and yet seemingly very anti particular forms of technology. The whole book I've been reading, which is his tour through the whole island of Great Britain, is him just going all around Britain and commenting on the recent things that have happened: the economic growth, the improvements to the land, the changes to manufacturing, how many more people are now being employed than there were formerly, how much more trade is going on in these ports. And he's excited about this stuff. He thinks that improvement as a whole is a good thing. I would say he's [00:55:00] pro-improvement, pro-technology. And yet when you come to specifics, like the stocking frame, he is lamenting the fact that it's made certain whole villages completely unemployed, because the economy, where the growth is, has shifted to other places where those frames were being applied earlier. He's even talking very much in favor of bans on imported silks and imported cottons, because they affect the fine wool industry in East Anglia. And it's not like he's anti-openness. I mean, he's a pro-trade person; he's someone who was extremely pro-immigration, who was trying to create these settlements, almost like charter cities, for religious and political refugees in the early 18th century. And yet, [00:56:00] when it comes to those specific things, he can still think they're a bad thing. It's not inconsistent for him.
So I guess that's what I mean: we should be careful about labeling people as Luddites or anti-technology. Where it's interesting, though, is that you do at the same time have certain people who, I guess from an ideological perspective, will be quite anti those things, but they're rarely workers, rarely people who are directly affected. I mean, to a lot of your listeners, the obvious case is going to be the increasingly adversarial feeling between journalists who cover technology and technologists, right? You see a lot of those kinds of critiques, and I've noticed on Twitter that there's a kind of growing vehemence there. And that's interesting. I don't know if that's ideological, or if it's just that journalists find good stories, and good stories are usually negative, or they involve [00:57:00] bad people. So if you're put in charge of covering technology, you're going to be looking for bad people in particular sectors, and that might color your whole view of the sector. Or if you're asked to come up with a general op-ed about the state of what's going on, you're probably going to come up with the bad things that happen, the things to be careful of. So again, I don't think that's necessarily anti-technologist. To a certain extent, those people are probably pro a lot of the kinds of technologies that are coming up; they're certainly often using them as well.

I think the problem with having so much nuance is that it really involves sitting down and talking to people and really trying to understand them, and people often don't want to spend the time doing that. The last question I always like to ask people is: what is something that you think people should be thinking about that they're not thinking enough about?
[00:58:00] In a historical way, or just anything. I think of this as sort of the open podium. No pressure, no pressure at all.

It's an interesting one. I guess what I think people should be thinking more about changes day by day for me.

Well, what about today?

Today. I mean, the main general one, and this I guess isn't as targeted as your usual audience, but as a more general thing: it would be nice if people appreciated technology a bit more, and thought about its evolution a bit more. Or even just about the people who were involved in making those things possible. If you just look around the room or the space that you're in right now, nearly everything in it, [00:59:00] regardless of whether it's actually manufactured or even natural, has involved someone doing a bit of tinkering. I'm looking at a houseplant right now and thinking to myself, okay, what even allowed this plant to be here? It's certainly not native to England. So it probably involved greenhouse technology, it involved all sorts of glass-making, it involved people learning how to cultivate it and spreading that knowledge of cultivation, and it probably involved fertilizer improvements. The capacity for improvement is almost infinite. And I guess this is another general thing that maybe your usual listeners will be more interested in, which is that a lot of what we can improve isn't just about efficiency. It isn't just about making things cheaper, or work faster, or work better, or even simplifying things, which I imagine a lot of people focus on. It's also about [01:00:00] aesthetics. It's also about beauty. It's also about the capacity of things to provoke meaning, I guess, or interpretations of a particular kind. Which sounds a bit fluffy.
But I don't think it is. A lot of improvement takes place along these kinds of unexpected lines, where maybe just something like increasing the variety of plants in your garden in the 17th century unexpectedly leads to dramatic improvements in agricultural productivity a hundred years later, because of the sorts of things you had to problem-solve to do it. Just yesterday I was reading about the first orange trees in England, and how, when those were introduced, during the winter they created a sort of shed that would be put up over the trees to protect them from the frost. And that actually does have an impact later on, in the kinds of horticultural development that you get as well. So I [01:01:00] guess that's the kind of thing: I wish people were more open to those unexpected avenues for invention.
A conversation with Ashish Arora about how and why the interlocking American institutions that support technological change have evolved over time, their current strengths and weaknesses, and how they might change in the future. Ashish Arora is the Rex D. Adams Professor of Business Administration at the Fuqua School of Business at Duke University. His research focuses on the economics of technology and technical change, and we spend most of this conversation focused on his recent paper: "The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth." I tried an experiment this episode and wrote notes on the paper before the interview. Key Takeaways: Ashish introduces a useful framework by breaking the innovation world down into four players, academia, incumbent companies, inventors, and government, and then looking at how their relationships evolve over time. The current innovation system is well equipped to enable new products with large technology risks and almost no market risk (like new cancer drugs), or high market risks and almost no technology risk (like most software), but falls short in between those two extremes. A fuzzy one, but: it's important to marinate in the constant complexity of the answer to "How does technology happen?" Notes: Ashish's Home Page. Ashish on Twitter. The Changing Structure of American Innovation. My notes on the paper. Steve Usselman's Website. Transcript (experimental and automatically transcribed) [00:00:00] [00:01:00] Just to start us off, would you give a summary of the paper? I'm going to direct everybody to go read it, but for people who are listening: what do you think are the key things you would want people to take away from it?

So the paper itself is descriptive, but our objective there is to make one argument, which is that the way in which innovation in America is organized has changed over time.
And there's a sense in which the system we have now is closer to what we had at, say, the turn of the 20th century. So, you know, a hundred years [00:02:00] ago. There are important differences, and we can talk more about that. So that's one thing, from a descriptive point of view. The part which I think is most interesting, and perhaps also most speculative, is two things. One: what caused this change, what caused this system to evolve? And the second is: well, is it good or bad, and what should one do about it? What could we do about it? And I suspect we'll spend some time on that as well.

Yeah. I thought the dividing up of the paper into different eras was really important. So actually, would you say a little bit more about how the way that innovation is structured now resembles the way that it did at the turn of the 20th century? [00:03:00]

So let's start with today. If we think about today, we have the big tech companies, but most people would say that the innovation system today has sort of three sets of players, maybe four. We have the universities, which do a lot of the research, produce a lot of the fundamental knowledge, and, importantly, a lot of what economists call human capital, the people that do it. So that's one. The second part is the startup community: the startups and the VCs that fund them and all that kind of stuff. And the third are the firms, the incumbent firms, as we call them in economics: the Googles and the Facebooks, but also the IBMs and Microsofts and so on. These are the different components. And if you go back to Adam Smith, he talked about the division of labor as being the quintessential aspect of capitalism.
That [00:04:00] capitalism is this relentless force toward specialization. And what we have now you might think of as a division of labor in innovation: the universities that produce the research, the startups that take it and make it more commercially applicable, and then the incumbents that apply it. If you go back to, say, the 1860s, that's kind of the system we had. We didn't have the universities, but we had independent inventors, and we had people that backed them. And those inventors would sell their inventions, for the most part, to companies that were producing things; the early buyers were railroads, for example. So in that sense it's similar. You could think of this as a splintered or fragmented system; I prefer to think of it as specialization and a division of innovative labor. Does that make sense?

Yeah, definitely. So I completely agree with those similarities. The thing that strikes me as [00:05:00] different between the technology then and the technology now is sort of the level of complexity and the amount that it takes to integrate it. Something I noticed about 1850s technology, and maybe this is a cognitive bias, a fish-in-the-water sort of thing, is that you can look at a patent from 1850, and you could take that patent and build the thing. Whereas now everything is so complex; literally, even just downloading software from GitHub and trying to get it to run sometimes doesn't work. So do you think the comparison breaks down at all there?

You know, that's a fair point, and I've struggled with it. There's a sense in which surely things are much more complex now than they were earlier. But let me offer you a counter-example or two.

Please.
So one: one [00:06:00] complex industry of the time was agricultural machinery. Those mechanical devices were complex, and people innovated on parts of them, and at some point the whole system became integrated; you could just sort of bolt on stuff. The second, probably more compelling one is the railroads, which, if you think about them as a technical system, were quite complex. Steve Usselman, who's at Georgia Tech, is a historian who has studied this extensively, and I'm persuaded by his work that this was really complex. But somehow the railroads managed to integrate it while still relying on independent inventions. So if you think about track switching, these all came from different people in different parts of the country. And the railroad companies themselves didn't really have a function whose [00:07:00] job it was to develop these innovations. This somehow got managed. So, you know, depending on which side of the bed I wake up on, I either agree with you or disagree with you.

Yeah. I think the trick with all of this, that I think is fascinating, is that it's so multi-causal and so nuanced that it's tough to say, okay, this is exactly the same or exactly different. And so I think conversations like this are actually really important for exploring that nuance. Actually, something I'm wondering about the railroads: my sense of modern corporations is that they are very hesitant to integrate external systemic change. In my mental model, if we had a railroad today and someone came and said, oh, I have this great way to change the way that you do tracks, but you need to [00:08:00] redo all your tracks this way,
it would never adopt that. Do you have a sense of whether there was a cultural difference?

A good point. I'm not an expert on this, but again, relying on Steve Usselman's work: there's an interesting case with braking. When you have a locomotive and you've got these bogies that are coupled, how do you stop this thing? This was a complicated problem, and it was a system that had to be installed in all the cars. So there's a sense in which the railroads were very open to the system. And Westinghouse was one of the people who came up with a whole system; there were others who came up with different ways of accomplishing it. And the railroads said, fine, we'll take it, but we want to do it ourselves. And Westinghouse said, no, no, no, I'll supply you the whole system, and you just put it in. There was a lot of friction around that, and [00:09:00] eventually Westinghouse prevailed, thanks in part to his patent position and his willingness to take the railroads on. But to go back to your big question: is there a cultural change? Surely there has to be, right? We're talking about 150 years. But on that particular axis, I suspect that, particularly since companies now have an R&D function or an engineering function that builds up certain preferences or biases or views, it would be hard to adopt something wholesale from the outside and give up what you have internally. If you didn't have such an internal function, it might be easier. But, you know, I'm really speculating on this one.

Yeah, absolutely. That's what we're here for; we're not doing any sort of peer review or anything. And so I guess another
big [00:10:00] theme that I was wondering about, that I feel like you hinted at but didn't quite touch on in the paper, was the nature of the technology in these different eras itself. In the late 1800s you have a lot of sort of mechanical inventions, then giving way to chemistry, and then electronics, and then eventually software. Do you have a sense of which way the causality runs between that and the organization of American innovation?

Yeah, that's a good point. I think you've got something really important there, which is that mechanical systems, maybe design-based systems, are different from more integrated systems. If you think of [00:11:00] a modern chemical process, which is highly optimized in many ways, everything is interacting with everything else. So surely there are differences, and you could make the argument that the mechanical systems were more amenable to sort of bolt-on parts, right? You take this part out of the agricultural machinery and you bolt a different kind of part onto it. A variant of that, an argument that economic historians have made, is that one difference between the mechanical technologies of the late 19th century and the chemical and electrical technologies of the early 20th century was that the latter were much more science-based, and the opportunities were much less fruitful for the lone tinkerer: the famous Yankee ingenuity which, in some sense, was responsible for America's [00:12:00] rise to riches had to change and evolve to accommodate the new science-based industries. And I think that's probably true.
And that may be one reason why companies like DuPont had to start doing some of the inventing themselves, and to bring some of this inside the firm. That's certainly one possibility. On the other hand, and I'm sorry, this is going to be an on-the-one-hand-on-the-other-hand...

That's amazing.

On the other hand, think about petroleum refining, which started out as tinkering but eventually had a very strong scientific and engineering base. Some of the most far-reaching inventions there were made by independent inventors, including by a guy called, interestingly enough, C.P. Dubbs, and I've read, though I can't [00:13:00] verify it, that C stands for Carbon and P stands for Petroleum. So the guy's name was Carbon Petroleum Dubbs, and he came up with the Dubbs process. That led to the technology that's used in pretty much every refinery you can think of: what's called platforming, the platinum reforming technology, which uses a platinum catalyst. So there was lots of room for independent invention, even in these new science-based industries. By the way, Dubbs was competing with Standard Oil, which had its own process. And the modern-day company, you can look it up, it's called UOP, Universal Oil Products, billed itself as the supplier of technology to the independents, the independent oil refining companies, independent of the Standard Oil umbrella.

Wow. And [00:14:00] so, the relationship between science and tinkering: I feel like there are the people on the science side and the people on the tinkering side, but the hypothesis that I've been coming more and more toward is that it's almost like a cycle, where everything goes through phases of being very tinkering-heavy and then very science-heavy.
And then maybe back to tinkering-heavy, depending on where it is. So what's interesting to me now, pulling to the present day, is that the structure of the American innovation system feels, to me at least, very geared toward software. The whole start-a-software-company-in-your-garage thing, with really cheap startup costs, [00:15:00] really does seem to lend itself to venture capital, acquisitions by large companies, sort of this externalized R&D model that you talk about. What I wonder is: at the same time, we still have all of these other industries, and, at least to my eye, it feels like that model, which is really good for, call it the hottest or most top-of-mind industry, then gets applied to all the other industries. Does that ring true to you? And did that sort of thing happen in other eras as well, where, when corporate labs started rising up, the corporate lab model got applied to industries that previously didn't need it?

So let's break this question up into two. One is: does the VC model work elsewhere? [00:16:00] It's certainly been tried elsewhere. I think the other place where it arguably works is biotech, which is a very different kind of sector. Very science-heavy, capital-intensive as heck, at least in terms of paying for equipment and reagents and people. And I would say, on the whole, it works well there too. So it works with two very different, almost two extreme, sides of the economy. And, I want to be cautious here, I think it sort of breaks down in the middle. We have a way of thinking about why that is, but it's speculative at this point.
So that's the answer to your first question: it works at the extreme ends of the spectrum and sort of breaks down in the middle. If you think about materials technologies, energy, climate-change-related stuff, it's difficult. [00:17:00] I mean, we haven't really seen very much coming out of there. And Peter Thiel's somewhat famous quip — we wanted flying cars and we got 140 characters — has an element of bitter truth to it: the system, for whatever reason, hasn't really worked in the middle. To go back to your other question — how did it work earlier? I suspect we didn't have professional investors investing other people's money, which is what VCs are for the most part. But we did have people who backed independent inventors. I spent a long time at Carnegie Mellon in Pittsburgh, and Alcoa was a homegrown company that was funded by wealthy individuals. Today we might call them angel investors, but really that involvement went much farther than a typical angel investor would go. And there are lots of other examples. The [00:18:00] people who backed Tesla, for example — Nikola Tesla — Westinghouse backed him out of personal funds. So we had people willing to back independent inventors. Obviously things were never quite the same — history never quite repeats itself — but you can certainly hear the resonance of the past in what we see today. Yeah.
I'm interested in whether you could make a broad, sweeping statement — and I realize this is a big claim, and I'm not asking you to endorse or oppose it — that the structure of the American innovation system [00:19:00] follows what's best for the most profitable industry at the time, and then applies itself to all industries regardless of applicability. Yeah, that's an interesting point. I would put it a different way. It's certainly the case that the VC model is a particular model, and there are a couple of sectors VCs are willing to go for. But the thing that's striking to me about the American system has been its incredible diversity and its willingness to experiment in many different forms. Even within the VC sector, you'll find VCs who specialize in science-based startups. They're saying: we won't follow the herd and just do SaaS, or just do [00:20:00] B2C companies or platforms or whatever it is. You can find people to back you pretty much no matter what you're doing — maybe not enough, in some sense, to meet societal needs. So I would say it's in fact the opposite: the American system has been very good in terms of diversity, in large measure because of its scale. America is, in some sense — and this is a tangent — just a giant exception, right? It's a continent that has had a unified currency, a largely unified set of rules for commerce and trade, a common system of law.
It's really quite amazing, what we have here. So for countries that want to emulate America, I always have this caution: there's only one America. And unlike Europe, it didn't really have to reinvent itself from whole cloth and wait [00:21:00] 200 years — I'm overstating it, but it's an important point. So we have this diversity, and there's a sense in which a lot of the VCs are chasing the trends, as you say — and perhaps for good reasons. Maybe this is getting us off point, but think about it: America is a really rich country. It seems odd to say this in the situation we're in, but we've pretty much solved a lot of the basic problems that science and technology could solve. If you go back 250 or 300 years, the big problems were getting enough to eat, fighting off germs and microbes, and just the sheer drudgery of daily living. And for the most part — again, I don't want to sound like a nut, but to a large extent — we've solved [00:22:00] those problems. I have teenage kids, including a rising college senior, and I look at their lives, which are so different from what I grew up with in India, and it strikes me that their biggest problem is boredom. We've ultimately reached the stage in America where people have to fight boredom. And you could argue that many of the so-called innovations the VCs are funding are a solution to that problem: how do you stave off boredom? Yeah — now it's my turn to do the "on the other hand." I completely agree with you about how far we've come and how many amazing things we have.
And I don't want to understate that. I think people are not [00:23:00] grateful for how different things were even a hundred years ago. And yet my point — or my hope — is that there's so much more to be done. We're not at some tapering-off point. Ideally, not only would we not worry much about bacterial infections, we would also not have to worry as much about viral infections. And instead of the maximum practical speed being a couple hundred miles per hour on an airplane, it would be a couple thousand miles per hour on a rocket ship or something. So, yeah. Well, look, I completely agree with you. I'm an optimist, and the challenges we're dealing with now are, to some extent, ones we've created ourselves. Think of the big challenge we face — or a big [00:24:00] challenge — which is climate change. It's essentially a fruit of our own success: the earth can now support so many more people that we're fundamentally changing the earth itself. All the fossil fuels that took so long to build up, we're consuming, and we're dumping all the carbon back into the environment as CO2. Yeah, that's a tough one. It definitely is. So, actually looking to the future — you touch on this lightly in the paper, since it's mostly focused on the history and how we got to where we are today — where in your mind is the American innovation system less equipped to handle future challenges than [00:25:00] it could be? So, that's a great question.
Between my coauthors and me, we've had, I would say, a very spirited debate on both this and, therefore, what might happen. So I'm not going to represent them; I'll represent my own views. Oh, okay — well, if you could mention their views as well, that would be amazing. Right. So let's start with where we might be going. One view — their view — is that in some ways market forces, or the profit motive, have entered so deeply into the innovation system that they're taking us away from pressing, important problems toward, as I said, solving the problem of bored teenagers. Okay — so this is one view, and I don't disagree with it [00:26:00] in terms of what big parts of the innovation system are doing. But the question is: is it the profit motive, or something else? Because it's natural to look back at roughly the period from 1930 up to 1980, when we had the DuPonts, the GEs, the Kodaks, the IBMs, and of course Bell Labs and Xerox PARC — one after another, these great companies that did well for themselves and also did great things, produced fantastic innovations. There's a sense in which people want to go back to that golden past. And my view is that's just not going to happen — [00:27:00] I don't think it's possible to go back there. So this is the system we have, for better or for worse: as we've discussed, the universities, the startups, and the incumbents. And the question is, where might we go, and what might we be able to do with it?
I think this kind of system could be improved. If you look at the current pandemic, which is an interesting case in point, we find ourselves hopelessly underprepared. Look at the CDC's guidance on what universities should do to reopen: the CDC does not, at this point, recommend widespread and regular testing, which I find absolutely baffling — given that we really don't have any prophylactic, any way to prevent infection. We don't have a vaccine, and we're not going to get one for the next 18 months, no matter what, at least not at any wide scale. We [00:28:00] don't have a cure. The only thing we can do is test, isolate, and prevent people from infecting others. And the fact that this wonderful innovation system has left us, six months after the virus was first discovered, with a CDC still not prepared to say "you should test, and test regularly" — I suspect it's because our testing capacity is woefully inadequate. That's the only charitable explanation I can come up with for why they're doing it. Yeah. But it's huge — I think we're putting thousands of lives in danger because we haven't developed this ability to deploy testing, which uses technologies that by and large exist: PCR-based [00:29:00] testing or antibody testing, whichever — I'm not an expert, but certainly those technologies exist. We know how to do these things. And the fact that the richest country in the world, technically the most sophisticated country in the world, is unable to deploy them tells us something about the innovation system that I think is not flattering. It has failed us in important ways. And I suspect one could make a similar claim as regards climate change.
I think the system is failing us. What I think we will have to do is imagine a more constructive role for the government — and perhaps private philanthropy as well — that can fill in some of the gaps the current system leaves, so that more of the bright, wonderful minds that America produces and attracts can be [00:30:00] employed to solve what I, at least, consider more pressing and more important problems. We could talk about what shape those things might take, but at a high level, that would be my answer to your question: it's a great system, and there are some weaknesses that need to be fixed. Yeah — would you mind digging into those weaknesses? I have my own opinions, but I want to hear yours, and then I can react to them. Okay, sure. So mine — and you illustrate this very well in the paper — is where you talk about how DuPont, I believe, bought the patent for a synthetic silk — rayon — and they just couldn't get it to work. So they eventually needed to bring the people in and start doing things in-house. And that [00:31:00] was one of the reasons corporate labs took off: there was a lot of integration work that needed to be done. At least in my mind, we've returned to that weakness now. Anything that can either stand on its own or very quickly become a modular part of a larger thing does well — the system serves it really well — but technologies that are improvements to systems, or replacements for systems, wither on the vine. Yeah, no, I think that's right.
One way to think about this is to go back to where the VC-based system does really well — the two opposing ends. If you think about software, I would argue that to a first approximation, software is mostly about [00:32:00] figuring out what consumers want: will enough people buy it, and can I find them? If you look at what a lot of VC money gets spent on, it's not solving technical problems; it's solving commercial problems. How do I find the market? What is the market? How do I sell it? Figuring those things out is why VCs don't care whether you make money — they care about top-line growth. Can you acquire the customers? Can you acquire them cheaply enough? The system is well tailored to that. At the other end, if you think about biotech, the big problem is not "do customers want it?" Yes, we want a cure for COVID; we want a cure for cancer. The big problem is the scientific and technical one: can we find something that will do it? And once again, I would argue the VC system — the startup-based [00:33:00] system — is well suited to that too. You have patient capital, people willing to put in money where they will not see the final outcome for another twenty years. People are spending money on how you could make human beings immortal, and there's no doubt there's a market for that — we also know exactly who will want it. Right, I'm in that category; I like living too. It's the stuff in between, where these two things interact, that's the important place where the system struggles. Go back to the question of nylon, say, or rayon: what are you going to do with it?
So you have a new material — what should it be used for? Should it be used to make women's underwear, or parachutes? Could it be used to make billiard balls, which was one of the earliest applications — not for nylon, but for an earlier material? [00:34:00] There's a myriad of possible uses, many different kinds of markets and consumers, and depending on each, you would have to take the material to a different price point with different performance characteristics. And all of those carry considerable technical uncertainty. For example, when nylon first comes in, you can't dye it, you can't color it — it's this really crappy-looking, dull gray material. Who would want to buy clothes if you could only buy gray? So you have to solve the problem of how to dye it. But that depends on what you use it for: if you're going to use it to make rope, you don't care. So it's when these two things need to interact in important ways, when there are important funding decisions to make — should I invest in changing the performance characteristics of this material? Well, that depends on what market I'm going after. Well, which market should I go after? [00:35:00] — that's where I think the system does much worse, that's where it's weak, and I would argue an integrated system would work much better. Yeah, that's really insightful. I hadn't put those pieces together before, and that sounds really correct. So, without just saying "let's make corporate labs again," do you have a sense of what steps we could take from where we are now toward a system that supports those kinds of innovations? Yeah.
So, to come back to the question of the role of government: another place where the current system breaks down is when you [00:36:00] have what economists call externalities — where it's hard to capture all the benefits you're producing, and in some cases those benefits could be quite significant. The government is already involved in a big way in one sector that addresses this, which is the university sector. The university sector just would not survive without NIH and NSF funding and the other sources of funding. Over the last two decades — maybe three, certainly three — I think the government system has become more short-term; they're willing to take fewer bets, perhaps for good reasons. But if you think about many of the breakthroughs we got, certainly following World War II, the government not only subsidized some of the upstream [00:37:00] research, it was happy to support the training of students, and it was happy to buy the things that came out at the back end. What's interesting — let's go back to COVID — is that one of the things that would have been very useful is for the government to have announced early on that it would be ready to buy, say, 250 million units of vaccine for the next so many years. I couldn't agree more. And as for tests — forget government buying, the government is actively hindering the use of tests. They won't allow people to do tests. The FDA has become such a destructive force in this respect, I would say — I'm shocked and saddened by how, at least in this particular case, the FDA has behaved. But not to get too [00:38:00] far off on my pet hobbyhorse right now.
So I think we need to find a way where the government can not only address some of the upstream scientific and technical uncertainty, which it currently does through funding research and so on, but also do something at the back end, where it stands ready to procure, to be an important customer, or somehow help mitigate some of the commercial uncertainty through procurement. So I feel very torn on this subject. To expose my biases: I'm a big fan of markets, and I buy the arguments that the government, especially when it's not buying something for itself, [00:39:00] won't always make the best choices about what to buy. But at the same time, I find the argument you just made compelling. Do you have any sense of how to balance those two? Yeah, it's a good point. The government as the user, the lead customer, for many of the technologies in electronics and computing made sense — I don't know if they bought the best ones, but they certainly bought a lot of them, and it was justified by the space race, by Sputnik and the Cold War that followed. And in some sense, even if the government bought the wrong things in those days, you could console yourself by saying, well, at least we trained a whole bunch of PhDs and engineers — we got something in the bargain. We developed a capacity; we developed a way to test these systems. So [00:40:00] my biases are the same as yours. Having grown up in India, I'm generally very suspicious of the government actively interfering in the marketplace, because usually no good comes of that.
Yeah. But at the same time, it's hard not to look at the current crisis and say: here's one case where I would have supported some sort of government intervention, both on the front end, in helping develop the knowledge we need for vaccines and tests, and also to at least backstop some of the commercial risk. Without it, nobody will make the investments. There was a very nice article I read about either the Ebola or the SARS epidemic, where some people made significant investments in scaling up capacity for vaccines and/or PPE — and then were left hanging, because if the government doesn't intervene [00:41:00] actively to provide some significant quantity of demand, it's just not viable. People are not going to pay for tests for themselves — or at least the right people won't. They'll pay for a test when they think they're sick, but you want them tested when they don't think they're sick. Exactly — and most people won't pay for that. This is a classic case where you do need the government to come in and say, we'll pay for it. In India, for example, smallpox was more or less wiped out because the government subsidized not only the acquisition of the vaccines but also their large-scale deployment. We also need the government to help deploy this, but given our fragmented insurance and medical delivery system, we need a lot of help, both on the regulatory side and on the dollar side. So while I share your biases, I think there are definitely cases where [00:42:00] we need the government to be more thoughtful and more progressive. Yeah — I agree, and I think
the tricky thing that I don't know how to do is deciding where the line is. That's, I think, beyond the scope of this conversation, but it's an interesting question that I don't think we have a good way of discussing. Yeah — well, a different way to do it is that we don't have to draw the line; we can decide on a case-by-case basis. Over time I've come to the view that wanting principled policies, or principled ways of approaching these kinds of decisions, is an instinct and an impulse that I understand, but sometimes it stands in the way. So we can draw the line — fine, [00:43:00] let's do it here — and then when the next one comes along, we can figure out whether the government should do something like this or not. And the reality is the government does get involved anyway; all the nice lines we draw don't seem to stop anything in any case. Yeah. And actually, shifting a little bit: you outline the antitrust trends over time, and in the paper you make the argument that antitrust pressured corporations to do more internal research so that they could expand into new markets — and there are examples of this. But at the same time, there are also examples of the opposite: another reason people have really liked corporate labs is that [00:44:00] technology escapes them — you see the transistor coming out of Bell Labs — and to my understanding, government antitrust would also sometimes prevent big companies from going into new markets even when they invented things. So can you walk me through your argument on the value of antitrust? Yeah.
So that's a really good point, and I should have thought about that — thank you for reminding me. Here's an interesting piece, going back to DuPont. In 1911, DuPont had a monopoly on smokeless powder; they were the monopoly supplier of smokeless powder to the US Navy. In a famous 1911 antitrust case, the government went in and broke DuPont up into three parts — DuPont, Hercules, and Atlas — three companies to provide smokeless powder. And this is quite remarkable if you think about it: they took a dominant firm that was doing well and said, we're going to break you up [00:45:00] into three producers. Now, Atlas soon merged with Hercules, I believe, and Hercules remained an independent company for many years in spaces similar to DuPont's, and they had reasonably friendly relationships. But I think that episode was a singular one for DuPont, because they understood that their ability to grow was going to be constrained — that they could not do what companies do now, which is just buy up competitors. Look at T-Mobile and Sprint: we've gone from four to three. I'm not an antitrust expert, but there's a sense in which, if growth is an imperative for companies, and they're going to be constrained from growing by buying up others — by expanding through what we call inorganic acquisitions — then they will [00:46:00] have to find new products. Those new products could come from startups, although there's antitrust scrutiny there as well, but it's much easier if they come internally. And that's essentially how DuPont went from being a producer of largely undifferentiated chemical products, like explosives and fertilizers, into new materials.
They eventually went almost all the way into textiles, and then they stopped. When they got into nylon and polyester, they asked: how far do we go? Do we want to be producing our own cloth? And they said no — we want to stop here; we're going to remain an industrial company. AT&T, while they could not get into merchant semiconductors, used semiconductors internally. It was very important for them — they needed them to solve their own internal problems and to grow the telephony business. Yeah — did they make their own semiconductors, or did they purchase them? [00:47:00] I believe they made their own for quite some time. And IBM similarly: I don't think they were a merchant supplier of semiconductors to others, but they certainly produced semiconductors for themselves. Yeah. So this is going to be a naive question, but why don't we see a lot of amazing research coming out of Boeing? I sort of think of them as basically a monopoly — there's Boeing and Airbus, and they're the only companies that can produce giant, complex airplanes. So if monopoly profits are what enable long-term thinking and really great corporate labs, why doesn't Boeing have an amazing corporate lab? That's a great point. I don't know — but remember, Boeing was allowed to buy McDonnell Douglas [00:48:00] and wasn't stopped there. I really don't know the answer, but I think part of it is that they either never developed the capability to do this kind of fundamental research, or they did
and somehow it's all tied up in various kinds of government contracts that we don't see, because they're a large government contractor as well. And maybe the government is not demanding from them the same sorts of innovative products it might have demanded of its contractors in an earlier era. But it's a great question, Ben — I never thought about why they don't have a large corporate lab. Now, remember, one reason we will never go back to these large corporate labs is that they are incredibly difficult to manage inside a publicly traded company. Yeah. And this was the big [00:49:00] point of disagreement between my coauthors and me. My view is that research inside a large public company is always a strange animal, and the only way it survives is if corporate headquarters protects and nurtures it. The shining example here is Microsoft: Microsoft set up, under Bill Gates, a fantastic research lab that they nurtured. But — and this is the big part — if you ask yourself what Microsoft's shareholders got out of that lab, that would be an interesting question. I don't know that it's been systematically studied. But looking at it from the outside, at what Gates's successors have done — I think I'm going to get in trouble for saying [00:50:00] this, but I think what we are now seeing, since probably 2015 or 2016, is the beginning of the end of Microsoft Research. Wow. I say this because the person who was heading Microsoft Research was replaced by two people; part of the organization was taken away, and part was moved into more applied work. So that's why I think we'll never go back to the corporate labs in a serious way. Yeah.
Because it's incredibly hard to manage, and it's incredibly hard to justify the roughly billion-dollar expense that Microsoft Research cost Microsoft. It was a huge public good, but it's no longer justified given that Microsoft now faces significant competition. Yeah — it goes back to the externalities piece [00:51:00] again. Yeah, there are certainly large externalities. If you think about IBM: around the time of IBM's crisis, or even a little before that, IBM's Watson research lab essentially got reoriented and pushed more and more into being managed by the individual divisions and businesses inside IBM. And those divisions have quarterly reporting responsibilities — you have to justify the capital you get from the parent corporation — and it's really hard to say whether something you're investing in now may or may not work. If it works, we might see the results in six years in terms of the technical and scientific findings, and the commercial benefits might be even further down the line. That's hard to manage. I don't blame it on short-termism — I don't believe that argument — but I certainly do think that this kind of bundling between the main activity of a corporation — say [00:52:00] IBM's business of selling computers — and doing all this other cool research, which could be relevant someday but perhaps not, is really hard to manage. And I think this is the relentless logic of capitalism: we have to unbundle things, we have to specialize — it goes back to Adam Smith. And as I said, this was the point of contention within the research team working on this problem, so you're just hearing one side of it. They think I'm overstating the case, so to speak.
Well, I mean, I certainly buy your argument, for whatever that's worth. I'll tell them. So the last thing I always like to ask guests is: what is something that people are not thinking enough about that they should be thinking more about? That's a tough one. [00:53:00] I'm not a very imaginative sort of guy. I would disagree, but, well, okay. No, I don't think I can give an answer clever enough to justify it. That's fair, that's fair. Well, I really appreciate you doing this, and I really appreciate you being willing to sort of, almost like, play with these ideas. I think people don't do that enough, and my hope is that by playing with them we can figure out new ways to make awesome things happen. I really hope so. You know, I would really like to learn, because none of these ideas are set in stone. I thank you for the opportunity to talk to you, and I hope we'll learn some more, and people will come up with new and better ways of thinking about this problem. [00:54:00]
In this episode I talk to Venkatesh Narayanamurti about Bell Labs, running research organizations, and why the distinction between basic and applied research is totally wrong. Venkatesh has led organizations across the research landscape: he was a director at Bell Labs during its golden age, a VP at Sandia National Lab, the Dean of Engineering at UC Santa Barbara, and started Harvard’s engineering school. Our discussion touches on the ideas in his book Cycles of Invention and Discovery. In it, he argues that the pipeline model of basic research leading to applied research leading to commercialization is not how good research actually works, and that there are many negative consequences of most of our research institutions operating, either explicitly or implicitly, around that model. Main Takeaways - Research depends on good people and trusting those people. - In order for the first point to happen, people who are responsible for research organizations need to grok the research - We should really stop using the terms basic and applied research Notes Cycles of Invention and Discovery Good overview of Cycles of Invention and Discovery's Thesis Venkatesh's full history Some Topics Touched On: - Fund people over projects - NSF structure - Bell Labs didn’t make the applied/basic distinction - Deep scholarly work - Frank Jewett and Bush - Agreements to license things from AT&T - What would you do to start a research institute from scratch? - Why people went to Bell Labs - Just a smaller community - How do you nurture and lead research - Nothing nothing nothing nothing something - Tough love leadership - People who knew what was going on - Bayh-Dole act - How do you prevent things from becoming ossified - Research area not reporting to operating company - No metrics on managing research - Informal mentoring
In this episode I talk to Adam Marblestone about technology roadmapping, scientific gems hidden in plain sight, and systematically exploring complex systems. Adam is currently a research scientist at Google DeepMind; in the past he has been the chief strategy officer at a brain-computer interface company, did research on brain mapping with Ed Boyden, and did his PhD with George Church. He has a repeated pattern of pushing the frontiers in one discipline after another - physics, biology, neuroscience, and now artificial intelligence. I wanted to talk to Adam not just because it’s fascinating when people are able to push the frontier in multiple disciplines, but because he does it through a system he calls technological roadmapping. Most of our discussion is framed around two of Adam’s works - a presentation about roadmapping biology and his primer on climate technology. The conversation stands on its own, but taking a glance at them will definitely enhance the context. Links below. Key Takeaways Technological roadmapping enables fields to escape local maxima It might be possible to systematically break down complex technical disciplines into basic constraints in order to construct these roadmaps Figuring out these constraints may also enable us to reboot stalled fields Links Road-mapping Biology presentation Architecting Discovery paper Adam’s Website Adam on Twitter The Longevity FAQ The Longevity FAQ - Making of Hypothes.is
In this episode I talk to Jude Gomilla about distributed innovation systems, focused especially around the bottom-up response to the coronavirus crisis. Jude is a physicist, founder and CEO of the knowledge compilation platform Golden, and a prolific angel investor. He’s also been in the thick of the distributed response to the coronavirus crisis from day one. Key Takeaways - There’s a clear gap between market-based distributed systems and top-down systems coordinated by the government, but it’s not clear how to fill it. - Twitter is shockingly important as a coordination tool. - The concept of centralized top-down problem statements coupled with distributed bottom-up solutions may be underexplored. Notes Gödel finding inconsistencies in the constitution Jude on Twitter Golden.com - especially their cluster on the virus Feline Coronavirus Gilead - company working on treatment Balaji Srinivasan on Twitter Chris Dixon Idea Maze Article Cambridge Institute for Manufacturing paper on distributed manufacturing - Government as a giant flywheel - Claims and counter claims - How do you figure out what’s going on quickly without a centralized system? - Strategies based on timescales - hybrid strategies - Wave 1 - Ramp up for Wave 2 - How to respond to the [[‘Someone is working on that’]] problem - related - Too much explore vs too much exploit - Prizes for solving problems - Top down problem generation and bottom up solution generation
Intro In this episode I talk to Joel Chan about cross-disciplinary knowledge transfer, Zettelkasten, and too many other things to enumerate. Joel is a professor in the University of Maryland’s College of Information Studies and a member of their Human-Computer Interaction Lab. His research focuses on understanding and creating generalizable configurations of people, computing, and information that augment human intelligence and creativity. Essentially, how can we expand our knowledge frontier faster and better? This conversation was also an experiment. Instead of a normal interview that’s mostly the host directing the conversation, Joel and I actually let the conversation be directed by his notes. We both use a note-taking system called a Zettelkasten that’s based around densely linked notes, and realized that it might be interesting to record a podcast where the structure of the conversation is Joel walking through his notes around where his main lines of research originated. For those of you who just want to hear a normal podcast, don’t worry - this episode listens like any other episode of Idea Machines. For those of you who are interested in the experiment, I’ve put a longer-than-normal post-pod at the end of the episode. Key Takeaways Context and synthesis are two critical pieces of knowledge transfer that we don’t talk or think about enough. There is so much exciting progress to be made in how we could generate and execute on new ideas. 
Show Notes More meta-experiments: An entry point to Joel’s Notes from our conversation - Wright brothers - Wing warping - Control is core problem - Boxes have nothing to do with flying - George de Mestral - Velcro - scite.ai - Canonical way you’re supposed to do scientific literature - Even good practice - find the people via the literature - Incubation Effect - Infrastructure has no way of knowing whether a paper has been contradicted - No way to know whether paper has been Refuted, Corroborated or Expanded - Incentives around references - Herb Simon, Allen Newell - problem solving as searching in space - Continuum from ill structured problem to well structured problems - Figuring out the parameters, what is the goal state, what are the available moves - Cyber security is both cryptography and social engineering - How do we know what we know? - Only infrastructure we have for sharing is via published literature - Antedisciplinary Science - Consequences of science as a career - Art in science - As there is more literature fragmentation it’s harder to synthesize and actually figure out what the problem is - Canonical unsolved problems - List of unsolved problems in physics - Review papers are: Hard to write and Career suicide - Formulating a problem requires synthesis - Three levels of synthesis 1. Listing citations 2. Listing by idea 3. 
Synthesis - Bloom’s taxonomy - Social markers - yes I’ve read X it wasn’t useful - Conceptual flag citations - there may actually be no relation between claims and claims in paper - Types of knowledge synthesis and their criteria - If you’ve synthesized the literature you’ve exposed fractures in it - To formulate a problem you need to synthesize, to synthesize you need to find the right pieces, finding the right pieces is hard - Individual synthesis systems: - Zettelkasten - Tinderbox system - Roam - Graveyard of systems that have tried to create centralized knowledge repository - The memex as the philosopher’s stone of computer science - Semantic web - Shibboleth words - Open problem - “What level of knowledge do you need in a discipline” - Feynman sense of knowing a word - Information work at interdisciplinary boundaries - Carol Palmer - Different modes of interdisciplinary research - “Surface areas of interaction” - Causal modeling in the Judea Pearl sense - Sensemaking is moving from unstructured things towards more structured things and the tools matter
In this episode I talk to Anna Goldstein about how the ARPA (Advanced Research Projects Agency) model works and what makes it unique. We focus on ARPA-E: the Department of Energy’s version of DARPA that funds breakthrough energy research. Anna is a Senior Research Fellow at the University of Massachusetts Amherst and the author of the paper “Funding Breakthrough Research” that systematically breaks down how the ARPA model works based on research at ARPA-E. Anna is full of insights about the ARPA model and innovation systems in general. Key Takeaways Different innovation systems depend on empowering individuals and taking risks, but shift around who is empowered and when the risk is taken on. It’s almost impossible to tell how well an early-stage high-risk system is doing. More Resources Anna's Personal website Anna on Twitter Funding Breakthrough Research - the paper we reference often Howard Hughes Medical Institute ARPA-E DARPA
In this episode I talk to Jason Crawford about his work on the history of progress, funding and incentivizing inventions, ideas behind their time, and more. Jason is the author of the Roots of Progress blog, where he focuses on telling the story of human progress in an amazingly accessible way. Key Takeaways Funding *structures* are understudied as a progress-enabling mechanism *Why* inventions happen is not so straightforward as we might think Culture may matter more than we think for building the future and there are concrete things we can do to build a culture of progress Links Roots of Progress Posts Smallpox - The history of smallpox & the origins of vaccines Charting progress Six threads of technology Arsenic as a pesticide Other Jason Appearances Palladium Podcast with Jason that touches on the philosophy behind progress studies Random Anki and memorizing - Augmenting Long-term Memory Ideas behind their time - Ideas Behind Their Time - Marginal REVOLUTION Tyler Cowen talking about bricks - My Conversation with Mark Zuckerberg and Patrick Collison - Marginal REVOLUTION
In this episode I talk to Eli Velasquez about creating startup ecosystems, commercializing research, especially when it's not necessarily venture-backable, and how the US government thinks about startups. Eli is the head of Venture Development at VentureWell - a nonprofit organization that funds and trains faculty and student innovators to create businesses. VentureWell helps run I-Corps, which I talked to Errol Arkilic about in Episode 15. Currently, Eli runs all over the world helping create fertile ground for startup ecosystems, and in the past he's worked with intellectual property both in industry at Boeing and in academia at Texas Tech. Basically he's working on meta-meta innovation: creating new ways to make places where it's easier for people to create new things. Major Takeaways Too much government aid can turn companies into zombies because their customer becomes the grant-giver instead of money-paying customers. At the end of the day ecosystems happen because people's mindsets change Bringing a technology to market takes, on average, more than five years. Resources Venturewell
In this episode I talk to Bill Janeway about previous eras of venture capital and startups, how bubbles drive innovation, and the role of government in innovation. Bill describes himself as a "theorist-practitioner": he did a PhD in Economics, was a successful venture capitalist in the 80's and 90's with the firm Warburg Pincus, and is now an affiliated faculty member at Cambridge and a member of several boards. Key Takeaways Bubbles have arguably been the key enabler of infrastructure-heavy technology. Venture capital may be structurally set up to only be useful for computing and biotech. Most technology that venture capital invested in was subsidized at first by the government in one way or another. Resources Doing Capitalism in the Innovation Economy VC: An American History Wikipedia article on Bill NYT Article on Fred Adler from 1981 Bill's Website Bill on Twitter
In this episode I talk to Mark Hammond about how Deep Science Ventures works, why the linear commercialization model leaves a lot on the table, and the idea of venture-focused research. Mark is the founder of Deep Science Ventures, an organization with a fascinating model for launching science-based companies. Mark has many crisply articulated theses about holes in the current system by which research becomes useful innovations and what we might do to fill them. Key Takeaways: There are many places where innovation is slow and incremental because everybody is focused on individual pieces: batteries are a great example here. The idea that deep/frontier/hard tech companies are riskier and take longer to provide returns may be more grounded in popular perception than in fact The factors that make translational research so expensive may not be inherent but instead driven by administrative overhead and the fact that much of it is pointed in the wrong direction. Resources Deep Science Ventures Mark on Twitter (@iammarkhammond) Systematised ‘quant’ venture in the sciences. LifeSciVC on biotech returns
Alexey Guzey is an independent researcher focusing on how to systemically increase the rate of biology discoveries and the idea that reviving the patronage system may be a way to do that. We spend most of our time talking about the project he's been working on for the past year but also touch on some of his thinking around connecting with people, which he's written about extensively. Key Takeaways Most people doing biology research are embedded in a system that incentivizes incremental consensus steps and divides researcher time There are some institutions that stand at least partially outside of that system - Calico and Janelia being two examples Maybe we should be supporting more crackpots Resources Alexey's Essay: Reviving Patronage and Revolutionary Industrial Research Followup: How Life Sciences Actually Work: Findings of a Year-Long Investigation Alexey on Twitter:@alexeyguzey Alexey's Website HHMI Janelia Calico Andrew York Ronin Institute Emergent Ventures Phillip Gibbs - crackpots who turned out to be right
Cindy Wu and Denny Luan are the founders of experiment.com - a platform that allows anybody to request funding for a science project and anybody to fund them. It's fascinating because it stands completely outside of the grant funding and publication system that drives most science today. In this podcast we discuss how the current system prevents the creation of new fields, why science communication may be even more important than science funding, and new models for company governance. Key Takeaways The incentives built into the grant system make it hard for new fields to emerge Arguably, changing how science is communicated might have the biggest impact on our knowledge creation system. The concept of ownership and governance of companies being two separate axes that need to be considered separately Resources Experiment.com The Science of Science Funding DIY biohackers trying to see infrared with vitamin A Innocentive Public benefit corporation Purpose Trusts Wellcome Trust/Foundation Employee Owned Breweries Topics Consolidation and risk aversion in science Hard to fund research outside of funding buckets Field politics Hard for younger scientists to get funding NIH budget stayed the same, proposals have doubled Government funds what's popular CERN is a consortium of companies doing funding Only real solution is disseminating knowledge DIY biohackers trying to see infrared with vitamin A Digging up dinosaurs No money to prepare dinosaur bones Incentives for science Brewery example of employee owned corporation New models for funding businesses Ownership and Governance Axes Making scientists stakeholders in Danger of masking philanthropy as investment and vice versa Would VCs ever fund something that's not purely for profit New Company structures
In this episode I talk to Errol Arkilic about different systems involved in turning research into companies. Errol has been helping research make the jump from the lab to the market for more than fifteen years: he was a program manager at the National Science Foundation's (NSF) Small Business Innovation Research (SBIR) program, where he awarded grants to hundreds of companies commercializing research. He started the NSF Innovation Corps, a program that gives researchers the tools they need to make the transition to running a successful business. Currently he is a partner at M34 Capital, where he focuses exclusively on projects that are being spun out of labs. Seeing the often rocky tech transition from so many sides has given him a nuanced view of the whole system. Key Takeaways While there are some best practices around commercializing research, like business model canvases, many pieces like assembling a team and finding complementary technologies are still completely bespoke. The commercial value of research is a tricky thing. Some is valuable, but not quite valuable enough to form an organization around. Other research could be incredibly valuable if the world were in a slightly different state. Different approaches are needed in each situation. The mental model of MIST vs TIMS - market in search of technology and technology in search of market. Links M34 Capital The SBIR Program Business Model Canvases Errol on How the NSF Works Pasteur's Quadrant NSF Innovation Corps Topics What is the pathway to commercialization How do you have an iterative process when people don't know what they want What do the best researchers do to pull out core problems to work on? How do you address the tension of people wanting to apply their hammers? What are examples of people who have applied very specific technologies? How do you assemble a team around a technology? How do you systematize assembling teams? 
How do you systematize finding technologies that can plug a technological hole? What do you think about patents? Patents, trade secrets, Technology that isn't venture fundable Valuable ideas that aren't valuable enough to pursue Systematizing finding whether value could be harvested Where is the role of SBIRs in today's world SBIR decision making process Legendary SBIR successes Push vs. Pull out of lab How do you find MIST projects Are there labs in unintuitive programs Next steps outside of local ecosystems? Does any new innovation need a champion? What should people be thinking about that they're not? TIMS vs MIST
In this conversation Sam Arbesman and I talk about unlocking cross-disciplinary innovations, long term organizations, combinatorial creativity and much more. As you might expect from someone with generalist thinking as a main area of interest, Sam has out-of-the-box insights in a ton of domains and he's amazing at capturing them in tight concepts like "knowledge mining" and "jargon barriers." By day Sam is the Scientist in Residence at Lux Capital. Don't cite me on it, but I think he may be the only person with that job title in the world. In the past he's done research in complexity science and history and the two of them combined, written books, and worked in nonprofits. Key Takeaways The concept of knowledge mining - recombining existing knowledge to create new knowledge. Unintuitively, video games may secretly be some of the most powerful cross-disciplinary research labs. There are tactics you can use to generate cross-disciplinary creativity by cultivating a bit of randomness in your life. Resources T-Shaped Individuals Sam on Twitter Sam's Website Small World Networks Complexity Undiscovered Public Knowledge (and a 10-year update) Spore Kongō Gumi - the 1400 year company The Red Queen Hypothesis Other content from Sam: https://fs.blog/samuel-arbesman/ https://25iq.com/2016/03/12/richard-feynman-and-charlie-munger-expert-generalists/ Topics Favorite examples of combinations of ideas via generalists Ref: Small world networks paper T shaped individuals Attempts towards systemic cross-discipline idea sharing Don Swanson - undiscovered public knowledge Jargon Barriers Jevin West (University of Washington) - topographical map of fields Combinatorial creativity Systems for increasing the rewards for broad thinking vs. 
specialized thinking Need to define complexity science Computer games as a place that rewards generalist research Meta portfolio for generalist institution Self-sustaining institutions and criteria for them Reinventing selves Or provide something people always want Japanese construction company that lasted 1500 years IBM original machines The Red Queen Hypothesis wrt Organizations Model that you need massive innovations to sustain growth (look up professor) Does the VC funding research paradigm constrain what can exist? Wired magazine researcher - "everyone loves the big idea that changes the world, but what about the ones that make a difference?" The importance of different approaches to making things exist How do you know if small ideas and tweaks in complex systems have intended effects? Promoting randomness and optionality What are tactics for increasing randomness and optionality? Randomly reminding about books Go to crazy different conferences
In this episode I speak to Matt Clifford about talent investing, how big long term projects can start small, and financial innovations. Matt is the CEO and co-founder of Entrepreneur First. Entrepreneur First, abbreviated as EF, is a fascinating system. It starts with cohorts of around fifty to a hundred ambitious, talented people who want to start companies but might not even have an idea to build around. Key Takeaways The mental model of predictable vs. unpredictable value. The idea that hypothesis testing speed predicts success even in projects where you won't see real results any time soon. The idea of money as a commodity that fuels innovations Background on EF (context for some of the podcast) EF then helps cohort members pair up into teams and get companies off the ground. Matt and Alice Bentinck started EF in 2011 and the history is kind of a crazy story: it started as a non-profit and now has raised a massive fund from LPs. One of the highlights in the story that really put EF on the map was a company named Magic Pony that sold to Twitter for an unconfirmed 150 million dollars eighteen months after starting at EF. There are links to Matt talking more about both the structure of EF and EF's history in the show notes. EF is a fascinating innovation system because it challenges many ideas that have basically become gospel in the startup world - everything from "if someone isn't willing to start a company in a garage with no income they don't have what it takes" to "only founding teams with a long working relationship can succeed." 
Resources Matt on Twitter (@matthewclifford) Matt's weekly newsletter EF on Wikipedia Magic Pony exit referenced in podcast Matt speaking at Startup Grind about how EF works Ideas Capital as a resource like any other Adverse selection The best CEO of a deep tech business often doesn't know the best CTO of that business Predictable value vs Unpredictable value Predictable market does not necessarily mean existing markets Basically logic-able innovations Job as founder is to lay out 18 month roadmaps Think of VC as a financial product Providing optionality to the founder Income sharing, with optionality The power of finance innovations Misalignment of incentive between VCs and entrepreneurs because VCs have a portfolio
In this episode I talk to Evan Miyazono about tackling metaresearch questions, how novel physical phenomena go from "oh that's cool" to devices that harness cutting edge physics, and how we could better incentivize the creators of innovations where traditionally it's hard to capture value, like open-source software and early-stage research. Evan is a research scientist at Protocol Labs where he helps lead their research efforts - coordinating researchers both inside and outside the company. Protocol Labs is best known for Filecoin: a blockchain application for distributed storage. At the same time they also have a much larger mission that we get into in the podcast. Before joining Protocol Labs, Evan did his PhD at Caltech where he worked on turning crazy physics into practical devices for cryptography. Key Takeaways There might be ways to demystify both intuition and "big H Hard" research in order to improve our systems for breakthrough discoveries. It's still super speculative but worth thinking about. Observations about physical phenomena and the world are at the core of many innovations, but most of the process is driven from the top down by the problem, rather than bottom-up by the solution. On top of that, the process of solving the problem can actually feed back and increase our understanding of the underlying phenomena. Finally, there might also be new legal structures we could put in place to encourage more open-source development and fundamental research by allowing people to access more of the value they create in those activities. Resources Protocol Labs Evan on Twitter A quick talk on Protocol Labs research Metascience Cloud Seeding - From the abstract: "The intent of glaciogenic seeding of orographic clouds is to introduce aerosol into a cloud to alter the natural development of cloud particles and enhance wintertime precipitation in a targeted region. ... 
Despite numerous experiments spanning several decades, no direct observations of this process exist." SourceCred - a tool to help open source contributors capture the value of their contributions. Evan on Google Scholar if you want to go really deep. Try saying "Coupling of erbium dopants to yttrium orthosilicate photonic crystal cavities for on-chip optical quantum memories" three times fast.
In this episode I talk to William Gunn about the guts of science publishing, changing incentives in science, and the relationship between publishing and funding. William is currently the Director of Scholarly Communication at Elsevier. He joined Elsevier when they acquired Mendeley, which is a platform designed to help researchers share papers and notes about them. Before that he was an academic researcher himself and, for a time, a professional chef. Key Takeaways Science publishers aren't idiots - they realize that the internet is making anything free that can be free and are trying to adjust their business models accordingly. The metrics we use to judge research innovation are starting to shift, and interestingly that is speeding up the "speciation" of fields. Science has shifted more towards "big science" - big teams with big funding doing big experiments. However, there may be room to discover many more things if we put more focus on smaller projects. Resources William on Twitter @MrGunn Mendeley - a platform for sharing science Are Ideas Getting Harder to Find? Diminishing Returns from Science
In this episode I talk to Torben Nielsen about creating new products and systems in health insurance. We touch on the tension between insurers' well-founded risk aversion and trying new things, the process of insurance companies working with startups, and how to even know if things are working. Torben runs programs at Premera Blue Cross with both internal teams and external startups to build new products and systems. Premera is one of the largest health insurers in Alaska and the northwest US, so even small changes can impact many people. Torben spent many years working in healthcare and built his tech chops at Xerox and Lego. Much to my chagrin, we spent zero time talking about the latter because of time constraints. His official title is "VP of Innovation" which I do poke at a bit in the podcast. Outro My major takeaways I'm starting to sound like a broken record on this, but in health insurance, like so many places, the process of creating new products and systems ultimately hinges on the opinion of a few decision makers. Startups trying to work with health insurance providers are often frustrated by the providers' speed. This conversation helped unpack why the providers move slowly and what they're trying to do to change that - I hope it works! Resources https://www.linkedin.com/in/torbenstubkjaernielsen/ https://twitter.com/TorbenSNielsen https://en.wikipedia.org/wiki/Premera_Blue_Cross https://www.premera.com/Premera-Voices/All-Posts/Healthcare-must-innovate/ Questions What does a VP of Innovation in a large org do? What are your incentives? Incentives in the system? Who are the players in the process of innovating within healthcare? Why is healthcare slow to change? I assume there must be good reasons. How would you deal with a situation where an innovation challenges the core of the company? Conflicts? Premera test kitchen How do you assess/quantify risks? What are expected ROI timelines? 
How should startups engage in partnerships in the healthcare ecosystem? Hard Question. Are there moral limits on cost per treatment / monopolies to drug therapies? What have innovations in health insurance looked like in the past? Let's talk about the elephant in the room: from the startup world, working with insurance companies is notoriously dangerous because of getting stuck in pilots. Insurance companies are inherently a hedge against risk. Innovation has built in risk. How do you manage this conflict? Where do you see the biggest areas for innovation?
In this episode I talk to Dr. Robert McNutt about medical innovation, medical research and publishing, and patient choice. Robert has been practicing medicine for decades and has published many dozens of medical research papers. He is a former editor of JAMA - the Journal of the American Medical Association. He's created pain care simulation programs, run hospitals, sat on the national board of medical examiners, taught at the University of North Carolina and Wisconsin schools of medicine, and published dozens of articles and several books. On top of all of that he is a practicing oncologist. We draw on this massive experience with different sides of medicine to dig into how medical innovations happen and also less-than-positive changes. It's always fascinating to crack open the box of a different world, so I hope you enjoy this conversation with Dr. Robert McNutt. Major takeaways The practice of medicine has changed significantly over the past several decades - there has been an explosion of research and specialization. This proliferation has led to many innovations, but has also decreased the ratio of signal to noise in medical advice both for doctors and patients. For another perspective on the explosion of research, listen to my conversation with Brian Nosek. While it would be amazing to have a process that was based purely on very strict scientific method, health is so complicated that the ideal is impossible. That means, like so many imperfect systems, that ultimately so much comes down to human judgement. Notes Robert's Blog Robert's Book Tamoxifen Case Study Observational Trials Dictaphones
In this episode I talk to Craig Montouri about nonprofits and politics, specifically their constraints and possibilities for enabling innovations. Craig is the executive director at Global EIR, a nonprofit focused on connecting non-U.S. founders with universities so that they can get the visas they need to build their companies in America. Craig's perspective is fascinating because, contrary to the common wisdom that innovation happens by doing an end run around politics, he focuses on enabling innovations through the political system. It's an eye-opening conversation about two worlds I knew little about, so I hope you enjoy this conversation with Craig Montouri. Key Takeaways: There is a lot of valuable human capital and knowledge left on the table, both by the US immigration system and by the university tech transfer system. Nonprofits need to find product-market fit just as much as for-profit companies making products. And just as in the world of products, there's often a big difference between what people say their problems are and what their problems actually are. Political innovation is different from other domains for several reasons: it has both shorter and longer timelines than other domains, and, in contrast to the world of startups, politics needs to focus on downside mitigation instead of maximizing upside. Resources Global EIR Craig on Twitter (@craig_montouri) NPR piece on Global EIR
Overcast Link. My guest this week is Mason Peck, professor of aerospace and systems engineering at Cornell University and former Chief Technologist at NASA. Previously Mason was a Principal Fellow at Honeywell Aerospace, and he has an extremely colorful history that we get into during the podcast. The topic of this conversation is how NASA works, alternatives to the current innovation ecosystem, like crowdsourcing and philanthropy, and also the interplay between government, academia, and private industry. Key Takeaways You can have an organization full of smart, motivated people that doesn't produce great results if all the incentives are set up to avoid risk. There's been a shift in where different parts of the innovation pipeline happen: more has shifted to universities and startups from larger companies and the government, but the systems of support haven't caught up. Taking a portfolio approach to technology and innovation is a powerful concept that we don't think about enough. Links Mason’s Lab (Space System Design Studio) Website Mason on Twitter (@spacecraftlab) The Office of the Chief Technologist at NASA NIAC (NASA Innovative Advanced Concepts Directorate) Breakthrough Starshot Mars One Transcript Intro [00:00:00] In this podcast I talk to Mason Peck about NASA, alternatives to the current innovation ecosystem like crowdsourcing and philanthropy, and also the interplay between government, academia, and private industry. Officially Mason is a professor of aerospace and systems engineering at Cornell University, but I think of him as Cornell's space exploration guy. He's done research on everything from doing construction in space using superconductors to making spacecraft that can fit in the palm of your hand and cost cents instead of millions of dollars. From 2011 to 2013 he served as NASA's Chief Technologist; don't worry, we'll get into what that means in the podcast. Before becoming a professor, Mason was a Principal Fellow at Honeywell Aerospace.
What is a Chief Technologist Ben: You spent several years as the chief technologist at NASA. Can you explain for us what the chief technologist at NASA actually does? I think it's an unusual role that many people have not heard of. Mason: Sure. NASA's [00:01:00] chief technologist sets strategy and priorities for NASA's, let's call them, technology investments. It's helpful to think of it in an investment context, because it really is that: what you're doing is spending money, taxpayer money, and you want to be a responsible steward of that money. You're spending that money on something like a bet that you hope will pay off in the future. So taking a portfolio approach to that problem probably makes sense. At least it made sense to me. I was the chief technologist for NASA for over two years, starting at the end of 2011 and continuing a little bit into 2014, but mostly it was the two years 2012-2013. And I may just offer, it was a wonderful time to be doing that. Difficult from the standpoint of the budget, there were a lot of challenges at that time budgetarily, but good from the standpoint of lots of great support from the White House. The Office of Science and Technology Policy when I was there was particularly aggressive and committed and [00:02:00] passionate about doing what they thought was best for the nation, and the degree of energy and expertise of some of those people made it a wonderful ecosystem to work in. How long term were bets? Ben: Awesome. And going off of that portfolio approach with the bets, how long term were those bets? What was the time scale on them? Mason: In the portfolio approach that we tried to take, some of those bets were the long game, I suppose, you know, 20 years out. There was a program known as NIAC, the NASA Innovative Advanced Concepts program, which placed bets on, to keep using this metaphor, ideas that probably would pay off in a couple of decades.
And by the way, that seems like a hopelessly long time, but for spacecraft that's maybe a generation. In fact, spacecraft generations in the technological sense almost mirror human generations: if you think of a human generation as being 20 years, you could [00:03:00] probably look across the history of space technology and spot these roughly 20-year slices where things seem to happen. So some of the investments are definitely 20 years plus; others are as near term as possible. But it's not just the duration of time, that is, how long it would take for these investments to pay off. It was also about the type of investment, that is, the ways in which technology was done. Different types of tech investment So, if I can go on about that briefly: it's one thing to solicit ideas from the traditional offerors of technology, what DARPA calls the performers. You go to a Lockheed Martin or a university, Cornell University just for one example, and you ask for a certain result, and they can probably deliver that kind of result. Then there are the non-traditional offerors. For example, at NASA we would start these challenges or competitions. [00:04:00] The idea was to bring in non-traditional providers, people who normally wouldn't have bothered, or wouldn't even have been considered qualified, to solve a NASA problem. But through a challenge, like a coding challenge, a hackathon, or maybe a more substantial dollar-amount prize, offer a million dollars for an electric aircraft or something, through that mechanism you bring in different kinds of people to solve the problem. And that's not the only other dimension. Another dimension is whether the problem you're solving is a known problem, or something where you feel like, if you build it, they will come.
That phrase is the kiss of death to investment, right? If you say something like, I've got this great idea, no one's asking for it right now, but trust me, if we build it somebody will buy it, that is not what a venture capitalist, for example, wants to hear. However, it is a distinct type of futurism, right? Mission Pull vs Mission Push There's what we call pull and push. Mission pull refers to [00:05:00] when we have a mission at NASA, let's say returning samples from the surface of Mars or sending humans to a distant star. I mean, these aren't necessarily necessities, but if they are the mission, then they demand certain technological solutions, certain innovations. Or, if you come up with an idea that no one's asking for, is there value in that? I'll give you the example of, say, spacecraft that are the size of your fingernail. You probably know, Ben, that this is a topic we've been working on at Cornell. I guarantee you no one's asking for that; I can prove that by virtue of how many proposals have been turned down. The basic fact is that there are uses for this now. Maybe there aren't enough that are compelling, and I'll accept that, but the reason no one's asking is because no one knows it can exist, and that's not a reason to say no. So again, think of the mission-pull versus what we call technology-push direction: if we can come up with a solution that people maybe could use, [00:06:00] there's value in working on that. Then think of the dimension, as I said before, of different kinds of offerors, that is, the sources for technology. And then of course there's the timeframe dimension. So there are at least three dimensions that you might think of for the portfolio of technology investments. At least that's how we took it at NASA, and maybe that helps in other environments too. Non-traditional vs Traditional Offerors Ben: Yeah. Are there some good examples of non-traditional offerors really succeeding where the traditional offerors did not?
Mason: Yes. Two ways to answer that. One is that some problems are simply not profitable for a lot of companies to even bid on. As an example, a major company might spend a hundred thousand to maybe over a million dollars, maybe multiple millions of dollars, just writing the proposal to a government agency [00:07:00] to do some work, and that's not at all an exaggeration. Now, that's really not the case for a small mom-and-pop company, but for larger companies, say a Honeywell or a Boeing or a Lockheed or some other defense contractor, for sure they spend that kind of money. And that's the total money they spend, let alone the profit they might get, which is maybe on the order of 10% or something. So you've got to really want to do this work to invest the money for a proposal. And for something at the scale of, I mentioned NIAC before, the NASA Innovative Advanced Concepts program, something that small, it's simply not worth a large company writing a proposal when they're probably not even going to get the cost of the proposal back. Now, there may be other reasons, but let's leave it at that for a second, and let's think about the other way of answering that question: what about people who just want to work with NASA? There are people out there who are passionate about what NASA does. And you'll be hard pressed, by the way, to find other government agencies, and probably even other businesses, with the brand loyalty, if you like, or the reputation that NASA has. So I'll [00:08:00] give you the example of Tom Ditto; Ditto was his last name. He's had a couple of NIAC awards over the years. The first one was, I think, in 2005-ish. He had this brilliant idea for a new kind of spectrometer. And, I know you probably know, but not everyone knows, a spectrometer is a device that looks at light and finds out what colors are in it.
Looking at the spectrum of, let's say, light reflected off of a rock will tell you about its chemical makeup, so a spectrometer is a useful thing for astronomy. Well, Tom Ditto came up with the idea of using diffraction gratings, that colorful rainbow-mirror-looking stuff that was all the rage in the 1970s, to make a spectrometer. And it would have been a very long spectrometer, in fact maybe even on the surface of the Moon, a kilometers-long spectrometer. Arguably a crazy idea, but absolutely brilliant, and it solved a problem that NASA didn't even know it needed to solve. Once again, [00:09:00] a problem that no Lockheed would propose, but a Tom Ditto would. So Tom just wanted to work on this, and he had a passion for it. He solved the problem, and that was a cool example, and there are others just like it. So in an environment where people can contribute, I guess I'll say, out of the goodness of their heart, or because they like the idea of the challenge, or maybe even for a relatively small prize, you'll get different kinds of solutions, and that's an interesting possibility. What would you do to unlock grassroots innovators? Ben: How would you encourage that even further? Say you controlled the entire United States government. What would you do, beyond NIAC, to sort of unlock those people? Mason: To clarify for your listeners: I have no plans to take over the government. Yes, I'm willing, if someone would like to offer me the job, but that's not my forte. Well, let me go back to the example of prizes and challenges. This was a big deal within the Obama administration. [00:10:00] They were faced with the awkward problem of having lots of great ideas and basically no money to work with, in a Congress that was not supportive. (Prizes and Challenges) So what do you do? Well, you open up these opportunities to the nation, maybe even to the world.
So if you can come up with a way of articulating the value of contributing, you know, in a way that makes the public, or maybe just a few individuals, want to help, depending on that altruistic nature that some people have, that's one way to solve a problem. But it doesn't work in all cases. So rather than just offering a challenge where, if you do it, you get a medal, what about offering a prize? Prize competitions are interesting because, first of all, the organization that offers the prize doesn't necessarily spend money until it gets a result. For example, the Orteig Prize, remember this one? This was the one that encouraged transatlantic flight. A $20,000 prize, [00:11:00] and then you win it and you pay off your mortgage. There have been others: the builder of the Gossamer Albatross said that it was a way for him to pay off his mortgage. So there are some folks who are motivated by the prospect of a prize, and again, from the funder's perspective, you're not going to pay until and unless you get the solution you want. So that's interesting. The other interesting feature about crowdsourcing a solution like that is that you might get a thousand people applying to solve your problem, and you get the best one out of a thousand. Compare that to a typical, since we're talking aerospace, a typical aerospace contracting opportunity. Say NASA were to offer millions of dollars for a new rocket. You're going to get dozens, maybe, of responses, of which a half dozen, maybe, will be credible, and it's going to be the usual suspects. It'll be Boeing and Lockheed and Orbital Sciences and maybe a few others.
Well, what if one of those thousand solutions is the one you really want? Offering an [00:12:00] opportunity that solicits such a large number of potential inputs really allows you to pick that best one, the two-sigma, three-sigma solution, which is an exciting possibility. So that's another way to go. How do you pull out good ideas when they take resources? Ben: To riff on that, what are other good ways of judging a solution before it requires a large amount of investment? With this crowdfunding I can imagine that you would get a lot of people [00:13:00] with ideas, and you'd be able to go through the ideas and see if one immediately stands out as better than the rest or is very clearly feasible. But often you don't actually know whether something is a good idea until you've tested it and poured some resources into it, and people might not have those resources. So is there any trick to pulling out those ideas? Mason: One interesting fact about prize competitions is that, pretty clearly, you have to pitch them at the right dollar amount. You know, if you offer ten bucks, no one's going to get in. At the other extreme, say the prize is twenty billion dollars: the investment necessary to win that twenty billion dollars might be so prohibitive that you're only going to get a few players, and once again probably the usual suspects. For instance, let's say we offered twenty billion dollars to whoever first built a hotel on the moon. It sounds like an interesting idea, maybe, but to develop that infrastructure, that capability, is going to cost billions to begin with, and maybe someone will win the twenty-billion-dollar enterprise, but it may not get you what you want. So first of all, the scale of the prize matters. But let me go back to this portfolio idea we were talking about before. If you have the freedom to manage a portfolio of technology investments, your opportunity then is to think about those high-risk investments.
Think about high-risk investments just the way you would in your own investment portfolio: as a way to pick winners. [00:14:00] You invest a little bit in the high-risk stuff across a large board, and maybe a few of them turn out to be winners. Well, then maybe you invest a little bit more in those, and so on, as in the case of NIAC, right? Let's say we liked Tom Ditto's spectrometer so much that the $100,000 he got for building it, which is not peanuts, by the way, but is still small from an aerospace perspective, that hundred thousand dollars is a small investment, but in a subsequent phase maybe he gets ten times that amount of money. Maybe he starts a small company. I think his company is something like the Ditto Tool and Die Company or something like this; maybe the Ditto Tool company gets a factor-of-ten larger investment in a follow-on phase. In fact, maybe even a subsequent phase could be a hundred times as much. So as time goes on, as the maturity of the technology increases, as you continually refine the portfolio, allowing the failed investments to just sort of fall by the wayside, you can concentrate on the ones that are [00:15:00] successful. Which is, first of all, a reason why you have to invest in some high-risk stuff: you've got to take some risks. And then second, if you have a portfolio approach, you have the opportunity to use statistics to your benefit. If, let's say, I'm NASA and I invest in a hundred crazy ideas every year, and only one or two of them pan out, well, that's great. Those one or two are probably something I really care about. How do you incentivize innovation within NASA? Ben: That makes a lot of sense. In a financial portfolio, you measure success by how much money you get back, by your return. There's a number, and you want to maximize that number you're getting back. NASA's portfolio doesn't quite fit into that.
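Mason's "hundred crazy ideas" point is just the arithmetic of independent long shots: even when every individual bet almost always fails, a large enough portfolio makes at least one success very likely. A minimal sketch (the 2% success rate is my illustrative assumption, not a NASA figure):

```python
def p_at_least_one_winner(p_win: float, n_bets: int) -> float:
    """Probability that at least one of n independent bets pays off,
    when each bet succeeds with probability p_win."""
    return 1 - (1 - p_win) ** n_bets

# A single 2% long shot almost always fails, but a hundred of them
# together produce at least one success roughly 87% of the time.
for n in (1, 10, 100):
    print(n, round(p_at_least_one_winner(0.02, n), 3))
```

The independence assumption is doing real work here: if the hundred ideas all fail for the same underlying reason, the portfolio buys much less safety than the formula suggests.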
So how do you measure how well a portfolio is doing? How do you incentivize people within NASA to really push the best innovations forward? [00:16:00] Mason: Yeah, several things going on there. First of all, you've got to take a look at the organization's culture, at how they respond to innovation. My experience with NASA is that it's full of brilliant and committed people. At the same time, there's a tendency for the younger folks to be very forward-looking and, interestingly, for the most senior leadership to be fairly forward-looking. Somewhere in the middle there's a soft spot where people, more than elsewhere, can be (Risk aversion) careerist, that is, not so willing to take risks. They want to keep their jobs; they want to be seen as effective. And taking on risks can be not looked upon well in their part of the organization. So that's tricky, right? You have these different populations in any large organization, and you've got to come up with a way of communicating the value of innovation across the board. That's one of the challenges of making this sort of thing work. I suppose there's a lot more you can say about culture, and I [00:17:00] suppose every culture is a little different, but one of the hardest parts in making innovation stick is to communicate to folks that it's permanent. What I found, again using NASA as an example, and I've also worked with other companies for which this is true, is that there's a tendency to think that these technology investment initiatives, or this innovation initiative, is just the flavor of the day. You know, it's a flash in the pan, or whatever metaphor you like, a temporary state of affairs. So there are people who are afraid that if they go too heavily toward innovation, maybe quit their job of doing program management and set out to become a radical innovator,
the leadership that has been pushing it is going to disappear eventually, things will go back to business as usual, and then they'll be left without a job, right? So there's risk seen in this process of taking on an innovation, because you're not so sure how permanent it's going to be. So, you know, how do you embrace that problem as someone trying to effect change? Just [00:18:00] promising it's not going to go away probably won't convince folks. They've been around long enough; they've seen initiatives come and go. How do you convince them? I wish I had an answer to that, other than to say that it's only through the longevity of an innovation process that people really start to embrace it. And what I'm talking about when I say longevity, I mean really on the order of five-plus years. You really would like to have almost a generation of folks grow up in an environment where that innovation is taken to be the order of the day. Strengths and Weaknesses of each sector Ben: Something that I don't have an answer to, but that I see consistently, is that there are these timescale mismatches, where people's careers are judged in maybe two-to-five-year segments, where if nothing's happened in the past two to five years, people are like, well, what are you doing? And then the innovations take something like, [00:19:00] you know, seven to ten years to really mature. So it's very hard to align those incentives, and I'm always looking for answers around that. You mentioned that you've seen this at a bunch of different organizations. You've literally been in every sector, right? You've been in industry, you've been in academia, you've been in government. Do you have any sense of what role each of them should ideally occupy in an innovation ecosystem, and what strengths and weaknesses each has?
Mason: That's a wonderful question, and probably beyond my ken. But for those of your listeners, and you as well, who want to rewind a little bit to the World War II timeframe, think about this fellow named Vannevar Bush, whom you've probably encountered. Thanks to him and his innovations, we have what we have now, where [00:20:00] universities take on what we call fundamental research, which combines both basic and applied research, and then companies and the government take on the next step, which is implementation into, say, a demonstration or an operational system. That is at least the way it's shaken out; maybe the original intent was a bit different, but that's kind of how it shakes out. And people are fond of pointing to this gap, this chasm, as they call it, between the innovation that happens in universities and the need for near-term profit-making investments in companies, or low-risk, politically safe investments at the level of the government. There's a gap in there, right? And how do you fill that gap? There are organizations like DARPA, the Defense Advanced Research Projects Agency, that are meant to fill that gap, and at NASA we tried creating programs that would fill it, and not surprisingly there are still problems with that. So think of universities, think of companies, think of government: there are clearly different motivations that drive each one of these. [00:21:00] I wonder if there isn't a different motivation entirely that might be more global, more universal. At the moment we don't have it. If we were ever, oh, I don't know, set upon by an alien horde, we might pull together as a nation, as a world, and all contribute a little bit differently to the way things are going. But at the moment, without any obvious cataclysm on the horizon, and some might argue climate change is one, but let's say we don't all agree that there's a cataclysm on the horizon, we're in these silos.
So in universities we innovate in a certain way, at the level of, again, I'll call it basic and applied research. The government innovates, when it works well, at the level of policy; when it doesn't work well, the government tries to solve its own problems using its own expertise, when really, in my opinion, it should be going outside for that expertise. And businesses solve problems in a way that maximizes shareholder value, probably in the relatively near term. These are all [00:22:00] perfectly successful ways of pulling on innovation, but they're not the same, and they do lead to very idiosyncratic solutions. Again, the question is, isn't there something more general and broader? What do you think? What's the correct system? Ben: I think, I mean, you're definitely the one being interviewed, but I think you've identified that gap, and in my mind what it should really be is sort of a pipeline: looking at what needs to be done and who is best incentivized to do it. So, for example, the stuff with very long time horizons and uncertain outcomes, sort of like big-R research, would come from universities with some light support from the government. But then, as soon as that needed to be pulled together into something that required a lot of [00:23:00] coordination and a lot of money, then perhaps the government or a company would come in, depending on what the real outcome would be. But, you know, the whole point of all of this is to try to figure out a real answer, and I don't have a good one at the moment. (Shift in funding methods) Mason: Yes, and I'm happy to keep thinking about this. The other thing I guess I could offer is that the way we fund research in this country has changed over the years. There was a time, and it might surprise some of your listeners to think about this, there was a time when, as a university researcher,
you probably didn't write any grant proposals, or if you did, it was one every few years. These days, most people in, say, my position, working at a well-regarded research-intensive university, write 10 to 20 individual research proposals a year, of which a small fraction are funded, probably less than 10%. And I think I'm actually doing pretty [00:24:00] well, frankly, with that ten percent. There are folks who go years without getting any proposal funded despite submitting hundreds of grant proposals. The amount of time involved in writing these proposals gets worse and worse every year as the money gets tighter and tighter. And, you know, what do you do? One answer is that we've morphed toward this model, and maybe it's not what we all want. In a previous age, when the government more directly supported universities and research was done regardless of funding, you got different outcomes. But that was a relatively short period of time in our history. If you go back a little farther, say to the 19th century and before, for the most part research was done either by the independently wealthy or by people with some kind of philanthropic backing. You know, name your favorite European potentate: the prince of whatever would fund your research into discovering new molecules, and that was just the way it worked. So these models have changed [00:25:00] radically over the years, and the interesting question is where this might go if, in fact, something like crowdsourcing, or the ubiquity of information and access to it through the internet, really matures to inform how we do research. I do not know what the future holds. I know you've been thinking about these sorts of things. But it's an interesting question what the research infrastructure or ecosystem looks like when we can vote up or down research projects.
Or maybe when crowdfunding can be the basis for what research gets undertaken. It may not be good, but it's another way to do it. How good is crowdfunding Ben: Would you trust a large population of people to allocate research dollars? I ask this based on the fact that you see a lot of these articles shouting in outrage that the government is funding someone to, I don't know, walk around [00:26:00] and look at snails or something ridiculous. But then you could make the argument that, well, you look at snails enough and you find this one snail with some chemical compound that could be synthesized into medicine. So would you trust crowdfunding? What would that become? Mason: I probably wouldn't trust them as far as I could throw them. I guess another way to think about it is this: I probably would not trust the crowd to vote on one thing, but I might trust them statistically, if we could fund many things out of such a population, and that's where, again, the benefit of large numbers comes in. Even though I think the public might get some things wrong from time to time, and may be somewhat credulous and believe strange things, on the whole they're strangely predictive. I'll give you another quick story about that. Years ago, probably 10 years ago, DARPA had this interesting idea. I don't remember exactly who at DARPA, but the idea was this: [00:27:00] if you look at how crowdsourced information works, it seems surprisingly accurate and predictive. So what if we created a stock market for terrorist attacks, and we had people actually place bets, you know, invest in futures,
terrorist attack futures. The outcome would be that people, betting to maximize the return on their investments, would use all the information that we know is out there, and would identify the most likely terrorist outcomes, the outcomes associated with, say, a continually rising stock: something out there is motivating people to think that that's the likely outcome. Now, two issues. Number one, of course, it is incredibly crass and in extremely poor taste to fund such a thing, [00:28:00] and DARPA was, I think, a little tone deaf offering that as a project, because it was very quickly jumped on by the media: I can't believe how horrible these people are, what were they thinking? But they're not wrong in that the right kind of crowdsourcing can in fact be almost prescient, almost telepathic or psychic, in its ability to predict some things, but not all things. And that's where, as I say, you want to have a managed portfolio of this stuff. So every now and then, maybe more often than not, the crowd will be wrong. But if you give them the chance to run lots of different things, you'll both encourage a diversity of opinion, which leads to different kinds of solutions, and that's a good thing, and probably get a statistical draping over all the different possibilities, so that eventually the right answer can come out. So I think those two ingredients probably could make it work, but I'm very speculative about this right now. And again, the DARPA story is an interesting cautionary tale, because as soon as it became public, it went away in a [00:29:00] hurry. What happened to grants? Ben: Just to go back, you mentioned that until recently people doing university research only had to write one grant every few years. Was that because the grant sizes were much larger? Were they getting money from outside sources? Why was that? What changed?
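Mason's point that a crowd he wouldn't trust on any single question can still be "strangely predictive" in aggregate is the classic large-numbers effect: individually noisy guesses average toward the truth. A toy simulation (the true value, noise level, and the unbiased-error assumption are all mine, purely for illustration):

```python
import random
import statistics

def crowd_error(n_guessers: int, trials: int = 1000, seed: int = 1) -> float:
    """Average error of a crowd's mean guess at a true value of 100,
    where each guesser is independently noisy (normal noise, sd = 20)."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        guesses = [rng.gauss(100, 20) for _ in range(n_guessers)]
        errors.append(abs(statistics.mean(guesses) - 100))
    return statistics.mean(errors)

# A lone guesser is off by roughly 16 on average;
# a 400-person crowd's average guess is off by under 1.
print(round(crowd_error(1), 1), round(crowd_error(400), 1))
```

The catch, which matches Mason's hedging, is that the averaging only helps when individual errors are independent and unbiased; a crowd with a shared bias stays wrong no matter how large it gets.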
Mason: Yeah, that's interesting. Cause and effect are a bit muddled, and you can find other people who can probably better explain this history, but my quick version is something like this. The kinds of research that were done in universities were much more skewed toward the basic end of things: pencil-and-paper theoretical development. And also, let's just be frank, we knew less than we know now, so coming up with new stuff is harder, maybe, than it was before. I don't know if that's fair, but I think that's part of it. So first of all, we were solving different problems. Right now, though, we are taking on a lot of the problems that actually used to be done in industry. The famous example, of course, is Bell [00:30:00] Labs, out of which the transistor came. These days the transistor would be developed within a university, and to develop a transistor, or something analogous to it, requires significant infrastructure investments, not just pencil and paper. So even though the theory behind semiconductors came out of universities, the actual practice of it came out of Bell Labs, and there have been plenty of other examples like this. So I think industry has skewed away from doing research. Although there's a bit of a motion back toward it now, it's nowhere near what it used to be, and necessarily universities have taken it on, not out of a sense of obligation, but because there was a void and they rushed to fill it. But to fill it, we need more money. So where does the money come from? It either comes from profit centers or it comes from the government, and with the government reducing tax income, and research investments in trouble since the 1980s, there's a new kind of gap: the research gap. For the most part industry is not doing it, and when universities do do it, [00:31:00] they spend a lot of effort just bringing in the funding. Ben: Got it.
You could also argue that universities are not the best place for some of this to be done. There is, in my opinion, a lot of value in companies developing intellectual property: they keep it to themselves, they can make a profit on it, and that's a huge motivator. What we do in universities, almost exclusively, is open: we publish it, and basically anyone can pick it up and use it. What do you think of Breakthrough Starshot and philanthropy? Ben: That makes a lot of sense. You also mentioned that farther in the past, a lot of funding was being done by wealthy individuals, and you're an advisor for Breakthrough Starshot, I believe, which as far as I can tell is almost entirely bankrolled by wealthy individuals. It seems like Breakthrough Starshot is the sort of thing that in the past we would have expected NASA to do. What do you think about this [00:32:00] shift? Do you think wealthy individuals are going to start filling in that gap? What are the pros and cons there? Mason: Well, first of all, I think that's a fact: a lot of wealthy people, certainly in the US, have been filling that gap. They have been funding a lot of research, more than in the past. The cliché is that you start a computer company, you sell it, you make a billion dollars, and then you invest in what you really care about, which is space exploration. That pattern has been repeated over and over: Elon Musk for sure, Jeff Bezos with Blue Origin, and there have been plenty of other examples, so maybe it's more than just a cliché. But anyway, going back to the question of whether private individuals will step up: they have, to an extent, but they all have a certain something in it for [00:33:00] themselves. That has always been the case for privately funded science. And remember, there are foundations that still do fund science.
There's not as much as there used to be, but these foundations are still out there. So the question is: what kind of science do you get when a billionaire funds your work? There's always going to be some idiosyncrasy associated with it, and we can take the Breakthrough Starshot project as an example. Personally, I think it's a fantastic project. For those of you who don't know, the Breakthrough Starshot project consists of coming up with a 20-year plan to build a spacecraft that could launch in 20 years and take maybe 20 years to reach the closest star, Proxima Centauri, or maybe Alpha Centauri, with the goal of returning some science data another three or four years after that, depending on the light travel time. So it's a long-duration project, almost at the scale of a medieval cathedral. I doubt that many of us on the advisory board will even [00:34:00] be alive to see that data come back, if it ever does. So it's that kind of undertaking, and it probably makes sense for that reason for it to be privately funded, or funded by something like, you know, a church. But these days the church does not fund science that way, it's not a critique, it just doesn't, the way churches may once have funded the building of cathedrals. So these large projects, like cathedrals or starships, probably deserve a special kind of funding. One thing I've discovered, and it's not my own discovery, plenty of other people know this as well, I was just late to realizing it: Congress wants to fund things that they can take credit for. So it's a two-, four-, or six-year time frame at most in which they want to see a return on their investment, their investment being stepping up to be sure that some project is funded. That's their return-on-investment time frame, and industry's return-on-investment time frame is on the scale of months.
It takes [00:35:00] something like a billionaire or some other kind of philanthropic effort to fund a project that is longer than a few years. So if we really have aspirations that lie along this temporal axis, that make us want to get a result decades from now, we're going to have to look for a funding source that is not governmental and certainly not industry. So I think there's a place for private investment, for foundations, for philanthropic gifts: exactly the kind of thing that is not going to get funded by, you know, the Air Force, say, or by Orbital Sciences Corporation or Northrop Grumman Corporation. Concerns about philanthropic time scales Ben: One concern that I always have about philanthropic efforts is, as you said, there has to be something in it for people, and when you're not able to get a return on investment in money, sometimes I've seen people be less patient, because they [00:36:00] want to see progress on a shorter time scale. Do you worry about that at all? Mason: Well, you know, as I said, there's always this risk, if you have a single investor, let's say again some billionaire to be named later, that he or she will pull out the funding on some whim: they decide that rather than funding a starship, they would rather fund the purchase of a massive bronze bust of themselves to be placed in their front yard. Who knows? And I'm not speaking about Yuri Milner here. Let me say, from my few interactions with him, he seems like a legitimately passionate scientist who really does care about knowledge for the sake of humanity. But it's also clear that he wants to be known as the person who successfully supported this work, and there's nothing wrong with that. Just like in other examples of philanthropic contributions in the past, you probably want your name attached to these discoveries, and that's fine [00:37:00] with me.
Experience With Different Organizations Ben: Shifting gears a little bit: you've had your research funded by many different organizations, both inside the government and in private industry. Have you had different experiences with that? What did your favorite ones do, and what did your least favorite ones do? Mason: So that's a long story, and I'm going to give you an answer that sounds self-serving, and maybe that's correct. The answer is: when you get left alone to do the job, it works really well. Now, I totally understand that if, let's say, I'm a member of a government organization or industry, I need to feel that my money is being well spent. I want to check in. I don't want to end up with a Yoyodyne Propulsion Systems, if you remember the movie Buckaroo Banzai; you don't want that contractor-gone-amok kind of phenomenon. I get that. [00:38:00] At the same time, too much micromanagement sort of defeats the purpose of doing fundamental research. The whole idea is: we don't have a thing yet, we need to create that thing, and that act of creation is not something you can exactly legislate or specify requirements for. So I'm a little uneasy about the idea that very tight control over the act of invention is going to give you a good result. At the same time, yes, you need to be a responsible steward of whatever money you're using to fund this research, so I see where that comes from. I don't want to give a specific example that's going to get me in trouble with the agencies in question, but I will say a government agency was collaborating with us on a project. The project involved a few technological innovations. After we scoped out the project with this government agency, the folks supervising our work decided the work was so cool that they wanted to do it themselves. So they went ahead and took it on themselves, removing
[00:39:00] most of what I viewed as the really innovative parts of the work, leaving us with some fairly rote tasks, which they were still paying for. So I guess I was glad to take the money. But the problem was that, because these were relatively unimaginative tasks, the government agency decided it would be very helpful for us to be very tightly supervised while doing these simple tasks we were very good at, and that led to a lot of, in my opinion, wasted money. One example: we were building an object out of some off-the-shelf parts, the sort you can find at a hardware store. The reason we were doing so is that those parts have a lot of design margin; that is to say, you could pressurize them or run electricity through them or whatever it was, and the parts would not fail. They were made for consumer use: super safe and excessively over-designed, which is great, actually, very safe. But the sponsor wanted us to validate [00:40:00] all of these with a super-detailed analysis using what's known as finite element analysis, where you break a part into little mathematical chunks and put it in the computer. They wanted us to test it; they wanted us to do all sorts of things for parts you could buy at the hardware store, parts which people buy every day without thinking about it, because they're super safe, because they're built that way. That was a huge waste of time. So that was a very negative experience. I chalk it up to my naiveté in working with that sponsor; I now know what kind of work to specify for that sponsor. At the same time, it was never going to be a relationship that worked well. For what it's worth, we took that project and we're doing it ourselves now, and we've made more progress in the last two years than we did in the two years previous, when they were "helping" us, I guess we'll call it.
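The finite element analysis Mason mentions, breaking a part into "little mathematical chunks" and solving on a computer, can be sketched in miniature. This is a generic, hypothetical illustration (a 1D elastic bar with made-up material values), not anything from the project he describes:

```python
# Minimal 1D finite element sketch: an elastic bar fixed to a wall at one
# end and pulled with force F at the other. We split the bar into n small
# elements, assemble a global stiffness matrix from the element stiffnesses,
# and solve K u = f for the displacement at each node. All numbers are
# illustrative, not from any real part.
import numpy as np

E = 200e9      # Young's modulus, Pa (roughly steel)
A = 1e-4       # cross-sectional area, m^2
L = 1.0        # bar length, m
F = 1000.0     # axial load at the free end, N
n = 10         # number of elements

k = E * A / (L / n)           # axial stiffness of each element
K = np.zeros((n + 1, n + 1))  # global stiffness matrix
for e in range(n):            # add each element's 2x2 contribution
    K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])

f = np.zeros(n + 1)
f[-1] = F                     # load applied at the last node

# Fix node 0 (the wall), then solve for the remaining free nodes.
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(u[-1])  # tip displacement; the textbook answer is F*L/(E*A) = 5e-5 m
```

For this simple end-load case, linear elements reproduce the closed-form answer exactly; real aerospace analyses apply the same idea in 3D with thousands of elements, which is why demanding it for hardware-store parts was such overkill.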
So I'm glad to say that research is doing well now, but only because we have a few resources internally that we can spend on this stuff. I'd rather not end on a cynical note, so let me offer a positive [00:41:00] version of this, and I'll pick Breakthrough Starshot for it. Those of us working on the advisory board sometimes get some funding from the foundation. It's not much, but with that money I can do lots of cool stuff. I've been able to turn a few students loose on some problems of interest to Breakthrough Starshot, and we've gotten some great results. It doesn't actually take that much, as long as we, the researchers, have some freedom to pursue the work on our own terms. So if there's a lesson there, it's something along the lines of: you need a light touch. Norm Augustine, the former CEO of Lockheed, said the best way he ever found to manage people is to pick the right folks, be clear about what you want, and then get out of their way. And that's Lockheed; that's not just some pie-in-the-sky academic like me saying it. So there's something to this, and the lesson learned, again, is to have a light touch. How do you change the 10-year goals, 8-year political cycle mismatch? Ben: Excellent. Going back to [00:42:00] NASA briefly: while I was working with you, I saw consistently that the executive branch would set ten-year goals, but then for political reasons those goals would change at most every eight years. So you'd get partway toward these ten-year goals and then they would change. Do you see any way to change that unfortunate situation?
Mason: Well, there have been ways proposed. For NASA, again, since I know that example really well: it has been proposed, even in this most recent Congress, that NASA should be funded on a 10-year time frame. The idea would be that a Congress, whatever the hundred-and-some-odd Congress it happens to be, would set the budget for NASA, appropriate the funds, and get out of the way. Once a decade, maybe, you would check in and change the objectives. I think most people recognize that this is the best way to run these long-term [00:43:00] projects. If you keep changing course every two or six or eight years, you just have chaos. This is one of the main reasons why things like the James Webb Space Telescope, the International Space Station, and the Space Shuttle have given NASA the reputation of going over budget. But I have to defend NASA in this case, because NASA really isn't able to defend itself here: it's not NASA, okay, it's Congress. If you have a project that is complicated and takes a long time, there's a natural funding profile that goes with it. It's a little bit at first, while you get your feet under you, then there's a big lump in the middle, and it tails off toward the end. That's the standard funding profile. But NASA's budget from Congress is flat. So you end up very inefficiently smearing this money across a very long time, which makes things inefficient and expensive. Things don't go well, you lose good people along the way, and you end up spending more in the long run. This story has been told over and over again. And Congress, they're smart people. Well, actually, you may not think so, but they are, [00:44:00] in my experience. They know what they're doing, and they know they're trading off between the right answer and the politically expedient answer. The politically expedient answer is: as long as they can be seen to have their finger on the button for NASA, their folks will vote for them.
So you understand that's what motivates them. So I would say, if there's a way to make this work well, it's something like: come up with a way where they can get credit for things that are working well without necessarily having to change what's going on. And I don't have an answer for how to make that work, if it's even possible. Ben: That makes a lot of sense. What's the best way to make the world that has never been, today? So I realize we're coming up on time. One of the last things I want to ask you about: something people might not have guessed about you is that you have a master's in English, because, as your bio states, you thought that was the way to make the world that has never been, by inspiring people with writing. [00:45:00] And then you changed track, well, not completely, but you figured out that engineering was the best way to do that. What do you think now? What do you think is the best way to enable the world that has never been, in today's here and now? Mason: I like the way you're asking that question. It recalls that quote from Theodore von Kármán about the distinction between science and engineering: scientists discover the world that is; engineers create the world that never was. It's not exactly a way of claiming that engineers are better than scientists. What it's really about is distinguishing between these two impulses we have: discovering the unknown and creating what doesn't exist. In my opinion both contribute to improving our lot as humans, so there's a place for both and a reason to have both; let's not confuse one with the other. I have always been about creating things. I [00:46:00] suppose I get this from my parents. My dad's a writer. My mom has created many things over the years: she was an artist, she has been an actress, and a brilliant cook with a restaurant.
She's very much a polymath. So I probably get this from them at some level, but I've always taken it to be one of the essential elements of what it is to be human: to create, to leave your impact on the world, in a positive way at least. An impact at all, and positive is my choice; I suppose some people choose to do negative things. What I'm saying is that that impulse has always been part of what matters to me. When I was a young, naive person, I thought I could have that impact through English literature. I'm still interested in it, I'm still interested in writing and reading, and I respect people who can make a career out of such a thing, but it wasn't what I was good at. So instead I felt that aerospace engineering in particular offered me the opportunity to [00:47:00] solve problems that haven't been solved and to make the kind of impact I felt like making. Over the years I've discovered there are definitely different ways of looking at the world. One of them is the way that I look at it; another one of the ways people see the world is: what's the safest way I can keep my job and not get fired? Those are very different impulses. And look, I recognize that my perspective here maybe comes across as, I don't know, elitist or entitled or first-world or something, when I'm saying that it's great to have the freedom to create and make an impact on the world. I clearly hold tightly to that value. At the same time, I recognize that not everybody has that opportunity. Sometimes you've just got to make do: you do what you can to keep your family fed and shoes on your feet, and you don't have the freedom, the luxury, of being able to do everything exactly the way you want. So I recognize I'm very fortunate in my career and my life, and I do not in any way put down people who haven't got the bandwidth simply to set aside time to create.
[00:48:00] But that is what matters to me, and I'm very fortunate that I have a job that allows me to do that. Yeah, well said. Final Statements? Ben: So I do realize we're over time. This was amazing, by the way. I just want to make sure: if there are any points I didn't hit on, I absolutely want to give you a chance to talk about them. Mason: Well, I'm so glad that you're interested in this question of how we innovate. I will offer that when government works well, it enables people to do their best in the service of our nation, let's say. When it doesn't work well, it tries to prescribe, to micromanage, to get in the way. So I am far from being an anti-government kind of person; I hope that doesn't come across. I think the right policies are essential. Policy you can look at as the software of our [00:49:00] lives here in innovation: when that software is written correctly, the rules that we follow, and choose to follow, enable us to be successful; when the software is not right, everything falls apart. So, you know, I actually would not be averse to turning over some policymaking to software engineers, because I think they have a sense of how to write good software. And lawyers, when they do their job well, you know, that works out well too. But unfortunately, to be a software engineer and to affect society requires some additional kind of training. So if I want to close with a comment, it would be something along these lines: I don't see that much of a distinction in what people are capable of, whether it's mathematics or history or philosophy or art or technology or science. These are all, in my mind, forms of the same thing. They are things of which we are all capable. I suppose there are some savants out there who can do multi-digit multiplication in their heads, but I'm not interested in that, because I have a computer. [00:50:00] So instead I take that multidisciplinary capability.
We all have it, and in my opinion we're born with it, and I take that as a sign that we shouldn't feel limited by what we think we're good at or not. So those of you interested in creating and innovating: don't feel that you are limited by your label. If you're labeled a software engineer, maybe policy is the right thing for you; if your label is lawyer, maybe you should think about going into space technology. I don't know. What I'm trying to say is that there's a lot of freedom that we all have for pursuing good ideas, and we should take advantage of our rare position here at the beginning of the 21st century, where we have these tools and we still have the resources we need to create. We have this one chance, I think, to make our world right. Outro We got a lot out of this conversation. Here are some of my top takeaways. You can have an organization full of smart, motivated people that doesn't produce great results if all the incentives are set up to avoid [00:51:00] risk. There's been a shift in where different parts of the innovation pipeline happen: more has shifted to universities and startups, away from larger companies and government, but the systems of support haven't caught up to that change. Finally, taking a portfolio approach to technology and innovation is a powerful concept that we don't think about enough. I hope you enjoyed it. If you'd like to reach out, you can find me on Twitter as Ben Reinhardt, and I deeply appreciate any feedback you might have. Thank you.
In this episode I talk to Gary Bradski about the creation of OpenCV, Willow Garage, and how to get around institutional roadblocks. Gary is perhaps best known as the creator of OpenCV - an open source tool that has touched almost every application involving computer vision - from cat-identifying AI, to strawberry-picking robots, to augmented reality. Gary has been part of Intel Research, Stanford (where he worked on Stanley, the self-driving car that won the first DARPA Grand Challenge), Magic Leap, and has started his own startups. On top of that, Gary was early at Willow Garage - a private research lab that produced two huge innovations in robotics: the open-source Robot Operating System (ROS) and the PR2 robot. Gary has a track record of seeing potential in technologies long before they appear on the hype radar - everything from neural networks to computer vision to self-driving cars. Key Takeaways Aligning incentives inside of organizations is both essential and hard for innovation. Organizations are incentivized to focus on current product lines instead of Schumpeterian long shots. Gary basically had to do incentive gymnastics to get OpenCV to exist. In research organizations there's an inherent tension between pressure to produce and exploration. I love Gary's idea of a slowly decreasing salary. Ambitious projects are still totally dependent on a champion. At the end of the day, that means every ambitious project has a single point of failure. I wonder if there's a way to change that. Notes Gary on Twitter The Embedded Vision Alliance Video of Stanley winning the DARPA Grand Challenge A short history of Willow Garage
Link to this Episode in Overcast In this episode I talk to Jun Axup about accelerating biotechnology, how to transition people and technology from academia to startups, the intersection of Silicon Valley and biology, and biology research in general. Jun is a partner at IndieBio - a startup accelerator specializing in quickly taking biotechnology from academic research to products. She has both started companies and done a PhD focused on using antibodies to fight cancer. This experience gives her a deep understanding of the constraints in both the world of academia and equity-funded startups, and what it takes to jump the gap between the two. Key takeaways: Biology is reaching a cusp where we can truly start to use it to do things outside the realm of traditional medicine and therapeutics. These new products fit more cleanly into the Silicon Valley startup ecosystem. The gap between research and products in people's hands is not just a technical gap, but a people one as well. IndieBio is built to address both - guiding both the research and the researchER out of the lab. While the capital overhead has come down, biology-based innovation still requires different support systems than your standard computer-based innovations. Links Jun's Homepage IndieBio Flight from Science Langer Lab Case Study (Paywalled) No transcript this week - trying a different production flow. If you feel strongly, please let us know at info@ideamachinespodcast.com.
My Guest this week is Adam Wiggins, the cofounder of Ink & Switch — an independent industrial research lab working on digital tools for creativity and productivity. The topic of the conversation is the future of product-focused R&D, the Hollywood Model of work in tech, Ink & Switch's unique organizational structure, and whether it can be extended to other areas of research. Links Adam Wiggins' Home Page Adam on Twitter Ink & Switch's Home Page A presentation on Ink & Switch's Structure Sloan Review Article on Applying Hollywood Model to R&D (Paywalled) Transcript How the idea came about Ben: How did you come up with this idea? What originated it? I'm just really interested in the thought process behind it. Adam: Sure. You know, I think my partners and I come out of the startup school of thought on innovation. There are a lot of ways to think about it: there's the more academic, research-minded approach to innovation; there's R&D as you get it at bigger companies. But we come very much from the, I don't know what you want to call it, agile, lean-startup, Y Combinator mix of elements, which is really about: build a thing really quickly, get it in front of customers, minimum viable product, iterate. My thinking is that the startup model has been so successful in the last, let's say, decade, particularly with the kind of mass production of startups that you get through groups like Y Combinator, that the space of problems that can be solved with that kind of, you know, group of 25-year-old founders spending three months to build a thing is, let's say, saturated to some degree, and maybe the more interesting problems are bigger or longer in scope. And so then we thought about: okay, well, what's a model that makes it possible to go after bigger things?
And that's when I kind of fell down the rabbit hole of researching these industrial research labs, which I know you've spent a lot of time on as well: the big famous examples like Bell Labs and Xerox PARC and ARPA and so forth, and of course many other examples. And we thought: okay, well, we're not in a position to be setting up a multimillion-dollar research arm of a government or commercial institution, but what can we do on a smaller scale, with a small grant and a kind of scrappy band of people? And that's what led us to the Ink & Switch approach. The Thought Process Behind the Model Ben: Can you go one step further? You have the constraint that you can't do a straight-up corporate research lab, but I think there are a lot of unique ideas in your model. How did you come to the idea that, okay, we're going to have our principles, we're going to pull in people temporarily, we're going to build this network? That seems to come out of the blue. What was the thought process behind that? Adam: Well, maybe it came out of the constraint of doing it with very little money. So part of that is we're trying to work on a big problem, hopefully, and I can talk about that if you want. But in terms of the model we're using, we came at it from: do it with very little money, and that in turn leads to, okay, your big costs are usually office space and then the people. But if we do these really short-term projects, we call it the Hollywood model, and I can explain that if you want, where we have a four- or six- or eight-week project, you can bring in some experts on a freelance basis, and you don't necessarily need to commit to paying salaries over the longer term. And you couple that with no office: we have an all-distributed team, and we're not asking people to pick up and
move somewhere, even temporarily, to work on a project. So what we can offer them is a lot of flexibility. I think there are benefits for the people who participate in these projects, but from the lab's point of view, again, we were embracing this constraint of doing it really, really cheap. And that basically boiled down to: very short projects, people on a freelance basis only, and no office. That's what led us there. But I think there actually are a lot of benefits to doing things that way; there are some big downsides as well. So the constraint led us to the model, you might say: a desire to work on a big problem with a longer time horizon, like you would at a classic R&D lab, but with a lot less money, led us to this kind of short-term project model. The Hollywood Model in Tech Ben: There are three things I want to dig into from that: how the Hollywood model works, and the difference between the Hollywood model in tech versus in Hollywood; the pros and cons; and then, it feels like there's a tension between working on really big, long-term projects via very short-term sprints and demos. So let's start with the Hollywood model, because after I learned about you doing that, I dug into it, and it seems like the Hollywood model works partially because all of Hollywood is set up so that even the best people work on this temporary basis. Whereas in tech, it feels like you have to find people who are in very special life situations in order to get the best people. So how do you juggle that? Adam: Yeah, those are really good points. Well, just to briefly explain the Hollywood model:
I actually lived in Los Angeles for a time and have a lot of friends who were trying to break into that industry, so I got a little exposure to it. I don't pretend to be an expert, and you can read about this online as well. Most movies are made by forming an entity, usually an LLC, for the duration of the movie project, which might be a year or two, whatever the shooting time is. Everyone from the director to the camera people to the entire cast and crew are hired as essentially short-term contractors for whatever duration their services are needed. Even for someone like the director, who's there throughout, it's essentially a one- or two-year gig, and at the end everyone disperses. It's an interesting accounting model, too, because of how the earnings from the movie connect back to the studio; the way studios invest is almost more like venture capital investing in startups, to some degree. So that's my understanding of it. We kind of borrowed this idea, saying: okay, part of what we like about this is that any given person, a cameraman, a crew member, a member of the cast, isn't guaranteed long-term employment. They don't sign on for an indefinite thing; they sign up for the duration of the project, and at the end everyone leaves. But what you see is that the same directors tend to hire the same crew. You probably notice this most dramatically with directors who bring the same actors onto their future films, because if working with them before worked, why wouldn't you bring them back? So it inverts the model: instead of we're going to keep working together by default, it's more that every time a project ends, we all disperse, but the combinations that worked kind of come back together again. Just inverting the model in that subtle way.
I. Produces better teams over the long term. But yeah, you get this sort of loose network of people who work and collaborate together to have more of an independent contractor gig mindset and I think that was yeah it was inspired by that and like you said, can we bring that to kind of Technology Innovation? How do you incentivize the hollywood model? Ben: Most people in Tech don't do that. So, how do you sort of generate? How do you get the best people to come along for that model? Adam: That was definitely a big unknown going into it and certainly could have been a showstopper. I was surprised to discover how many great people we were able to get on board maybe because we have an interesting Mission maybe because me and some of the other. Core people in the team have you know just good networks good career Capital. Yeah, but actually it's that more people are in between spaces and you might guess so quite a lot to work with us on projects. Certainly. There's just people who are straight. You know, they made freelancing or some kind of independent Contracting be their business, right so that those folks are to work with a lot of folks that do open source things, you know, we work a lot of people from the DAT Community, for example, a lot of folks there. They actually do make a livelihood through some degree of freelancing in this space. So that's an easy one. But more common I think is you think of that. Yeah full-time salaried software engineer or product design or what have you and they. You know, maybe they do a new job every few years, but they're expecting a full employment salary HR benefits, you know the lunch on campus and the you know, the massages and you know yoga classes and so I was worried that trying to you know compete to get Talent like that when all we have to offer these very short term projects would be difficult. But as it turned out a lot of people are in some kind of in-between space. We're really interesting. 
Project with an interesting team good sort of in between things maybe a palate cleanser in a lot of cases turned out to be quite interesting. So we got a lot of people who are you know, they're basically looking for their next full-time gig but then they see what we have to offer and they go oh, you know, that's actually quite interesting and they can keep looking for the next job while they're working with us or whatever. Yeah their Habits Like do this thing is like an in-between thing onto the way that are to their next. Employment or we have situations like, you know one person we were able to get on the team with someone who is on Parental. Leave from their startup and so basically wanted to be like getting the mental stimulation of a project but couldn't really go into the office due to needing to take care of an infant, right? Um, and so by working with us was able to get some nice in that case part-time work and some mental stimulation and a chance to build some skills in the short term in a way that was compatible with. Needing to be home to for childcare. So the a lot of cases like that. I think so it granted, you know people that are looking for full-time gigs. We can't give them the best offer in the world. But there's a surprising number of people that are willing to take a weird interesting kind of cool learning oriented project in between there. May be more conventional jobs. Building from scratch with the Hollywood Model? Ben: Yeah. Because one of the things that I'm constantly thinking about what I'm asking these questions is how do we have more things using the same model in the world? Because I think it's a really cool model that not many people are using and so it's like what like could there be a world where there are people who just go from like one to the other and then would be an interesting shift in the industry to be a little more gig oriented or Independent. 
Contractor oriented versus the sort of the full-time job expectation that folks have now. Yeah and another sort of difference between I think Hollywood and Tech is that Hollywood you're always sort of Reinventing things from scratch. Whereas in tech there is code and and things that sort of get passed on and built on top of . Do you do you run into any problems with that or is it just because like every every experiment is sort of its own its own thing. You don't you don't have that problem. Adam: Yeah, the building on what came before is obviously really important for a lot of our projects. We were pretty all over the place in terms of platforms. And that was on purpose we built a bunch. Projects on the iOS platform we bought built from on the Microsoft Surface platform. We've done in various different web Technologies, including electron and classic web apps and so in many cases there is not a direct, you know, even if we had written a library to do the thing we needed in the other thing. We actually couldn't bring that over in that kind of build it all from scratch each time or or the the mic slate of it. I think is part of what makes it creative or forced to rethink things and not just rely on the. Previous assumptions that said. You know for certain tracks to research you might call it a big one for us is this world of like CR DTS and essentially like getting a lot of the value of getting a lot of capabilities that you expect from cloud Solutions real time collaboration Google Docs style of being able to do that and more peer-to-peer or less centralized oriented environment. And so we in an earlier project. We built a library called Auto merge just in JavaScript and it was being plugged into our electron app and. And in future projects, we wanted to build on top of that and we have done a number of subsequent projects some of which were but obviously they needed to like use the JavaScript runtime in some ways. 
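The CRDT property Adam is describing — replicas that edit independently while disconnected, yet always converge once they exchange state — can be illustrated with a toy grow-only counter. This is a hedged sketch of the concept only; it is not Automerge's API or internals, which handle full JSON documents rather than counters:

```javascript
// Minimal grow-only counter CRDT (illustrative sketch only).
// Each replica records its own increments under its own id;
// merging takes the per-replica maximum, so applying merges in
// any order yields the same final state on every replica.
class GCounter {
  constructor(replicaId) {
    this.replicaId = replicaId;
    this.counts = {}; // replicaId -> number of increments seen
  }
  increment() {
    this.counts[this.replicaId] = (this.counts[this.replicaId] || 0) + 1;
  }
  value() {
    return Object.values(this.counts).reduce((sum, n) => sum + n, 0);
  }
  merge(other) {
    // Take the element-wise max of the two increment maps.
    for (const [id, n] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, n);
    }
  }
}

// Two replicas update concurrently, then sync peer-to-peer.
const a = new GCounter('a');
const b = new GCounter('b');
a.increment();
a.increment(); // replica a locally sees 2
b.increment(); // replica b locally sees 1
a.merge(b);
b.merge(a);
console.log(a.value(), b.value()); // 3 3 — both replicas converge
```

Automerge generalizes this same convergence guarantee from a counter to whole documents (maps, lists, text), which is what lets an Electron app collaborate Google-Docs-style without a central server.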
So if we were doing another Electron project, yes, we could use it. But in another case we wanted to do a tablet thing — all right, well, that limits us, because we can't use that library there. And in one case we chose to build on the Chrome OS platform, partially because we could get a tablet there and partially because we already had this investment in the JavaScript ecosystem through these libraries. But again, that comes with trade-offs. So we're always trying to balance building on what we made before against being really willing to start over with a blank canvas, because we feel that at this level of early innovation, what matters is the learning — the lessons you take from past projects. You can often rebuild things in a fraction of the time. In some cases we have actually done that: rebuilt an entire project, feature-complete, on a completely different platform. If you can skip past all the false turns of the discovery process and build straight toward where you ended up, it's often something that can be done in a tiny fraction of the time or cost.

Knowledge Transfer in the Ink&Switch Model

Ben: Got it. And do you have a way of transferring learning between different groups of temporary people? That seems like it would be one tricky piece.

Adam: Absolutely. Well, an important thing here is that we do have core lab members. We have some principal investigators — people who are around long-term and who drive our projects — and they carry a lot of those learnings, both the practical ones and the cultural elements. And a lot of the folks we work with come back for a future project. But yes, every given project is a new combination of people: some existing people in the lab, who carry forward some of those learnings, and some people who are new. So we've tried a variety of approaches to do a mental download or crash course, and none of it's perfect, because so much of the knowledge is tacit. Even though we take a lot of time to do a big retrospective at the end of our projects — we write out both raw notes and a summary of "here's what we learned from this project" — even with that, and with sharing that information with new people, so much of what you learn is somehow more in your gut than in your head. So to some degree we count on the people who are standing members, who go from project to project, and in some cases we do have to relearn small lessons each time. And again, if you start over from scratch from the same premises, you often rediscover some of the same learnings. I think that's okay, as long as we get a little faster each time. Then we combine that with learning documents. For example, we're at the point now where we have enough projects under our belt that we have a deck covering all our past projects — a really quick crash-course summary, or at least what they were called — so that when people reference "oh yeah, that's the way we did things on project number five, which was called this," you at least have some context. So the short answer is we haven't solved the problem, but those are some things that have at least helped.

Ben: Yeah. And how many projects have you done in total?

Adam: Well, it depends on exactly how you count. When it comes to what we consider the full, call it formal, projects: we spend some time wandering around in a period we call pre-infusion, named after the espresso-machine term for the short time you run water into the grounds to warm them up. So there's our version of that, and then we have a process where, once a principal investigator finds a project and says, "I think this is a really promising area and we should fund it," we go hire experts specific to that area, we commit to doing it for six weeks or eight weeks, something on that order, there's a project brief, and we present that to our board for basically a thumbs up or thumbs down. If you count things that have been through that whole process, we've now done ten projects, over the course of about three years.

Ink&Switch Speed vs. Startup Speed

Ben: Yeah, that's really good compared to, say, a startup, where you do one project and it takes three years.

Adam: Maybe — it sometimes feels slow to me. But honestly, we spend as much time trying to figure out what it is we want to do as actually doing it, and then we spend a really good bit of time trying to retrospect: pull out the learnings and actually figure out what we learned. We usually come out with strong feelings and strong instincts — this worked, this didn't, we'd like to continue this, there's more to research here, this was a dead end — but it actually takes quite a bit of time to really digest that and turn it into something. And then the context shift of "okay, now let me reorient and switch gears to a new project" is really a whole skill too. Doing such rapid turnover — I think we've gotten decent at it over the last few years, but I think you'd get a lot better if you kept at it.

Ink&Switch's Mission and Reconciling Long Term Thinking with Short Term Projects

Ben: Yeah. I'd actually like to step back real fast to the bookmark about big-picture, long-term thinking: (a) what is, in your mind, the real mission here, and (b) how do you square these — how do you generate a long-term result from a whole bunch of short-term projects?

Adam: Right. Yeah, really cool problem. And again, I don't pretend to have answers — we're still in the middle of this experiment, and we'll see if it actually works. But let me start by briefly summarizing our mission, or our theme, as I like to think of it. The great examples of successful industrial research labs typically had one. For Bell Labs, the theme was universal connectivity: Bell had this growing communications network, and they wanted to solve all the problems that had to do with tying together an entire nation with communications technology. Xerox PARC, of course, had the office-of-the-future idea: with all these papers and copiers, what is the office going to become? I think you need a theme that is pretty broad, but still not just a bunch of random stuff that people there think is cool or interesting — it's tied together in some way. So for us, our theme, our research area, is computing for productivity and creativity: what are the digital tools that let us do things like write or paint or do science or make art going to look like in the future? We were particularly drawn to this — and our investors were drawn to this — because so much of the brainpower and money and general innovation horsepower in Silicon Valley, certainly in the tech industry broadly, and even to some degree in academic human-computer interaction research, is really pointed at what I would call consumer technology: social media, entertainment, games, shopping.
And that's really a phenomenon of just the last five or ten years, right? The success of smartphones, the fact that computing has become so ubiquitous and mass-market — health and fitness trackers, wearables. That's all great, but I think the more inspiring, more interesting uses of computers, for me personally, are the things that are about creativity, about self-improvement, about productivity. And when you look at the state of, say, the spreadsheet: if you look at Excel in 1995 and compare it to Google Sheets in 2018, it kind of looks the same. Google Sheets added real-time collaboration, which is great, don't get me wrong, but it's basically the same kind of program, right? And I think you can say the same thing for many different categories — Photoshop, presentation software, note-taking software, that sort of thing. There's some innovation, don't get me wrong, but it just feels very out of balance how much of our industry's innovation horsepower goes to the consumer side. So for us the theme is: look forward five or ten years to what we'll be using to be productive or creative with computers — what does it look like? And the reality is that desktop operating systems are more and more in maintenance mode, because that's not where Apple's or Microsoft's revenue is anymore. At the same time, the touch platforms are built around phones and consumer technologies, and the pro uses of them tend to be tacked on, an afterthought. So it sort of feels like we're at a weird dead end. What are we going to be doing ten years from now to write a science paper, or write a book, or make a master's thesis, or write a film script? It's hard to picture — and actually picturing it is the job of our research here.

Ben: And that is a really long-term project, because you sort of need to go back down the mountain a little bit to figure out where the other mountain is.

Adam: Absolutely. Yeah, it's a local maximum of some kind, and so maybe you need to get out of the box and go away from it — basically make things worse before they get better.

Aside on AI Enabled Creativity Tools

Ben: Yeah. Just an aside on that: have you been paying attention to any of the AI-enabled creativity tools? This has been on my mind because NeurIPS is coming up, and some people have been doing pretty cool stuff with AI-enhanced creativity tools, where maybe you start typing and it completes the sentence for you, or you draw a green blob and it fills in a mountain and then you just adjust it. Have you been paying any attention to those tools at all?

Adam: Yeah, absolutely. Some of the folks I follow on Twitter post really interesting things in that vein. It hasn't been an area of research for us, partially because maybe we're a little contrarian and we like to look where others aren't looking, and I feel like AI and that realm of things is very well covered — or I should say, a lot of people are interested in it. That said, to me one of the most interesting cases is what usually gets talked about as generative design. There was a great talk last year by an architect who basically uses various kinds of solvers: you plug in the criteria you have for, say, a building facade — the window can't exceed such-and-such dimensions because of the material and the legal requirements, here are the constraints, and here's what we want out of the design. You plug that in and the computer will give you every possible permutation.
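The generate-then-filter loop described here can be sketched in a few lines. Everything in this example — the design space, the constraints, the scoring rule — is a made-up toy, not anything from the talk being discussed; it just shows the shape of the technique: enumerate candidates, keep the ones that satisfy hard constraints, then rank the survivors:

```javascript
// Toy generative-design sketch (hypothetical numbers throughout).
// Design space: a window's width and height, in 10 cm steps.
const candidates = [];
for (let width = 50; width <= 200; width += 10) {
  for (let height = 50; height <= 200; height += 10) {
    candidates.push({ width, height });
  }
}

// Hard constraints (stand-ins for material/legal limits):
// total area capped, and the window can't be too tall for its width.
const feasible = candidates.filter(
  (d) => d.width * d.height <= 20000 && d.height / d.width <= 1.5
);

// Scoring heuristic: among feasible designs, prefer the largest
// glass area (a stand-in for "what we said we like").
const best = feasible.reduce((a, b) =>
  a.width * a.height >= b.width * b.height ? a : b
);
console.log(best); // { width: 200, height: 100 }
```

The "natural next step" mentioned in the conversation is replacing that last scoring heuristic with something learned from the designs a client or market preferred before.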
And so it's a pretty natural step to go from there to having some kind of algorithm — whether a heuristic or something more learning-oriented — that tries to figure out, from that superset of every possible design satisfying the constraints, which of them are actually the best in some sense, or best fit what we said we liked before, or what the client or the market or whoever you're serving is looking for. So I think there's a lot of potential there as an assistive device. I get a little skeptical when it gets into the "let's get the computers to do our thinking for us" realm of things — I think you see a bit of that in the autocomplete version of this. But then, maybe I just love that artisanal, craftsman, unique vibe that humans bring to the table. So yeah: tools assisting us, helping us, working in tandem with us — I think there's probably a lot of potential for AI in that. That said, it's not an area we're researching.

Ben: Yeah. I just wanted to make sure that was on your radar, because that's something I pay a lot of attention to and am very excited about.

More Reconciling Long Term and Short Term

Ben: And so, for the long-term vision: the thing I always worry about in the modern world is that we are so focused on what you can do in a couple of months, these little sprints, that if there's a long-term thing, you just wouldn't be able to get there with a bunch of little projects. So I'm really interested in how you resolve that conflict.

Adam: Yeah. Well, you could say it's one of the biggest innovations in innovation — which I know is the area you study — to get into this iterative mindset, whether you call it agile or iterative: the idea of breaking work down into small, discrete steps rather than thinking in terms of "we're going to go to the moon, and let's spend the decade doing that." You can even see that difference in something like the space program: the way modern space exploration works is much more in terms of little ratcheting steps, where one thing gets you to the next, rather than one big mega-project that takes a really long time and is super high risk and super high beta. So in general, I think that's a really good shift. But yes, it does come at the expense of the fact that sometimes there are jumps you can or need to make that don't decompose into smaller steps. I certainly don't propose to have the answer to that, but at least for what we're doing, the way I think of it is: start with a pretty grand vision, a big vision or a long time horizon if nothing else, and force yourself first and foremost into the bigger thinking. Then go from there to: okay, if that's where we want to go, what is the first step in that direction? What is the thing that can give us learning that will help us get there? One of the metaphors I always love to use for research in general, or any discovery-oriented process, is the Lewis and Clark expedition. It was commissioned by Thomas Jefferson, who was president at the time, and to me it was really crazy to read about. They hadn't explored the interior of the continent — they believed there might still be woolly mammoths running around, and one of the things Jefferson wanted from them was, "I'd really love it if you'd get one while you're out there." They just had no idea. They knew the Pacific Ocean was on the other side, because ships had gone around.
But other than that, it was this dark interior of the continent. Still, that expedition set out with the goal of reaching the Pacific Ocean and finding out what was on the way. They took their best guess at what they might encounter and put together provisions and a team to try to get there. But then there were the individual — you might call them iterative — decisions they needed to make along the way: do we go up this mountain range or divert this way, do we cut across this river or follow it for a while, do we try to befriend these tribespeople or run away, et cetera. Those are the iterative steps. The important thing is keeping in mind the long-term strategic goal, and defining that goal in such a way that it doesn't just say "go west." It's not a set of directions, because you can't know those. You have to start with "here's our vision: let's connect the two coasts of this country," and then take whatever iterative steps seem most promising to lead in that direction — realizing that sometimes the most promising iterative step leads you sideways, or even away from your goal.

So hopefully that's what we're doing at Ink&Switch: picking individual projects that we hope carve off a piece of the bigger thing — that we think will increase our learning, or build our network, or somehow illuminate some part of this problem we want to understand better (again: what is the future of productive and creative computing?) — and then hopefully, over time, those add up. The trick, for me, is not to get too lost in the detail of any one project, and that's where the Hollywood model is so important: you have to end the project and step away to truly have perspective on it, and to truly return to looking at the bigger thing. That's what you don't get, in my experience, working at a startup that has operations and customers and revenue, and goals you need to hit according to those things — which are absolutely the right way to run a business, but keeping that bigger-picture view and longer-term mindset is very difficult, if not impossible, in that setting. So that's our approach, anyway; we'll see how it goes in the longer term.

Loops around Loops: The Explicitly Temporary Nature of the Whole Lab

Ben: And in terms of your approach and ending things: is it true that at the end of a certain amount of time, you're actually going to step back and reevaluate the whole lab? You're sort of doing loops around loops?

Adam: Indeed, yes. Individual projects have this "end it, step back, and evaluate" rhythm, and the whole thing does too. We have a fixed grant, and when that's exhausted, it's up to us to deliver to the investors the learning — you might call it the intellectual property, though we're not patenting things — the things that offer commercial potential and could potentially be funded as startups. That's what we'll do, and actually that will happen next year. When it does, we'll hopefully run the same process on the bigger loop that we've done on the smaller loops: retrospect at the end, write down everything we've learned, and then let the team go. Part of that may be painful when you've put all this hard work into getting a team together, but my experience is that if there's really some great opportunity there, it will re-coalesce in some new form.
What Comes out of the Lab? Ben: I can see it's going multiple ways. Where you. You ended and then you could either say there's another five-year research thing in this or there's some number of sort of more traditional startups to come out of that to try to capture that value are those sort of the the two options. What what do you see as the possibilities that come out of this? Adam: Yeah, those are those are both pretty key outcomes and they're not mutually exclusive right so it could be that we say, all right, great, you know we generated sort of five interesting startup options one of them. You know an investor decided to pick that up and you know, maybe take a team that is based on some of the people in the lab that worked on that and those folks are going to go and essentially work on commercializing that or making a go to market around that but then some other set of people who were involved in things and want to come back to this. He's promising tracks research and we're going to take another grant that has another time duration, I think. The obviously money is your ultimately all of them in limiting factor it yeah in any organization, but but I like the time boxes. Well, I think we use again we use that for our short-term projects and and some degree. We used it for the lab overall. I think thinking that it's like that Star Trek, you know, what is it or three-year Mission our five-year Mission, whatever it is. It's something about the time box that kind of creates clarity. Yeah, maybe in some is and yeah, you might decide to do another time box another chunk of time. In other chapter actually investors do this as well. If you look at something like the way that Venture struck funds are structured they often have sort of multiple. Entities which are you know, it's fun one fun to fund three. Yeah, right and those different funds can have different kind of buy-ins by different partners. 
They have different companies in their portfolio, even though there is like a continuous. I don't know if you want to call a brand or culture or whatever the ties them all together and I think that approach of like having these natural chapter breaks time or money based chapter breaks in any work is like a really useful and valuable thing for. Productivity and I don't know making the most of the time. Human Timescales - 4-5 Years Ben: I completely by that. I have this theory that human lives are kind of divided up in like these roughly five year chunks. We're like that's that's the amount of time that you can do the sort of the exact same thing for the most time and if you if you like if you don't have. You can reevaluate every five years but it's like you look at like school. It's like you really like maybe it's like five years plus or minus like to but beyond that it's really hard to like sustained. Intense and tension on the same thing. So that makes that makes a lot of sense Adam: agree with that. I would actually throw out 4 years as a number which does I think Max match the school thing it also matches the vesting schedules are usually the original vesting schedule and most startups is a four-year window. And if I'm not mistaken, I think that is the median length of marriages think there's something around. Well, you know, maybe it's something around, you know, there's renewal in our work life is what we're talking about here. But there's also renewal in a personal life, right? And if you're yeah if your employee at a company. Maybe something around for years as a feels like the right Tour of Duty. No not say you can't take on another Tour of Duty and maybe with the new role or different responsibilities, but there's something about that that seems like a natural like you said sustained attention, and I think there's something to goes about as well as inventing. Or Reinventing yourself your own personal identity and maybe not connects to you. Marry. 
Someone for years goes by you're both new people. Maybe those two people aren't compatible anymore. Yeah. I don't know. Maybe that's figure that's reaching a little bit far. I mean the other yeah Investors, Grants, and Structuring Lab Financing to Align Incentives Ben: that makes a lot of sense and you mentioned investors a couple of times but then also that it's a grant so how did you something something that I'm always interested in is sort of like how to. He's up. So the incentives are all aligned between the people like putting in the money the people doing the work and people setting the direction and so like how did you structure that? How did you think about sort of coming up with that structure? Adam: Yeah, I've used maybe investors and Grant sort of a little Loosely there again, the model we have is a little different. So when I you know went to pitch the private investors on what we were going to do with this, I basically said look. Me and my partner's we had been successful in the past producing commercial Innovations. We want to look now at something that's a little bigger a little longer term and wouldn't necessarily fit as cleanly into some of the existing funding models including things like the way that the academic research is funded and certainly Venture funding and so take a little gamble on us. Give us a pics. Amount of money a very small amount of money by some perspectives to deliver not profits, but rather to deliver again this kind of concept of learning intellectual property in the loose sense not in the legal sense, but in the sense of intellectual Capital, maybe might be another way to put it and more explicitly. Yeah spin out potential right but the but but no no commitment to make any of these things. It's just we've evaluated all of these opportunities. Here's what we think the most promising ones are and that includes both. Let's call it the validated findings. We think there's a promising opportunity here at technology. 
That's right to you know, serve serve some marketing users well, but also some things that we got negative findings on we said well look we think there's a really interesting Market of users to serve right here the technology that would be needed kind of isn't ready yet and still five years out or maybe the market is actually tough for an early not very good for sort of early adopter type products and so in some way that would be valuable to. There's as well to have this information on why actually is it is not wise to invest in a particular Market a particular product opportunity. So that was that was what we asked for and promised to deliver and obviously we're still in the middle of this experiment so I can't speak to the whether they're happy with the results. But at least that's the that's the deal that we set up. Tension between Open Knowledge Sharing and Value Capture Ben: I just I love the idea of investment not. Necessarily with a monetary return and it's like I wish there were more people who would think that way and. In terms of incentives. There's also always the question about value capture. So you you do a really good job of putting out into the world just like all like the things that you're working on and so it's like you have all those the great articles and like the code. Do you hold anything back specifically for for investors? So that because I mean it would make sense, right because you need to capture value at some point. So it's like there's there's got to be some Advantage. So like how do you think about that? Adam: Yeah. I don't have a great answer for you on that, you know, certainly again, you know, there's conventional conventional ideas there around yet Trade Secrets or patents or that sort of thing, but I kind of. Personally, I'm a little bit more of a believer in the maybe comes back to that tacit knowledge we talked about earlier, which is you can in a way. 
I feel like it's almost misleading to think that if the entire project is open source, you somehow have everything there is to know. The code is more of an artifact, an output, of what you learn; the team of people that made it, the knowledge in their minds, and to some degree their hearts and souls, is actually what you would need to make that thing successful, right? I think a lot of people who work on open source for a living rely on that to some degree: you can make a project that is useful and works well on its own, but the person who made it has all the knowledge about it; they hold a lot of the resources that are really valuable to the project, and so it's worth your while to, for example, go hire them. That's the way we think about it, and the way we pitched it to investors. If I were to do this again, I might try to look for something a little more concrete, a little more tangible, than that. The other part that I think is pretty key is the network. You could say, okay, there's the knowledge in the heads of the people who worked on it, and maybe that ties together with the knowledge we transfer directly: here's a document that tells you everything we learned about this area and where we think the opportunities are. But beyond that, we had a bunch of people work on this, and in some cases, where we were pushing the envelope on a particular nichey sub-technology, we ended up with people on the team who are among the world's experts, or are in touch with the few experts in the world on a particular topic, and we have that network access.
So if someone wants to go make a company, they have a very easy way to get in touch with those people. Now, it's not really impossible for someone else to take that bundle of information, or even a codebase on GitHub, pick through the contributors list, figure out who worked on it, and go contact them. I think that's possible, right? But I think it's quite different; you would be at a pretty substantial disadvantage compared to someone who actually had the warm network and the existing working collaboration.

Extending the Ink&Switch Model to Different Domains

Ben: Yes, I like that. In terms of using the model in different places: have you thought about how well this applies to other really big themes? The things you're working on are nice because they're primarily software, so the capital costs are pretty low; you don't need a lab or equipment. Do you think there's a way to get it to work in, say, biology, or other places where there's higher friction?

Adam: Yeah, I think the fact that we are essentially purely in the realm of the virtual is part of what makes the low-cost, all-remote team, without asking people to relocate, possible. We do have some costs; we've certainly purchased quite a bit of computing hardware over the course of the lab and shipped it to whoever needs it. But that said, I think this model would best apply to something in the realm of knowledge development, not something where you have to get your hands physically on a thing, whether that's a DNA sequencer or hardware development or something of that nature. On the other hand, cameras and high-speed internet connections keep getting better, and we've learned a lot of little tricks over time; I think we were talking at the start of the call about
our use of document cameras. Basically, screen sharing for tablets doesn't work great, because you can't see what the user's hands are doing, so we learned pretty quickly that you've got to invest in document cameras or something like that in order to effectively demo to your teammates. As a sidebar related to that: one of the learnings we had in making the distributed team work is that you do have to get together in person periodically, so we do quarterly team summits.

Ben: Got it.

Making Watercooler Talk and Serendipity Work with a Distributed Team

Ben: I was actually literally just thinking about that, because one of the things I always hear about great research places, whether it's Bell Labs or DARPA, is the watercooler talk, the fact that you can walk down the hall and casually hop into someone's office. That's a problem with distributed teams that I haven't seen anybody solve well. So you address that by bringing everybody together every once in a while; do you think that generates enough serendipity?

Adam: Yeah, I mean, you're right that that problem is very big for us. There are a number of benefits we get from the distributed team, but there are also a number of problems we haven't solved, and I'm not sure how this would balance against spending the same amount of money on a much shorter-term effort where people could be more in person, because you get some of that watercooler talk with Slack or whatever, but it's just not the same as being co-located. So one of the mitigating things we have, which I think works pretty well, is that about quarterly or so we get everyone together. It's actually kind of fun, because we don't have to go any place in particular; there's no central office.
We try to pick a different city each time, someplace creative and inspiring; we tend to like an interesting bohemian vibe, in some cases urban city centers, in some cases more historic places or somewhere in nature, ideally someplace close to an international airport we can fly into. And for really a fraction of the cost of maintaining an office (offices are so expensive), we can actually fly everyone to some pretty interesting place once a quarter. So for a week we have a really intense period where we're all together in the same physical space, working together, and we're also building the human bonds, more of that casual conversation. We tend to use that time for a lot of design sketching and informal hackathons, and also for some bigger-picture discussion: let's talk about some of the longer-term things, lift our gaze a little bit. That helps a lot. Again, it is demonstrably not as good as being co-located all the time, but it gets you, I don't know, 30 to 40 percent of the way there for a fraction of the cost. Over the longer term, I don't know how that would stack up against a co-located team, but that's been working pretty well so far.

Where to find out more

Ben: I see that we're coming up on time, and I want to be respectful of your time. I'll make sure people know about the website and your Twitter. Are there any other places online where people should go to learn more about Ink&Switch, about you, and about what you're working on?

Adam: You know, the website and the Twitter are basically what we've got right now.
We've been really quiet in the beginning here, not because we want to be; I'm a big believer in the open-science approach of open access, in sharing what you've learned so that humanity can build on each other's learnings. That said, it's a lot of work to package up your ideas, especially when they're weird and fringy like ours are, in a way that's consumable to the outside world. So we're trying to do a lot more of that right now, and I think you're starting to see a little of it on our Twitter account, including publishing some of our back catalogue of internal memos and sketches, which again are very nichey things: you've got to be really into whatever the particular topic is to find interest in our internal memo on it. We're also taking more time to put together demo videos and longer articles that try to capture some of the things we've learned, some of the philosophies we have, some of the technologies we've built. So yeah, those are the spots.

Thinking About Extending The Model

Ben: So freaking cool. What I'm doing is putting together these ideas and trying to make a more generic description of what you're doing, to ask: what would this look like if it goes into biology? What would it look like for nanotech? Could you do the distributed team using university resources? Could you partner with a whole bunch of universities, have people in different places, and they just go in and use the lab when they need to? I don't know; that's one action item based on learning about this. I think it could work.

Adam: That sounds great. Well, if you figure something out, I'd love to hear about it.

Ben: I will absolutely keep you in the loop. Awesome. Cool. Well, I really appreciate this. I'm just super excited about these new models, and I think you're really onto something,
so I really appreciate you bringing me in and going into the nitty-gritty.

Adam: Well, thanks very much. Like I said, it's still an experiment; we'll see. But I feel like there are more innovation models than just the startup, the corporate R&D lab, and academia. And if you believe, like I do, that technology has the potential to be an enhancement for humanity, then finding new ways to innovate, on new types of problems and new shapes of problems, potentially has a pretty high-leverage impact on the world.
My guest this week is Brian Nosek, co-founder and Executive Director of the Center for Open Science. Brian is also a professor in the Department of Psychology at the University of Virginia, doing research on the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals. The topic of this conversation is how incentives in academia lead to problems with how we do science, how we can fix those problems, the Center for Open Science, and how to bring about systemic change in general. Show Notes Brian's Website Brian on Twitter (@BrianNosek) Center for Open Science The Replication Crisis Preregistration Article in Nature about preregistration results The Scientific Method If you want more, check out Brian on EconTalk Transcript Intro [00:00:00] In this podcast I talk to Brian Nosek about innovating on the very beginning of the innovation pipeline: research itself. I met Brian at the Dartmouth 60th anniversary conference and loved his enthusiasm for changing the way we do science. Here's his official biography: Brian Nosek is a co-founder and the Executive Director of the Center for Open Science (COS), a nonprofit dedicated to enabling open and reproducible research practices worldwide. Brian is also a professor in the Department of Psychology at the University of Virginia. He received his PhD from Yale University in 2002, and in 2015 he was named to Nature's 10 list and the Chronicle of Higher Education's Influence list. Some quick context about Brian's work and the Center for Open Science: there's a general consensus in academic circles that there are glaring problems in how we do research today. The way research works is generally like this: researchers, usually based at a university, do experiments; then, when they have a [00:01:00] result, they write it up in a paper; that paper goes through the peer-review process; and then a journal publishes it.
The number of journal papers you've published and their popularity make or break your career: they're the primary consideration for getting a position, receiving tenure, getting grants, and for prestige in general. That system evolved in the 19th century, when many fewer people did research and grants didn't even exist; we get into how things have changed in the podcast. You may also have heard of what's known as the replication crisis. This is the fairly alarming name for a recent phenomenon in which people have tried and failed to replicate many well-known studies. For example, you may have heard that power posing will make you act bolder, or that self-control is a limited resource; both of the studies that originated those ideas failed to replicate. Since replicating findings is a core part of the scientific method, unreplicated results becoming part of canon is a big deal. Brian has been heavily involved in the [00:02:00] crisis, and several of the Center for Open Science's initiatives target replication. So with that, I invite you to join my conversation with Brian Nosek.

How does open science accelerate innovation and what got you excited about it?

Ben: The theme that I'm really interested in is how we accelerate innovations. So just to start off, I'd love to ask you a really broad question: in your mind, how does having a more open science framework help us accelerate innovations? And, parallel to that, what got you excited about it in the first place?

Brian: Yeah, so this is really the core of why we started the Center for Open Science: to figure out how we can maximize the progress of science, given that we see a number of different friction points in the pace and progress of [00:03:00] science. There are a few things, I think, in how
Openness accelerates innovation at multiple stages. At the opening stage, openness in planning, pre-registering what your study is about, why you're doing it, that the study exists in the first place, improves innovation by increasing the credibility of the outputs. In particular, it makes a clear distinction between the things we planned in advance, the hypotheses and ideas we have and are acquiring data in order to test, and the exploratory results, the things we learn once we've observed the data; we get insights from those, but they are necessarily more uncertain. Having a clear distinction between those two practices is a mechanism for [00:04:00] knowing the credibility of results, and then more confidently applying the results one observes in the literature to next steps. The reason that's really important, I think, is that we have so many incentives in the research pipeline to dress up exploratory findings, exciting and sexy and interesting but uncertain, as if they were hypothesis-driven, right? We apply p-values to them, we put a story up front, we present them as results that are highly credible from a confirmatory framework. And that has made it really hard for innovation to happen. So I'll pause there, because there's lots more.

Ben: Yeah, let's touch on that.

What has changed to make the problem worse?

Ben: There's a lot right there. You mentioned the incentives to make things that aren't really following the scientific method [00:05:00] look as if they were, and one of the things I'm always really interested in is what has changed in the incentives, because I think there's definitely this notion that the problem has gotten worse over time.
That means something has changed. So in your mind, what changed to pull science away from that idealized loop, where you have your hypothesis, you test that hypothesis, and then you create a new hypothesis, toward this system that you're pushing back against?

Brian: You know, it's a good question. Let me start by making the case for why we could say that nothing has changed, and then what might lead to thinking something has changed.

Ben: Please.

Brian: The potential reason to think that nothing has [00:06:00] changed is that the kinds of results that are most rewarded have always been the kinds of results that are most rewarded, right? If I find a novel finding, rather than repeating something someone else has done, I'm more likely to be rewarded with publication and so on. If I find a positive result, I'm more likely to gain recognition than for a negative result: "nothing's there" versus "this treatment is effective"; which one's more interesting? Well, we know which one's more interesting. And then the clean and tidy story: it all fits together, it works, and now I have a new explanation for a new phenomenon that everyone can take seriously. So the novel, positive, clean-and-tidy story is the ideal in science, because it breaks new ground and offers a new idea, a new way of thinking about the world. And that's great; we want those. We've always wanted those things. So the reason to think this has always been a challenge is: [00:07:00] who doesn't want that, and who hasn't wanted it? "It turns out my whole career is a bunch of nulls where I don't find anything, nothing fits together, it's just a big mess" is not a way to pitch a successful career. So that challenge is always there, and what pre-registration, committing in advance, does is help us have the constraints
to be honest about which parts are actual results of credible confirmations of pre-existing hypotheses, versus stuff that is exploring and unpacking what we can find. Okay, so that incentive landscape, I don't think, has changed. What has changed? Well, there are a couple of things we can point to as potential reasons to think the problem has gotten worse. One is that data acquisition in many fields is a lot easier than it ever was, [00:08:00] so we have access to more data, more ways to analyze it, and more efficient analysis; we have computers that do this instead of slide rules. We can do a lot more adventuring in data, so we have more opportunity to explore, and to exploit the noise and transform it into things that look like signal. The second is that the competitive landscape is stronger: the ratio of people who want jobs to jobs available is getting larger and larger, and the same goes for competition for grants. That competition can very easily amplify these challenges: people who are more willing to exploit researcher degrees of freedom are going to be able to get the kinds of results that are rewarded in the system more easily, and that amplifies the presence of those practices among the people who manage to [00:09:00] survive that competitive filter.

Ben: Got it.

Brian: So I think it's a reasonable hypothesis that it's gotten worse. I don't think there's definitive evidence, but those are the theoretical points I would point to.

Ben: That makes a lot of sense. So, jumping back: you had a couple of points, and we've just touched on the first one.

Point Number Two about Accelerating Innovation

Ben: So I want to give you the chance to go back and keep going through them.

Brian: Right, yeah. So accelerating innovation is the idea, right?
Pre-registration accelerates innovation by clarifying the credibility of claims as they are produced. If we do that better, I think we'll be much more efficient; we'll have a better understanding of the evidence base as it comes out. The second phase is the openness of the data and materials, for the purpose of verifying those [00:10:00] initial claims. I do a study, I pre-registered it, it's all great, I share it with you, and you read it. And you say: well, that sounds great, but did you actually get that? And what would have happened if you had made different decisions here, here, and there? Because I don't quite agree with the decisions you made in your analysis pipeline, and I see some gaps. Being able to access the materials I produced and the data that came from the study means that you can, first, simply verify that you can reproduce the findings I reported, that I didn't just screw up the analysis script or something. That, as a minimum standard, is useful. But even more than that, you can test the robustness in ways that I didn't. I came to the question with one approach; you might look at it and say, well, I would do it differently, and the ability to reassess the data for the same question is a very useful thing for [00:11:00] robustness, particularly in areas that have complex analytic pipelines with many choices to make. So that's the second part. The third part is reuse. Not only should we be able to verify and test the robustness of claims as they happen, but data can be used for lots of different purposes, sometimes purposes not at all anticipated by the data originator. So we can accelerate innovation by making it a lot easier to aggregate evidence for claims across multiple studies, by having the data be more accessible, but then also by making that data accessible and usable for
studying things that no one ever anticipated investigating. The efficiency gain from making better use of the data that already exists, rather than redundantly generating new data to answer a question that has already been answered, is a massive [00:12:00] opportunity. There is a lot of data, and a lot of work goes into it; why not make the most use of it?

What is enabled by open science?

Ben: Yeah, that makes a lot of sense. Do you have any really good keystone examples of these things in action? Places where, because people could replicate the study, go back to the pipeline, or reuse the data, something was enabled that wouldn't have been possible otherwise?

Brian: Yeah. Well, let's see, I'll give a couple of local, personal examples just to illustrate some of the points. We had a super fun project that illustrates the second part of the pipeline, the robustness phase: people may make different choices, and those choices may have implications for the reliability of the results. What we did in this project was acquire a very rich dataset [00:13:00] of lots of players and referees and outcomes in soccer, and then we recruited different teams, 29 in the end, with lots of varied expertise in statistics and analyzing data, and had them all investigate the same research question, which is: are players with darker skin tone more likely to get a red card than players with lighter skin tone? That's a question of interest that people have studied. We provided this dataset: here's data you can use to analyze that. The teams worked on their own, developed analysis strategies for how they were going to test that hypothesis, and came up with their strategies.
They submitted their analyses and their results to us. We removed the results and [00:14:00] then took their analysis strategies and shared them among the teams for peer review, different people looking at the different choices that had been made. They peer-reviewed each other, and then went back; they didn't know what each other had found, but they took those reviews, and if they wanted to update their analysis, they could. So they did all that and then submitted their final analyses, and what we observed was huge variation in analysis choices and variation in the results. As a simple criterion for illustrating the variation in results: two-thirds of the teams found a significant effect, p less than 0.05, the standard for deciding whether you see something in the data, and a third of the teams found a null. Then, of course, they debated among themselves about which analysis strategy was the right strategy, but in the end it was very clear among the teams that there were lots of reasonable choices that could be made, and [00:15:00] those reasonable choices had implications for the results observed from the same data. In the standard process we don't see this; it's not easy to observe how the analytic choices influence the results. We see a paper, it has an outcome, and we say those are the outcomes the data revealed. But what's actually the case is that those are the outcomes the data revealed contingent on all the choices the researcher made. So I think that's illustrative: it helps to figure out the robustness of that particular finding given the many different reasonable choices one could make, where if we had just seen one analysis we would have had a totally different interpretation: either it's there, or it's not there.

How do you encode context for experiments, especially with people?

Ben: Yeah. In terms of the data, and
[00:16:00] really exposing the study more: something I've seen, especially in these fields, is that the context really matters; people often note there's a lot of context beyond just the procedure that's reported. Do you have any thoughts on better ways of encoding and recording that context, especially for experiments that involve people?

Brian: Yeah. This is a big challenge, because we presume, particularly in the social and life sciences, that there are many interactions between the different variables: the climate, the temperature, the time of day, the circadian rhythms, the personalities, whatever the different elements of the subjects of the study are, whether they be plants or people or otherwise. [00:17:00] So there are a couple of different challenges here to unpack. One is that in our papers we state claims at the maximal level of generality we possibly can, and that's just a normal pattern of human communication and reasoning, right? I do my study in my lab at the University of Virginia on University of Virginia undergraduates, but I don't conclude "in University of Virginia undergraduates, on this particular date, in this particular time period, in this particular class." That's what people do, with the recognition that it might be wrong, that there might be boundary conditions, but not often with articulating where we think, theoretically, those boundary conditions could be. So one step, which some colleagues in psychology suggest in this great paper about constraints on [00:18:00] generality, is that what we need in the discussion section of every paper is a section that explicitly says when this won't hold. Just tell us what you know: where is this not going to hold? That gives people an occasion to think about it for a second and say: oh, okay,
yeah, actually we do think this is limited to people who live in Virginia, for these reasons; or, no, maybe we don't really think this applies to everybody, but now we have to say so, and people can evaluate it. So that alone, I think, would make a huge difference, just because it would provide the occasion for us, as the originators of findings, to state the constraints ourselves. A second factor, of course, is sharing as much of the materials as possible. But often that doesn't provide a lot of the context, particularly for more complex experimental studies or where there are particular procedural factors; in a lot of the biomedical sciences there's a lot of nuance [00:19:00] in how a particular reagent needs to be handled, how an intervention needs to be administered, etc. So I like the moves toward video of procedures. There's a journal, JoVE, the Journal of Visualized Experiments, that gives people the opportunity to show the actual experimental protocol as it is administered. And a lot of people using the OSF put up videos of the experiment as they administered it, to maximize your ability to see how it was actually done. Those steps, I think, can really help maximize the transparency of the things that are hard to put in words or aren't digitally encoded well; those are real gaps.

What is the ultimate version of open science?

Ben: Got it. So in your mind, what is the endgame of all this? What [00:20:00] would be the ideal, best-case scenario for science? How would it be conducted? Say you get to control the world and you get to tell everybody practicing science exactly what to do: what would that look like?

Brian: Well, if I really had control, we would all just work on Wikipedia, and we would all just be revising one big paper with the new updates,
continuously. We would get all of our credit by logging how many of the words we changed survived after people made their revisions, and whether the words changed were on pages more important to the overall scientific record versus less important spandrels. So we would output one paper that is the summary of knowledge, which is what Wikipedia does. All right, maybe that's going a little bit further than [00:21:00] what we can consider the realm of the conceptually possible. So if we imagine a little bit nearer term: what I would love to see is the ability to trace the history of any research project, and that seems more achievable. In fact, my laboratory is getting close to this: every study we do is registered on the OSF, and once we finish the study we post the materials and the data, or we manage the materials and data there as we go, and then we attach a paper at the end, a preprint or the final report, so that people can discover it, and all of those things are linked together. It would be really cool if I also had [00:22:00] those data in a standardized framework for how they are coded, so that they could be automatically and easily integrated with other similar kinds of data, so that someone going onto the system would be able to say: show me all the studies that ever investigated this variable's association with that variable, and tell me what the aggregate result is. Right, real-time meta-analysis of the entire database of all data that has ever been collected. That kind of flexibility would help, very rapidly,
I think, not just to spur innovations and new things, but to point out where there are gaps: particular kinds of relationships between things, particular effects of particular interventions, where we know a ton, and then we have this big assumption in our theoretical framework about how we get from X to Y. And when we look for variables that help us identify whether X gets us to Y, we find there just isn't anything there; the literature hasn't filled that gap. So I think there are huge benefits to that [00:23:00] kind of aggregability. But mostly what I want is, instead of saying you have to do research in any particular way, for the only requirement to be that you have to show us how you did your research in your particular way, so that the marketplace of ideas can operate as efficiently as possible. And that really is the key thing. It's not about preventing bad ideas from getting into the system; it's not about making sure only the best things get through immediately; it's not about gatekeepers. It's about efficiency in how we cull that literature, in figuring out which things are credible and which are not, because it's really useful to get ideas into the system, as long as they can be self-corrected efficiently as well. And that's where I think we are not doing well in the current system. We're doing great on generation; [00:24:00] we generate all kinds of innovative ideas. But we're not as good at parsing through those ideas as efficiently as we could, to decide which ones are worth actually investing more resources in.

Talmud for Science

Ben: Jumping a couple of levels ahead: that makes a lot of sense. I've definitely come across many papers on the internet; you go to Google Scholar, you search, you find a paper, and in fact it has been refuted by another paper, and there's no way to know that. And so
does the Open Science Framework address that in any way? Brian: No, it doesn't yet. And this is a critical issue, the connectivity between findings and the updating of knowledge, because, like I said, it does in an indirect way, but it doesn't in the systematic way that actually would solve this problem. The [00:25:00] main challenge is that we treat papers as static entities, when what they're summarizing is happening very dynamically. Right? It may be that a year later, after that paper comes out, one realizes we should have analyzed that data totally differently; we actually analyzed it wrong, the way that we analyzed it is indefensible. Right, right. There are very few mechanisms for efficiently updating that paper in a way that would actually update the knowledge, even when we all agree it was analyzed the wrong way. Right? What are my options? I could retract the paper, so it's no longer in existence at all. Supposedly, although even retracted papers still get cited, which drives us nuts. So that's a base problem. Or I could write a correction, which is another paper that comments on that original paper, and that may not itself even be discoverable alongside the original paper it corrects. Yeah, and that takes months and years. [00:26:00] All right. So really, what I think is fundamental for actually addressing this challenge is integrating version control with scholarly publishing, so that papers are seen as dynamic objects, not static objects. And so, you know, here's another milestone, if I could control everything: another milestone would be if a researcher could have a very productive career while working on only a single paper for his or her whole life. Right? So they have a really interesting idea, and they just continue to investigate, and build the evidence, and challenge it, and, you know, just continue to unpack it, and they just revise that paper over time.
This is what we understand now; this is where it is now; this is what we've learned; over here are some other exceptions. They just keep fine-tuning it, and then you get to see the versions of that paper over its [00:27:00] 50-year history as that phenomenon got unpacked. That, plus the integration with other literature, would make this much more efficient for exactly the problem that you raised, which is that with papers, we don't know what the current knowledge base is. We have no real good way, except for these attempts to summarize the existing literature with yet another paper, and that doesn't then supersede those old papers; it's just another paper. It's a very inefficient system. Can Social Sciences 'advance' in the same way as the physical sciences? Ben: Yeah, no, that totally makes sense. Actually, I have sort of a meta question that I've argued with several people about, which is: do you feel like we can make advances in our understanding of sort of [00:28:00] human-centered science in the same way that we can in, like, chemistry or physics? Like, we very clearly have building blocks of physics, and the field builds on itself. And I've had debates with people about whether you can do this in the humanities and the social sciences. What are your thoughts on that? Brian: Yeah, it is an interesting question, and what seems to be the biggest barrier is not anything about methodology in particular, but about complexity. Right? The problem being that many different inputs can cause similar kinds of outcomes, and singular inputs can have multivariate outcomes that they influence, and all of those different inputs, as causal elements, may have interactive effects on the [00:29:00] outcomes. So how can we possibly develop rich enough theories to predict effectively, and then ultimately explain effectively, the actions of humans in complex environments?
It doesn't seem that we will get to the beautiful equations that underlie a lot of physics and chemistry and account for a substantial amount of the evidence. So the thing that I don't feel like I have any good handle on is whether that's a theoretical or a practical limit. Right? Is it just not possible, because it's so complex and there isn't this predictability? Or is it just really damn hard, but if we had big enough computers, if we had enough data, if we were able to understand complex enough models, we would be able to predict it? Right? So, Asimov's psychohistorians, right? They figured it out, right? In the [00:30:00] Foundation series, they could account for 99.9 percent of the variance in what people do next. And, of course, even there it went wrong, and that was sort of the basis of the whole series. But yeah, I just don't know. I don't yet have a framework for thinking about how I could answer that question, whether it's a practical or a theoretical limit. Yeah. What do you think? Ben: What do I think? Yeah, so I usually actually come down on: I think it's a practical limit, though how much it would take to get there might make it effectively a theoretical limit right now. But there's nothing actually preventing us; like, if you could theoretically measure everything, why not? I [00:31:00] think that, again, it's really a measurement problem, and we do get better at measuring things. So that's where I come down on it. But that's purely, like, I have no good argument. How do you shift incentives in science? Ben: Going back to the incentives: I'm completely convinced that these changes would definitely accelerate the number of innovations that we have, and it seems like a lot of these changes require shifting scientists' incentives.
And that's, like, a notoriously hard thing. So, both: how are you going about shifting those incentives right now, and how might they be shifted in the future? [00:32:00] Brian: Yeah, that's a great question. That's what we spend a lot of our time worrying about, in the sense that, at least in my experience, there is very little disagreement on the problems, and on the opportunities for improving the pace of discovery and innovation with these kinds of solutions. It really is about the implementation: how is it that you change those cultural incentives so that we can align the values that we have for science with the practices that researchers do on a daily basis? And that's a social problem. Yeah, there are technical supports, but ultimately it's a social problem. And so the near-term approach that we have is to recognize the systems of rewards as they are, and see how we could refine those to align with some of these improved practices. So we're not pitching 'let's all work on [00:33:00] Wikipedia,' because that is so far distant from the systems of reward by which scientists actually survive and thrive in science that we wouldn't be able to get pragmatic traction. Right? So I'll give one example; I can give a few, but here's one to start with, an example that integrates with current incentives but changes them in a fundamental way, and that is the publishing model of registered reports. So in the standard process, right, I do my research, I write up my studies, and then I submit them for peer review at the most prestigious journal that I can, hoping that the reviewers will not see all the flaws and will accept it. If they don't, I go through the same process at the next journal down the line, and eventually it gets accepted somewhere. The registered report model makes one change to the process, and that is to move
the critical point of peer review [00:34:00] from after the results are known, and I've written up the report and I'm all done with the research, to after I've figured out what the question is that I want to investigate and what methodology I'm going to use. So I haven't observed the outcomes yet. All I've done is frame a question, articulate why it's important, and lay out a methodology that I'm going to use to test that question, and that's what the peer reviewers evaluate. Right? And so the key part is that it fits into the existing system perfectly, right? The currency of advancement is publication. I need to get as many publications as I can, in the most prestigious outlets that I can, to advance my career. We don't try to change that. Instead, we just try to change what the basis is for making a decision about publication. And moving the primary stage of peer review to before the results are known makes a fundamental change in what I'm being rewarded for as the author. [00:35:00] Right? What I'm being rewarded for as the author in the current system is sexy results, right? Get the best, most interesting, most innovative results I can. And the irony of that is that the results are the one thing that I'm not supposed to be able to control in my study. Right? Right. What I am supposed to be able to control is asking interesting questions and developing good methodologies to test those questions. Of course, that's oversimplifying a bit. The presumption in emphasizing results is that my brilliant insights at the outset of the project are the reason that I was able to get those great results, right? But that depends on the credibility of that entire pipeline. Put that aside, though: moving it to the design stage means that my incentive as an author is to ask the most important questions that I can, and to develop the most compelling and effective and valid methodologies that I can to test them.
[00:36:00] Yeah, and so that changes it to what it is, presumably, we are supposed to be rewarded for in science. There are a couple of other elements of incentive change that this has an impact on that are important for the whole process. For reviewers' incentives: when I am asked to review a paper in my area of research, when all the results are there, I have skin in the game as a reviewer. I'm an expert in that area. I may have made claims about things in that particular area. Yeah, and if the paper challenges my claims, I am sure to find all kinds of problems with the methodology: I can't believe they did this, this is a ridiculous thing, right? They didn't cite my paper, that's the biggest starting-point problem. They challenge my results? Well, forget about you. But of course, if it's aligned with [00:37:00] my findings and cites me gratuitously, then I will find lots of reasons to like the paper. So I have these twisted incentives to reinforce findings and behave ideologically as a reviewer in the existing system. Moving peer review to the design stage fundamentally changes my incentives too, right? So say I'm in a very contentious area of research, and there are opponents of mine on a particular claim. When we are dealing with results, you can predict the outcome, right? People behave ideologically even when they're not trying to. When you don't know the results, both people have the same interests, right? If I truly believe in the phenomenon that I'm studying, and the opponents of my point of view also believe in their perspective, right, then both of us want to review that study and that design and that methodology to maximize its quality, to reveal the truth, which each of us thinks we [00:38:00] have. And so that alignment actually makes adversaries
to some extent allies in review, and makes the reviewer and the author more collaborative. Right? The feedback that I give on that paper can actually help the methodology get better. Whereas in the standard process, when I say, here are all the things you did wrong, all the author has to say is: well, geez, you're a jerk. I can't do anything about that; I've already done the research, and so I can't fix it. Yeah. So shifting that earlier is much more collaborative, and helps with that. And then the other question is the incentives for the journal, right? Journal editors have strong incentives of their own. They want readership, they want to have impact, and no one wants to be the one that destroyed their journal. And so [00:39:00] the incentives in the existing model are to publish sexy results, because more people will read those results, they might cite those results, they might get more attention for the journal, right? Shifting the evaluation onto quality designs then shifts their priorities to publishing the most rigorous, most robust research, and to being valued based on that. Yeah, so I'll pause there. There are lots of other things to say, but those, I think, are some critical changes to the incentive landscape that still fit into the existing way that research is done and communicated. Don't people want to read sexy results? Ben: Yeah, I have a bunch of questions. Just to poke at that last point a little bit: wouldn't people still read the journals that are publishing the most sexy results, regardless of at what stage they're doing that peer review? Brian: Yeah. This is a key concern of editors in thinking about adopting registered reports.
[00:40:00] So we have about a hundred and twenty-five journals that are offering this now, but we continue to pitch it to other groups, and one of the big concerns that editors have is: if I do this, then I'm going to end up publishing a bunch of null results, no one will read my journal, no one will cite it, and I will be the one that ruined my darn journal. All right. So it is a reasonable concern, because of the way the system works now. There are a couple of answers to that. The first is empirical: is it actually the case that these are less read or less cited than regular articles published in those journals? So we have a grant from the McDonnell Foundation to actually study registered reports, and the first study that we finished is a comparison of articles that were done as registered reports with [00:41:00] articles published in the same journals that were done the regular way, to see if they differ in altmetrics attention, right, attention in media, news, and social media, and also in citation impact, at least early-stage citation impact, because this model is new enough that it's only been operating since 2014 in terms of first publications. And what we found, at least in this initial data set, is that there's no difference in citation rates, and if anything, the registered report articles have gotten more altmetric impact: social media, news media. That's great. So at least the initial data suggest that, and who knows if it will sustain or generalize. But the conceptual argument that I would make is that these studies have been vetted in these terms: without knowing the results, are these important results to know? [00:42:00] Right? That's what the editors and the reviewers have to decide: do we need to know the outcome of this study? If the answer is yes, that this is an important enough question that we need to know what happened, then any result is informative. Yeah, right.
That's the whole idea: we're doing the study to find out what the world says about that particular hypothesis, that particular question. Yeah, so it becomes citable. Whereas when we're only evaluating based on the results, well, things that surprise people, 'that's crazy, but it happened', okay, that's exciting. But if you have a paper where it's 'that's crazy, and nothing happened', then people say, well, that was a crazy paper. Yeah, and that paper would be less likely to get through the registered report kind of model. Ben: That makes a lot of sense. You could even see a world where, because they're being pre-registered, especially for, like, the press, people can know to pay attention to it. [00:43:00] So you can actually almost generate a little bit more hype, in terms of: oh, we're going to do this thing, isn't that exciting? Brian: Yeah, exactly. So we have a reproducibility project in cancer biology that we're wrapping up now, where we sampled a set of studies and then tried to replicate findings from those papers, to see where we can reproduce findings and where there are barriers to being able to reproduce existing results. And all of these went through the journal eLife as registered reports, so that we got peer review from experts in advance to maximize the quality of the designs. And instead of just registering them on the OSF, which they are, the journal also published the registered reports as articles of their own, and those did generate lots of interest: ooh, what's going to happen with this? And that, I think, is a very effective way to engage the community in the process of actual discovery. We don't know the answer to these [00:44:00] things. Can we build in a community-based process that isn't just about 'let me tell you about the great thing that I just found', and is more about 'let me bring you into our process'?
How it is we're actually investigating this problem, right, and getting more of that community engagement, feedback, understanding, and insight all along the life cycle of the research, rather than just at the end point, which I think is much less efficient than it could be. Open Science in Competitive Fields and Scooping Ben: Yeah, and on the note of pre-registering: have you seen how it plays out in extremely competitive fields? So one of the worlds that I'm closest to is deep learning, machine learning research, and I have friends who keep what they're doing very, very secret, because they're always worried about getting scooped. They're worried about someone basically doing the thing first, and I could see people being hesitant to write down, to [00:45:00] publicize, what they're going to do, because then someone else could do it. So how do you see that playing out, if at all? Brian: Yeah, scooping is a real concern in the sense that people have it, and I also think it is a highly inflated concern relative to what actually happens in practice. But nevertheless, because people have the concern, systems have to be built to address it. So, one simple answer on addressing the concern, and then some reasons to be skeptical of it. On addressing the concern: with the OSF, you can pre-register and embargo your pre-registrations for up to four years. And what that does is it still gets you all the benefits of registering: committing, putting the plan into an external repository, so you have independent verification of the time and date and what you said you were going to do. But it then gives you as the researcher the flexibility to [00:46:00] say, I need this to remain private for some period of time, for whatever reason I need it to be private. Right? I don't want the research participants that I have engaged in this project to discover what the design is, or I don't want competitors to discover what the design is. So that is a pragmatic solution, to sort of address:
okay, you've got that concern; let's meet that concern with technology, to help manage the current landscape. Then there are a couple of reasons to be skeptical that the concern is actually much of a real concern in practice. One example comes from preprints. With a preprint, you're sharing a paper in some area of research prior to its going through peer review and being published in a journal, right? And in some domains, like physics, it is standard practice: arXiv, which is housed at Cornell, is the standard way for [00:47:00] anybody in physics to share their research prior to publication. In other fields it's very new or unknown, but emerging. And the exact same concern about scooping comes up regularly, where they say: there are so many people in our field; if I share a preprint, someone else, some very productive lab, is going to see my paper, they're going to run the studies really fast, they're going to submit it to a journal that will publish it quickly, and then I'll lose my publication because it'll come out in this other one first. Right? That's a commonly articulated concern. I think there are very good reasons to be skeptical of it in practice, and the experience of arXiv is a good example. It's been operating since 1991. Physicists early in its life articulated similar kinds of concerns, and none of them have that concern now. Why is it that they don't have that concern now? Well, the norms have shifted: the way you establish priority [00:48:00] is not when it's published in the journal, it's when you get it onto arXiv. Right? Right. So a new practice becomes standard. It's when the community knows about what it is you did; that's how you get that first-finder accolade, and that still carries through to things like publication. A second reason is that we all have a very inflated sense of self-importance; our ideas are great, right?
There's an old saw in venture capital: take your best idea and try to give it to your competitor, and most of the time you can't. We think our own ideas are really amazing, and everyone else doesn't; yeah, people sleep on other people's ideas. Right? So the idea that there are people licking their chops, waiting for your paper or your registration to show up so they can steal your [00:49:00] idea and use it and claim it as their own, it's great, it shows high self-esteem. And that's great; I am all for high self-esteem. And then the last part is that it is a norm violation, to such a strong degree, to do that, the stealing of someone else's work and not crediting them for it. But it's actually very addressable in the daily practice of how science operates. If you can show that you put that registration or that paper up on an independent service, and that it appeared prior to the other person's work, and then that other group did try to steal it and claim it as their own, well, that's misconduct. If they don't credit you as the originator, then that's a norm violation in how science operates, and I'm actually pretty confident in the process of dealing with norm [00:50:00] violations in the scientific community. I've had my own experience with this. I think it very rarely happens, but I have had an experience with it. I've posted papers on my website since before there were preprint services in the behavioral sciences, since I've been a faculty member. And I went on Google Scholar one day, and I was reading, yeah, I have these alerts set up for things that are related to my work, and a paper showed up, and I was like, oh, that sounds related to some things I've been working on. So I clicked on the link to the paper, and I went to the website, and I'm reading the paper, from these authors I didn't recognize, and then I realized: wait, that's my paper.
It took me a second: I'm an author, and I didn't submit it to that journal. And it was my paper. They had taken a paper off of my website. They had changed the abstract; it looked like they'd run it through Google Translate, it was all gobbledygook, but it was an abstract. The rest of it was [00:51:00] essentially a carbon copy of our paper, and they had published it. Well, you know, so what did I do? I contacted the editor, and there's actually a story on Retraction Watch about someone stealing my paper, with everyone laughing about it, and it got retracted. And as far as we heard, the person that had done it lost their job, though I don't know if that's true; I never followed up. But there are systems in place, that's the basic point, to deal with the egregious forms of this. And so I am sanguine about these not being real issues, but I also recognize that they are real concerns, and so we have to have our technology solutions be able to address the concerns as they exist today. And I think those concerns will just disappear as people gain experience. Top down v Bottom up for driving change Ben: Got it. I like that distinction between issues and concerns, that they may not be the same thing. So, I've been paying attention to the tactics that you're [00:52:00] taking to drive this adoption, and there are some bottom-up things, in terms of changing the culture and getting one journal at a time to change just by convincing them, and there have also been some top-down approaches that you've been using. I was wondering if you could just sort of go through those, and what you feel is the most effective, or what combination of things is the most effective, for really driving this change? Brian: Yeah, no, it's a good question, because culture change is hard, especially with a decentralized system like science, where there is no boss and the different incentive drivers are highly distributed. Right, right. Each researcher has a unique set of societies
that are relevant to establishing my norms; you have funders that fund my work, a unique set of journals that I publish in, and my own institution. And so every researcher [00:53:00] has a unique combination of those, and they all play a role in shaping the incentives for his or her behavior. And so fundamental change, if we're talking just at the level of incentives, not even at the level of values and goals, requires a massive shift across all of those different sectors. Not massive in terms of the amount things need to shift, but in the number of groups that need to make decisions to shift. Yeah. And so we need both top-down and bottom-up efforts to address that. The top-down ones that we work on, at least, are largely focused on the major stakeholders: funders, institutions, and societies, particularly ones that are publishing, right, so journals, whether through publishers or societies. Can we get them, like with the TOP Guidelines, which is a framework that has been established to promote transparency standards, to ask: what could we [00:54:00] require of authors, or grantees, or employees of our organizations? Those, as a common framework, provide a mechanism to try to convince these different stakeholders to adopt new standards and new policies that everybody associated with them then has to follow, or is incentivized to follow, simultaneously. Those kinds of interventions don't necessarily win hearts and minds, though, and a lot of the real work in culture change is getting people to internalize what it means to do good science, rigorous work. And that requires a very bottom-up, community-based approach to how norms get established within what are effectively very siloed, very small-world scientific communities that are part of the larger research community.
And so with that, we do a lot [00:55:00] of outreach to groups, starting with the idealists, right, people who already want to do these practices, who are already practicing rigorous research. How can we give them resources and support to work on shifting the norms in their small-world communities? And so, through, for example, the preprint services that we host, or other services that allow groups to form, they can organize around a technology, like a preprint service that their community runs, and then drive the change from the basis of that particular technology solution in a bottom-up way. And the great part is that, to the extent that both of these are effective, they become self-reinforcing. So a lot of the stakeholder leaders, say the editor of a journal, will say that they are reluctant: they agree with all the things that we're trying to pitch to them as ways to improve rigor in [00:56:00] research practices, but they don't have the support of their community yet, right? They need to have people on board with this. Well, the bottom-up work provides that backing for that leader to make a change. And likewise, leaders that are more assertive, that are willing to take some chances, can help to drive attention and awareness in a way that helps fledgling bottom-up communities gain better standing and more impact. So we really think that the combination of the two is essential to get true culture change, rather than bureaucratic adoption of a process that now someone told me I have to do, which could be totally counterproductive to scientific efficiency and innovation, as you described. Ben: Yeah, that seems like a really great place to end. I know you have to get running, so I'm really grateful. [00:57:00] This has been amazing, and thank you so much. Brian: Yeah, my pleasure.
My guest this week is Malcolm Handley, General Partner and Founder of Strong Atomics. The topic of this conversation is fusion power: how it's funded now, why we don't have it yet, and how he's working on making it a reality. We touch on funding long-term bets in general, incentives inside of venture capital, and more. Show Notes Strong Atomics Malcolm on Twitter (@malcolmredheron) Fusion Never Plot Fusion Z-Pinch Experiment ARPA-E ALPHA Program ITER - International Thermonuclear Experimental Reactor NIF - National Ignition Facility ARPA-E Office of Fusion Energy Science Sustainable Energy without the Hot Air Transcript [00:00:00] In this podcast I talk to Malcolm Handley about fusion, funding long-term bets, incentives inside of venture capital, and more. Malcolm is the managing partner of Strong Atomics, a venture capital firm that exists solely to fund a portfolio of fusion projects that have been selected based on their potential to create net-positive energy and lead to plausible reactors. Before starting Strong Atomics, Malcolm was the first employee at the software company Asana. I love talking to Malcolm because he's somewhat of a fanatic about making fusion energy a reality, but at the same time he remains an intense pragmatist; in some ways, he's even more pragmatic than I am. He thinks deeply about everything he does, so we go very deep on some topics. I hope you enjoy the conversation as much as I did. Intro Ben: Malcolm, would you introduce yourself? Malcolm: Sure. So I'm Malcolm Handley. I founded Strong [00:01:00] Atomics after 17 years as a software engineer, because I was looking for the most important thing that I could work on and concluded that that was climate change. That was before democracy fell off the rails, and so it was the obvious most important thing. So my thesis is that climate change is a real problem, and the
typical ways that we are addressing it are insufficient. For example, even if you ignore the climate deniers, most people seem to be of the opinion that we're on track, that renewables and storage for renewable energy are going to save the day. And my fear, as I looked into this more deeply, is that this is not sufficient, that we are in fact not on track, and that we need to be looking at more possible ways of responding to [00:02:00] climate change. So I found an area, nuclear fusion, that has the potential to help us solve climate change and that in my opinion is underinvested in. So I started Strong Atomics to invest in those companies and to support them in other ways, and that's what I'm doing these days. What did founding strong atomics entail? Ben: Can you dig a little bit more into what founding Strong Atomics entailed? You can't just snap your fingers and bring it into being. Malcolm: I almost did, because I was extremely lucky. But in general, Silicon Valley has a pretty well-worn model for how people start startups, and I think even people getting out of college actually know a surprising amount about how to start a company. When you look at fusion companies getting started, you realize just how much knowledge we take for granted in Silicon Valley. On the other hand, as far as I can tell, the way [00:03:00] that every VC fund gets started, the way that everyone becomes a VC, is unique. There's really one story for how you start a company, and there are n stories for how funds get started. So in my case, I wasn't sure that I wanted to start a fund; more precisely, it hadn't even occurred to me that I would start a fund. I was a software engineer looking for what I could do about climate change, just assuming that I was looking for a technical way to be involved. I was worried, because my only technical skill is software engineering, but I figured, hey, with software you can do many things. There must be a way that a software engineer can help.
So I made my way to the ARPA-E Summit in DC at the beginning of 2016 and went around and talked to a whole lot of people at their different booths about what they were doing. My question for myself was: does what you're doing matter? My question for them was: how might a software engineer help? [00:04:00] And to a first approximation, even at a wonderful conference like the ARPA-E Summit, I think you'd have to say mostly these things are not moving the needle. Mostly, in my terminology, they don't matter, and it really wasn't clear how a software engineer could help. And then, because I was curious, because I'd read many things about companies claiming that they were working on fusion and that they were close, I made an effort to hit every fusion booth I could find. At one of those booths I said, I'm a software engineer, what can I do? And they said, well, the next time this guy comes to San Francisco, you should organize an audience and he'll give a talk, and won't that be fun? That guy is now one of my science advisors, but that was the first part of my relationship there. So he came, I organized the talk, we had dinner beforehand, and I asked: how close is fusion? And he says, well, it could be 10 years away, but it's actually [00:05:00] an infinite time away, and the problem is we're not funded. So then you say, well, how much money do you need? And it turns out to be a few million dollars. You say, that's really, really dumb. Here I am in Silicon Valley. The company I work for is Asana, making collaboration software for task management, and it just raised 50 million dollars, and here these people are credibly trying to save the world and they're short two million dollars. Maybe I can find some rich people who can put some money in. The answer was yes, I could find a rich person who was willing to put some money in. And rich people, by and large, unless they're really excited about the company, do not want to put money in directly. They don't want that kind of relationship.
So you work through all the mechanics here, and you learn that you can convince people to put money in, but you need to [00:06:00] grease the wheels by making a normal VC structure, in this case. And then before you know it you wind up as the managing partner of a one-person VC fund with a single investor. And then you say, well, I've had a surprising amount of impact doing this. What should I do? Do I keep looking for that technical way to be involved? And my conclusion was, there's really no contest here. I could go back to my quest of how, as a software engineer, can I help climate change. Or look: I've already put four million dollars into fusion, four million dollars of other people's money, but companies have four million dollars that they wouldn't have had without me, and several of them are doing way better, making way more progress, than they would have without me. And now I have all these contacts in the fusion industry, I can build a team of advisers, I'm in all of these internal discussions about [00:07:00] what's coming next in federal funding programs, and I'm invited to conferences and that kind of thing. And it was so obvious that the way to keep making an impact on climate change was to keep doing what I was doing. So that ends with my now taking the steps towards being what I call a real VC: someone who goes out and really raises the next fund in a much more normal way, with multiple LPs and a much more significant amount of money. Ben: Got it. Malcolm: Right now I'm the baby VC. Ben: So you invest in babies? Malcolm: No, no, I'm the baby. And that raises a whole bunch of questions. Why did you structure the venture as a VC firm? Ben: So one is, why did you decide to structure it as a VC fund instead of, say, a philanthropic organization, if you just wanted to redirect money? Malcolm: The short answer is [00:08:00] because I can get my hands on way more money
if this is a for-profit enterprise. My LP was very generous and trusting and also very open-minded, and part of the four million dollars that I mentioned before actually was a donation: it was a gift to the University of Washington to support fusion research there, because that particular project that we wanted to support was still an academic project. For the others, the companies were for-profit companies, and there's just no good case to say to someone who has money, you should give money to support these for-profit things in a way that gets you no profit if they actually work. You can tap a lot more money if you offer people a profit motive, and I think you create a stronger chain of [00:09:00] incentives. They are encouraged to give more money, I am more encouraged to look after that money, I have a share of the profit with my fund if it ever makes a profit, and finally you get a more traditional control structure. I don't yet have an actual equity stake in these companies, because we did a convertible note or a Y Combinator SAFE, but I sit on the board of the companies. They all know that my investment will turn into voting equity in the future, and it's just a much cleaner setup. So I think there were no downsides to doing it this way, and a lot of upsides. The bigger question, which I contemplated at the beginning of all of this, was: even for for-profit money, is a fund the right vehicle, or are there other [00:10:00] options that I should pursue? That's something that I spent a lot of time looking into after creating the first fund. What other options are there, right? (Alternate structures) So one approach is you say, well, there are four, or however many, companies here. I like what they're doing, but they're really annoyingly small. By annoying, I mean they are inefficient in terms of how they spend their money, and they're potentially leaving innovation on the table.
So the companies that I've invested in are all about four people, maybe six, but that kind of size, and they have one or two main science people in each company. Those scientists interact with other scientists a few times a year at conferences. Those scientists at the conferences of course don't completely trust or love each other: they are all competitors working at different companies, [00:11:00] each convinced that they're going to crush the other guys, and that's the extent of their scientific collaboration, unless they have a couple of academics at universities that they're close to. And when I think about my background in software, I never worked in a team that small. I had many more people that I could turn to for help whenever I was stuck. So one thing we looked at seriously was starting a company that would raise a bunch of money and buy these four or so companies. We would merge them all into one. This is called a roll-up. And we'd move everyone to one place. They would certainly have a much larger pool of collaborators. They would also have the union of all of their equipment, right? So now when someone had a new idea for an approach to fusion that they wanted to test, instead of needing to contemplate leaving [00:12:00] their job, starting a new company, raising money, buying or scrounging a whole lot of equipment, and then years later doing the experiment, they could practically go in on the weekend and do the experiment, after validating their ideas with their co-workers. Right? I think there's a lot to recommend this, and it was seductive enough that I went a long way down this path. In the end the complexities killed it, and made it seem like something that wasn't actually a good idea when you netted everything out. Complexities of Roll-Ups Ben: Can you go into a little more detail about that? Which complexities, and how did you decide it was not a good idea? Malcolm: Right.
So it's much harder to raise money for, because you're doing something much less traditional. Or, I guess, that's not necessarily harder: in some ways, if you come to the market with a radically new idea, you're so novel that you [00:13:00] break through everyone's filters, and maybe you have an easier time raising money. I've seen it go both ways. But my existing investor was not enthused about this, so I would certainly have had to work past some skepticism there. On top of that, you have to convince all of these companies to sell to you, and that looks really hard. The CEO of one of the companies told me, look, I'm a lone cowboy, I think he said, and made it very clear that he was used to executing independently and didn't want to be part of a larger company. Potentially I could have bought his independence by offering him enough money that he couldn't refuse, but that's not really the way you want to build your team. Other companies were enthusiastic, but getting [00:14:00] the majority of these benefits would have required people moving. And these companies have connections to universities, and of course the people have families, they have whole lives. It wasn't clear that people wanted to move. It really looked as if everyone was really excited about a roll-up that happened where they lived. Yeah. On top of that, these people are cordial to each other at conferences, and at least think they want to collaborate more, but they're also pretty fierce competitors. So you also had to believe that when these people were all brought into one company, they would actually collaborate, rather than get into status contests and fights and that kind of thing, not to mention all the more subtle ways in which they might fail to collaborate. And a really big wake-up call for me was when the [00:15:00] two technical co-founders of one of my companies started fighting. These people had known each other for decades. They were best men at each other's weddings.
They had chosen to found the company together. No asshole VC had bought two companies, glued them together, and forced them to work together. This was their choice. And it got to the point where still they could not work together. I went down, I spent two days at the company watching the team dynamic, interviewing each person at the company one-on-one, and made the recommendation that the company fire one of the founders. So you look at that, and then you're like, well, these people say they're happy to cooperate with everyone at these other companies. Do I really believe that? So, [00:16:00] huge caution, I think. Yeah, other people cautioned me that the competitive factors would be reduced. So I had one guy who went through YC, not doing fusion, just a regular software startup, say: look, when we were doing YC, we were in the same year as Dropbox, and it was clear that Dropbox was crushing it. And if we had known that actually we were part of some big roll-up and we were going to share in Dropbox's success, we would not have worked as hard on our little company as we did wanting to match their success. Yeah. (Holding companies and how they worked) So eventually I looked at the third model, the first model being the VC fund and the second model being the roll-up. The third model was a holding company, and this is meant to be a middle ground where we would have a company that would invest in the various fusion companies that we wanted to support, but they would not be combined. I [00:17:00] guess I've neglected to mention several of the other advantages that we would have gotten with the roll-up: in addition to a unified team of scientists, we would have had the pool of hardware that I did mention, and we would also have been able to have other infrastructure teams.
For example, we could have had a software team that worked on modeling or simulation software that all of the different fusion teams could use. So the idea with the holding company was that we would still be able to centralize things that made sense to centralize, things where you could benefit by sharing, but we would have these companies remaining as separate companies. They could raise money from other people if they wanted to, or if we couldn't provide the money when they needed it. They wouldn't have to move. They would be independent companies. But the first thing that we would do is say: a condition of taking money from us is [00:18:00] you will give all of your experimental data, and enough of the conditions of your experiments, to us, so that we can run our own simulations using our own software and match them against your experimental results. We would of course encourage them to use our modeling software as well, but that's harder to force. So the idea was, software is something that really can be shared, right? We would encourage them to share it, and by having access to their detailed data, we would be able to validate what they were doing and be much more informed investors than others. So we could make better investment decisions than other investors, which would help us. It would also help the companies, [00:19:00] because our decision to invest, or to continue to invest, would be a more credible signal of success, of value creation, and they could use that to shop around, to raise money from other people. So the benefits there would be still internalizing some of the externalities while keeping the companies' independence, but allowing resource sharing and a better signal
for further support raises. So: much more flexible sharing, sharing where it made sense and not where it didn't, and in an optional way. Later on we might have said, well, it turns out that a number of our companies need the same physical equipment, maybe pulsed power equipment, which is a large part of the expense for these companies. So we could have bought that, set it up somewhere, and then said, you're welcome to come and do experiments at our facility. And you could imagine that over time they would decide that the [00:20:00] facility was valuable enough that someone from the company moved there, and then maybe they do all their new hiring there, and the companies gradually co-locate, but in a much more gradual, much smoother way than in the roll-up, where we envisioned saying a condition of this purchase is: you move. Right. Having just talked up the holding company so much, obviously I decided I didn't like that either, because that's not what I'm doing. One of the death blows for the holding company was doing a science review of the four companies that I'd invested in so far, plus several other approaches. By this point I'd built a team of four science advisors. We put all of these seven or so approaches past the advisors for basic feedback: is this thing actually a terrible [00:21:00] idea and we haven't realized it yet, or what are the challenges, or is this an amazing thing that we should be backing? And the feedback that we got was that one of them was good and should definitely be backed. For a bunch of them, the feedback was wait and see. Another one was in an even more precarious position because of execution problems. Two more that received favorable feedback did not, and still do not, have companies associated with them, but the feedback was positive enough that we pay people to work on them inside basically shell companies, so that we own the IP if something comes of it. Ben: Sorry, just to interrupt. They're in universities right now?
Malcolm: They're dormant. Ben: They're dormant? Okay. Malcolm: A common theme in fusion is: someone does some [00:22:00] work, gets some promising results, and then for one reason or another fails to get funding to continue it. Sometimes the story is, then the Republicans got into power and cut the funding. Or they got less funding than they wanted, so they bought worse equipment than they wanted, and therefore they weren't able to achieve the conditions that they wanted, but they still did the experiment, and because of the bad conditions they got bad results, so they definitely didn't get any more money. For a whole host of reasons, the promising work doesn't continue. Yeah. So in both of those cases there are promising results and no one is working on them. Ben: Got it. Yeah, another sad fusion story. Malcolm: So a bunch of things came out of that science review, but what did not come out of it was: oh yes, here we have a pool of four companies that are all [00:23:00] strong and deserving, and have enough overlap that some sort of sharing model makes sense. On top of that, it was becoming clear that even a holding company was a sufficiently novel pitch as to make my life even more difficult for fundraising. Yeah. So it just didn't look like something that was worth taking that fundraising hit for, given that the benefits were seeming to be more theoretical, or in the future, rather than in the present. Alternate Structures Ben: So with a VC fund, to my understanding, you are sitting on already committed capital, and your job is to deploy it, I'm going to use air quotes, "as [00:24:00] quickly as possible," within a certain limit of responsibility. Would you ever consider something different? There are these private equity firms that will have a thesis and look for companies that meet a certain set of conditions, and only then will they basically exercise a call option on promised money and invest it. It seems like that's another structure that you could have gone with.
Did you consider anything like that at all? Malcolm: Right. May I make two edits to your description of a VC fund? One is: you're not sitting on a pool of money that is in your bank account. Some of the money is in your bank account, but there's a distinction between the money that is committed and the money that is raised. [00:25:00] So you might say, I want to have a VC fund that has 40 million dollars over its lifespan. If you wait until you have raised all 40 million, then the deals that you'd identified at the beginning, that you were using to support the raising of your fund, will likely be gone. It can take a long time to raise even a moderately sized fund. Yeah, unless you're one of those individuals leading very charmed lives where in weeks they raise their entire fund, for the rest of us the fundraising process can be 6 to 12 months, that kind of thing. So you have a first close, where you have enough money committed to justify saying: this fund is definitely happening, [00:26:00] we're going to do this. Even then, maybe your first close is 15 million dollars. You don't need all 15 million dollars to start making your investments right now. Right? Over the life of the fund, you do capital calls when your account is too low to keep doing what you're trying to do. The LPs get penalized heavily if they fail to produce the money that they have committed within a certain amount of time after you call it. Got it. You could in principle call all the money at the beginning, but you damage your fund's metrics if you do that. Got it. Funds are graded through their internal rate of return, and I don't remember exactly how this is calculated. (Internal Rate of Return (IRR)) But part of that is how long you actually have the money. So if you get the money closer to when you're going to spend it or invest it, you look better. Got it. So that's the first edit.
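Malcolm's point about capital-call timing can be made concrete with a toy calculation. The following is a minimal sketch, with illustrative numbers and a simple bisection IRR solver of my own, not anything from Strong Atomics: IRR is the discount rate at which a fund's dated cash flows sum to zero, so calling the same capital later, for the same exit, grades out higher.

```python
# Toy sketch of how capital-call timing affects IRR. All numbers are
# illustrative assumptions, not Strong Atomics figures.

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return: the discount rate r at which the
    dated cash flows (year, amount) have zero net present value.
    Found by bisection, which suffices for one sign change."""
    def npv(r):
        return sum(amount / (1 + r) ** t for t, amount in cashflows)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the zeroing rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# A fund that deploys $10M total and returns $30M at year 10.
call_all_upfront = [(0, -10.0), (10, 30.0)]                    # everything at first close
call_as_needed = [(t, -2.0) for t in range(5)] + [(10, 30.0)]  # $2M/year for 5 years

print(f"all called up front: {irr(call_all_upfront):.1%}")
print(f"called as needed:    {irr(call_as_needed):.1%}")
```

With these made-up numbers, the identical 3x return produces a noticeably higher IRR when the capital is called over five years instead of all at once, which is exactly why calling everything at the first close damages the fund's metrics.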
The [00:27:00] second edit is, I wouldn't say my job is to deploy the money as quickly as possible. My job is to deploy the money for the best results possible. I measure results in terms of some combination of profit to my investors and impact on the world, and because I think fusion is well aligned to do both, I think these are pretty consistent. So I'm not trying to spend my money as quickly as I can. I'm trying to support a large enough portfolio of companies for as long as I can. A large portfolio of companies because I want to mitigate the risk: I want to include as many companies in the portfolio so that promising ideas do not go unsupported, that's the impact, and also so that the company that succeeds, if one ultimately does, is in my fund, so [00:28:00] that my investors get a return. Got it. And then I want to support them for as long as I can, because the longer I'm supporting them, the larger the return my investors get, rather than that later value creation accruing to later investors. Got it. Also, the longer I can support them, the greater the chance that the company has of surviving for long enough, and making enough progress, that it can then raise from other investors, investors who probably will know less about fusion and be less friendly to fusion. Why not start the Bell Labs of Fusion Ben: Okay, there's a bunch of bookmarks I want to put there. The first thing is one more question about possible structures. A problem that you brought up consistently is the efficiency gains from having people all in the same place, all sharing equipment, all sharing code, all sharing knowledge, that do not happen [00:29:00] when you have a bunch of companies. Why are you so focused on starting with companies, or groups of people who have already formed companies, as the basic building blocks? For example, you could imagine a world where you create the Bell Labs of fusion, where you literally just start from scratch,
hire people, and put them all in the same place with a bunch of equipment, all working together, without having to pull in people who have already demonstrated their willingness to go out on their own and start companies. Malcolm: Yeah, great question. The Bell Labs of fusion is an analogy that gets thrown around a fair amount, including to describe what I was trying to do, although, I agree, it's slightly off. I think there are two answers to that question. One is, [00:30:00] by the point that I was really considering this, I had already invested in four companies, so partly the answer is path dependence. Got it. And partly the answer is that by the time I was clearly seeing the problems with the roll-up especially, but also the holding company, it didn't seem as if just starting a company from scratch was really going to change that. Some people make the argument that actually the best plasma physicists aren't in companies at the moment. They are in academia or national labs, because the best ones don't want to risk their reputation and a great job on a two-bit company that's going to have trouble. And therefore, if you could come along and create a credible [00:31:00] proposition of a legitimate company that will do well at fundraising, and prove that it will do well at fundraising by endowing it with a lot of money in the beginning, you may then hire those people. Right? I know some people who are convinced that this is possible. You still have to deal with the asshole complex that is common in fusion. These people have had their entire careers, which are long because they are all old, or their PhD programs, basically, to become quite sure of the approach that they want to take to fusion. So it was difficult to find a team of four experienced, knowledgeable, and open-minded advisors for my science board, and not all of those people are able to be hired for any price.
I think if you want to actually stock a [00:32:00] company with these people, you need more people, and they all need to be able to be hired, and you still need to convince them to move, and you still need to convince them to work on each other's projects. So I think it's an interesting idea, but I have real concerns about the lack of competition that you would get, and about all the areas that I just mentioned. And on top of that, when I looked into the situation around the software sharing and the hardware sharing more closely, I became less convinced that this is actually available. Ben: What's that? Malcolm: On the software side, many people don't even believe that it's possible, in a reasonable time frame, to create simulation software that [00:33:00] can sufficiently accurately simulate the conditions used by a whole range of different approaches to fusion. At the moment we have many different pieces of software, or yeah, "codes" as the physics community calls them, that are each validated and optimized for different conditions: different temperatures, different densities, different physical geometries of the plasma, that kind of thing. There are some people who believe that we can make software that spans a sufficiently large range of these parameters as to be useful for a family of fusion approaches. There are even people who claim to be working on them right now. Yeah. And when you dig more deeply, you discover, yeah, they're working on them, but they haven't accomplished as much of that unified solution [00:34:00] as they say they have. So you talk to other people who use these codes, and they're like, yes, I think those people might be the people who can do this, but they're not there yet.
So the notion of spinning up a team of software engineers and plasma physicists and numerical experts and so forth to try to do that came to seem like a bigger lift, with much more dubious payback in the relevant time frame, than I had initially thought. Similarly on the hardware side: it is really costly, in many ways, to reconfigure physical equipment for one experiment and then reconfigure it for another experiment. It's really bad when you have to move things between locations as well, or move a team to a site and configure everything there and then do your experiments for a month. But [00:35:00] it's still bad even when all the people and all the equipment are in one place. You get the most consistent results if you can leave everything set up, and you want to be able to keep going on Saturday, or keep going on Monday, because you weren't quite done with those experiments. So to what degree can you really share this equipment? Definitely to some degree. To a large enough degree to justify spinning up a whole company? I'm not convinced. Got it. On top of that, if I were to start a company doing this, I would need to find a CEO, build up a whole team that I don't have to build when I'm investing in other companies, right? Should I be that CEO? Many people assumed that I should, or that I wanted to, or something like that. I think it's a really hard sell for [00:36:00] investors that I'm the best person to run this company, but on the other hand it wasn't actually clear who should do it. Incentives: How do you measure impact and incentivize yourself? Ben: Yeah, that makes a lot of sense. I want to [00:37:00] go back to something: you were talking about incentives previously, both that your incentives are to have impact and to make money for your shareholders. Yeah. I want to ask first: how do you measure impact, for yourself, in terms of your incentives?
You mentioned something along the lines of companies existing that would not otherwise exist. It's pretty easy to know, okay, I've made this much money. It's a little harder to say, okay, I've had this much impact. So how do you personally measure that? Malcolm: Yeah, the clearest example of impact so far is another project called FuZE. Annoyingly, the same name, spelled differently. This is the Fusion Z-pinch Experiment, FuZE, [00:38:00] at the University of Washington, and it's the group that we donated to. (Fusion Z-Pinch Experiment: https://www.aa.washington.edu/research/ZaP) All four of the companies that I have given money to so far are supported by ARPA-E's ALPHA program (ARPA-E ALPHA Program: https://arpa-e.energy.gov/?q=arpa-e-programs/alpha), its fusion program, and all four of them got less money than ARPA-E would have liked to have given them. So at the time that I became involved with the FuZE project, they were behind schedule on their ARPA-E milestones, and we made them a donation that enabled them to hire an extra two people for the rest of the life of the project. That enabled them to catch up with their milestones and become the most successful of the fusion projects that ARPA-E has. [00:39:00] When I say most successful, what I mean is: they are hitting their milestones and they are getting very clean results. They have a simulation that says, as they put more and more current through their plasma, they will get higher temperatures and higher densities, basically better and better fusion conditions, and that at a certain point they will be making as much energy as they're putting in, and at a point beyond that they will actually be getting what we call reactor-relevant gain: getting a large enough increase in energy through their fusion that you could run a reactor off it. And the way we plot their progress is:
We look at the increase in current that they're putting through their plasma and check that they are getting results that match their theoretical predictions. For them it's especially clean, because they have this theoretical curve [00:40:00] and their experimental results keep falling very close to that curve. So it's a really nice story, because the connection between the money that they got from Strong Atomics, the people that they hired, the progress they were able to make with those additional people, and the scientific validity of what they were doing is clear at every step. Yeah. So that's one way that I can see the impact of what I'm doing. Another way, that's more indirect, is that by being involved in the field and trying to make sure that it all makes sense to me, I wind up having insights, or coming to understandings, that turn out to be helpful to everyone. So I spent a long time [00:41:00] wondering about the economics of fusion. Companies are understandably mainly focused on getting fusion to work, and they don't spend that much time thinking about the competitive energy market that they're likely to be selling into 15 or 20 years from now, and what that means for their product. I spend time thinking about that because I want to convince myself that the space matters enough to justify my time. So I went through this thought process and came to the conclusion that the ways that the companies were calculating their cost of energy were wrong. They were assuming that the reactors would be operating more or less continuously and that they would be able to sell all of the electricity that they made, whereas the reality is likely to be that for five hours or so every day, no one [00:42:00] will buy their electricity, because wind and solar are producing cheaper electricity. Right.
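The cost-of-energy error Malcolm describes can be sketched with toy numbers. All the figures below are my illustrative assumptions, not from the episode: for a plant whose cost is almost entirely up-front capital, five unsellable hours a day don't reduce costs at all, they just spread the same capital over less sold energy, raising the cost per kWh by a factor of 24/19.

```python
# Toy cost-of-energy model for a capital-dominated plant. All numbers
# are illustrative assumptions, not from the episode.
CAPITAL = 5_000_000_000  # $ to build the reactor (assumed)
LIFETIME_YEARS = 30      # period over which capital is spread (assumed)
PEAK_KW = 1_000_000      # electric output while selling (assumed)

def cost_per_kwh(hours_sold_per_day):
    """Capital divided by lifetime energy actually sold. Running costs
    are taken as negligible, since the reactors are cheap to run."""
    kwh_sold = PEAK_KW * hours_sold_per_day * 365 * LIFETIME_YEARS
    return CAPITAL / kwh_sold

print(f"sold 24 h/day: ${cost_per_kwh(24):.4f}/kWh")
print(f"sold 19 h/day: ${cost_per_kwh(19):.4f}/kWh")  # 5 h/day priced out by wind and solar
```

This is why the "sell around the clock" assumption understates the cost of energy: the capital bill is fixed whether or not anyone is buying.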
So, scratch that: the companies often conclude that they need to be demand-following, that they need to make their reactors able to ramp up and down according to what the demand is. That has other problems, because the reactors are so expensive to build and so cheap to run that ramping your reactor down to follow the demand doesn't actually save you any money, and so it doesn't make the electricity any cheaper. So I worked through all of this and came to the conclusion, which I think most people in the fusion space now agree with, that you actually need to have integrated thermal storage. Your reactor is producing hot molten salt anyway, right? And rather than turning that into [00:43:00] electricity immediately, you should store the vats of hot molten salt, run the reactor continuously, and ramp up and down the turbine that is used to go from hot molten salt to electricity. Interesting. Turbines are cheap: they have low fixed costs, so you can much more affordably ramp them up and down. Plus, if you were going to be demand-following, you were already going to be ramping your turbine up and down. All I'm saying is: keep the turbine demand-following, make the reactor smaller so that it can run continuously, which is the most efficient way to use a high-capital-cost good, and then have a buffer of molten salt. So that's the kind of insight that I've come to by working through the economics, and overall the investment case for fusion, that I hope will help all the companies, not just the ones I'm [00:44:00] investing in. Incentives for LPs Ben: So those are your incentives, that combination of impact and profit. You also have LPs, because of the VC fund structure. What are their incentives, in terms of what they want to see out of this,
Malcolm: My current LP is anonymous, so there's a limit to what I can say about their incentives, sure, but they care about climate change. They basically buy into my argument that climate change is real and worth mitigating, and that fusion is a promising and underinvested potential mitigation. Does the profit motive increase impact? Ben: And to go a [00:45:00] little farther into that, this is just a comment about impact investing as a whole, so the question is: could they get a better return by putting that money into... this is definitely putting you on the spot, but I think you could probably make an argument that they might get a better return just putting the money in some other investment vehicle. And so they probably want to see that same impact that you want to see. And I guess the thing that I'm interested in is: does having the profit motive actually increase impact, and if so, how? Malcolm: Regarding the potential for profit: when [00:46:00] I started doing this, I thought it was really a charity play, or I guess more politely, only an impact play, but set up in a for-profit structure so that if it happened to make a profit, then the people who had enabled it to happen would be able to share in that profit. As I have looked at the space more closely and refined my arguments in this area, I have come to believe that there's a meaningful potential for profit here. This all hinges on what you think the chances are of fusion working. It's very clear that if fusion works in a way that is economically competitive, the company that gets there will be immensely valuable, assuming that [00:47:00] it manages to retain its IP and that kind of thing. So I've taken stabs at figuring out the valuation of one of these companies. The error bars are huge: I got numbers around 25 billion for my low-end valuation, and closer to a trillion for the high end.
It's really hard to say, but the numbers are big enough on the profit-if-it-works side that it really boils down to: do I think it's going to work? That is a hard thing to put numbers on, but by investing in a portfolio of them you increase your chances. Risk, Timescales, and Returns vs. Normal Firms Ben: Something I know from other VC firms is that they have to limit their risk. They don't make as risky investments, because they feel they have a financial duty [00:48:00] to return some amount to their LPs in a certain amount of time, right? Do you worry about those same pressures? Have you figured out ways around them, ways to extend those time scales? Malcolm: I don't think I'm going to be subject to the same pressures, because anyone who gives me money is going to be expecting something very different. Instead of being subject to those pressures, I think that the same psychology manifests for me as limiting my pool of investors. So it's a real problem, it just plays out differently. Ben: Got it. Malcolm: A few things are different for me because I'm pitching the fund differently. A normal fund cannot look at a space and say it's really important that something works in this space, but [00:49:00] it's not clear which company might succeed because there's real science risk. Normally the investors in Silicon Valley can decide that this company or these two companies should be the winner, and they can all agree that they're going to put all their money in there, and they can anoint a winner: it will win because it's getting all the money, short of a major scandal. That does not work when investing in companies with a heavy science risk. That's why I think you need to invest in a portfolio of companies, and a normal fund has trouble doing that, because they are obliged by the investment thesis that their investors have signed off on to spread their money out across different sectors. Okay.
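The portfolio logic Malcolm describes can be sketched with a toy probability model. The 15% per-company success chance and the independence assumption are mine, purely for illustration; the shape of the curve is the point:

```python
# Chance that at least one company in a portfolio succeeds, assuming
# each approach has an independent 15% chance of working. Both the 15%
# figure and the independence assumption are illustrative, not Malcolm's.

def p_at_least_one(p_single: float, n: int) -> float:
    return 1 - (1 - p_single) ** n

for n in (1, 3, 6, 10):
    print(f"{n:2d} companies -> {p_at_least_one(0.15, n):.0%} chance of a winner")
```

Backing a single anointed winner leaves you with the single-company odds; spreading the same conviction across the whole field is what turns a long shot into a reasonable bet, which is why a fund chartered to diversify across sectors struggles to do it.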
So again, that doesn't make life magically easy [00:50:00] for me. It means I need to find investors who are on board for doing something different, specifically investors who are wealthy enough that they are diversifying their investments by investing in other funds or vehicles besides mine, and are not expecting diversity from me. But having found those investors, I will then be in a much better position, because I can concentrate in one sector and really solve it, or at least strongly support it. Is Money the limiting reagent on fusion? Ben: Got it, that makes a lot of sense. I want to shift and talk about fusion itself a little bit more. So I'm sure you've seen the "fusion never" plot. I'll put a link up in the show notes. (Fusion Never Plot: http://benjaminreinhardt.com/fusion_never/) This plot makes it look like if you pour more money in, it will go faster. Do you think that's actually the case, or is [00:51:00] there something else limiting the rate at which we achieve fusion? Malcolm: If you have to leave the existing spending as it is, then adding money is a way to make it go faster. But a cheaper alternative is to spend your existing money more wisely. The world's fusion spending, and America's fusion spending to a first approximation, all goes into ITER, the International Thermonuclear Experimental Reactor, this international collaboration in France. (ITER - International Thermonuclear Experimental Reactor: https://www.iter.org/proj/inafewlines) This thing came about because the next step for a fusion experiment in America and Russia was too expensive for either country to pursue independently, even though everyone's first inclination was surely to keep competing. So it became a collaborative [00:52:00] endeavor, and it's now a collaboration between many countries. That thing is expected to suck up 20 billion or more, and it has a depressing schedule that ends with fusion energy on the grid in 2100.
Okay. America puts on the order of a hundred fifty million a year into ITER directly, and say 500 million a year into what are called ITER-relevant projects: domestic projects where you're trying to learn about some problem that's relevant to ITER, but you're learning in a way that is smaller, cheaper, and better controlled than a 20-billion-dollar massive building where everything is inevitably really complicated. The other place America spends money is on [00:53:00] NIF, the National Ignition Facility (NIF - National Ignition Facility: https://lasers.llnl.gov/), which is really a weapons research facility that is occasionally disguised as an energy research facility. Another way that it's been described to me is as "the perfect energy research facility." What these cynical people meant was that it's too small to actually get to ignition or energy break-even, but it's big enough that the people working on it can tell themselves that it might get there, if only they work harder, if only they dedicate the rest of their careers to this. So it has large numbers of people who really care about fusion energy, much more than bombs, working on it, because it's the best way that they can see, meaning the best-funded way that they can see, to get there. But they don't actually seem to believe that it's [00:54:00] going to get there; they just don't have any choice. So we spend a lot of money on these two programs, and that funding would be more than adequate for getting to fusion if we spent it on anything more modern. It is not controversial to say that these two facilities are the best fusion approaches and experimental setups that we could come up with in the mid-80s and mid-90s, when they were being designed. That's a fact: that's when they were being designed, and they've had limited upgrades since. What is controversial is whether continuing to support them is the best move.
There are people who believe that we need to keep putting money in there [00:55:00] because we're going to learn a lot if we keep doing that science, or because if we don't put the money in there, then the money will get pulled and probably spent on bombs or something like that, but it won't come to fusion. And so: better bad money in fusion than worse money somewhere else, right? My personal view is that ITER is such a ridiculous energy project that it harms the entire fusion field by forcing people to pay lip service to the validity of its goals, and that we would be better off admitting that that thing is a travesty and that there are better ways to do fusion, even if it meant losing the money. Now, I'm not certain about that, but that's the gamble I would take. The good news is we probably won't have to take that gamble, and it [00:56:00] looks as if the federal government is becoming much more open to a "yes, and" approach to funding: the mainstream approaches to fusion, NIF and ITER, plus a variety of projects for alternative approaches, plus more basic research into things like tritium handling, tritium breeding, and hardening materials to deal with high-energy neutrons, lasting longer in the face of high-energy neutrons, that kind of thing. So I think there's real momentum towards building an inclusive program that can support everyone, and that is of course the best outcome. Much as I would take the gamble of killing ITER and NIF, if they do produce real scientific results and we can have all these things, that's a wonderful outcome. Government Decision Making and Incentives Ben: On that note, who ultimately [00:57:00] is the decision maker behind where government fusion money is spent, and what are their incentives? Malcolm: This is America. Is there ever one person who's the decision maker about something? Ben: Maybe not one person, but is it Congress? Is it unelected officials in some department? Is it the executive branch? Do you have a sense?
Is it some combination of all of them? Malcolm: The money flows through the Department of Energy. A sub-department of the DOE is ARPA-E, which has its 30-million-dollar fusion program and will hopefully have a new and larger fusion program in the near future. (ARPA-E: https://en.wikipedia.org/wiki/ARPA-E) There's also the Office of Fusion Energy Sciences (OFES), which funds a lot [00:58:00] of the mainstream research into fusion. (Office of Fusion Energy Sciences: https://science.energy.gov/fes/) ARPA-E was, to my knowledge, created by Congress and is fairly independent of the DOE, but there's still feuding. I think, without ascribing malice to anyone, it is a great testament to many people's conviction and political skills that they were able to get America to fund NIF and ITER more or less consistently over decades at a high cost, and those people are highly invested in those projects continuing. I don't know whether that's because they genuinely believe that that's the best way to spend the money, or they fear that the money would disappear from fusion completely [00:59:00] if it stopped, or they don't think that the alternatives actually have any scientific credibility, or they are now trapped by the arguments that they've been making strongly and successfully for decades. But for one reason or another, or many reasons, they strongly believe that we need to continue to fund these, so there is a tension between the people who want to fund the alternatives and the people who want to fund the mainstream fusion programs. Who are the government decision makers? Ben: And who are these people? Do you have any sense of actually who they are? I'm not asking you to name names, but what is their role? What is their nominal job title? Malcolm: I think it's a bunch of civil servants within the DOE. Congress has a role: Congress gets to decide how much money to provide, and that's often attached to a statement
about how that money will be spent, right? There have been [01:00:00] Congressional hearings on fusion that covered ITER and whether we should continue to fund it. It's a lot of different people. What are the roles of Academia, Government, Industry, and Philanthropy? Ben: Okay, yeah. I'm just really interested in drilling down into where the incentive structure is set up. Along those lines, in your mind, in sort of an ideal world, what do you see as the ideal roles of the four columns of academia, government, private investment, and philanthropy in making an epic-level project like fusion happen? Malcolm: There's a ton of room for government support on this. The federal government has national labs that have the best computers, the best software (which is often classified), the best testing sites, in many [01:01:00] ways the only testing sites, and lots and lots of experts. The one thing that the federal government lacks is a drive to put fusion energy on the grid as quickly and as commercially successfully as possible. I don't rule out that the federal government could develop that drive, but it seems like a long shot, given that there's a lot of disagreement about climate change and energy policy and that kind of thing. So I think the ideal would be that the federal government supports fusion research with all of its resources (financial resources, expertise, modeling software, modeling hardware, and testing facilities) in partnership with private industry, so [01:02:00] that private industry is providing the drive to get things done. So I imagine a lot of research done at the federal government, so that if the current crop of companies bottoms out, if it turns out that their techniques don't work,
we have more fusion research coming down the pipeline to support a later crop of companies. But we would have companies working closely with the federal government to try to build reactors, getting assistance in all those ways from the federal government and providing the drive. The companies would have this [01:03:00] goal of fusion energy on the grid that they would be working towards, but they would get to use the federal government's resources for the areas that they're focusing on. There are also the areas that the companies are not focusing on, areas that are largely common to all companies, and that therefore no company views as on their critical path to demonstrating reactor-relevant gain. For example, tritium is toxic to humans and difficult to contain. It turns out even hydrogen is difficult to contain: it leaks through metal surfaces. But we don't talk about this because hydrogen is astonishingly boring and safe in small quantities, so we don't care that it leaks out of our containers. We do care when tritium leaks out of containers, because it's heavily regulated and toxic. So any fusion company that's handling tritium is going to need a way to contain tritium with very low leak rates. Also, the world does not have very much tritium, so you can't simply buy tritium as a [01:04:00] fuel for fusion; you have to breed tritium in your reactor from lithium. So the real inputs to the reactor, if it is a deuterium-tritium reactor, will be deuterium and lithium, and you'll be breeding tritium from lithium in your reactor. So we also need to study how we're going to breed the tritium. We're making mathematical calculations about the tritium breeding rate: how much tritium we will get out after doing fusion, relative to the amount of tritium we had before doing fusion. And these tritium breeding rates are close to 1. If they're below 1, or really not enough above 1, we're screwed. So there's important work for academics and the federal government to do.
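The stakes in those breeding-ratio numbers can be sketched with a toy inventory model. The starting inventory, burn rate, and TBR values below are illustrative assumptions, not figures from the episode:

```python
# Toy tritium inventory model: each year the reactor burns fuel and
# breeds replacement tritium from lithium at breeding ratio TBR, so the
# net yearly change is burn * (TBR - 1). Starting inventory and burn
# rate are illustrative assumptions, not numbers from the episode.

def inventory_after(start_kg, burn_kg_per_year, tbr, years):
    inv = start_kg
    for _ in range(years):
        inv += burn_kg_per_year * (tbr - 1)
        if inv <= 0:
            return 0.0  # the reactor has run out of fuel
    return inv

print(inventory_after(5.0, 50.0, 0.98, 10))  # TBR just below 1: fuel runs out
print(inventory_after(5.0, 50.0, 1.05, 10))  # TBR a bit above 1: surplus grows
```

Because the yearly burn is large relative to any plausible stockpile, even a TBR of 0.98 drains the inventory within a few years, while a TBR comfortably above 1 is self-sustaining; that is why a ratio "close to 1" is not good enough.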
They need to better understand tritium breeding rates [01:05:00] and what we can do to increase them, and how to make this work. Companies aren't incentivized to look at things off their critical path Ben: Right, and the companies are not incentivized to look into that right now, because they don't feel like it's on their critical path. Malcolm: Investors, perhaps including myself, have made it clear to these companies that what they will reward the companies for is progress on the riskiest parts. This is valid: you want to work on burning down the biggest risks that you have. Many of these companies are already getting to fusion conditions, but everyone perceives that the biggest risk is getting fusion to work, getting reactor-relevant gain from fusion. So compared to that, these risks are small, and it's [01:06:00] valid to defer them, if you're a single company. If you're looking from the perspective of a portfolio, which the federal government is best positioned to do, and which I'm going to be somewhat well positioned to do, then your risks are different. You're willing to say: I have a portfolio of these companies. I don't care which one succeeds; I'm doing what I'm doing assuming one of them will succeed. Now, what can I do to de-risk my entire portfolio? You look at it differently, and then these problems start to seem critical. So with my second fund, one of the things I want to be able to do is support academics, or maybe for-profit companies, that are working on this. But the federal government is an even better fit; it is perfectly positioned to do this. What does the ideal trajectory for Fusion look like? Ben: I think a good closing question is: in your ideal world, how would innovation in fusion come into being? What would the path look [01:07:00] like? Imagine you're
king of the universe and we started with the world we have today: what would happen? Malcolm: I think the federal government would do the heavy lifting, but it would rely on private companies to really provide the drive. The federal government would also support the longer-term things that are critical but not the highest risks, such as the tritium issues. Ben: Perfect. Why do so few people invest in fusion Ben: Is there anything that I didn't ask that I should have asked about? Malcolm: One question that comes up quite often is why so few people invest in fusion, and why it is that I'm the only one with the portfolio approach. Ben: Yeah, if there's a possible payoff of a trillion dollars, even if it takes 20 years, the returns are still pretty good. Malcolm: Yeah. The way I think of it [01:08:00] is that there's a funnel, like a company's funnel for acquiring customers, but in this case it's an industry's funnel for acquiring investors, and investors are falling out of this funnel at every stage. The first stage is, of course, that you have to believe in anthropogenic climate change, and we have lots of investors who believe in that. Why do you need to believe in climate change to fund fusion Ben: Quick question: why do you need to believe in that in order to want to fund fusion? Malcolm: Okay, that's a fair point. You don't have to, but it's the easiest route. If you don't believe in climate change, then you have to believe purely in fusion's potential to provide energy that will be cheaper than fossil fuels. I believe that too, but it is a higher bar [01:09:00] than believing that climate change is going to encourage people, one way or another, to put a premium on clean energy. Granted, when I do my modeling I'm not taking into account carbon taxes or renewable portfolio standards, but it's nevertheless easier to convince yourself to care about the whole thing if you think that this is an important problem, right?
Otherwise you could do it just because you think you can make a whole ton of money, but it is a high-risk way of making money. So, one way or another, let's say you decide you're interested... Ben: No, carry on with the funnel for climate change. Malcolm: Yes. There are a few more places that investors can fall out, and these places might apply to an ordinary profit-seeking investor as well, but it's less clear. So: you've decided you believe in anthropogenic climate change [01:10:00] and you'd like to see what you can do about it. Maybe you then narrow to focusing on energy. That's a pretty reasonable bet: energy is something like 70% of our emissions when you track everything back to the root, so it's a perfectly reasonable place to focus. Within energy, there are lots of different ways that you might think you can do something about it. There's geothermal power, there's tidal power, and a large range of long-tail ways that you can make energy, and a lot of people really get trapped in there. Or they decide they're going to look at the demand side of energy and think about how they can get people to drive less or insulate their buildings better or whatever. And in my opinion, most of these people are basically getting stuck on [01:11:00] things that don't move the needle. They don't add up to a complete solution; they are merely incrementally better. So when you look at things that might move the needle on energy, I think that the supply side is way more promising, because the demand side is huge numbers of buildings, cars, and people whose habits need to change, on and on. If you can change the supply side, then it doesn't matter as much if we are insulating our houses poorly, driving too much, that kind of thing. If we have enough energy, it doesn't matter: we can make hydrocarbons
from raw inputs and energy, and we can continue to drive our cars and fly our planes and heat our houses that have natural gas furnaces, that kind of thing. Ben: Well, if you have absurd amounts of energy, you can just literally pull carbon out of the air and stick it in concrete, right? Malcolm: That's [01:12:00] the other thing. So, I believe that focusing on energy supply is the right way to go because, distributed grid or not, we have many fewer places where we make energy than where we consume energy. And as you said, we're looking at significantly increased demand in the likely, or certain, case where we need to do atmospheric carbon capture and sequestration. It takes energy to suck the carbon out of the atmosphere, and it probably takes more energy to put it into a form where we can store it for a long time. So I think you need to focus on the energy supply. Even within that, we have lots of ways of making clean energy that aren't scalable, and there's a wonderful book called Sustainable Energy Without the Hot [01:13:00] Air (Sustainable Energy without the Hot Air: https://www.withouthotair.com/) that catalogs, for the United Kingdom, all the ways that they could make energy renewably and all the ways that they use energy, and it is more or less unable to make the numbers add up without throwing nuclear in there. Ben: Got it. Malcolm: Nuclear, or solar in the Sahara and then transmitting the energy to the UK. So you can probably make renewables work if you can store energy. But the problem with renewables is... well, renewables come in two forms: the predictable forms, like hydro, where we control when we release the energy, and the variable renewables, like wind and solar, where they make energy when nature chooses. For the variable ones, we need something to match the demand that humans have for energy [01:14:00] with the supply that nature is offering, and you can try to do that in some combination of three ways.
You can overbuild your renewables to the point where, even on bad days or in bad seasons, you're making enough energy. You can space-shift your energy by building long-distance transmission lines. And you can time-shift your energy through storage. It turns out that all three of these have real costs and challenges. Storage is an area where people get pretty excited, so this is the next point where people fall off. They say: wind and solar are doing great, storage isn't solved yet, but gosh, it's on these great cost curves, and I can totally see how storage is going to be [01:15:00] a solved problem, and then renewables will work. I think there are two problems with that. One is that storage actually needs to get a lot cheaper if you want to scale it to the point that we can use it for seasonal storage, and we need a solution to the seasonal problem. There are places like California that get vastly more renewable power at some times of year than at others; in California, it's by a factor of 10 or 12. Ben: Wow. Malcolm: Yeah, so it's not day by day, it's month by month. And that's critical, because if you are cycling your storage daily, just to bridge between when the sun's shining and when people need their power, you get to monetize your storage every day: the entire capacity of your storage gets used roughly 365 times a year for the 20 years that your plant lasts. If you're cycling it seasonally, meaning once a year, you get to sell 20 times [01:16:00] the capacity of your storage, rather than 20 times 365 times the capacity of your storage. So your storage needs to be one 365th of the price to hit that same price target. This is a really high bar for storage's economics. So the first problem with betting on storage is that it needs to drop in price a lot to really solve the problem. The second problem is that lots of people are investing in storage.
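The daily-versus-seasonal cycling arithmetic above reduces to a few lines (the 20-year plant life and the cycle counts are the ones from the transcript):

```python
# Storage earns money each time its capacity is charged and discharged.
# Daily cycling monetizes the capacity ~365 times a year; seasonal
# cycling, once a year. The 20-year plant life is from the transcript.

PLANT_LIFE_YEARS = 20

daily_cycles = 365 * PLANT_LIFE_YEARS   # capacity sold 7300 times over the plant life
seasonal_cycles = 1 * PLANT_LIFE_YEARS  # capacity sold 20 times over the plant life

# To hit the same cost per delivered kWh, seasonal storage must be
# cheaper per kWh of capacity by the ratio of lifetime cycles:
required_price_ratio = daily_cycles // seasonal_cycles
print(required_price_ratio)  # 365
```

The revenue per unit of capacity scales directly with lifetime cycle count, so seasonal storage has to be roughly 365 times cheaper per kWh of capacity to deliver energy at the same cost, which is the bar Malcolm is describing.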
So if you want your money to make a difference, in terms of making you a profit or having an impact on the world, you need to out-compete all those other people working on wind and solar and storage if you're going to play in that space. And that leaves something like nuclear. There are, compared to fusion, plenty of people investing in various nuclear fission approaches. So again, your money can make a difference there, [01:17:00] and it'll be a bigger difference than in wind and solar and storage, but a smaller difference than in fusion. So say you get all the way to "I think I want to invest in fusion." What you now encounter is an industry where everyone you talk to will tell you that their approach is definitely going to work and, unless they're being really nice, that everyone else's approach is definitely not going to work. So it's pretty understandable at that point to give up on the whole space. And that's after the step before it, where you have to get over the hurdle that fusion is always 10 years away; it's been 10 years away for as long as most of us have been alive, and you have to decide that you even want to look at fusion. Then you hit the problem where everyone says nasty things about everyone else, to a first approximation. If you spend a long enough time there, you might find a company that convinces you that they are different, that all those other people are crazy, [01:18:00] and that they're worth investing in. But I've talked about why investing in one company is a bad idea. To invest in multiple companies, you have to find enough advisors who are experienced, credible, and open-minded enough to support investing in a portfolio of these companies, or you have to be sufficiently reckless, or maybe just sufficiently rich, to roll the dice on these companies. And it's a real hurdle to find those advisors.
And even then, if you're a regular VC fund, which you probably are if you have enough money to do this, you have the problems that I mentioned earlier: your charter is to invest in a diverse [01:19:00] set of companies, and you can't put enough money into fusion to support a portfolio of companies. So that is the funnel that ends, as far as I can tell, with just Strong Atomics coming out the end as the only fund supporting a portfolio of fusion companies. That's why, in my opinion, there are not more people investing in fusion. Outro I got a lot out of this conversation. Here are some of my top takeaways. There are many ways of structuring an organization that's trying to enable innovations, each with pros and cons that depend on the domain you're looking at. Malcolm realized that a VC fund is best for fusion because of the low return from shared resources and the temperaments of the people involved. Just because there's a lot of money going into a domain, it doesn't mean it's being spent well. I love the way that Malcolm thought very deeply about the incentives of everybody he's dealing with and how to align them with his [01:20:00] vision of a fusion-filled future. I hope you enjoyed that. If you'd like to reach out, you can find me on Twitter at Ben underscore Reinhardt. I deeply appreciate any feedback. Thank you.
My guest this week is Mark Micire, group lead for the Intelligent Robotics Group at NASA's Ames Research Center. Previously Mark was a program manager at DARPA, an entrepreneur, and a volunteer firefighter. The topic of this conversation is how DARPA works and why it's effective at generating game-changing technologies, the Intelligent Robotics Group at NASA, and developing robotics and technology in high-stakes scenarios. Links Intelligent Robotics Group DARPA Camp Fire DARPA Defense Sciences Office First DARPA Grand Challenge Footage - looks like a blooper reel FEMA Robotics Transcript Ben: [00:00:00] Mark, welcome to the show. I want to start by talking about the Camp Fire. [00:00:04] Camp Fire [00:00:04] We have an unprecedented wildfire, the Camp Fire, going on right now, and it's being fought primarily with people. I know you have a lot of experience dealing with natural disasters and robotics for emergency situations. So I guess the big question is: why don't we have more robots fighting the Camp Fire right now? [00:00:26] Mark: [00:00:26] Well, believe it or not, there are a lot of efforts happening right now to bring robotics to bear on those kinds of problems. Menlo Park Fire especially has one of the nation's leading groups. It's a small squad of folks who are absolutely career firefighters, and they're now learning how to leverage, in their case, [00:00:57] [00:01:00] a lot of UAVs to do aerial reconnaissance. It's been used on multiple disasters: we had the dam breakage up in almost the same area as the Camp Fire, and they were using the UAVs to do reconnaissance for those kinds of things. So the ability for fire rescue to begin adopting these new technologies is always slow. The inroads that I have seen in the last, say, five years are that they like that it has cameras.
[00:01:32] They like that it can get overhead and give them a view they wouldn't have been able to see otherwise. The fact that now you can get these UAVs that have thermal imaging cameras is frighteningly useful, especially for structure fires. So those are the baby steps that we've taken. Where we haven't gone yet, but I'm hopeful we'll eventually see, is the idea that you actually have some of [00:02:00] these robots deploying suppressant, the idea that they are helping to provide water and to help put out the fire. That's a long leap from where we are right now, but I would absolutely see that being within the realm of the possible. Back in, gosh, around 2008, so about 10 years ago, NASA was leveraging a Predator B that it had, with some [00:02:27] imagery technology mounted up underneath it, to help with the fire that was down in Big Sur, and I helped with that a little bit. Back then I was just an intern here at NASA, and that's, I think, a really good example of the fire service leveraging larger government facilities and capabilities to use robotics and UAVs and other things in a way that the fire service itself frankly doesn't have the budget or R&D [00:03:00] resources to really do on their own. [00:03:00] Ben: [00:03:00] So you think it's primarily a resources thing? [00:00:00] Mark: [00:00:00] It's a couple of factors. There's resources: outside of DHS, which has a science and technology division that does some technology development, there are not a whole lot of organizations, aside from commercial entities, doing R&D for fire rescue. It just doesn't exist. [00:00:28] So that's your first problem. The second problem is that, culturally, the fire service is just very slow to adopt new technology. And it's one part,
you know, "well, my daddy didn't need it, and my daddy's daddy didn't need it, so why the heck do I need it?" [00:00:49] It's easy to blame it on that. What I've learned over time, after working within the fire service, is that everything is life-critical. There are very few things you're doing when you're in the field providing that service, in this case wildfire response, where lives don't kind of hang in the balance. [00:01:09] And so the technologies that you bring to bear have to be proven, because what you don't want to do is bring half-baked ideas or half-baked technologies and have that technology fail in a situation where your normal operations would have provided the right kind of service to protect those lives. [00:01:33] So the evaluation, and also the acceptance criteria, for technology are much, much higher in the fire service especially than in many other domains that I've worked in. I can only think of a few other ones; aircraft safety and automobile safety tend to be the same, where they're just very slow to roll in technologies, but those two areas have government and other groups providing R&D budgets to help push that technology forward. [00:02:06] So when you get the combination of "we don't have a lot of budget for R&D" and "we're very slow to accept new technology because we have to be risk-averse," those two tend to make that domain a very slow-moving target for new technologies. [00:02:21] Enabling Innovations in Risk-Averse Domains Ben: That actually strikes me as very similar to NASA, actually. There's always the saying that, you know, you can't fly it until you've flown it. Do you see any ways of
making innovations happen faster in these risk-averse domains? Do you have any thoughts about that? [00:00:16] Mark: It's tough. I mean, the short answer is I don't know. I've been trying for the last 15 years and I'm still swinging at it. [00:00:29] The trick is just to keep going, and ultimately I think it comes down to exposure, and to the decision makers within the respective fields becoming comfortable with the technology. So as we now have automobiles sharing the highways with us that are controlling themselves, and I'm not even talking about fully autonomous, driverless vehicles: the fact that we have Teslas and other high-end cars with autopilots doing auto-steering and lane-keeping and so on means the folks within the fire rescue domain can start becoming comfortable with the idea that machines can make decisions in life-critical scenarios, and that they can make the right decision on a regular basis. It sounds weird to say that something completely removed from the fire service may improve the fire service's ability to adopt those technologies. [00:01:27] It seems weird to think that's the case, but it's absolutely the case. And, you know, I've been doing this for, well, I guess 10 or 15 years now, as much as I hate to admit that, and I've seen a dramatic change. Now I can go into a room and talk about leveraging an unmanned air vehicle and I'm not laughed out of the room. [00:01:48] There's a comfortableness now, an acceptance in these domains that I didn't see before. So, you know, hopefully we're making inroads. It's not going to be a fast path by any stretch. Ben: Yeah, acculturation is something that I don't think people think about very much when it comes to technology, but that's a really good point. [00:02:09] Mark: As geeks we don't, and that's unfortunate.
And the one thing I've learned over time is that as geeks we have to realize that sometimes the technology isn't first; there are a lot of other factors at play. [00:02:20] Mark's Mission Ben: Yeah, absolutely. Something that I want people to hear about: I feel like you're one of the most mission-driven people that I know, and not to put you on the spot too much, but could you tell folks what you do, and why you do what you do? [00:00:11] Mark: Um, well, it really depends, and you can appreciate this: it depends on what it is I'm doing. For my day job, I work at NASA. I have always been a space geek and an advocate for humans finding ways of working in space, and one of the best ways I have found, at least with my talents, to help enable that is to leverage machines to do a lot of the precursor work that allows us to put humans in those places. [00:00:42] It turns out, strangely enough, that a lot of the talents I use for my day job here also help with work that I do on the side in my role as search and rescue personnel with FEMA: a lot of the life-safety-critical things we have to do to keep humans alive in the vacuum of space also apply to keeping humans safe, and finding humans, in and around and after disasters. [00:01:11] And so I've always had this strange bent for trying to find a technology that not only ties to a mission, but where you can very clearly point your finger at it and say: that's really going to help someone stay safer, or do their job more effectively, if they had that piece of equipment. [00:01:39] Those are fun from an engineering standpoint.
Those are the kinds of base requirements that you want, and it always helps. There are a lot of other technology areas I could have played in, and I like the fact that when I'm making a design decision or an engineering trade, I can look at it and really ground it out into: okay, is that going to make that person safer? Is that going to make them do their job better? [00:02:02] And it's really motivating to have those as your level-one requirements as you try to design things that make the world better. [00:02:14] Intro to IRG Ben: And so currently you're the head of IRG. [00:00:05] Mark: Yeah, group lead is the official title. So I'm the group lead of the Intelligent Robotics Group. Ben: Yeah, and I bet that many people haven't actually heard of the Intelligent Robotics Group at Ames, which is kind of sad. Could you tell us a publicly shareable story that really captures IRG as an organization? [00:00:22] Mark: Sure, yeah. Well, I would say that it is an interesting motley crew of capabilities that allow robots to go do things in all kinds of different domains. We have folks within our group that specialize in ground robotics. So we have rovers that have quite literally gone to the ends of the Earth: we've had them up in the northern Arctic, we've had them in the desert in Chile, and they've roamed around just about every crater or interesting landmark that we have here in California. [00:00:49] Long story short, we have folks that not only work with ground robots and make them smart, but then, and this is one of the things I adore about the team, they're all field-capable. We all subscribe to the philosophy that if we're not taking this equipment out in the field and breaking it, we're probably not learning the right things.
And so none of our robots are garage queens that stay inside the lab; we like to take our stuff outside, into domains where it's really, really tested. [00:01:34] We have a subgroup within IRG that's working on technologies for the International Space Station. We have worked with many of the free flyers that are up on the International Space Station now, and there's a new one that we are building that should fly very soon here called Astrobee, which you can think of as an astronaut's assistant. [00:02:01] It's able to not only do things on its own but hopefully be helpful to astronauts, and also allow ground controllers to have a virtual presence on the International Space Station in a way they haven't been able to before. Let's see. It turns out that when you're working with robots like this, having very good maps and representations of the worlds you are exploring becomes important. [00:02:27] And so we have a subgroup here that works on planetary mapping. The most digestible way of describing that is: if you've ever opened up Google Earth and kicked it into Google Moon or Google Mars mode, most of the imagery, especially the base imagery and other products in there, was actually generated by my group. [00:03:00] It turns out that when you get these pictures and imagery from satellites, they're not always right, and they need a lot of coaxing and coercing to make them actually look correct. My group has a suite of software, all publicly available, that can be used to make that imagery more correct and more digestible by things like Google Earth and other systems like that. And then, in general, at any given time we have north of 30 to 40 researchers here
[00:03:38] doing all kinds of work that is relevant to robotics and relevant to space. It's an awesome group, and every single one of them is motivated in exactly the right kind of ways. [00:03:52] Organizational Nitty-Gritties: IRG Ben: Yeah. I mean, having worked there, I completely agree with that statement from personal experience. [00:03:58] And related to the motivations: something that I really like doing is digging into the nitty-gritties of organizations that really generate innovations. So tell me about the incentives at play in IRG. What really motivates people? How are people rewarded for success and failure, and how do those pieces work? [00:00:12] Mark: Well, I'm going to say this and it's going to sound super simple. IRG is one of the few places, and it's one of the reasons why, when I was given the opportunity to be the group lead, I took it: I still feel like IRG is one of the last, one of the few places, I guess I'll say, where the research can be up front, where creativity can be king, and where we can focus on doing the good work in a way that, I'll just say, is a little bit more difficult out in the commercial world. Because, you know, chasing the next product sometimes brings a whole bunch of things along with it (what is the market doing, is this going to be supported by senior management, other things like that) that we don't have to deal with as much. It has to align with NASA's mission; [00:01:06] it has to align with the focus of the agency. But I will say that, because we have such good researchers here, our ability to create a proposal is there.
So we end up, just like everyone else, writing proposals, to NASA itself, and winning those proposals. The incentivization is actually in the fact that these researchers get to do the research they want to do, and not research that's being handed down to them by, you know, a marketing team or some corporate exec. [00:01:40] The other thing that is huge here, and I know you probably experienced it during your tenure, is when I say the folks are here for the right reasons. We all know that every single person within IRG, and I'll say within NASA Ames especially, out here in Silicon Valley, could go a thousand yards outside of the fence and be making two to three times what we make working for the government. [00:02:08] And it's not so much a point of pride, but what it does is relieve the idea that folks are here for the money. You're here for the research and you're here for the science. The best analogy I make quite often: I used to teach as an adjunct professor at a community college, and this is about 15 years ago; the courses were on things like PC repair, for this certification called A+. And I used to confound the other professors, because they had one section that was 8 a.m.
on Saturday morning, like an 8 a.m. to 1 p.m., just one day a week, and I used to always take that one. The other professors were like, why are you taking an 8 a.m. Saturday course? And I would smile at them and say: because every single student in there, I know they want to be there. [00:03:14] I know they are motivated, because no one in their right mind, other than a motivated student, is going to get up at 8 a.m. on a Saturday morning to go learn about PC repair. And, to everyone's surprise but not my surprise, I had a 100 percent pass rate on that test, which was independently administered outside of the classroom. [00:03:39] So I would just smile, because it was like, "wow, you must be a great professor," and I'm like, no, I've got great students, because they all are motivated to be there. That's effectively what I have here within NASA, sitting inside of this Silicon Valley bubble: a whole bunch of frighteningly smart people who are motivated to do good science, who have financial reasons to go elsewhere, and who decided for themselves that this is where they'd rather work. [00:04:07] Ben: Yeah. Let's break that down a little bit. The way that projects happen is that you do a proposal; who do you propose projects to, I guess, is the correct question. Mark: Well, the fun part, and this is one of the freedoms NASA has, is that we can really propose to anybody. We have projects here that are commercial. For instance, we're doing work with Nissan on autonomous vehicles, and we've actually done some really, really interesting work there related to visualization and other things, which borrows a lot from the work that we do with the rovers. [00:05:03] So we can work with companies. We work within NASA first. One of the ways NASA works is that, because we have multiple centers, our group at NASA Ames, for instance, will propose to NASA headquarters. We just pitched a couple of months ago to a program that was doing satellite-based technologies: I flew to NASA headquarters in DC and we pitched it much as you would to a VC or any funding source if you were a company doing it in the valley. [00:05:35] We pitched it, and we won it.
We also work with other government agencies. We have done work for DARPA; we've done work with the Marine Corps. It turns out that the DoD, the Department of Defense, is interested in a lot of the ways that we have worked with autonomous vehicles, as the Department of Defense tries to figure out how it wants to work with autonomous vehicles. [00:06:05] So it's easy for us to open a conversation with the Department of Defense and say: hey, here's what we did for our rovers, our UAVs, or whatever, and this may be something you may want to consider. A lot of times they'll come back and say: well, look, we not only want to consider that, we'd also like to put you on the proverbial payroll here. Can you either do the work for us or help us understand what the important parts of this are? [00:06:30] We can also work with academia, so we will often have projects where we partner with a university and go in and do a joint proposal, either to NASA or to any of the different funding sources that are out there. So NASA has a lot of flexibility in a way that, myself having previously worked with the Department of Defense, is unique: NASA can be a consultant, or NASA can do work for a private company. [00:06:58] We have a thing called a Space Act Agreement, which is what the Nissan work I was talking about runs under. It seems odd that a government organization would be able to receive a paycheck, if you will, [00:07:18] from a private corporation, but it turns out that NASA has a very unique way of doing that, and we leverage it, frankly, as often as we can. So, I realize that's probably a really long answer to a simple question, and that's to say: we can take money from just about anybody, as long as it is legal and it benefits NASA in some way. [00:07:41] Those are the only two real catches that we have. It ultimately has to benefit NASA's mission, with us
being, you know, shepherds of taxpayer dollars; but as long as we can justify that, we can work with a lot of different funding sources. [00:07:58] Aligning with NASA's Mission Ben: And what is NASA's mission right now? How do you know whether something is within the purview of NASA's mission or not? [00:08:08] Mark: Well, NASA takes its guidance from a lot of different places. I mean, there are the two A's in NASA: we have aeronautics and we have space, right? Those are the two missions built into the name. [00:08:29] We also take direction from NASA headquarters. There's the science side, especially for space, which is driven a lot by the decadal surveys and other kinds of direction with respect to, and it sounds kind of funny to say, where we want to see mankind go in terms of space exploration and other things like that. But we also have Earth science. [00:09:02] Kind of flipping back to the fire up in Northern California: some of the best imagery coming through there, especially satellite imagery, is actually being processed through NASA's Earth science missions. There's Worldview and a bunch of other tools out there. [00:09:24] With all of the different things that are affecting especially, you know, the climate and everything else, it turns out that NASA's mission is also to benefit that, and to help with Earth observations in a way that ultimately helps us understand how we might be impacting other worlds when we're able to achieve going there. [00:09:42] NASA → DARPA Ben: Got it.
I'm going to transition a little bit from your time at NASA to your time at DARPA. What I wanted to know is: what were some of the biggest shocks transitioning from NASA to DARPA, and then now back from DARPA to NASA? Because they're both government agencies, but they have very different feels, at least from the outside. [00:00:20] Mark: Yeah. Um, gosh. Especially from NASA to DARPA, I guess the biggest thing that comes to mind is this: as a program manager, it is frighteningly empowering to go to an organization where, you know, here at NASA, at my level, and with the group scenario I just described to you, [00:00:51] we're in the trenches, right? We're trying to do the science, we're doing the research, and we're trying to make an impact at kind of a ground level. When you go in as a program manager at DARPA, you're trying to change a field. You're basically being given the power to say: within this field, let's say autonomous vehicles, I see the following gap. [00:01:19] And in stating that, and in creating the requests for proposals and the other things you do that bring researchers to DARPA's door, you're not saying "I'm going to go do this technological thing"; you're saying "I think everyone needs to focus on this part of the technology landscape." [00:01:44] That's a different conversation at a very different level, and it was startling, frankly, to be one of those program managers: to say "hey, I don't think the field is doing this right," and then to have an entire field turn to you and say, "oh, okay, well then let's hear the thing you're suggesting." That's an interesting and kind of empowering position to be in.
[00:02:11] NASA has some of that too, but at DARPA specifically, especially with Department of Defense-type technologies that eventually roll out into civilian use, you're able to speak at such a different level, and at a level that is accepting of risk in a way that NASA is not. At DARPA, if a program isn't risky enough, it can basically not make the cut because it didn't have enough gusto. [00:02:43] Within DARPA they called it the laughability test: if your idea isn't just crazy enough to be almost laughable, then it's going to have to work a lot harder to get there. [00:03:07] So I'd say, I guess in conclusion: the risk, and just the empowerment to move an entire field in a different vector, those would probably be the biggest differences between my NASA world and then going over and being able to moonlight as a program manager. [00:03:26] Fields Impacted by DARPA Ben: And what are some fields that you feel like DARPA has really moved? That concept is incredible and makes sense, and it hasn't been expressed so concisely before; I'd love some examples of that. [00:00:02] Mark: One of the best, and I think the most recent, examples that we can now see the impact of is autonomous vehicles. [00:00:12] You have to remember that it's now over a decade since the first DARPA Grand Challenge happened. I was reflecting on this while I was being chased down by a Tesla on the way into work this morning that was clearly driving itself autonomously. And I remembered, and most people forget, that in the first DARPA Grand Challenge, first of all, there were millions and millions of dollars in investment, and no one won. No one got to the finish line.
And in thinking about risk and risk acceptance, I think that's one of the best, or at least a really good, data point about DARPA: not only saying "this is really hard, we're going to call it a Grand Challenge, and we're going to have these vehicles basically racing across the desert," which was gutsy enough from a risk standpoint, but they also then failed, and then did it again. They said, you know what? We literally had a [00:01:16] Humvee flipped over, on fire, in the desert, and that was on the evening news for everyone to enjoy, to the embarrassment of DARPA and the DoD and everybody else. And then they said: no, we're going to double down. This is really worth it, and we need to make this happen. [00:01:46] The impact of that is huge, because it became kind of the ground floor of the vehicles that we now have running around, especially out here in the Bay Area. You've got fully autonomous vehicles now that are able to navigate their way through all of the different difficulties and complex situations that can be presented. The folks on Sebastian Thrun's Stanford team that won the Grand Challenge went on to work on what was the Google autonomous car, which eventually became Waymo, and all of the different companies and talent that sprung out of all of that. [00:02:25] That was all born over a decade ago by an organization that is using your taxpayer dollars to do risky things, and to say: for this autonomy thing, we really think vehicles are where the money needs to be spent, and spent in a real way. That takes guts, and it's still, in my mind, one of the only organizations really able to make an impact like that, to tell an entire field:
[00:02:53] "Hey, I don't think you're doing this right, and here's what I want you to do, and I'm going to put money behind those words, and we're going to go change the world." And a decade later we've got autonomous vehicles quite literally beside you on the highway. That's pretty awesome. [00:03:07] Levels of Risk DARPA Shoots For Ben: That is incredibly awesome. [00:03:09] Do you have a sense of what level of risk you're shooting for? I'm thinking, what is the acceptable, or even desired, failure rate? Or is there a sense of how many fields per decade you're shooting for? Because when you think about it, even if it's changing one field per decade, the amount of change that comes out of something like autonomous cars, or the human-computer interaction work that came out of the 60s, might make the whole thing worth it. So does anybody even think about it in terms of numbers at all? [00:00:03] Mark: I never heard it framed that way. The mantra that was always drilled into us was that the way you kept score was by the number of transitions, and I guess that's more of a general DoD term. [00:00:25] That's to say: for something you create, how many times did someone take that technology and go use it for something? So we would count a transition as, say, the Army deciding to take our autonomous vehicle and use it for this; but we also got contacted by Bosch, who were interested in leveraging the thing we built with a new sensor they're making commercially available, and we provided the missing link that now allows them to use it safely on vehicles. [00:00:59] So you keep score internally on that basis. The other thing, though, that DARPA does is keep multiple horses in the race.
So DARPA is organized into multiple floors that have different specializations. Just a couple of examples: there's the Biological Technologies Office and the Microsystems Technology Office, and each one of those floors has a specialization. [00:01:29] So the idea is that you're bringing in these program managers, you're empowering them to go change their respective fields, and you're doing that across multiple broad domains like biology and microsystems and other things like that. And that's awesome in the way it provides overlap. Because when I was there, for instance, I worked in what's called DSO, the Defense Sciences Office, which works on kind of first-principles science and physics and mathematics and other things like that. The fact that you can, as somebody working there, go talk to somebody who was fundamental in the development of MEMS technology, which is what MTO, the Microsystems Technology Office, works on; [00:02:21] and then, when you want to see how, let's say, a new chip leveraging MEMS technology might parallel or be inspired by biology, go get one of the experts from the Biological Technologies Office to, you know, scrimmage on some new idea you're having: that's awesome. [00:02:44] What that does is end up being kind of this multiplier, this catalyst, for innovations, where you've got multiple domains all being affected in the same kind of positive feedback loop. So directly to your question, that's the biggest thing: I don't ever remember anybody saying, "okay, we're not hitting quota; we need another six domain-changing ideas or we'll not have satisfied our obligation to Congress." [00:03:03] I don't ever remember any kind of conversations like that.
[00:03:16] Organizational Nitty-Gritties: DARPA Ben: Yeah, that description of the cross-disciplinary interactions is shockingly similar to some descriptions that I've heard of Bell Labs, and the parallels are really interesting. [00:03:32] I want to dig into the organizational nitty-gritties of DARPA as well. All of the program managers, who are sort of the drivers of DARPA, are basically temporary employees. So how do the incentives there work? What are your goals as a program manager, and what drives people? What incentivizes them to do their work? [00:00:04] Mark: Well, you're right. As a DARPA program manager, you're there on typically two-year renewable contracts. You go in, you have basically two years, at which point you're evaluated on how well your programs are doing, and then you may be renewed, typically for another two years. [00:00:26] Most program managers are there for about three years; that's kind of the center of the bell curve. The motivation is simple, in that you're being given one of the biggest platforms, certainly within DoD, if not within the overall research community. DARPA has a bit of a swagger, a bit of brand recognition, such that when DARPA says "we are now going to focus on this particular type of sensor, this particular type of technology," you as a program manager have the ability to go talk to the best of the best, the folks who are either changing or moving or working in those respective technology bases. You can drop somebody an email, and the fact that it's you at darpa.mil will probably get you a response that you might not have been able to get otherwise.
[00:01:28] And so that's, I would say, one of the biggest motivators an incoming program manager has going in. The other big motivator is that you're there for a limited amount of time. Four years may sound like a lot of time; it's not, it really is not, because it takes about a year to go from an idea on the back of a napkin [00:01:57] to the kickoff of a program. For as much as it looks loose and free and a little crazy in terms of the ideas and stuff like that, it turns out that there's a pretty regimented, I'll jokingly call it a hazing ritual, on the back side that involves multiple pitches. [00:02:21] There's a level of programmatic oversight called a Tech Council that you have to go present to, and it is extremely critical of whatever it is that you're presenting. And I'll admit, those were some of the toughest pitches, certainly the toughest presentations, that I ever prepared for; my first Tech Council was way more difficult than anything I ever did for my PhD dissertation or anything like that. [00:02:52] And so, if you're on, let's say, a three-year timescale, and it takes you a year to get a program up and running, you've got enough time to maybe make two or three dents in the universe, which is what you're hoping to do when you go in the door. [00:03:16] And then the other thing that can happen, as program managers are cycling out: everybody's on kind of a staggered cycle.
Even in their out after three years the other program managers have to then inherit the programs that are run up and running that some previous program manager, you know may have pitched in awarded but is now headed off to you know, make you know, buku bucks and industry or whatever and so it's another disc I'll say distraction that you have because program managers sometimes naively myself included go in thinking. [00:03:47] Okay. Well, I'm just going to go in and. Ditch my own ideas, and I don't even know what this inheriting other programs thing is but I'm going to try to avoid that as much as possible and now you've got three or four or five different programs that you're running and hopefully what you've done is you've built a good [00:40:00] staff because you're able to assemble your own staff and you can kind of keep keep the ball running but that's kind of a that's the cycle if I can give you kind of a you know, the the the day in the life kind of you is that you're going to go in. [00:04:19] You're going to be pitching and coming up with new ideas and trying to get them through Tech Council. Once they get through Tech Council, then you've got a program up and running in as soon as that programs up and running then you've got to be looking toward the next program while your staff. You know the ball rolling on your other on your other programs, then you rinse and repeat at least three or four times [00:04:43]What does success or failure look like at DARPA [00:04:43] [00:00:00] Ben: [00:00:00] and what does the end of a program look like either success failure or question? [00:00:11] Mark: [00:00:11] Um, it depends on the program and it depends on the objectives of the program, I guess, you know, the grand challenges always end with [00:41:00] a huge Fanfare and robots presumably in a running through Finish Lines and other things like that. There's other programs that end much much more quietly. We're a technology may have been built that is just dramatic. 
[00:00:37] The final tests occur, and a lot of times DARPA may or may not have an immediate use for the technology, or the reasons for the technology being built may have shifted since the program started. And so you may see the companies basically take that technology back and continue improving on it or incorporating it into their products, and that's a very quiet [00:01:07] closure to what was a really, really good program. Then presumably you would see that technology pop up in the consumer world, or in [00:42:00] our kind of real world, in the next four to five years or so. So it's the full spectrum, as you would probably imagine: some of the programs fail loudly, some of them fail quietly, [00:01:35] and the successes are the same. Some of the successes come with great fanfare, and then other times, and I'll say this includes some of the most enabling technologies out there, they close their time and their tenure at DARPA very quietly and then some years later go on to do great things for the public.

[00:01:53] How DARPA Innovations get into the world

[00:01:53] Ben: That's something that I hadn't thought about. So the expectation in the model, of how the technology then gets into the world, is just that the people who are working on it as part of the program are then the ones to go and take the ball and run with it. Is that accurate?

[00:02:18] Mark: Absolutely, and [00:43:00] I'd say that's a difference. Strictly speaking, [00:02:22] no research happens within DARPA's walls, and I guess that's one of the things that both Hollywood and popular descriptions of DARPA sometimes get confused about: the notion that DARPA is presumably this warehouse full of mad scientists, and you go inside and everybody's in lab coats and it looks like something out of the X-Files. That's not the case at all. DARPA is there first to catalyze technologies for DOD purposes, but [00:02:59] those folks that are working for DOD are also companies that are producing products, many of them products that are very much outside of DOD. And so the spillover, and the fact that DARPA can, I'll say relatively quietly, create technology that [00:44:00] is a catalyst for the greater good, or the greater use of technology more broadly, that is a wonderful [00:03:28] ability that DARPA has that a lot of other labs don't. I'll give you an example. Take either the Air Force Research Lab or the Army Research Lab or any of the research labs within the particular branches of the military: those do have actual researchers, much like NASA Ames here, where we have actual researchers inside of our four walls doing work, and we can do work that can be exclusive to the government.
But in DARPA's case, because there is no research being done within its four walls, most of the contractors, most of what they would call the performers, the folks performing the technology development, can, depending on the contract, and the contracts are usually written this way, take those technologies and use them for [00:45:00] whatever they'd like after the terms of the contract are done.

[00:04:26] Improving the Process of Getting DARPA Innovations into the world

[00:04:26] Ben: Something that I've always wondered is: you try so many things at DARPA, and there's no good way of knowing all the things that have been tried and what the results were. Is there ever any thought of having a better knowledge base of what's been tried, who tried it, and what the result was? Because it feels like for every technology developed by a company that then picks it up and runs with it, [00:00:04] sometimes there's something developed by a lab that is full of folks who just wanted to do the research and have no desire to then push it out into the world. So is there any effort to make that [00:46:00] process better?

[00:00:06] Mark: Yes and no, but this is a bit of a trick question, and I'll answer the tricky part. Well, let me back up. The obvious answer is that DARPA, especially within the last five years or so, has been working much harder to be more open with the public about the work that's being done. You can hit DARPA's website and get to the 80th percentile of an understanding of the work being done within DARPA. The balance, the twenty percent, is stuff that's either classified or of a nature where you would just need to do a little more digging, or talk with the program manager, to really understand what's happening. Okay, so that's the straightforward answer. The trick answer here is that it's sometimes better to have folks go in that don't know their [00:47:00] history, [00:01:05] that don't know why a previous program failed. Because since that previous program ran, technology may have changed; there may be something that's different today that didn't exist ten years ago when that program was tried. There was this interesting effect within DARPA: because you're rotating program managers out about every three to four years, and because, I'll say it like this, DARPA in the past had not done a very good job of documenting all of the programs it had been running, there was a tendency for a program manager to come to the same epiphany that an equivalent program manager had come to a decade earlier. [00:01:56] But that doesn't mean that that program shouldn't be funded now. There were folks within DARPA that had been there for a long [00:48:00] time. Interestingly enough, the support contractors, we call them SETAs, which is systems engineering and technical assistance, include some support staff that have been there for multiple decades. [00:02:20] They were back at DARPA during the roaring '80s and '90s, which was kind of the heyday for some of the crazier DARPA stuff that was happening. So you would have a program manager go in and pitch some idea, and the old-timers in the back would start, you know, one would lean over to the other, elbow them, point at a slide, and they'd both giggle. And then you would ask them later, hey, what was the weird body language?
[00:02:48] And he's like, yeah, you know, we tried this back in the '90s and it didn't work out, because laser technology was insufficiently precise in terms of its timing, or some other technical aspect or whatever, but it's good to see you doing this, because I think it [00:49:00] actually has got a fighting [00:03:06] chance of making it through this time. Hearing that, and watching that happen multiple times, was interesting, because we tend to say, oh well, if somebody already tried it, I'm probably not going to try it again. Whereas with DARPA, that's built into the model. The ignorance is an asset. [00:03:26] And it is ignorance, ignorance of the fact that the idea and the epiphany you just came up with may have been done before. I want to believe it's by design that they will allow a program to be funded that may have been very similar to one funded earlier, [00:03:48] because it's now under a whole new set of capabilities in terms of technology. If you do that intelligently, that's actually a blessing for folks who are trying to come up with new programs.

[00:04:04] The Heilmeier Catechism

[00:04:04] Ben: The [00:50:00] concept of forgetting things that have been tried feels almost blasphemous at face value, right? [00:04:12] That's why I do wonder if there's a middle ground, where you say: we tried this, it failed for these reasons. And then whenever someone wants to pick it up again, they can know that it's been tried, and they have to make the argument of: this is why the world is different now.

[00:04:31] Mark: Yes.
So that is actually part of it within DARPA. One of the framings they use for pitches is this thing called the Heilmeier Catechism, and it's basically a framework that one of the previous DARPA directors made, saying: if you're going to pitch an idea, pitch it within this framing, and that will help you codify your argument and make it succinct. One of the lines within the catechism [00:00:27] is: why can this happen now? And that addresses the [00:51:00] kind of ignorance I was talking about before. As a program manager, when you pitch that thing and you realize that some program manager did it back in '87, and you're all bummed because you're like, oh man, you can't come up with an original idea within these four walls that somebody hasn't done previously, [00:00:52] then, after you get over being hurt that your idea has already been done, you go talk to some of the original contractors, you go talk to some of the SETAs, you talk to the folks that were there, and figure out what is different. And that is part of the catechism: what is different [00:01:13] now that will enable this to work in a way that it didn't work previously?

[00:01:18] Best ways to Enable Robotics

[00:01:18] Ben: Yeah. The catechism is, I think, a powerful set of questions that people don't ask enough outside of DARPA, and I'll definitely put a link to it in the show notes. I do know we're coming up on time, so as a final question, I want to ask: [00:52:00] you've been involved in robotics in one way or another for quite some time, in academia and in government and startups. [00:01:42] It's a notoriously tricky field in terms of the amount of hype and excitement and possibility versus the reality of robots coming into, especially, the unstructured real world that we live in. Do you think there's a better way to do it, from all the different systems that you've been a part of? Is there an entirely different system? [00:02:10] What would you change to make more of that happen?

[00:02:16] Mark: I hate to say it like this, but I don't know that there's much I would change. I think that right now, especially working in robotics, when I look at the capabilities, the [00:53:00] sensors, all of the enabling work that we have right now in terms of machine learning and autonomy and everything else, this is a great day to be alive and working in the field of robotics. [00:02:41] I'll feel like the old man as I say this, but I started back in the late '90s and early 2000s, and frankly, when I think of the tools and the platforms and sensors that we had to work with, and my experience especially was a grad student experience, [00:03:08] I remember how much time we would spend just screwing around with sensors that didn't work right, and platforms that weren't precise in their movements, and all the other aspects that make robotics robotics. And now I look at today: we've got kind of [00:03:30] off-the-shelf platforms that you can go find, and [00:54:00] with these low-cost platforms you can really dig deep into research areas that are still just wide open. In the mid-2000s, if you wanted to do autonomous car research, you basically [00:04:03] needed to know how to work with crazy high-power servos and other things like that. Now you go buy a Prius or a Tesla or something, you know what I mean, and you're off; the platform is built for you. With the lidar, the computing power, and everything else we have today, to answer your question right now:
[00:04:23] I don't know that I would change a thing. I maybe naively believe that we have all of the tools we need to really make dramatic impacts, [00:55:00] and I believe we are making dramatic impacts in the world we're living in, by enabling automation and autonomy to do really incredible things. [00:04:43] The biggest thing is for folks to go back, along the lines of your last line of questioning about forgetting and remembering the things we've done in the past. I find that some of the best ideas I'm seeing come forward in robotics and autonomy are [00:05:01] ideas that were really born back in the '90s; we just didn't have the computing power or the sensors to pull them off, and now we do. So it's almost: go look back, kind of create a renaissance of going back and looking at some of the really great ideas that just didn't have their day [00:05:23] back when things were a little more scarce in terms of computing and algorithmic complexity and other things like that, things we can now address in a really powerful way.

[00:56:00] Ben: That is quite a note of optimism. I really appreciate it, Mark. Thank you so much for doing this; I want to let you get on with your day. [00:00:06] I've learned a ton, and I hope other folks have as well.

Mark: Absolutely. Well, thank you for having me on. I appreciate it.