Economist Alex Tabarrok joins Bob to review Trump's executive order on prescription drug pricing. They explore how price discrimination works in global pharmaceutical markets, the unintended consequences of importation policies, and why U.S. consumers often pay more—yet benefit most from drug innovation. Tabarrok also critiques the FDA's role in delaying treatments and explains how regulatory reform, not price caps, could make healthcare more affordable and effective.
Alex's article, "Econ 101 is Underrated: Pharma Price Controls": Mises.org/HAP500a
The Mises Institute is giving away 100,000 copies of Murray Rothbard's What Has Government Done to Our Money? Get your free copy at Mises.org/HAPodFree
We have a lengthy conversation with movie producer Nicholas Tabarrok. We go through a myriad of topics, including his start in the industry, the founding of his production company Darius Films, what it was like working with stars such as Kurt Russell and Ethan Hawke, and many other interesting thoughts from the world of movies. Very fun and insightful. Engage!
A very special episode of the Fan Girl Film Club! We're joined by producer Nicholas Tabarrok (Darius Films: The Art of the Steal, Stockholm, Defendor, etc.) for a chat about moviemaking, job titles, and guilty pleasure movies.
Patreon • Tumblr • Instagram
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Congressional Insider Trading, published by Maxwell Tabarrok on August 31, 2024 on LessWrong.

You've probably seen the Nancy Pelosi Stock Tracker on X, or else a collection of articles and books exposing the secret and lucrative world of congressional insider trading. The underlying claim behind these stories is intuitive and compelling. Regulations, taxes, and subsidies can make or break entire industries, and congresspeople can get information on these rules before anyone else, so it wouldn't be surprising if they used this information to make profitable stock trades. But do congresspeople really have a consistent advantage over the market? Or is this narrative built on a cherrypicked selection of a few good years for a few lucky traders?

Is Congressional Insider Trading Real?

There are several papers in economics and finance on this topic. First is the 2004 paper "Abnormal Returns from the Common Stock Investments of the U.S. Senate" by Ziobrowski et al. They look at Senators' stock transactions over 1993-1998 and construct a synthetic portfolio based on those transactions to measure their performance.

This is the headline graph. The red line tracks the portfolio of stocks that Senators bought, and the blue line the portfolio that Senators sold. Each day, the performance of these portfolios is compared to the market index, and the cumulative difference between them is plotted on the graph. The synthetic portfolios start at day -255, a year (of trading days) before any transactions happen. In the year leading up to day 0, the stocks that Senators will buy (red line) basically just track the market index. On some days, the daily return of the Senators' buy portfolio outperforms the index and the line moves up; on others it underperforms and the line moves down. Cumulatively over the whole year, you don't gain much over the index. The stocks that Senators will sell (blue line), on the other hand, rapidly and consistently outperform the market index in the year leading up to the Senators' transactions.

After the Senators buy the red portfolio and sell the blue portfolio, the trends reverse. The Senators' transactions seem incredibly prescient. Right after they buy the red stocks, that portfolio goes on a tear, gaining 25% on the index over the next year. They also pick the right time to sell the blue portfolio, as it barely gains over the index in the year after they sell. Ziobrowski finds that the buy portfolio of the average senator, weighted by trading volume, earns a compounded annual rate of return of 31.1%, compared to the market index, which earns only 21.3% a year over the 1993-1998 period.

This definitely seems like evidence of incredibly well-timed trades and above-market performance. There are a couple of caveats and details to keep in mind, though. First, it's only a 5-year period. Additionally, transactions by senators in any given year are pretty rare: only a minority of Senators buy individual common stocks, never more than 38% in any one year. So sample sizes are pretty low in the noisy and highly skewed distribution of stock market returns. Another problem: the data on transactions isn't that precise.
Senators report the dollar volume of transactions only within broad ranges ($1,001 to $15,000, $15,001 to $50,000, $50,001 to $100,000, $100,001 to $250,000, $250,001 to $500,000, $500,001 to $1,000,000, and over $1,000,000). These ranges are wide and the largest trades are top-coded. Finally, there are some pieces of the story that don't neatly fit into an insider trading narrative. For example: the common stock investments of Senators with the least seniority (serving less than seven years) outperform the investments of the most senior Senators (serving more than 16 years) by a statistically significant margin. Still though, several other paper...
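To make the paper's methodology concrete, here is a minimal sketch (not the authors' code) of how a synthetic-portfolio event study like this can be computed. The DataFrame layout and column names are assumptions for illustration.

```python
import pandas as pd

# Assumed layout: daily simple returns indexed by event day (-255 .. +255),
# with columns for the two synthetic portfolios and the market index.
def cumulative_abnormal_returns(returns: pd.DataFrame) -> pd.DataFrame:
    """Cumulative daily excess return of each synthetic portfolio over the index,
    i.e. the quantity plotted as the red and blue lines."""
    excess = returns[["buy_portfolio", "sell_portfolio"]].sub(returns["market_index"], axis=0)
    return excess.cumsum()

def annualized(daily_returns: pd.Series, trading_days: int = 252) -> float:
    """Compound daily returns into an annual rate, e.g. the 31.1% vs. 21.3% comparison."""
    return (1 + daily_returns).prod() ** (trading_days / len(daily_returns)) - 1
```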
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Investigating the Chart of the Century: Why is food so expensive?, published by Maxwell Tabarrok on August 16, 2024 on LessWrong.

You've probably seen this chart from Mark Perry at the American Enterprise Institute. I've seen this chart dozens of times and have always enjoyed how many different and important stories it can tell. There is a story of the incredible abundance offered by technological growth and globalization. Compared to average hourly wages, cars, furniture, clothing, internet access, software, toys, and TVs have become far more accessible than they were 20 years ago. Flatscreens and Fiats that were once luxuries are now commodities.

There is also a story of sclerosis and stagnation. Sure, lots of frivolous consumer goods have gotten cheaper, but healthcare, housing, childcare, and education, all the important stuff, has exploded in price. Part of this is "cost disease," where the high productivity of labor in advancing industries like software raises the cost of labor in slower-productivity-growth industries like healthcare. Another part is surely the near-universal "restrict supply and subsidize demand" strategy that governments undertake when regulating an industry. Zoning laws + Prop 13 in housing, occupational licensing and the FDA + Medicare in healthcare, and free student debt + all of the above for higher ed.

One story from this graph I've never heard and only recently noticed is that "Food and Beverages" has inflated just as much as Housing. This is extremely counterintuitive. Food is a globally traded and mass-produced commodity while housing is tied to inelastic land supply in desirable locations. Farming, grocery, and restaurants are competitive and relatively lightly regulated markets while housing is highly regulated, subsidized, and distorted. Construction productivity is worse than stagnant while agricultural productivity has been ascendant for the past 300 years, and even retail productivity is 8x higher than it was in 1950. Construction is also more labor intensive than farming or staffing the grocery store. Yet food prices have risen just as much as housing prices over the past 24 years. What explains this?

One trend is that Americans are eating out more. The "Food and Beverages" series from the BLS includes both "Food At Home" and "Food Away From Home." In 2023, eating out was a larger portion of the average household's budget than food at home for the first time, but they have been converging for more than 50 years. Restaurant food prices have increased faster than grocery prices. This makes sense, as a much larger portion of a restaurant's costs are location and labor, both of which are affected by tight supply constraints on urban floor space.

This isn't enough to satisfy my surprise at the similarity in price growth, though. Even if we just look at "food at home" price growth, it only really sinks below housing after 2015. Beverages at home/away from home follow a more divergent version of the same pattern, but are a much smaller part of the weighted average that makes up the aggregate index. The BLS series for "Housing" is also an aggregate index of "Shelter" prices, which is the actual rent (or Owners' Equivalent Rent), and other expenses like utilities, moving, and repairs.
Stagnant construction productivity and land use regulation will show up mostly in rents, so these other pieces of the series are masking a bit of the inflation. There is also changing composition within the "Food at home" category. Americans eat more fats and oils, more sugars and sweets, more grains, and more red meat: the four items that grew the most in price since 2003. There's also a flipside to food and beverages' easy tradability: they're closer to the same price everywhere. House prices per square foot, by contrast, differ by more th...
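For readers unfamiliar with how these BLS series relate, here is a toy sketch of the aggregation the post is describing: a parent index like "Food and Beverages" or "Housing" is roughly a budget-weighted average of its sub-indices. The index levels and weights below are made-up placeholders, not actual CPI data.

```python
# Toy CPI-style aggregation: a parent series is a budget-weighted average of sub-indices.
def aggregate_index(sub_indices: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights.values())
    return sum(sub_indices[name] * weights[name] for name in sub_indices) / total_weight

# e.g. "Food and Beverages" built from "Food At Home" and "Food Away From Home"
food_and_beverages = aggregate_index(
    {"food_at_home": 105.0, "food_away_from_home": 112.0},  # hypothetical index levels
    {"food_at_home": 0.55, "food_away_from_home": 0.45},    # hypothetical budget weights
)
print(food_and_beverages)  # ≈ 108.2
```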
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: End Single Family Zoning by Overturning Euclid V Ambler, published by Maxwell Tabarrok on July 27, 2024 on LessWrong.

On 75 percent or more of the residential land in most major American cities, it is illegal to build anything other than a detached single-family home. 95.8 percent of total residential land area in California is zoned as single-family-only, which is 30 percent of all land in the state. Restrictive zoning regulations such as these probably lower GDP per capita in the US by 8-36%. That's potentially tens of thousands of dollars per person.

[Map of land use in San Jose, California. Pink is single-family only (94%).]

The legal authority behind all of these zoning rules derives from a 1926 Supreme Court decision in Village of Euclid v. Ambler Realty Co. Ambler Realty held 68 acres of land in the town of Euclid, Ohio. The town, wanting to avoid influence, immigration, and industry from nearby Cleveland, passed a restrictive zoning ordinance which prevented Ambler Realty from building anything but single-family homes on much of their land, though they weren't attempting to build anything at the time of the case. Ambler Realty and their lawyer (a prominent Georgist!) argued that since this zoning ordinance severely restricted the possible uses for their property and its value, forcing the ordinance upon them without compensation was unconstitutional.

The constitutionality claims in this case are about the 14th and 5th Amendments. The 5th Amendment to the United States Constitution states, among other things, that "private property [shall not] be taken for public use, without just compensation." The part of the 14th Amendment relevant to this case just applies the 5th to state and local governments. There are two lines of argument in the case. First is whether the restrictions imposed by Euclid's zoning ordinance constitute "taking" private property at all. If they are a taking, then the 5th Amendment would apply; e.g., when the government takes land via eminent domain, it needs to compensate property owners. However, even government interventions that do take don't always have to offer compensation. If the government, say, requires you to have an external staircase for fire egress, it doesn't have to compensate you, because the requirement protects "health, safety, and welfare," which is a "police powers" carveout from the takings clause of the 5th Amendment. The other line of argument in the case is that zoning ordinances, while they do take from property owners, do not require compensation because they are part of this police power.

Police Power

Let's start with that second question: whether zoning laws count as protecting health and safety through the police power or are takings that require compensation. A common rhetorical technique is to reach for the most extreme case of zoning: a coal-powered steel foundry wants to open up right next to the preschool, for example. Conceding that this hypothetical is a legitimate use of the police power does not decide the case, however, because Euclid's zoning ordinance goes much further than separating noxious industry from schoolyards. The entire area of the village is divided by the ordinance into six classes of use districts, U-1 to U-6; three classes of height districts, H-1 to H-3; and four classes of area districts, A-1 to A-4.
U-1 is restricted to single family dwellings, public parks, water towers and reservoirs, suburban and interurban electric railway passenger stations and rights of way, and farming, noncommercial greenhouse nurseries and truck gardening; U-2 is extended to include two-family dwellings; U-3 is further extended to include apartment houses, hotels, churches, schools, public libraries, museums, private clubs, community center buildings, hospitals, sanitariums, public playgrounds and recreation buildings, and a city ha...
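As a quick back-of-the-envelope check on "tens of thousands of dollars per person," take the cited 8-36% range and apply it to a rough US GDP per capita figure (the ~$80,000 value below is my assumption, not a number from the post).

```python
# Rough arithmetic behind the "tens of thousands of dollars per person" claim.
gdp_per_capita = 80_000          # assumed approximate US GDP per capita, in dollars
low, high = 0.08, 0.36           # the cited 8-36% range for the cost of restrictive zoning
print(gdp_per_capita * low, gdp_per_capita * high)  # 6400.0 28800.0 dollars per person
```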
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An AI Manhattan Project is Not Inevitable, published by Maxwell Tabarrok on July 7, 2024 on LessWrong.

Early last month, Leopold Aschenbrenner released a long essay and podcast outlining his projections for the future of AI. Both of these sources are full of interesting arguments and evidence; for a comprehensive summary see Zvi's post here. Rather than going point by point, I will instead accept the major premises of Leopold's essay but contest some of his conclusions. So what are the major premises of his piece?

1. There will be a several-orders-of-magnitude increase in investment in AI: 100x more spending, 100x more compute, 100x more efficient algorithms, and an order of magnitude or two of gains from some form of "learning by doing" or "unhobbling" on top.
2. This investment scale-up will be sufficient to achieve AGI. This means the models on the other side of the predicted compute scale-up will be able to automate all cognitive jobs with vast scale and speed.
3. These capabilities will be essential to international military competition.

All of these premises are believable to me and well argued for in Leopold's piece. Leopold contends that these premises imply that the national security state will take over AI research and the major data centers, locking down national secrets in a race against China, akin to the Manhattan Project: "Ultimately, my main claim here is descriptive: whether we like it or not, superintelligence won't look like an SF startup, and in some way will be primarily in the domain of national security. By late 26/27/28 … the core AGI research team (a few hundred researchers) will move to a secure location; the trillion-dollar cluster will be built in record-speed; The Project will be on."

The main problem is that Leopold's premises can be applied to conclude that other technologies will also inevitably lead to a Manhattan project, but these projects never arrived. Consider electricity. It's an incredibly powerful technology with rapid scale-up, sufficient to empower those who have it far beyond those who don't, and it is essential to military competition. Every tank and missile, and all the tech to manufacture them, relies on electricity. But there was never a Manhattan project for this technology. Its initial invention and spread were private and decentralized. The current sources of production and use are mostly private. This is true of most other technologies with military uses: explosives, steel, computing, the internet, etc. All of these technologies are essential to the government's monopoly on violence and its ability to exert power over other nations and prevent coups from internal actors. But the government remains a mere customer of these technologies, and often not even the largest one.

Why is this? Large-scale nationalization is costly and unnecessary for maintaining national secrets and technological superiority. Electricity and jet engines are essential for B-2 bombers, but if you don't have the particular engineers and blueprints, you can't build one. So the government doesn't need to worry about locking down the secrets of electricity production and sending all of the engineers to Los Alamos. It can keep the first several steps of the production process completely open and mix the outputs with a final few steps that are easier to keep secret.
To be clear, I am confident that governments and militaries will be extremely interested in AI. They will be important customers for many AI firms, they will create internal AI tools, and AI will become an important input into every major military. But this does not mean that most or all of the AI supply chain, from semiconductors to data centers to AI research, must be controlled by governments. Nuclear weapons are outliers among weapons technology in terms of the proportion of the supply chai...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Acemoglu on AI, published by Maxwell Tabarrok on June 29, 2024 on The Effective Altruism Forum.

The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: about a 0.06% increase in TFP growth annually. This stands in contrast to many predictions which forecast immense impacts on economic growth from AI, including many from other academic economists. Why does Acemoglu come to such a different conclusion than his colleagues, and who is right?

First, Acemoglu divides up the ways AI could affect productivity into four channels:

1. AI enables further (extensive-margin) automation. Obvious examples of this type of automation include generative AI tools such as large language models taking over simple writing, translation, and classification.
2. AI can generate new task complementarities, raising the productivity of labor in tasks it is performing. For example, AI could provide better information to workers, directly increasing their productivity. Alternatively, AI could automate some subtasks (such as providing readymade subroutines to computer programmers) and simultaneously enable humans to specialize in other subtasks, where their performance improves.
3. AI could induce deepening of automation, meaning improving performance, or reducing costs, in some previously capital-intensive tasks. Examples include IT security, automated control of inventories, and better automated quality control.
4. AI can generate new labor-intensive products or tasks.

Each of these four channels refers to a specific mechanism in his task-based model of production: automation raises the threshold of tasks which are performed by capital instead of labor; complementarities raise labor productivity in non-automated tasks; deepening of automation raises capital productivity in already-automated tasks; and new tasks are extra production steps that only labor can perform, for example the way the automation of computing led to programming as a new task.

The chief sin of this paper is dismissing the latter half of these mechanisms without good arguments or evidence. "Deepening automation" in Acemoglu's model means increasing the efficiency of tasks already performed by machines. This raises output but doesn't change the distribution of tasks assigned to humans vs. machines. AI might deepen automation by creating new algorithms that improve Google's search results on a fixed compute budget, or by replacing expensive quality control machinery with vision-based machine learning, for example. This kind of productivity improvement can have huge growth effects. The second industrial revolution was mostly "deepening automation" growth: electricity, machine tools, and Bessemer steel improved already-automated processes, leading to the fastest rate of economic growth the US has ever seen. In addition, this deepening of automation always increases wages in Acemoglu's model, in contrast to the possibility of negative wage effects from the extensive-margin automation that he focuses on.

So why does Acemoglu ignore this channel? "I do not dwell on deepening of automation because the tasks impacted by (generative) AI are quite different than those automated by the previous wave of digital technologies, such as robotics, advanced manufacturing equipment and software systems."
This single sentence is the only justification he gives for omitting capital productivity improvements from his analysis. A charitable interpretation of this argument acknowledges that he is only referring to "(generative) AI," like ChatGPT and Midjourney. These tools do seem more focused on augmenting human labor than on doing what software can already do, but more efficiently. Though Acemoglu is happy to drop the "generative" qualifier everywhere ...
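To make the four channels concrete, here is a toy numerical sketch of a task-based production function (a drastic simplification written for illustration, not Acemoglu's actual model): a share alpha of tasks is done by capital and the rest by labor, and each channel corresponds to moving one parameter.

```python
def toy_output(alpha: float, A_K: float, A_L: float, K: float = 1.0, L: float = 1.0) -> float:
    """Toy task-based production: a share alpha of tasks is automated (capital with
    productivity A_K); the remaining 1 - alpha is performed by labor (productivity A_L)."""
    return (A_K * K) ** alpha * (A_L * L) ** (1 - alpha)

# Baseline: capital is assumed more productive than labor on the tasks it already does.
base = toy_output(alpha=0.5, A_K=2.0, A_L=1.0)

channels = {
    "1. extensive-margin automation (alpha up)": toy_output(alpha=0.55, A_K=2.0, A_L=1.0),
    "2. task complementarities (A_L up)":        toy_output(alpha=0.50, A_K=2.0, A_L=1.1),
    "3. deepening of automation (A_K up)":       toy_output(alpha=0.50, A_K=2.2, A_L=1.0),
    # 4. genuinely new labor tasks expand the task range itself, which this one-parameter
    #    toy can only caricature as a lower automated share with more productive labor.
    "4. new labor-intensive tasks (alpha down)": toy_output(alpha=0.45, A_K=2.0, A_L=1.1),
}
for name, y in channels.items():
    print(f"{name:45s} output change vs. baseline: {100 * (y / base - 1):+.1f}%")
```

The point of the toy is simply that raising capital productivity in already-automated tasks (channel 3) moves output about as much as the channels Acemoglu keeps, which is why dropping it matters.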
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is There Really a Child Penalty in the Long Run?, published by Maxwell Tabarrok on May 17, 2024 on LessWrong.

A couple of weeks ago, three European economists published this paper studying the female income penalty after childbirth. The surprising headline result: there is no penalty.

Setting and Methodology

The paper uses Danish data that tracks IVF treatments as well as a bunch of demographic factors and economic outcomes over 25 years. Lundborg et al. identify the causal effect of childbirth on female income using the success or failure of the first attempt at IVF as an instrument for fertility. What does that mean? We can't just compare women with children to those without them, because having children is a choice that's correlated with all of the outcomes we care about. So sorting two groups of women based on observed fertility will also sort them based on income, education, marital status, etc. Successfully implanting embryos on the first try in IVF is probably not very correlated with these outcomes. Overall IVF success is, because rich women may have the resources and time to try multiple times, for example, but success on the first try is pretty random. And success on the first try is highly correlated with fertility. So, if we sort two groups of women based on success on the first try in IVF, we'll get two groups that differ a lot in fertility but aren't selected on any other traits. Therefore, we can attribute any differences between the groups to their difference in fertility and not any other selection forces.

Results

How do these two groups of women differ? First of all, women who are successful on the first try with IVF are persistently more likely to have children. This random event causing a large and persistent fertility difference is essential for identifying the causal effect of childbirth. This graph plots the regression coefficients on a series of binary variables which track whether a woman had a successful first-time IVF treatment X years ago. When the IVF treatment is in the future (i.e., X is negative), whether or not the woman will have a successful first-time IVF treatment has no bearing on fertility, since fertility is always zero; these are all first-time mothers. When the IVF treatment was one year in the past (X = 1), women with a successful first-time treatment are about 80% more likely to have a child that year than women with an unsuccessful first-time treatment. This first-year coefficient isn't 1 because some women who fail their first attempt go through multiple IVF attempts in year zero and still have a child in year one. The coefficient falls over time as more women who failed their first IVF attempt eventually succeed and have children in later years, but it plateaus around 30%.

Despite having more children, this group of women does not have persistently lower earnings. This is the same type of graph as before: it plots the regression coefficients of binary variables that track whether a woman had a successful first-time treatment X years ago, but this time the outcome variable isn't having a child, it's earnings. One year after the first IVF treatment attempt, the successful women earn much less than their unsuccessful counterparts. They are taking time off for pregnancy and receiving lower maternity leave wages (this is in Denmark, so everyone gets those).
But 10 years after the first IVF attempt, the earnings of successful and unsuccessful women are the same, even though the successful women are still ~30% more likely to have a child. 24 years out from the first IVF attempt, the successful women are earning more on average than the unsuccessful ones. Given that the average age of women attempting IVF in Denmark is about 32 and the retirement age is 65, these women have 33 years of working life after their IVF attempt. W...
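For readers who want to see the shape of this event-study design, here is a minimal sketch on synthetic data (my own illustration, not the authors' code or data): regress earnings on event-time dummies interacted with first-attempt success, and read off the success coefficients at each event time.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the Danish panel: one row per woman-year.
#   success    - 1 if the first IVF attempt succeeded (the quasi-random source of variation)
#   event_time - years since the first IVF attempt (negative = before)
#   earnings   - annual labor earnings
rows = []
for _ in range(2000):
    success = int(rng.integers(0, 2))
    for t in range(-3, 11):
        dip = -8000 if (success and 0 <= t <= 2) else 0  # stylized short-run earnings dip
        rows.append({"success": success, "event_time": t,
                     "earnings": 40_000 + dip + rng.normal(0, 5_000)})
df = pd.DataFrame(rows)

# The plotted coefficients: the earnings gap between first-try-successful and
# unsuccessful women at each event time.
model = smf.ols("earnings ~ C(event_time) + C(event_time):success", data=df).fit()
print(model.params.filter(like="success"))
```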
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Student Debt Cancellation From All Sides of the Political Compass, published by Maxwell Tabarrok on May 14, 2024 on LessWrong.

A stance against student debt cancellation doesn't rely on the assumptions of any single ideology. Strong cases against student debt cancellation can be made based on the fundamental values of any section of the political compass. In no particular order, here are some arguments against student debt cancellation from the perspectives of many disparate ideologies.

Equity and Fairness

Student debt cancellation is a massive subsidy to an already prosperous and privileged population. American college graduates have nearly double the income of high school graduates. African Americans are far underrepresented among degree holders compared to their overall population share. Within the group of college graduates, debt cancellation increases equity, but you can't get around the fact that 72% of African Americans have no student debt because they never went to college. The tax base for debt cancellation will mostly come from rich white college graduates, but most of the money will go to … rich white college graduates. Taxing the rich to give to the slightly-less-rich doesn't have the same Robin Hood ring but might still slightly improve equity and fairness relative to the status quo, except for the fact that it will trade off with far more important programs. Student debt cancellation will cost several hundred billion dollars at least, perhaps up to a trillion dollars, or around 4% of GDP. That's more than defense spending, more than R&D spending, more than Medicaid and Medicare, and almost as much as Social Security spending. A trillion-dollar transfer from the top 10% to the top 20% doesn't move the needle much on equity, but it does move the needle a lot on budgetary and political constraints. We should be spending these resources on those truly in need, not the people who already have the immense privilege of an American college degree.

Effective Altruism

The effective altruist critique of student debt cancellation is similar to the one based on equity and fairness, but with much more focus on global interventions as an alternative way to spend the money. Grading student debt cancellation on impact, tractability, and neglectedness, it scores very poorly, mostly because of its tiny impact compared to the most effective charitable interventions. Giving tens of thousands of dollars to people who already have high incomes, live in the most prosperous country on earth, and face little risk of death from poverty or disease is so wasteful that it borders on criminal on some views of moral obligation. It is letting tens of millions of children drown (or die from malaria) because you don't want to get your suit wet saving them. Saving a life costs $5,000; cancelling student debt costs $500 billion. You do the math.

Student Debt Crisis

If what you really care about is stemming the ill effects of large and growing student debt, debt cancellation is a terrible policy. If you want people to consume less of something, the last thing you should do is subsidize people who consume that thing. But that's exactly what debt cancellation does: it is a massive subsidy on student debt.
Going forward, the legal precedent and political one-upmanship will make future cancellations more likely, so students will be willing to take on more debt and study less remunerative majors, and universities will raise their prices in response. Helping those who are already saddled with student debt by pushing future generations further into it is not the right way out of this problem.

Fiscal Conservatism

Student debt cancellation is expensive. Several hundred billion dollars has already been spent and several hundred billion more are proposed. This will mostly be financed through debt, especially si...
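The "you do the math" comparison above is just a ratio; taking the post's round figures at face value, the implied opportunity cost looks like this (the numbers are the post's, the arithmetic is mine).

```python
# Opportunity-cost ratio implied by the post's round figures.
cost_to_save_a_life = 5_000      # dollars, via the most effective global health charities
cost_of_cancellation = 500e9     # dollars, low-end estimate for student debt cancellation
print(f"{cost_of_cancellation / cost_to_save_a_life:,.0f} lives forgone")  # 100,000,000
```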
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation is Unsafe, published by Maxwell Tabarrok on April 22, 2024 on LessWrong.

Concerns over AI safety and calls for government control over the technology are highly correlated, but they should not be. There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests. Governments are poor stewards of both types of risk. Misuse regulation is like the regulation of any other technology: there are reasonable rules that the government might set, but omission bias and incentives to protect small but well-organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives to care about long-term, global costs or benefits, and they do have strong incentives to push the development of AI forward for their own purposes. Noticing that AI companies put the world at risk is not enough to support greater government involvement in the technology. Government involvement is likely to exacerbate the most dangerous parts of AI while limiting the upside.

Default government incentives

Governments are not social welfare maximizers. Government actions are an amalgam of the actions of thousands of personal welfare maximizers who are loosely aligned and constrained. In general, governments have strong incentives for myopia, violent competition with other governments, and negative-sum transfers to small, well-organized groups. These exacerbate existential risk and limit potential upside. The vast majority of the costs of existential risk occur outside of the borders of any single government and beyond the election cycle of any current decision maker, so we should expect governments to ignore them. We see this expectation fulfilled in governments' reactions to other long-term or global externalities, e.g., debt and climate change. Governments around the world are happy to impose trillions of dollars in direct cost and substantial default risk on future generations because costs and benefits to these future generations hold little sway in the next election. Similarly, governments spend billions subsidizing fossil fuel production and ignore potential solutions to global warming, like a carbon tax or geoengineering, because the long-term or extraterritorial costs and benefits of climate change do not enter their optimization function.

AI risk is no different. Governments will happily trade off global, long-term risk for national, short-term benefits. The most salient way they will do this is through military competition. Government regulations on private AI development will not stop them from racing to integrate AI into their militaries. Autonomous drone warfare is already happening in Ukraine and Israel. The US military has contracts with Palantir and Anduril, which use AI to augment military strategy or to power weapons systems. Governments will want to use AI for predictive policing, propaganda, and other forms of population control. The case of nuclear tech is informative. This technology was strictly regulated by governments, but they still raced with each other and used the technology to create the most existentially risky weapons mankind has ever seen.
Simultaneously, they cracked down on civilian use. Now, we're in a world where all the major geopolitical flashpoints have at least one side armed with nuclear weapons and where the nuclear power industry is worse than stagnant. Governments' military ambitions mean that their regulation will preserve the most dangerous misuse risks from AI. They will also push the AI frontier and train larger models, so we will still face misalignment risks. These may ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A High Decoupling Failure, published by Maxwell Tabarrok on April 15, 2024 on LessWrong. High-decoupling vs. low-decoupling, or decoupling vs. contextualizing, refers to two different cultural norms, cognitive skills, or personal dispositions that change the way people approach ideas. High-decouplers isolate ideas from each other and the surrounding context. This is a necessary practice in science, which works by isolating variables, teasing out causality, and formalizing claims into carefully delineated hypotheses. Low-decouplers, or contextualizers, do not separate ideas from their connotation. They treat an idea or claim as inseparable from the narratives that the idea might support, the types of people who usually make similar claims, and the history of the idea and the people who support it. Decoupling is uncorrelated with the left-right political divide. Electoral politics is the ultimate low-decoupler arena. All messages are narratives, associations, and vibes, with little care paid to arguments or evidence. High decouplers are usually in the "gray tribe" since they adopt policy ideas based on metrics that are essentially unrelated to what the major parties are optimizing for. My community prizes high decoupling and for good reason. It is extremely important for science, mathematics, and causal inference, but it is not an infallible strategy. Should Legality and Cultural Support be Decoupled? Debates between high and low decouplers are often marooned by a conflation of legality and cultural support. Conservatives, for example, may oppose drug legalization because their moral disgust response is activated by open self-harm through drug use and they do not want to offer cultural support for such behavior. Woke liberals are suspicious of free speech defenses for rhetoric they find hateful because they see the claims of neutral legal protection as a way to conceal cultural support for that rhetoric. High-decouplers are exasperated by both of these responses. When they consider the costs and benefits of drug legalization or free speech they explicitly or implicitly model a controlled experiment where only the law is changed and everything else is held constant. Hate speech having legal protection does not imply anyone agrees with it, and drug legalization does not necessitate cultural encouragement of drug use. The constraints and outcomes of changes in law vs. culture are completely different, so objecting to one when you really mean the other is a big mistake. This decoupling is useful for evaluating the causal effect of a policy change, but it underrates the importance of feedback between legality and cultural approval. The vast majority of voters are low decouplers who conflate the two questions. So campaigning for one side or the other means spinning narratives which argue for both legality and cultural support. Legal changes also affect cultural norms. For example, consider debates over medical assistance in dying (MAID). High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if it's better than any of their current options. We should take revealed preferences seriously: if someone would rather die than continue living with a painful or terminal condition then that is a reliable signal of what would make them better off.
So world A, with legal medically assisted death, is a better world than world B, without it, all else held equal. Low decouplers on the left and right see the campaign for MAID as either a way to push those in poverty towards suicide or as a further infection of the minds of young people. I agree with the high decouplers within their hypothetical controlled experiment, but I am also confident that attitudes towards suicide, drug use, etc ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 2nd Demographic Transition, published by Maxwell Tabarrok on April 7, 2024 on LessWrong. Birth rates in the developed world are below replacement levels and global fertility is not far behind. Sub-replacement fertility leads to exponentially decreasing population. Our best models of economic growth suggest that a shrinking population causes economic growth and technological progress to stop and humanity to stagnate into extinction. One theory of fertility decline says it's all about opportunity costs, especially for women. Rising labor productivity and expanded career opportunities for potential parents make each hour of their time and each forgone career path much more valuable. Higher income potential also makes it cheaper for parents to gain utility by using financial resources to improve their children's quality of life compared to investing time in having more kids. Simultaneously, economic growth raises the returns to these financial investments in quality (e.g. education). In addition to higher incomes, people today have more diverse and exciting options for leisure. DINKs can go to Trader Joe's and workout classes on the weekend, play video games, watch Netflix, and go on international vacations. These rising opportunity costs accumulate into the large and pervasive declines in fertility that we see in the data. If this explanation is correct, it puts a double bind on the case for economic growth. Unless AI upends the million-year-old relationship between population and technological progress just in time, progress seems self-defeating. The increases in labor productivity and leisure opportunities that make economic growth so important also siphon resources away from the future contributors to that growth. Empirically, the opportunity cost of having kids has grown large enough to bring fertility well below replacement levels all around the world. The opportunity cost explanation suggests we have to pick between high incomes and sustainable fertility. Luckily, this explanation is not correct. At least not entirely. There are several observations that the opportunity cost theory cannot explain without clarification. Across and within countries today, the relationship between income and fertility is positive or U-shaped. Further economic growth can raise everyone's incomes to the upward sloping part of the relationship and begin a 2nd demographic transition. Micro Data Above $200k a year, fertility is increasing in household income. ** Update ** I replicated this graph from more recent ACS data (2018-2022) and also weighted each point by population to give a sense of the size of each of these income brackets. This U-shaped relationship holds up in multiple data sources with different measures of fertility. The households in the top percentiles of income stand to lose far more future wages from having children, but they have ~20 more children per hundred households than the middle income percentiles. This isn't exactly inconsistent with opportunity cost but it requires some explanation. The number of dollars that households are giving up by having children is increasing in household income, but as you get more and more dollars, each one is worth less.
Going from making, say, $75 to $150 an hour pushes you to work more hours, but if you go from $150 to $500, you might be happy to work half as many hours for more money and spend the time on other things, like starting a family. So while the dollar opportunity cost of having kids is always increasing in household income, the utility opportunity cost is not. The positively sloped section of the relationship between income and fertility isn't just spurious correlation either. Random shocks to wealth, like lottery winnings, also increase fertility. This rules out the DINK leisure time explanation for low ferti...
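To make the marginal-utility argument concrete, here is a minimal numeric sketch in Python. It is my illustration rather than anything from the original post: it assumes CRRA utility with curvature above one and a fixed annual time cost of raising a child, and every parameter value is hypothetical.

# Sketch (assumed parameters): with CRRA utility and curvature gamma > 1,
# a fixed time cost of children means the dollar opportunity cost rises
# with the hourly wage while the utility opportunity cost falls.
HOURS = 2000         # assumed paid hours per year without a child
CHILD_HOURS = 500    # assumed paid hours given up to raise a child
GAMMA = 2.0          # assumed curvature of utility over consumption

def utility(consumption: float, gamma: float = GAMMA) -> float:
    # CRRA utility; for gamma = 2 this is -1/c up to an affine transform.
    return consumption ** (1 - gamma) / (1 - gamma)

for wage in [50, 75, 150, 500]:
    income_full = wage * HOURS
    income_parent = wage * (HOURS - CHILD_HOURS)
    dollar_cost = income_full - income_parent                      # rises with wage
    utility_cost = utility(income_full) - utility(income_parent)   # falls with wage
    print(f"${wage}/hr: gives up ${dollar_cost:,} but only {utility_cost:.2e} utils")

Under these assumptions the forgone dollars grow in proportion to the wage while the forgone utility shrinks, which is consistent with fertility turning back upward at high incomes.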
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metascience of the Vesuvius Challenge, published by Maxwell Tabarrok on March 30, 2024 on The Effective Altruism Forum. The Vesuvius Challenge is a million+ dollar contest to read 2,000-year-old text from charcoal-papyri using particle accelerators and machine learning. The scrolls come from the ancient villa town of Herculaneum, near Pompeii, which was similarly buried and preserved by the eruption of Mt. Vesuvius. The prize fund comes from tech entrepreneurs and investors Nat Friedman, Daniel Gross, and several other donors. In the 9 months after the prize was announced, thousands of researchers and students worked on the problem, decades-long technical challenges were solved, and the amount of recovered text increased from one or two splotchy characters to 15 columns of clear text with more than 2000 characters. The success of the Vesuvius Challenge validates the motivating insight of metascience: It's not about how much we spend, it's about how we spend it. Most debate over science funding concerns a topline dollar amount. Should we double the budget of the NIH? Do we spend too much on Alzheimer's and too little on mRNA? Are we winning the R&D spending race with China? All of these questions implicitly assume a constant exchange rate between spending on science and scientific progress. The Vesuvius Challenge is an illustration of exactly the opposite. The prize pool for this challenge was a little more than a million dollars. Nat Friedman and friends probably spent more on top of that hiring organizers, building the website, etc. But still, this is pretty small in the context of academic grants. A million dollars donated to the NSF or NIH would have been forgotten if it was noticed at all. Even a direct grant to Brent Seales, the computer science professor whose research laid the groundwork for reading the scrolls, probably wouldn't have induced a tenth as much progress as the prize pool did, at least not within 9 months. It would have been easy to spend ten times as much on this problem and get ten times less progress out the other end. The money invested in this research was of course necessary, but the spending was not sufficient; it needed to be paired with the right mechanism to work. The success of the challenge hinged on design choices at a level of detail beyond just a grants-vs-prizes dichotomy. Collaboration between contestants was essential for the development of the prize-winning software. The Discord server for the challenge was (and is) full of open-sourced tools and discoveries that helped everyone get closer to reading the scrolls. A single, large grand prize is enticing but it's also exclusive. Only one submission can win, so the competition becomes more zero-sum and keeping secrets is more rewarding. Even if this larger prize had the same expected value to each contestant, it would not have created as much progress because more research would be duplicated as less is shared. Nat Friedman and friends addressed this problem by creating several smaller progress prizes to reward open-source solutions to specific problems along the path to reading the scrolls or just open-ended prize pools for useful community contributions. They also added second-place and runner-up prizes.
These prizes funded the creation of data labeling tools that everyone used to train their models and visualizations that helped everyone understand the structure of the scrolls. They also helped fund the contestants' time and money investments in their submissions. Luke Farritor, one of the grand prize winners, used winnings from the First Letters prize to buy the computers that trained his prize-winning model. A larger grand prize can theoretically provide the same incentive, but it's a lot harder to buy computers with expected value! Nat and his team also decided to completely swit...
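As a rough way to see why it is hard to "buy computers with expected value," here is a toy comparison in Python; the prize sizes and win probabilities are made-up numbers chosen for illustration, not figures from the actual challenge.

# Toy comparison (all numbers hypothetical): a winner-take-all grand prize and
# a small open progress prize can offer a contestant similar expected payouts,
# but only the progress prize pays out early enough to fund compute.
GRAND_PRIZE = 700_000        # hypothetical winner-take-all pot
P_WIN_GRAND = 0.01           # hypothetical chance of winning it outright

PROGRESS_PRIZE = 10_000      # hypothetical milestone prize for shared tools
P_WIN_PROGRESS = 0.70        # hypothetical chance of winning by open-sourcing work

print(f"Expected value of chasing the grand prize:  ${GRAND_PRIZE * P_WIN_GRAND:,.0f}")
print(f"Expected value of one progress prize:       ${PROGRESS_PRIZE * P_WIN_PROGRESS:,.0f}")
# Similar expected values, but the progress prize is cash in hand months
# earlier and rewards sharing, while the grand prize rewards secrecy.

The point is not the particular numbers but the structure: intermediate, open prizes convert expected value into spendable money and shared tools midway through the contest.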
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Need Major, But Not Radical, FDA Reform, published by Maxwell Tabarrok on February 25, 2024 on LessWrong. Fellow progress blogger Alex Telford and I have had a friendly back-and-forth going over FDA reform. Alex suggests incremental reforms to the FDA, which I strongly support, but these don't go far enough. The FDA's failures merit a complete overhaul: Remove efficacy requirements and keep only basic safety testing and ingredient verification. Any drug that doesn't go through efficacy trials gets a big red warning label, but is otherwise legal. Before getting into Alex's points let me quickly make the positive case for my position. The FDA is punished for errors of commission: drugs they approve which turn out not to work or to be harmful. They don't take responsibility for errors of omission: drugs they could have approved earlier but delayed, or drugs that would have been developed but were abandoned due to the cost of approval. This asymmetry predictably leads to overcaution. Every week the Covid-19 vaccines were delayed, for example, cost at least four thousand lives. Pfizer sent its final Phase 3 data to the FDA on November 20th, but the vaccine was not approved until three weeks later, on December 11th. There were successful Phase I/II human trials and successful primate-challenge trials 5 months earlier in July. Billions of doses of the vaccine were ordered by September. Every week, thousands of people died while the FDA waited for more information even after we were confident that the vaccine would not hurt anybody and was likely to prevent death. The extra information that the FDA waited months to get was not worth the tens of thousands of lives it cost (a rough tally of that arithmetic is sketched below). Scaling back the FDA's mandatory authority to safety and ingredient testing would correct for this deadly bias. This isn't as radical as it may sound. The FDA didn't have efficacy requirements until 1962. Today, off-label prescriptions already operate without efficacy requirements. Doctors can prescribe a drug even if it has not gone through FDA-approved efficacy trials for the malady they are trying to cure. These off-label prescriptions are effective, and already make up ~20% of all prescriptions written in the US. Removing mandatory efficacy trials for all drugs is equivalent to expanding this already common practice. Now, let's get to Alex's objections. Most of his post was focused on my analogy between pharmaceuticals and surgery. There are compelling data and arguments on both sides, and his post shifted my confidence in the validity and conclusions of the analogy downwards, but in the interest of not overinvesting in one particular analogy I'll leave that debate where it stands and focus more on Alex's general arguments in favor of the FDA. Patent medicines and snake oil Alex notes that we can look to the past, before the FDA was created, to get an idea of what the pharmaceutical market might look like with less FDA oversight. Maxwell argues that in the absence of government oversight, market forces would prevent companies from pushing ineffective or harmful drugs simply to make a profit. Except that there are precedents for exactly this scenario occurring. Until they were stamped out by regulators in the early 20th century, patent medicine hucksters sold ineffective, and sometimes literally poisonous, nostrums to desperate patients.
We still use "snake oil" today as shorthand for a scam product. There is no denying that medicine has improved massively over the past 150 years alongside expanding regulatory oversight, but this relationship is not causal. The vast majority of gains in the quality of medical care are due to innovations like antibiotics, genome sequencing, and robotic surgery. A tough and discerning FDA in the 1870s which allowed only the best available treatments to be marketed would not have improv...
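Here is the rough tally referenced above, using only the figures quoted in the post (at least four thousand deaths per week of delay, a three-week gap between the final Phase 3 submission and approval, and usable human and primate evidence roughly five months earlier). This is my back-of-the-envelope sketch in Python, not part of the original article.

# Back-of-the-envelope, using only the figures quoted above (all approximate).
DEATHS_PER_WEEK = 4_000              # "at least four thousand lives" per week of delay

final_review_weeks = 3               # November 20 submission to December 11 approval
weeks_since_july_evidence = 5 * 4.3  # Phase I/II and primate data ~5 months earlier

print(f"Final review alone: ~{final_review_weeks * DEATHS_PER_WEEK:,.0f} lives")
print(f"Counting from the July evidence: ~{weeks_since_july_evidence * DEATHS_PER_WEEK:,.0f} lives")

Under those stated figures, even the three-week review corresponds to roughly twelve thousand deaths, and counting from July pushes the total well into the tens of thousands, which is the magnitude the post asserts.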
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Surgery Works Well Without The FDA, published by Maxwell Tabarrok on January 27, 2024 on LessWrong. Here is a conversation from the comments of my last post on the FDA with fellow progress blogger Alex Telford that follows a pattern common to many of my conversations about the FDA: Alex: Most drugs that go into clinical trials (90%) are less effective or safe than existing options. If you release everything onto the market you'll get many times more drugs that are net toxic (biologically or financially) than the good drugs you'd get faster. You will almost surely do net harm. Max: Companies don't want to release products that are worse than their competitors. Companies test lots of cars or computers or ovens which are less effective or safe than existing options, but they only release the ones that are competitive. This isn't because most consumers could tell whether their car is less efficient or their computer is less secure, and it's not because making a less efficient car or less secure computer is against the law. Pharmaceutical companies won't go and release hundreds of dud or dangerous drugs just because they can. That would ruin their brand and shut down their business. They have to sell products that people want. Alex: Consumer products like ovens and cars aren't comparable to drugs. The former are engineered products that can be tested according to defined performance and safety standards before they are sold to the public. The characteristics of drugs are more discovered than engineered. You can't determine their performance characteristics in a lab; they can only be determined through human testing (currently). Alex claims that without the FDA, pharmaceutical companies would release lots of bunk drugs. I respond that we don't see this behavior in other markets. Car companies or computer manufacturers could release cheaply made, low-quality products for high prices and consumers might have a tough time noticing the difference for a while. But they don't do this; they always try to release high-quality products at competitive prices. Alex responds, fairly, that car or computer markets aren't comparable to drug markets. Pharmaceuticals have stickier information problems. They are difficult for consumers to evaluate and, as Alex points out, usually require human testing. This is usually where the conversation ends. I think that consumer product markets are informative about what free-market pharmaceuticals would look like; Alex (and lots of other reasonable people) disagrees, and it is difficult to convince each other otherwise. But there's a much better non-FDA counterfactual for pharmaceutical markets than consumer tech: surgery. The FDA does not have jurisdiction over surgical practice and there is no other similar legal requirement for safety or efficacy testing of new surgical procedures. The FDA does regulate medical devices like the da Vinci surgical robot, but once they are approved, surgeons can use them in new ways without consulting the FDA or any other government authority. In addition to this lack of regulation, surgery is beset with even thornier information problems than pharmaceuticals. Evaluating the quality of surgery as a customer is difficult. You're literally unconscious as they provide the service, and retrospective observation of quality is usually not possible for a layman.
Assessing quality is difficult even for a regulator, however. So much of surgery hinges on the skill of a particular surgeon, and that skill varies within surgeons from day to day, or even before and after lunch. Running an RCT on a surgical technique is therefore difficult. Standardizing treatment as much as in pharmaceutical trials is basically impossible. It also isn't clear what a surgical placebo should be. Do you just put them under anesthetic for a few hours? Or do you cut people open and s...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Land Reclamation is in the 9th Circle of Stagnation Hell, published by Maxwell Tabarrok on January 13, 2024 on LessWrong. Land reclamation is a process where swamps, wetlands, or coastal waters are drained and filled to create more dry land. Despite being complex and technologically intensive, land reclamation is quite old and was common in the past. The reclamation of the Dutch lowland swamps since the 13th century is well-known. Perhaps less well known is that almost every major American city had major land reclamation projects in the 19th and 20th centuries. Boston changed the most, with well over half of the modern downtown being underwater during the American Revolution, but it's not unique. New York, San Francisco, Seattle, Chicago, Newark, Philadelphia, Baltimore, Washington, and Miami have all had several major land reclamation projects. Today, land prices in these cities are higher than ever, dredging ships are bigger, construction equipment is more powerful, landfills and foundations are more stable, and rising sea levels provide even more reason to expand shorelines, but none of these cities have added any land in 50 years or more. Land reclamation is a technologically feasible, positive-sum way to build our way out of a housing crisis and to protect our most important cities from flooding, but it's never coming back. The 9th Circle of Stagnation Hell Land reclamation is simultaneously harried by every single one of the anti-progress demons who guard Stagnation Hell. Let's take a trip to see what it's like. The first circle of Stagnation Hell is environmental review. The guardian demon, NEPA-candezzar, has locked congestion pricing and transmission lines in the corner and is giving them a thousand paper cuts an hour for not making their reports long enough. Land reclamation suffers from environmental review in the same way as all other major infrastructure projects, or it would if anyone even tried to get one approved. Reclamation clearly has environmental effects, so a full Environmental Impact Statement would be required, adding 3-15 years to the project timeline. There's also NEPA-candezzar's three-headed dog: wetland conservation, which, while less common, is extra vicious. Lots of land reclamation happens by draining marshes and wetlands. NEPA reviews are arduous but ultimately standardless, i.e. they don't set a maximum level of environmental damage; they just require that all possible options are considered. Wetland conservation is more straightforward: wetlands are federally protected and can't be developed. The second circle is zoning. This circle looks like a beautiful neighborhood of detached single-family homes, but every corner is filled with drug markets and stolen goods and every home is eight million dollars. Most land reclamation projects have become large housing developments or new airports, both of which are imperiled by strict zoning. The third circle is the Foreign Dredging Act. This watery hell is guarded by an evil kraken which strikes down any ship not up to its exacting standards. This law requires that any dredging ship (essentially a ship with a crane on it) be American-made and American-crewed. This law makes dredging capacity so expensive that the scale required for a large land reclamation project may not even exist in the domestic market. Next is cost disease, a walking plague.
Construction labor is a massive input into land reclamation and the building that comes after it. Productivity growth in this sector has been slow relative to other industries, which raises the opportunity cost of this labor, another reason why land reclamation was more common in the past. The final circle is low-hanging fruit. The shallowest estuaries and driest marshes have already been reclaimed, leaving only deeper waters that are harder to fill....
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Girlfriends Won't Matter Much, published by Maxwell Tabarrok on December 23, 2023 on LessWrong. Love and sex are pretty fundamental human motivations, so it's not surprising that they are incorporated into our vision of future technology, including AI. The release of Digi last week immanentized this vision more than ever before. The app combines a sycophantic and flirtatious chat feed with an animated character "that eliminates the uncanny valley, while also feeling real, human, and sexy." Their marketing material unabashedly promises "the future of AI Romantic Companionship," though most of the replies are begging them to break their promise and take it back. Despite the inevitable popularity of AI girlfriends, however, they will not have large counterfactual impact. AI girlfriends and similar services will be popular, but they have close non-AI substitutes which have essentially the same cultural effect on humanity. The trajectory of our culture around romance and sex won't change much due to AI chatbots. So what is the trajectory of our culture of romance? Long before AI, there has been a trend towards less sex, less marriage, and more online porn. AI girlfriends will bring down the marginal cost of chatrooms, porn, and OnlyFans. These are popular services, so if a fraction of their users switch over, AI girlfriends will be big. But the marginal cost of these services is already extremely low. Generating custom AI porn from a prompt is not much different than typing that prompt into your search bar and scrolling through the billions of hours of existing footage. The porno latent space has been explored so thoroughly by human creators that adding AI to the mix doesn't change much. AI girlfriends will be cheaper and more responsive, but again, there are already cheap ways to chat with real human girls online and most people choose not to. Demand is already close to satiated at current prices. AI girlfriends will shift the supply curve outwards and lower the price, but if everyone who wanted it was getting it already, it won't increase consumption (a toy numerical sketch of this appears below). My point is not that nothing will change, but rather that the changes from AI girlfriends and porn can be predicted by extrapolating the pre-AI trends. In this context at least, AI is a mere continuation of the centuries-long trend of decreasing costs of communication and content creation. There will certainly be addicts and whales, but there are addicts and whales already. Human-made porn and chatrooms are near free and infinite, so you probably won't notice much when AI makes them even nearer free and even nearer infinite. Misinformation and Deepfakes There is a similar argument for other AI outputs. Humans have been able to create convincing and, more importantly, emotionally affecting fabrications since the advent of language. More recently, information technology has brought down the cost of convincing fabrication by several orders of magnitude. AI stands to bring it down further. But people adapt and build their immune systems. Anyone who follows the Marvel movies has been prepared to see completely photorealistic depictions of terrorism or aliens or apocalypse and understand that they are fake.
There are other reasons to worry about AI, but changes from AI girlfriends and deepfakes are only marginal extensions of pre-AI capabilities that likely would have been replicated by other techniques without AI. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
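As a small illustration of the satiated-demand point above, here is a toy linear supply-and-demand sketch in Python; the curves and numbers are arbitrary assumptions of mine, not anything measured in the post.

# Toy model (all numbers arbitrary): when demand is nearly satiated, i.e. very
# steep, an outward supply shift mostly lowers the price and barely changes
# the quantity consumed.

def equilibrium(a, b, c, d):
    # Solve demand P = a - b*Q against supply P = c + d*Q.
    q = (a - c) / (b + d)
    p = a - b * q
    return q, p

A, B = 10_000.0, 1_000.0   # very steep (inelastic) demand curve
D = 1.0                    # supply slope

for label, c in [("human-made content", 20.0), ("AI lowers marginal cost", 2.0)]:
    q, p = equilibrium(A, B, c, D)
    print(f"{label}: quantity {q:.3f}, price {p:.2f}")

In this toy setup the cost reduction cuts the equilibrium price by more than half while quantity moves by a fraction of a percent, which is the sense in which cheaper AI companionship mainly displaces existing consumption rather than expanding it.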
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Scott on Abolishing the FDA, published by Maxwell Tabarrok on December 15, 2023 on LessWrong. Scott Alexander's recent post on the FDA raises the average level of discourse on the subject. He starts from the premise that the FDA deserves destruction but cautions against rash action. No political slogan can be implemented without clarification and "Abolish the FDA" is no different, but Scott's objections aren't strong reasons to stop short of a policy implementation that still retains the spirit behind the slogan. Scott's preferred proposal is to essentially keep the authority and structure of the FDA the same but expand the definition of supplements and experimental drugs. This way, fewer drugs are illegal but there aren't big ripple effects on the prescription and health insurance systems that we have to worry about. The more hardline libertarian proposal is to restrict the FDA's mandatory authority to labeling and make their efficacy testing completely non-binding. This would turn the FDA into an informational consumer protection agency rather than a drug regulator. They can slap big red labels on non-FDA-approved drugs and invite companies to run efficacy tests to get nice green labels instead, but they can't prevent anyone from taking a drug if they want it. Let's go through Scott's objections to the hardline plan and see if they give good reasons to favor one over the other. Are we also eliminating the concept of prescription medication? I can see some "If I were king of the world" overhauls to the health system that might do away with mandatory prescriptions, but I think the point of this exercise is to see if we can abolish the FDA without changing anything else and still come out ahead, accounting for costly second-order effects from the rest of the messed up health system. So no, the hardline "abolish the FDA" plan would not remove the legal barrier of prescription. Here is Scott's response: But if we don't eliminate prescriptions, how do you protect prescribers from liability? Even the best medications sometimes cause catastrophic side effects. Right now your doctor doesn't worry you'll sue them, because "the medication was FDA-approved" is a strong defense against liability. But if there are thousands of medications out there, from miraculous panaceas to bleach-mixed-with-snake-venom, then it becomes your doctor's responsibility to decide which are safe-and-effective vs. dangerous-and-useless. And rather than take that responsibility and get sued, your doctor will prefer to play it safe and only use medications that everyone else uses, or that were used before the FDA was abolished. This is a reasonable concern: litigation pressure is a common culprit behind spiraling regulatory burden. But in this case we can be confident that turning the FDA into a non-binding informational board won't turn prescriptions into an even higher legal hurdle, because doctors already prescribe far outside of FDA approval. When a drug is tested by the FDA it is tested as a treatment for a specific condition, like diabetes or throat cancer. If the drug is approved, it is approved only for the outcome measured in efficacy testing and nothing else. However, doctors know that certain drugs approved for one thing are effective at treating others. So they can issue an "off-label" prescription based on their professional opinion.
Perhaps 20% of all prescriptions in the US are made off-label and more than half of doctors make some off-label prescriptions. So doctors are clearly willing to leave the legal umbrella of FDA approval when they make prescription decisions. There are lots of high-profile legal cases about off-label prescriptions but they are mostly about marketing and they haven't dampened doctors' participation in the practice. If doctors were comfortable enough to pres...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Offense-Defense Balance Rarely Changes, published by Maxwell Tabarrok on December 9, 2023 on LessWrong. You've probably seen several conversations on X go something like this: Michael Doomer: Advanced AI can help anyone make bioweapons. If this technology spreads it will only take one crazy person to destroy the world! Edward Acc: I can just ask my AI to make a vaccine. Yann LeCun: My good AI will take down your rogue AI. The disagreement here hinges on whether a technology will enable offense (bioweapons) more than defense (vaccines). Predictions of the "offense-defense balance" of future technologies, especially AI, are central in debates about techno-optimism and existential risk. Most of these predictions rely on intuitions about how technologies like cheap biotech, drones, and digital agents would affect the ease of attacking or protecting resources. It is hard to imagine a world with AI agents searching for software vulnerabilities and autonomous drones attacking military targets without imagining a massive shift in the offense-defense balance. But there is little historical evidence for large changes in the offense-defense balance, even in response to technological revolutions. Consider cybersecurity. Moore's law has taken us through a seven-order-of-magnitude reduction in the cost of compute since the 70s. There were massive changes in the form and economic uses of computer technology along with the increase in raw compute power: encryption, the internet, e-commerce, social media and smartphones. The usual offense-defense balance story predicts that big changes to technologies like this should have big effects on the offense-defense balance. If you had told people in the 1970s that in 2020 terrorist groups and lone psychopaths could access more computing power from their pocket than IBM had ever produced at the time, what would they have predicted about the offense-defense balance of cybersecurity? Contrary to their likely prediction, the offense-defense balance in cybersecurity seems stable. Cyberattacks have not been snuffed out but neither have they taken over the world. All major nations have defensive and offensive cybersecurity teams but no one has gained a decisive advantage. Computers still sometimes get viruses or ransomware, but they haven't grown to endanger a large percentage of the GDP of the internet. The US military budget for cybersecurity has increased by about 4% a year from 1980 to 2020, which is faster than GDP growth, but in line with GDP growth plus the growing fraction of GDP that's on the internet. This stability through several previous technological revolutions raises the burden of proof for why the offense-defense balance of cybersecurity should be expected to change radically after the next one. The stability of the offense-defense balance isn't specific to cybersecurity. The graph below shows the per-capita rate of death in war from 1400 to 2013. This graph contains all of humanity's major technological revolutions. There is lots of variance from year to year but almost zero long-run trend. Does anyone have a theory of the offense-defense balance which can explain why the per-capita deaths from war should be about the same in 1640, when people were fighting with swords and horses, as in 1940, when they were fighting with airstrikes and tanks?
It is very difficult to explain the variation in this graph with variation in technology. Per-capita deaths in conflict are noisy and cyclic while progress in technology is relatively smooth and monotonic. No previous technology has changed the frequency or cost of conflict enough to move this metric far beyond the maximum and minimum range that was already set between 1400 and 1650. Again the burden of proof is raised for why we should expect AI to be different. The cost to sequence a human genome h...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scaling laws for dominant assurance contracts, published by jessicata on November 30, 2023 on LessWrong. (note: this post is high in economics math, probably of narrow interest) Dominant assurance contracts are a mechanism proposed by Alex Tabarrok for funding public goods. The following summarizes a 2012 class paper of mine on dominant assurance contracts. Mainly, I will be determining how much the amount of money a dominant assurance contract can raise as a function of how much value is created for how many parties, under uncertainty about how much different parties value the public good. Briefly, the conclusion is that, while Tabarrok asserts that the entrepreneur's profit is proportional to the number of consumers under some assumptions, I find it is proportional to the square root of the number of consumers under these same assumptions. The basic idea of assurance contracts is easy to explain. Suppose there are N people ("consumers") who would each benefit by more than $S > 0 from a given public good (say, a piece of public domain music) being created, e.g. a park (note that we are assuming linear utility in money, which is approximately true on the margin, but can't be true at limits). An entrepreneur who is considering creating the public good can then make an offer to these consumers. They say, everyone has the option of signing a contract; this contract states that, if each other consumer signs the contract, then every consumer pays $S, and the entrepreneur creates the public good, which presumably costs no more than $NS to build (so the entrepreneur does not take a loss). Under these assumptions, there is a Nash equilibrium of the game, in which each consumer signs the contract. To show this is a Nash equilibrium, consider whether a single consumer would benefit by unilaterally deciding not to sign the contract in a case where everyone else signs it. They would save $S by not signing the contract. However, since they don't sign the contract, the public good will not be created, and so they will lose over $S of value. Therefore, everyone signing is a Nash equilibrium. Everyone can rationally believe themselves to be pivotal: the good is created if and only if they sign the contract, creating a strong incentive to sign. Tabarrok seeks to solve the problem that, while this is a Nash equilibrium, signing the contract is not a dominant strategy. A dominant strategy is one where one would benefit by choosing that strategy (signing or not signing) regardless of what strategy everyone else takes. Even if it would be best for everyone if everyone signed, signing won't make a difference if at least one other person doesn't sign. Tabarrok solves this by setting a failure payment $F > 0, and modifying the contract so that if the public good is not created, the entrepreneur pays every consumer who signed the contract $F. This requires the entrepreneur to take on risk, although that risk may be small if consumers have a sufficient incentive for signing the contract. Here's the argument that signing the contract is a dominant strategy for each consumer. Pick out a single consumer and suppose everyone else signs the contract. Then the remaining consumer benefits by signing, by the previous logic (the failure payment is irrelevant, since the public good is created whenever the remaining consumer signs the contract). 
Now consider a case where not everyone else signs the contract. Then by signing the contract, the remaining consumer gains $F, since the public good is not created. If they don't sign the contract, they get nothing and the public good is still not created. This is still better for them. Therefore, signing the contract is a dominant strategy. What if there is uncertainty about how much the different consumers value the public good? This can be modeled as a Bayesi...
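The dominance argument in the entry above is easy to check numerically. Below is a minimal Python sketch that is not from the original post; the values V, S, and F are hypothetical, chosen only to satisfy V > S > 0 and F > 0, and the two branches mirror the two cases discussed.

```python
# Toy payoff check for a dominant assurance contract (illustrative numbers only).
# V: this consumer's value if the public good is created (assumed V > S)
# S: payment owed by each signer when every consumer signs
# F: failure payment the entrepreneur pays each signer if the good is not created
V, S, F = 10.0, 7.0, 1.0

def payoff(signs: bool, everyone_else_signs: bool) -> float:
    """Payoff to one consumer, given their choice and whether all others sign."""
    if everyone_else_signs:
        # The good is created only if this consumer also signs.
        return V - S if signs else 0.0
    # At least one other consumer refuses, so the good is never created;
    # signers still collect the failure payment F.
    return F if signs else 0.0

for others in (True, False):
    sign, dont = payoff(True, others), payoff(False, others)
    print(f"everyone else signs={others}: sign -> {sign:.1f}, don't sign -> {dont:.1f}")
# With these numbers, signing pays strictly more in both cases,
# which is the dominant-strategy property described above.
```

With the plain assurance contract (F = 0), signing is only a best response when everyone else signs; the positive failure payment is what makes signing dominant.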
What is your life worth? And animal life? How much is a natural park or a beach worth? It is common to say that certain things "have no price." The expression usually assumes their value is incalculable; but economists such as Hugo Figueiredo disagree, arguing that refusing to assign a value produces exactly the opposite effect: the value effectively becomes zero. This devaluation has profound consequences and ends up shaping how we respect (or fail to respect) nature, human life, and much else. In conversation with Hugo van der Ding, Hugo Figueiredo gives countless examples and helps us understand how it is possible to run cost-benefit analyses and calculate the "price" of everything that seems impossible to quantify, even when the benefits are intangible. Does that sound too harsh and clinical? Be warned: your life is worth far more than you probably imagine. REFERENCES AND USEFUL LINKS: How much is a life worth? Viscusi, W. (2018). Pricing Lives: Guideposts for a Safer Society. Princeton University Press. Kniesner, T. J., & Viscusi, W. K. (2019). The Value of a Statistical Life. In Oxford Research Encyclopedia of Economics and Finance. Friedman, H. S. (2021). Ultimate Price: The Value We Place on Life. University of California Press. Thaler, R., & Rosen, S. (1976). The Value of Saving a Life: Evidence from the Labor Market. In Household Production and Consumption (pp. 265-302). NBER. Athey, S., Kremer, M., Snyder, C., & Tabarrok, A. (2020). In the Race for a Coronavirus Vaccine, We Must Go Big. Really, Really Big. New York Times, 4. Valuing environmental goods: Banzhaf, H. (2023). Pricing the Priceless: A History of Environmental Economics (Historical Perspectives on Modern Economics). Cambridge: Cambridge University Press. Riera, P., McConnell, K. E., Giergiczny, M., & Mahieu, P. A. (2011). Applying the Travel Cost Method to Minorca Beaches: Some Policy Results. In The International Handbook on Non-Market Environmental Valuation, 60-73. Haefele, M., Loomis, J. B., & Bilmes, L. (2016). Total Economic Valuation of the National Park Service Lands and Programs: Results of a Survey of the American Public. Harvard Kennedy School Working Paper No. 48. Economists: Kip Viscusi, Howard Friedman, Spencer Banzhaf. BIOS: HUGO VAN DER DING: Hugo van der Ding is many characters: broadcaster, creative, and accidental cartoonist. A kind of instant-success cartoonist who needed only a Bic pen, a good idea, and a blank sheet of paper. Creator of successful digital characters such as A Criada Malcriada and Cavaca a Presidenta, and author of one of the most-listened-to podcasts in Portugal, Vamos Todos Morrer, he can be found, or rather heard, every morning on Antena 3 or behind the drawings that turn up every day here and there. HUGO FIGUEIREDO: He is a professor of Economics at the Universidade de Aveiro, a researcher at CIPES - Centro de Investigação em Políticas do Ensino Superior, and a collaborator of GOVCOPP – Unidade de Investigação em Governança, Competitividade e Políticas Públicas. He holds a degree in Economics from the Universidade do Porto and a PhD in Business Studies from the University of Manchester. His research interests center on labor economics, the economics of education, and higher education.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Double Feature on The Extropians, published by Maxwell Tabarrok on June 4, 2023 on The Effective Altruism Forum. Link-post for two pieces I just wrote on the Extropians. The Extropians were an online group of techno-optimist transhumanist libertarians active in the 90s who influence a lot of online intellectual culture today, especially in EA and Rationalism. Prominent members include Eliezer Yudkowsky, Nick Bostrom, Robin Hanson, Eric Drexler, Marvin Minsky and all three of the likely candidates for Satoshi Nakamoto (Hal Finney, Wei Dai, and Nick Szabo). The first piece is a deep dive into the archived Extropian forum. It was super fun to write and I was constantly surprised about how much of the modern discourse on AI and existential risk had already been covered in 1996. The second piece is a retrospective on predictions made by Extropians in 1995. Eric Drexler, Nick Szabo and 5 other Extropians give their best estimates for when we'll have indefinite biological lifespans and reproducing asteroid eaters. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Double-Feature on The Extropians, published by Maxwell Tabarrok on June 3, 2023 on LessWrong. Link-post for two pieces I just wrote on the Extropians. The Extropians were an online group of techno-optimist transhumanist libertarians active in the 90s who influence a lot of online intellectual culture today. Prominent members include Eliezer Yudkowsky, Nick Bostrom, Robin Hanson, Eric Drexler, Marvin Minsky and all three of the likely candidates for Satoshi Nakamoto (Hal Finney, Wei Dai, and Nick Szabo). The first piece is a deep dive into the archived Extropian forum. It was super fun to write and I was constantly surprised about how much of the modern discourse on AI and existential risk had already been covered in 1996. The second piece is a retrospective on predictions made by Extropians in 1995. Eric Drexler, Nick Szabo and 5 other Extropians give their best estimates for when we'll have indefinite biological lifespans and reproducing asteroid eaters. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mary Theroux is Chairman of the Board of Directors and CEO of the Independent Institute. She was married to the late David Theroux. In this episode, she gives the background on how she and David met, and the early years of the Independent Institute. Along the way she touches on many topics including the CS Lewis Society, the war fever on America's right wing, and the problem of homelessness. Mentioned in the Episode and Other Links of Interest: The YouTube version of this interview: https://youtu.be/rGiL7bz9WAs. A nice tribute to David Theroux from Alex Tabarrok: https://marginalrevolution.com/marginalrevolution/2022/04/david-theroux-rip.html. Tabarrok on bounty hunters: https://marginalrevolution.com/marginalrevolution/2003/10/bounty_hunters_.html. David's article on C.S. Lewis: https://www.independent.org/publications/article.asp?id=2846. Independent Institute's documentary on homelessness: https://www.beyondhomeless.org/. Their series Love Gov: https://www.independent.org/lovegov/. Bob's article on Nordhaus' carbon tax in The Independent Review: https://www.independent.org/pdf/tir/tir_14_02_03_murphy.pdf, and his book Choice (published by Independent): https://www.independent.org/store/book.asp?id=116&s=na#t-2. Help support the Bob Murphy Show: http://bobmurphyshow.com/contribute. The audio production for this episode was provided by Podsworth Media: http://podsworth.com/.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Parfit + Singer + Aliens = ?, published by Maxwell Tabarrok on October 12, 2022 on The Effective Altruism Forum. Summary: If you have a wide moral circle that includes non-human animals and have a low or zero moral discount rate, then the discovery of alien life should radically change your views on existential risk. A grad student plops down in front of her computer where her code has been running for the past few hours. After several months of waiting, she had finally secured time on the James Webb Space Telescope to observe a freshly discovered exoplanet: Proxima Centauri D. “That's strange.” Her eyes dart over the results. Proxima D's short 5-day orbit meant that she could get observations of both sides of the tidally locked planet. But the brightness of each side doesn't vary nearly as much as it should. The dark side is gleaming with light. (Image: Northern Italy at night.) The Argument: This argument requires a few assumptions: (1) strong evidence of intelligent alien life on a nearby planet; (2) future moral value is not inherently less important than present moral value; (3) many types of beings contain moral value, including nonhuman animals and aliens. I will call people who have a Singer-style wide moral circle and a Parfit-style concern for the long-term future “Longtermist EAs.” Given these assumptions, let's examine the basic argument given by Longtermist EAs for why existential risks should be a primary concern. Start with assumption 2. The lives of future human beings are not inherently less important than the lives of present ones. Should Cleopatra eat an ice cream that causes a million deaths today? Then consider that humanity may last for a very long time, or may be able to greatly increase the amount of moral value it sustains, or both. Therefore, the vast majority of moral value in the universe lies along these possible future paths where humanity does manage to last for a long time and support a lot of moral value. Existential risks make it less likely or impossible to end up on these paths, so they are extremely costly and important to avoid. But now introduce assumptions 1 and 3 and the argument falls apart. The link between the second and third points is broken when we discover another morally valuable species which also has a chance to settle the galaxy. Discovering aliens nearby means that there are likely billions of planetary civilizations in our galaxy. If, like Singer, you believe that alien life is morally valuable, then humanity's future is unimportant to the sum total of moral value in the universe. If we are destroyed by an existential catastrophe, another civilization will fill the vacuum. If humanity did manage to preserve itself and expand, most of the gains would be zero-sum, won at the expense of another civilization that might have taken our place. Most of the arguments for caring about human existential risk implicitly assume a morally empty universe if we do not survive. But if we discover alien life nearby, this assumption is probably wrong and humanity's value-over-replacement goes way down. Holding future and alien life to be morally valuable means that, on the discovery of alien life, humanity's future becomes a vanishingly small part of the morally valuable universe. In this situation, Longtermism ceases to be action relevant.
It might be true that certain paths into the far future contain the vast majority of moral value, but if there are lots of morally valuable aliens out there, the universe is just as likely to end up on one of these paths whether humans are around or not, so Longtermism doesn't help us decide what to do. We must either impartially hope that humans get to be the ones tiling the universe or go back to considering the nearer-term effects of our actions as more important. Consider Parfit's classic thought experiment: Option A: Peace O...
Cryptocurrencies. NFTs. Blockchain. The digital world is also revolutionizing our lives when it comes to transactions, and the best key for opening and understanding this door is the much-discussed blockchain. Hugo van der Ding wanted to know more, so he asked Joana Pais the questions that will let you understand why the blockchain was created, which transactions it facilitates, and its intimate connection with cryptocurrencies. Along the way, you will also learn how the art world joined the digital universe and how, through the blockchain, contracts have become smarter. REFERENCES AND USEFUL LINKS: Cowen, T. and Tabarrok, A. (2022). Cryptoeconomics. https://a16zcrypto.com/wp-content/uploads/2022/06/cryptoeconomics-chapter-in-modern-principles-of-economics_tylercowen-alextabarrok.pdf Satoshi Nakamoto (2008). Bitcoin: A Peer-to-Peer Electronic Cash System: https://bitcoin.org/bitcoin.pdf [Two-sided markets] Jean Tirole (2017). Economics for the Common Good. Princeton University Press. Chapter 14. BIOS: JOANA PAIS: Joana Pais is a professor of Economics at ISEG, Universidade de Lisboa. She obtained her Ph.D. in Economics from the Universitat Autònoma de Barcelona in 2005. She currently coordinates ISEG's Master's program in Economics and its Ph.D. program in Economics, and is a member of the board of the research unit REM - Research in Economics and Mathematics. She also coordinates XLAB – Behavioural Research Lab, a laboratory that studies decision-making and economic, political, and social behavior, supported by the PASSDA consortium (Production and Archive of Social Science Data). Her research interests include game theory, in particular matching theory, market design, behavioral economics, and experimental economics. HUGO VAN DER DING: Hugo van der Ding was born in the late 1970s off the Bay of Biscay, during a voyage between Amsterdam and Lisbon, and grew up in a hippie community on the outskirts of Montpellier. He studied History of Oriental Decorative Arts, specializing in origami geese. In 2012 he gave up his academic career to draw for social media. After the success of A Criada Malcriada he no longer needed to work. Even so, he writes regularly for magazines and newspapers, is the author of several books and podcasts, occasionally does theater and television, and keeps drawing for social media. Since 2019 he has been one of the hosts of Manhãs da 3 on Antena 3.
On this week’s CSPI Podcast, Richard interviews the top three winners of the CSPI Essay Contest: Policy Reform For Progress. The first interview is with contest winner Andrew Kenneson, a program navigator at a public housing authority in Kodiak, Alaska and former reporter. In “Gathering Steam: Unlocking Geothermal Potential in the United States,” Andrew explains why exempting geothermal exploration on federally owned lands from NEPA requirements could set off a cascade of energy innovation. The second interview (starting at 29:12) is with Maxwell Tabarrok, an Econ and Math student at the University of Virginia whose essay on science funding reform “Mo’ Money Mo’ Problems” won second prize. Maxwell proposes a system of research guided funding in which the ~$120 billion spent by the federal government on science each year is distributed equally to the ~250,000 full-time STEM faculty at high research activity universities.The third interview (starting at 57:03) is with Brent Skorup, a senior research fellow at George Mason University's Mercatus Center and a visiting faculty fellow at the Nebraska Governance and Technology Center at the Nebraska College of Law. Brent’s 3rd place essay, “Drone Airspace: A New Global Asset Class,” outlines how public auctions for drone airspace would be an improvement on the FAA’s current plan to ration airspace to a few lucky companies.Listen in podcast form or watch on YouTube. Winning Essays:“Gathering Steam: Unlocking Geothermal Potential in the United States” by Andrew Kenneson“Mo’ Money Mo’ Problems” by Maxwell Tabarrok“Drone Airspace: A New Global Asset Class” by Brent SkorupHonorable Mentions: “The University-Government Complex” by William L. Krayer“It’s Time to Review the Institutional Review Boards” by Willy Chertman Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enlightenment Values in a Vulnerable World, published by Maxwell Tabarrok on July 18, 2022 on The Effective Altruism Forum. Enlightenment Values in a Vulnerable World. Introduction: The Vulnerable World Hypothesis: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semianarchic default condition. The Vulnerable World Hypothesis (VWH) is an influential 2019 paper by philosopher Nick Bostrom. It begins with a metaphor for technological progress: an urn full of balls representing technologies of varying degrees of danger and reward. A white ball is a technology which powerfully increases human welfare, while a black ball is one which “by default destroys the civilization that invents it.” Bostrom stipulates that “the term ‘civilizational devastation' in the VWH refers to any destructive event that is at least as bad as the death of 15 percent of the world population or a reduction of global GDP by > 50 percent lasting for more than a decade.” Given the dire consequences of such a technology, Bostrom argues for enlarged state capacity, especially in terms of global reach and surveillance, to prevent the devastating technology from being invented. The VWH is a wet blanket thrown over Enlightenment values; values which are popular with many EAs and among thinkers associated with progress studies such as David Deutsch, Steven Pinker, and Tyler Cowen. These Enlightenment values can be summarized as: political liberty, technological progress, and political liberty ⇒ technological progress. Even if technology has a highly positive expected value for human welfare, this can be easily outweighed by a small chance of catastrophic or existential risk. The value of political liberty is often tied to its promotion of technological progress. Large risks from technological progress would therefore confer large risks on political liberty. Bostrom highlights this connection but goes further. Not only is political liberty dangerous because of its facilitation of catastrophic technological risk, but strict political control is good (or at least better than you thought it was before) because it is necessary to prevent these risks. In response to a black ball technology Bostrom says that “It would be unacceptable if even a single state fails to put in place the machinery necessary for continuous surveillance and control of its citizens.” If Bostrom is right that even a small credence in the VWH requires continuously controlling and surveilling everyone on earth, then Enlightenment values should be rejected in the face of existential risk. We do not know whether the VWH is true, and it is undecidable via statistical analysis until we draw a black ball or empty the urn. Thus, I consider the implications of the VWH for Enlightenment values both when it is false and when it is true. If it is false, then traditional arguments for Enlightenment values become even stronger. If the VWH is true, I find that one can still reasonably believe that unconstrained technological progress and political liberty are important moral goods as both ends and means, as long as some properties of the urn are satisfied.
Even if these properties are not necessarily satisfied, I show that Bostrom's proposed solution of empowering a global government likely increases existential risk overall. Part 1: Outcomes Conditional on VWH Truth Value. VWH Is False: First, we can quickly consider what we should do if we knew that the VWH was false. In this case, the arguments made by progress studies in support of the set of Enlightenment values (political liberty, technological progress, and political liberty ⇒ technological progress) are proved stronger. Since we know that there are no bla...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Many People Are In The Invisible Graveyard?, published by Maxwell Tabarrok on April 19, 2022 on The Effective Altruism Forum. This is a cross-post from my blog Maximum Progress. More lives were saved by vaccines than by any other response to Covid-19. Vaccines save more lives if they are available earlier. I use two methods to estimate how many lives might have been saved given an earlier vaccine rollout. I then consider safety and efficacy information tradeoffs and non-regulatory constraints to vaccine rollout. I find that a 4-month acceleration to a vaccine rollout in August of 2020 was achievable and would have saved between 130 and 350 thousand lives over the next two years of the pandemic. I conclude with recommendations for future pandemics given the large benefits of vaccine acceleration. I. Introduction: Of all measures taken to curb the spread or danger of the Covid-19 pandemic, vaccines were by far the most effective. With three doses, a vaccinated person is 41 times less likely to die from Covid than an unvaccinated person. Incredibly, these vaccines were designed essentially over a weekend at the earliest stage of the pandemic in January of 2020. It took 11 months, billions of dollars, and thousands of man-hours for these life-saving vaccines to get through the FDA's emergency testing process. This was the fastest vaccine rollout ever, and it was the result of immense effort and ingenuity. Still, over three hundred thousand people died in those 11 months. The lives we could have saved with a faster vaccine rollout are in an invisible graveyard, since we cannot observe the counterfactual worlds where they still live. In sections II and III, I use two methods, linear estimation and SIR modeling, to perform counterfactual analyses of earlier vaccine approvals using CDC data and find that hundreds of thousands of lives could have been saved even with modest accelerations. Then, in section IV, I analyze the costs and feasibility of an accelerated rollout. Section V concludes with recommendations for future pandemics. II. Linear Estimation: In this section I approach the question of how many lives could have been saved by an earlier vaccine rollout by simply scaling down observed cases by a constant “vaccine effect” term which is calculated from CDC data on differences in death rates depending on vaccination status. Figure 1. Death Rates Among Unvaccinated (blue) and Vaccinated (red). Source: CDC data on differential death rates found on their website here. This data shows that unvaccinated people are at somewhere between 15 and 20 times higher risk of death from Covid than vaccinated people. Some of this effect is probably due to the selection of more cautious people into the vaccinated category who would have been less likely to die anyway. But there is also a selection of the most vulnerable into the vaccinated category, especially older people, which would make the vaccinated group as a whole look more likely to die. Direct studies of the vaccine's protection confirm large effects, so I take the mean of this vaccine effect (16.6) as the constant vaccine effect for the rest of this section. The CDC's data on the vaccine effect only goes back to April of 2021, but they have data on Covid-19 cases and deaths since the start of the pandemic. This data is what will be scaled up or down by the vaccine effect. Figure 2.
New Cases (blue) and Deaths (red) Per 100k Each Week Since January 2020 Source: CDC Data found on their website here. In the counterfactual estimations of earlier vaccine rollouts, the vaccine scaling effect will only apply to the percentage of the population who are vaccinated, over and above the percentage who were already vaccinated at that point in the actual rollout schedule, graphed below. Figure 3. Cumulative Double Dose Percentage Source: ...
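As a rough illustration of the linear-estimation step described in the entry above, here is a minimal Python sketch; the function name and the example week's numbers are hypothetical, and only the 16.6 vaccine-effect ratio comes from the entry itself.

```python
# Sketch of the linear scaling described above: deaths among the extra share of
# people vaccinated under the counterfactual rollout are divided by the vaccine
# effect; everyone else keeps the observed death rate. Numbers below are made up.
VACCINE_EFFECT = 16.6  # unvaccinated vs. vaccinated death-risk ratio (from the post)

def counterfactual_deaths(observed_deaths: float,
                          actual_vax_share: float,
                          earlier_vax_share: float) -> float:
    """Estimate one week's deaths had vaccination coverage been higher."""
    extra = max(earlier_vax_share - actual_vax_share, 0.0)
    scaling = (1.0 - extra) + extra / VACCINE_EFFECT
    return observed_deaths * scaling

# Hypothetical week: 10,000 observed deaths, 10% actually vaccinated,
# 50% vaccinated under a four-month-earlier rollout.
print(round(counterfactual_deaths(10_000, 0.10, 0.50)))  # ~6241, i.e. ~3759 averted
```

Summing a calculation like this over the weeks of the pandemic, with the counterfactual coverage shifted earlier, is the general shape of the estimate, though the post's own numbers come from the CDC series it cites.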
Government is a bureaucratic, slow-moving institution. It's too easily captured by special interests. It's often incapable of acting at the speed and scale our problems demand. And when it does act, it can make things worse. Look no further than the Food and Drug Administration's slowness to approve rapid coronavirus tests or major cities' inability to build new housing and public transit or Congress's failure to pass basic voting rights legislation. This criticism is typically weaponized as an argument for shrinking government and outsourcing its responsibilities to the market. But the past two years have revealed the hollowness of that approach. A pandemic is a problem the private sector simply cannot solve. The same is true for other major challenges of the 21st century, such as climate change and technology-driven inequality. Ours is an age in which government needs to be able to do big things, solve big problems and deliver where the market cannot or will not. Alex Tabarrok is an economist at George Mason University, a blogger at Marginal Revolution and for years has been one of the sharpest libertarian critics of big government. But the experience of the pandemic has changed his thinking in key ways. “Ninety-nine years out of 100, I'm a libertarian,” he told me last year. “But then there's that one year out of 100.” So this conversation is about the central tension that Tabarrok and I are grappling with right now: Government failure has never been more apparent — and yet we need government more than ever. We discuss (and debate) the public choice theory of government failure, why it's so damn hard to build things in America, how reforms intended to weaken special interests often empower them, why the American right is responsible for much of the government dysfunction it criticizes, the case for state capacity libertarianism, the appropriate size of the welfare state, the political importance of massive economic inequality and how the crypto world's pursuit of decentralization could backfire. Mentioned: The Rise and Decline of Nations by Mancur Olson; “It's Time to Build” by Marc Andreessen; “The bulldozer vs. vetocracy political axis” by Vitalik Buterin. Book recommendations: The Anarchy by William Dalrymple; India: A Story Through 100 Objects by Vidya Dehejia; The Splendid and the Vile by Erik Larson. Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. “The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Our executive producer is Irene Noguchi. Special thanks to Kristin Lin.
Alex Tabarrok is a professor of economics and co-author, with Tyler Cowen, of the blog Marginal Revolution. A strident critic of institutional failure during the pandemic, Tabarrok has applied his libertarian perspective to a wide range of topics, including public health, regulation and the law, criminal justice, and entrepreneurship. In this week's conversation, Alex Tabarrok and Yascha Mounk discuss the failure of American institutions to respond to COVID-19, the cost of insufficient economic innovation, and the possibility of building a more agile and resilient American society. This transcript has been condensed and lightly edited for clarity. Please do listen and spread the word about The Good Fight. If you have not yet signed up for our podcast, please do so now by following this link on your phone. Email: podcast@persuasion.community Website: http://www.persuasion.community Podcast production by John T. Williams and Brendan Ruberry. Connect with us! Spotify | Apple | Google Twitter: @Yascha_Mounk & @joinpersuasion Youtube: Yascha Mounk LinkedIn: Persuasion Community Learn more about your ad choices. Visit megaphone.fm/adchoices
Alex Tabarrok, perhaps the world's sole Canadian libertarian, joins The Remnant today for the first time. Inflation is on the rise, the vaccine rollout is stalling, and illiberalism is resurgent. In other words, there are plenty of demanding issues for Americans to be concerned about. Thankfully, Tabarrok has a range of considered policy solutions for Jonah to explore. How can we revitalize democracy? Would open borders work? And should we abandon advanced civilization now before the machines destroy us all? Show Notes: -Alex's website -“Inflation, no chance …” -Jonah on the wackiness of vaccine paranoia -Newsmax outcrazies itself -The Mayor Quimby of anti-vaxxers -Jonah on the importance of character -Ezra Klein on the good old days -Alex's case for open borders -“Born American, but in the Wrong Place” -The Baumol effect -Home Economics, by Nick Shulz -The elite master's degrees that don't pay off See omnystudio.com/listener for privacy information.
John Stossel - Privacy: Who Needs It, Make America California and 5 Other Stossel Clips. Clips: Privacy: Who Needs It; Make America California; Woke Colleges vs Testing; Unions Invade Private Property; The Woke Award Shows; One Dose or Two?; We Are ALL Essential with Mike Rowe. Privacy: Who Needs It (https://youtu.be/BW3Kmw6-huI): Edward Snowden tries to convince me to worry more about privacy. "They're trying to shape your behavior!" he warns. I rudely say: "Americans by and large don't care, and I mostly don't care. I figure that teenage boy across the street could be picking up the stuff I send. The cork's out of the bottle. What difference does it make?" Snowden has good answers. Google's former CEO once said, creepily: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." Snowden points out that we will feel pressured to "constrain our intellectual curiosity, and even frankly, our weirdnesses ... because we could potentially someday be judged on the basis for it." Another scary thing about today's internet is that big tech companies have the power to manipulate. Facebook even did a study that confirmed they could make users angrier by controlling which posts they saw. "This is controlling human behavior by a private company!" Snowden points out. "For what end? Just to see if they could... the next variants... are not going to be just to see if they could. It is going to be for their advantage. It is going to be to shape laws, it is going to be to shape elections." More of Snowden's points, and my pushback on whether tech companies are really "monopolies", in the video above. Make America California (https://youtu.be/NetJ5LDQ3Dc): California is the Biden Administration's role model for America. What are they thinking? The state loses 170,000 people every year. "People are just emptying out of California!" says Kristin Tate, who reports on why so many abandon the state: "exorbitant tax rates, the high crime rates, the failing public school systems, the exorbitant cost of living." Despite these failures, the LA Times writes, Make America California Again? That's Biden's plan. Why would the rest of us want California's problems? Our video above highlights California laws and regulations Biden pushes for, and the California politicians who failed their way into his administration. "If we make America California, we are all going to be paying for it," concludes Tate. Woke Colleges vs Testing (https://youtu.be/WT-8z-sAx5Y): Colleges are ending SAT/ACT tests in the name of diversity, despite research that shows they are good at predicting college success. Some schools now won't even look at applicants' test scores. The reason: richer kids may get tutoring, and some minority groups, on average, don't score as well. Bob Schaeffer of the advocacy group FairTest compares test-makers to "the tobacco industry." He's winning his war against testing. More than half of colleges in the country are now test-optional.
"The test makers themselves admit that the SAT and ACT are inferior predictors of college performance [to grades]," Schaeffer tells me. But here's the data: high school grades predict 33% of college grades, while tests predict 32%. Not very “inferior!” Using both grades and SATs predicts 42% of college success. A University of California report found that this trend holds across all races and income levels: https://senate.ucsd.edu/media/424154/... In other words, tests are useful predictors of college success and failure. Yet university administrators didn't follow the faculty report's recommendations. Why? Diversity and political correctness. "It really is about making these campuses look right,” says Jason Riley of the Wall Street Journal editorial board. "...using them to make your college catalog look more colorful... It's about aesthetics. It's not about learning." I ask: "What's wrong with these schools saying we want a more diverse student body?” “How you achieve it is what I take issue with," Riley says. "There's this assumption. We just get these kids in the door. They'll be fine. They'll do okay. No, they won't! ... they're being set up to fail." To really increase diversity, Riley says, support school choice and charter schools that succeed in preparing disadvantaged kids for college. Unions Invade Private Property https://youtu.be/CNJjBC_w-oI John Stossel 518K subscribers “If I didn't allow them in, I'm the one going to jail!” rants a farmer after a California law allowed dozens of union activists to storm onto his farm. ---- Don't miss the weekly video from Stossel TV. Sign up here: https://johnstossel.activehosted.com/f/1 ---- “You don't know if they're mad, if they're going to get violent,” says one employee who was working when the union arrived with flags and bullhorns, “It was a scary situation.” The union wanted the workers to strike. Few were interested. “It is asinine!” says Mike Fahner, the farm's owner. He points out that unions don't have the right to access private property without permission in any other industry. “If they came back every day I would have been paralyzed.” Mike and another business are challenging the law in the Supreme Court. But they lost in two lower courts. California officials argue that unions must be allowed to go onto farms because "workers remain isolated from the flow of information characteristic of modern society.” Mike says that's not true. “Every person has a cell phone in their pocket. [Workers] know how to communicate through Facebook and … Twitter much better than most. “This is trespassing,” he adds, “You should be going to jail for doing this.” The Woke Award Shows https://youtu.be/yiJFjUk0Zwo John Stossel 518K subscribers Award shows, like the Oscars, are beating themselves up for their history of white supremacy. Future awards will go only to films that meet diversity quotas. "These awards just shouldn't even be seen as legitimate," musician Eric July tells me. "It's supposed to be based on merit. You'd think that's what the awards are for," he adds. ---- Don't miss the weekly video from Stossel TV. Sign up here: https://johnstossel.activehosted.com/f/1 ---- "If the work is good, it doesn't matter if it's exclusively black people working on it, exclusively white, Asian, none of that should even matter!" Eric July says. I push back: “Award shows historically haven't given out the same number of awards" to blacks. 
July replies, "it's a thing, but this battle was already fought and won… we're talking as if there's a Klansman behind every single corner--preventing people from being great!" In fact, Black actors have achieved great success in Hollywood. Samuel L. Jackson is America's all-time highest-grossing actor. What's the harm in diversity quotas? Actor Viggo Mortensen points out that they are "exclusionary" and could keep movies like "1917" from winning awards. The new rules are just one example of awards shows going "woke." The video above has more. One Dose or Two? https://youtu.be/REnJBfJx17A Government's "experts" botched much of the Covid response. Fortunately, other experts -- scientists, economists, and web developers -- are helping save lives. ---- Don't miss the weekly video from Stossel TV. Sign up here: https://johnstossel.activehosted.com/f/1 ---- There have been MANY problems with government's Covid response. Government predictions were way off. State vaccine websites didn't work. Now government holds back vaccine doses, reserving them for second doses. But they could do more good as first doses. "We have given out more than 20 million second doses," economics professor Alex Tabarrok points out. "Those could have been first doses." He helped convince the British government to change to a first-dose-first policy. American experts like Anthony Fauci oppose that, saying they'd need to do long trials to determine if that's okay, and in "the amount of time that it will take ... we will already be in the arena of having enough vaccines to go around anyway." That's the wrong way to think about it, says Tabarrok. "You have to act decisively and you have to act quickly. Bureaucrats are just not used to doing that," he says. "When you have a tiger chasing you in the forest, you don't want to run a randomized controlled trial -- should I run left or should I run right?" England, which took his suggestion, is now vaccinating people faster than America. Other non-government experts have stepped in and made things better. One data scientist living with his parents made models that were much better than the government's. They became widely used, and government modellers even asked him for advice. One woman became so frustrated by Massachusetts' vaccine website that she built her own, showing people where in the state vaccines were actually available: macovidvaccines.com. Within days, it was getting 400 hits a minute. The video above has more about how a quick change to government experts' vaccine policy could save lives. We Are ALL Essential with Mike Rowe: Mike Rowe tells John Stossel that Covid rules had a huge unintended consequence: They crushed work, sapping meaning from many people's lives. ---- Don't miss the weekly video from Stossel TV. Sign up here: https://johnstossel.activehosted.com/f/1 ---- Rowe says that lockdowns and business closures meant to save lives also take lives. Domestic violence is up. So are calls to suicide hotlines. Unemployment kills. A National Bureau of Economic Research study finds that "890,000 additional deaths may result over the next 15 years from actions taken to mitigate the spread of the coronavirus." That's one of many unintended consequences of a "safety first" mindset. Rowe's slogan is "safety THIRD". If safety were really first, he notes, "knock the speed limit down to 10 miles an hour… make cars out of rubber… make everybody wear a helmet, and let's eliminate left turns!"
"The goal of living is not to merely stay alive. Cars are a lot safer in the driveway. Ships are a lot safer when they don't leave Harbor, and people are a lot safer during a pandemic when they sit quietly in their basements, waiting for the all clear, but that that's not why cars, ships and people are on the planet!" The above video has more of Rowe's points, including starting with the arrogance of politicians decreeing which workers are "essential."
Alex Tabarrok holds the Bartley J. Madden Chair in Economics at the Mercatus Center and is a professor of economics at George Mason University. Along with Tyler Cowen, he is the co-author of the popular economics blog Marginal Revolution and co-founder of Marginal Revolution University. He is the author of numerous academic papers in the fields of […]
Alex Tabarrok is a professor of economics at George Mason University and a research fellow at the Mercatus Center. Alex joins David Beckworth on the podcast to discuss how best to deal with COVID-19 and what lessons we can learn from it moving forward. A transcript of the episode can be found here: https://www.mercatus.org/bridge/tags/macro-musings Alex's Twitter: @ATabarrok Alex's GMU profile: https://mason.gmu.edu/~atabarro/ Related Links: Bonus segment with Tabarrok: https://www.youtube.com/watch?v=tQUnnumgXvw&feature=youtu.be *Pandemic Policy in Developing Countries: Recommendations for India* by Shruti Rajagopalan and Alex Tabarrok https://www.mercatus.org/publications/covid-19-policy-brief-series/pandemic-policy-developing-countries-recommendations-india Chad Bown's PIIE archive, which includes a series of articles related to COVID-19 and its impact on trade: https://www.piie.com/experts/senior-research-staff/chad-p-bown David's blog: macromarketmusings.blogspot.com David's Twitter: @DavidBeckworth
One of the most remarkable aspects of the last few generations is that, for the first time in human history, at least to this degree, stuff has been getting cheaper while human labor gets more valuable. It's a technology-enabled humanist revolution! At the same time, labor-intensive sectors like healthcare and education have become more expensive relative to the declining price of goods. Economists call this the "Baumol effect," though it's sometimes referred to as the "cost disease." But economist Alex Tabarrok joins the show to discuss how that curse might actually be a blessing in disguise and how the Baumol effect radically disrupts our preconceived notions about effective government policies. Why are some prices rising even as innovation pushes other prices down? Why has the price of education gone up? What is the Baumol effect? How can we substitute for skilled labor? Further Reading: Why Are the Prices So Damn High?, written by Eric Helland and Alexander Tabarrok | Marginal Revolution | Stubborn Attachments, written by Tyler Cowen. Related Content: The Automation Revolution is Upon Us, Building Tomorrow Podcast | Will Artificial Intelligence Take Your Job?, Building Tomorrow Podcast | On Innovation: Don't Ask for Permission, Building Tomorrow Podcast. See acast.com/privacy for privacy and opt-out information.
Last week I reviewed Alex Tabarrok and Eric Helland's Why Are The Prices So D*mn High?. On Marginal Revolution, Tabarrok wrote: "SSC does have some lingering doubts and points to certain areas where the data isn't clear and where we could have been clearer. I think this is inevitable. A lot has happened in the post-World War II era. In dealing with very long run trends so much else is going on that answers will never be conclusive. It's hard to see the signal in the noise. I think of the Baumol effect as something analogous to global warming. The tides come and go but the sea level is slowly rising." I was pretty disappointed by this comment. T&H's book blames cost disease on rising wages in high-productivity sectors, and consequently in education and medicine. My counter is that wages in high-productivity sectors, education, and medicine are not actually rising. This doesn't seem like an "area where you could have been clearer". This seems like an existential challenge to your theory! Come on! Since we're not getting an iota of help from the authors, we're going to have to figure this out ourselves. The points below are based on some comments from the original post and some conversations I had with people afterwards. 1. Median wages, including wages in high-productivity sectors like manufacturing, are not rising. I originally used this chart to demonstrate:
Why have prices for services like health care and education risen so much over the past fifty years? When I looked into this in 2017, I couldn't find a conclusive answer. Economists Alex Tabarrok and Eric Helland have written a new book on the topic, Why Are The Prices So D*mn High? (link goes to a free pdf copy, or you can read Tabarrok's summary on Marginal Revolution). They do find a conclusive answer: the Baumol effect. T&H explain it like this: "In 1826, when Beethoven's String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010. Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven's String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven's String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826." Put another way, a violinist can always choose to stop playing violin, retrain for a while, and work in a factory instead. Maybe in 1826, when factory workers were earning $1.14/hour and violinists were earning $5/hour, no violinists would quit and retrain. But by 2010, factory workers were earning $26.44/hour, so if violinists were still only earning $5 they might all quit and retrain. So in 2010, there would be strong pressure to increase violinists' wages to at least $26.44 (probably more, since few people have the skills to be violinists). So violinists must be paid about 5x more for the same work, which will look like concerts becoming more expensive.
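To make the arithmetic in that passage explicit, here is a minimal sketch in Python. It is illustrative only: the wage and labor-hour figures are the ones T&H quote above, and the small differences from the book's rounded $3.02 and 23x are just rounding.

```python
# Minimal sketch of the Baumol-effect arithmetic quoted above.
# Figures come from the T&H passage; tiny differences from the book's rounded
# numbers ($3.02, 23x) are rounding only.

LABOR_HOURS = 2.66  # four musicians x 40 minutes, as stated in the passage

def opportunity_cost(production_wage_per_hour: float) -> float:
    """Value of the other output forgone by tying up 2.66 labor hours in one performance."""
    return production_wage_per_hour * LABOR_HOURS

cost_1826 = opportunity_cost(1.14)    # average production-worker wage in 1826 (real dollars)
cost_2010 = opportunity_cost(26.44)   # average production-worker wage in 2010 (real dollars)

print(f"1826 performance: ${cost_1826:.2f} of forgone output")   # ≈ $3.03
print(f"2010 performance: ${cost_2010:.2f} of forgone output")   # ≈ $70.33
print(f"Relative cost:    {cost_2010 / cost_1826:.1f}x")         # ≈ 23x
```

The only input that changes between the two rows is the economy-wide wage, which is exactly the violinist's retraining logic in the paragraph above: as the outside wage rises, the unchanged 2.66 hours of quartet labor becomes relatively more expensive.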
Alex Tabarrok is a professor of economics at George Mason University and co-author (with Tyler Cowen) of the very popular blog, Marginal Revolution. Bob and Alex cover a wide range of topics, including his early experience with Rothbardians, the brief window when economics blogs were the center of discussion, problems with the FDA, how a kidney market might work, and why Bitcoin is not as secure as some of its fans believe. Mentioned in the Episode and Other Links of Interest: Alex Tabarrok's Marginal Revolution (https://marginalrevolution.com/) . The online economics courses created by Tabarrok and Cowen, MRUniversity (https://www.mruniversity.com/) . Cowen & Tabarrok's economics textbooks (https://www.macmillanlearning.com/catalog/static/worth/cowentabarrok/) . Tabarrok's edited collection, The Voluntary City (https://www.amazon.com/gp/product/B018DWGJ92/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=consultingbyr-20&creative=9325&linkCode=as2&creativeASIN=B018DWGJ92&linkId=7e69f3f647ff42e1cb50112910c68fba) . The Great Debt Debate – the epic fable (https://consultingbyrpm.com/blog/2012/01/the-economist-zone.html) with all of the relevant links. Scott Sumner's book, and Murphy's (critical) review (https://mises.org/library/gold-standard-did-not-cause-great-depression-1) of it. Tabarrok's blog post (https://marginalrevolution.com/marginalrevolution/2015/08/is-the-fda-too-conservative-or-too-aggressive.html) on problems with the FDA. Murphy's book (co-authored with Doug McGuff) highlighting flaws with the FDA, The Primal Prescription (https://www.amazon.com/gp/product/1939563097/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=consultingbyr-20&creative=9325&linkCode=as2&creativeASIN=1939563097&linkId=fc5f1885cb5c52c8132aea43a2bf0f5b) . Tabarrok on the U.S. organ shortage (http://www.econlib.org/library/Columns/y2009/Tabarroklifesaving.html) . Murphy's introductory booklet (http://understandingbitcoin.us) (co-authored with Silas Barta) on Bitcoin. Tabarrok's blog post (https://marginalrevolution.com/marginalrevolution/2019/01/bitcoin-much-less-secure-people-think.html) on why Bitcoin is less secure than most people think. The sound engineer for this episode was Chris Williams. Learn more about his work at ChrisWilliamsAudio.com (http://www.ChrisWilliamsAudio.com) .
Tyler Cowen is an American economist, academic, and writer. He is the author of books like The Great Stagnation, Average is Over, The Complacent Class, and more. He occupies the Holbert L. Harris Chair of economics as a professor at George Mason University, and is co-author, with Alex Tabarrok, of the popular economics blog Marginal Revolution. Cowen and Tabarrok have also ventured into online education by starting Marginal Revolution University. He currently writes a regular column for Bloomberg View. He has also written for such publications as The New York Times, The Wall Street Journal, Forbes, Time, Wired, Newsweek, and the Wilson Quarterly. Cowen also serves as faculty director of George Mason's Mercatus Center, a university research center that focuses on the market economy. In February 2011, Cowen received a nomination as one of the most influential economists in the last decade in a survey by The Economist. He was ranked #72 among the "Top 100 Global Thinkers" in 2011 by Foreign Policy Magazine "for finding markets in everything." His newest book (and the subject of today's conversation) is Stubborn Attachments. It took him almost 20 years to write it. A few practical tips from Tyler: re-read the best books, have hobbies that make you think more, and argue for what you think is correct while understanding it's likely that you're wrong. Then we talk about big-picture things, like: What is Tyler's view of economic growth and how it happens? Have we seen the majority of economic growth from the Internet, or is most of it yet to come? What are the two things we, as a society, should prioritize above all else? And also: When should we prioritize economic growth over the redistribution of wealth by government? What are the three things that are scarce in today's economy? How will cities look different 20-30 years from now? And will the future economy reward generalists or specialists? Learn more about Tyler: On Twitter Marginal Revolution (economic website) Conversations with Tyler (podcast) Tyler Cowen's Ethnic Dining Guide Amazon Page ============= If you enjoyed this discussion: *Subscribe to Future Skills on: iTunes | Android | Stitcher | Spotify *Join our newsletter for weekly summaries of the episodes. *Apply for the Future Skills Program Email us at admin@futureskillspodcast.com
Econ Duel: Cowen/Tabarrok Is Education Signaling Or Skill Building? by Marginal Revolution University
Econ Duel: Cowen/Tabarrok Does Fiscal Policy Work? by Marginal Revolution University
Econ Duel: Cowen/Tabarrok Rent Or Buy? by Marginal Revolution University
Econ Duel: Cowen/Tabarrok Will Machines Take Our Jobs? by Marginal Revolution University
Rob Wiblin's top recommended EconTalk episodes v0.2 Feb 2020
Alex Tabarrok of George Mason University talks to EconTalk host Russ Roberts about a recent paper Tabarrok co-authored with Shruti Rajagopalan on Gurgaon, a city in India that until recently had little or no municipal government. The two discuss the successes and failures of this private city, the tendency to romanticize the outcomes of market and government action, and the potential for private cities to meet growing demand for urban living in India and China.
My guests today are Alex Tabarrok and David Nott. Tabarrok is the co-author of the Marginal Revolution website. He's an economist at Covel's alma mater, George Mason University. Nott is the president of The Reason Foundation–an organization that is all about free minds and free markets. The topic is libertarianism. In this episode of Trend Following Radio we discuss: Covel and Tabarrok discuss China and the importance of teaching Chinese to American children; the greatest anti-poverty program in the world; the difficulty of improving the infrastructure of the United States; regulation and innovation; interest groups; benevolent dictators; the need for a democracy in an information age; innovation in the current American education system; why the American education system is focused on getting you to work for the man; women in the American education system; innovation, intellectual property and patents; the cumulative properties of innovation; grounds for optimism in the United States when it comes to innovation; how George Mason University became a libertarian economist hotspot; if Europe is following the path of Japan; why the European monetary union was a mistake; and why travel can create optimism. Covel and Nott discuss The Reason Foundation and what it means to be a libertarian today; how Nott explains being a libertarian to people; how the actor and comedian Drew Carey came to be involved with The Reason Foundation; finding the optimism to stay focused on swaying people to the founding principles of our country; politics in China; the 2008 financial crisis, state control, and Reason's response to the bailouts; common sense notions; pension reform; drug policy reform; the cultural policies surrounding prohibition; expanding the idea of liberty to younger people; Vernon Smith and Walter Williams. Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship and trend following are all passionately explored and debated on my show. To start? I'd like to give you a great piece of advice you can use in your life and trading journey… cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!
Michael Covel speaks with Alex Tabarrok and David Nott on today’s two-part podcast, which Covel affectionately calls “The Libertarian Episode”. First, Covel speaks with Alex Tabarrok, co-author of the Marginal Revolution website. He’s an economist at Covel’s alma mater, George Mason University. Covel and Tabarrok discuss China and the importance of teaching Chinese to American children; the greatest anti-poverty program in the world; the difficulty of improving the infrastructure of the United States; regulation and innovation; interest groups; benevolent dictators; the need for a democracy in an information age; innovation in the current American education system; why the American education system is focused on getting you to work for the man; women in the American education system; innovation, intellectual property and patents; the cumulative properties of innovation; grounds for optimism in the United States when it comes to innovation; how George Mason University became a libertarian economist hotspot; if Europe is following the path of Japan; why the European monetary union was a mistake; and why travel can create optimism. Next, Covel speaks with David Nott, president of The Reason Foundation--an organization that is all about free minds and free markets. Covel and Nott discuss The Reason Foundation and what it means to be a libertarian today; how Nott explains being a libertarian to people; how the actor and comedian Drew Carey came to be involved with The Reason Foundation; finding the optimism to stay focused on swaying people to the founding principles of our country; politics in China; the 2008 financial crisis, state control, and Reason’s response to the bailouts; common sense notions; pension reform; drug policy reform; the cultural policies surrounding prohibition; expanding the idea of liberty to younger people; Vernon Smith and Walter Williams. For more information on The Reason Foundation visit reason.com. Want a free trend following DVD? Go to trendfollowing.com/win.
We had the opportunity a couple months back to sit down with Producer Nicholas Tabarrok (“The Art of the Steal”, “Defendor”). On location in Nicholas’s LA office, he gives a dissertation on things he would have done differently had he started his career today, how he goes about packaging and financing films, and gets into ...
Alex Tabarrok of George Mason University talks with EconTalk host Russ Roberts about his new book, Launching the Innovation Renaissance. Tabarrok argues that innovation in the United States is being held back by patent law, the legal system, and immigration policies. He then suggests how these might be improved to create a better climate for innovation that would lead to higher productivity and a higher standard of living.