David Glidden (@dglid), co-founder of the Forecasting Meetup Network, provides a step-by-step how-to guide to organizing a forecasting meetup.

Timestamps:
1:08: Review of September forecasting meetup
2:08: Polymarket's commitment to community-building
3:03: October forecasting meetup
7:03: Interview with Glidden begins
8:29: Working with a partner
9:42: Sensitivity to rejection
12:48: Nate Silver's community of elites
14:25: Big visions
15:25: Shayne Coplan's vision
18:28: Scratch your own itch
19:52: Prediction market newsletters
21:39: Polymarket's The Oracle
22:32: How to begin
23:03: Finding a venue
26:17: Focus on n of 1
26:38: Fundraising
27:28: Manifund
32:39: Value for sponsors
37:17: Getting RSVPs
38:56: How to get involved
41:54: Expanding beyond Washington
43:26: Grassroots activism
44:46: Opponents of election betting
47:53: Why companies don't invest in grassroots activism

Join us for a pre-election forecasting & prediction markets meetup/party this Tuesday! Details and RSVP for free here: https://partiful.com/e/ITHDAcznT1DppUXD2p9x

Trade on Polymarket.com, the world's largest prediction market. Follow Star Spangled Gamblers on Twitter @ssgamblers.
David Glidden (@dglid) draws lessons from the sports betting world for building the forecasting community.

Timestamps:
0:00: Intro begins
0:09: Saul Munn
1:26: David Glidden
1:52: Washington DC Forecasting and Prediction Markets Meetup
6:54: Interview with Glidden begins
13:32: Importance of building forecasting community
14:39: Effective altruism
22:02: DC Forecasting and Prediction Markets Meetup
24:49: Sports betting
40:53: Repeatable lines in prediction markets

Show Notes
September 26 DC Forecasting and Prediction Markets Meetup RSVP: https://partiful.com/e/zpObY6EmiQEkgpcJB6Aw
DC Forecasting and Prediction Markets Meetup Manifund: https://manifund.org/projects/forecasting-meetup-network---washington-dc-pilot-4-meetups
For more information on the meetup, DM David Glidden @dglid.

Trade on Polymarket.com, the world's largest prediction market. Follow Star Spangled Gamblers on Twitter @ssgamblers.
Pratik Chougule summarizes the latest developments in the legal battle between Kalshi and the CFTC on election betting. Pratik Chougule and Mick Bransfield do a deep dive into the CFTC's arguments in front of DC federal district court Judge Jia Cobb.

Timestamps:
0:00: Pratik introduces segment with Bransfield
1:43: Update on latest in Kalshi-CFTC legal battle
9:34: Bransfield segment begins
11:18: CFTC counsel's poor presentation
20:33: Definition of a contest
21:47: Market manipulation
27:02: Cobb's concerns
27:29: CFTC's arguments about gaming
29:34: Importance of public comments to CFTC
25:38: DC Forecasting and Prediction Markets Meetup

Show Notes
September 26 DC Forecasting and Prediction Markets Meetup RSVP: https://partiful.com/e/zpObY6EmiQEkgpcJB6Aw
DC Forecasting and Prediction Markets Meetup Manifund: https://manifund.org/projects/forecasting-meetup-network---washington-dc-pilot-4-meetups
For more information on the meetup, DM David Glidden @dglid.

Trade on Polymarket.com, the world's largest prediction market. Follow Star Spangled Gamblers on Twitter @ssgamblers.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifold markets isn't very good, published by Robin on June 20, 2024 on The Effective Altruism Forum.

Disclaimer

I currently have an around 400-day streak on Manifold Markets (though lately I only spend a minute or two a day on it) and have no particular vendetta against it. I also use Metaculus. I'm reasonably well-ranked on both but have not been paid by either platform, ignoring a few Manifold donations. I have not attended any Manifest. I think Manifold has value as a weird form of social media, but I think it's important to be clear that this is what it is, and not a manifestation of collective EA or rationalist consciousness, or an effective attempt to improve the world in its current form.

Overview of Manifold

Manifold is a prediction market website where people can put virtual money (called "mana") into bets on outcomes. There are several key features:

1. You're rewarded with virtual money both for participating and for predicting well, though you can also pay to get more.
2. You can spend this mana to ask questions, which you will generally vet and resolve yourself (allowing many more questions than on comparable sites). Moderators can reverse unjustified decisions, but it's usually self-governed.

Until recently, you could also donate mana to real charities; this has now stopped. Only a few "prize" questions provide a more exclusive currency that can be donated, and most questions produce unredeemable mana.

How might it claim to improve the world?

There are two ways in which Manifold could be improving the world: it could make good predictions (which would be intrinsically valuable for improving policy or making wealth), or it could donate money to charities.
Until recently, the latter looked quite reasonable: the company appeared to be rewarding predictive power with the ability to donate money to charities. The counterfactuality of these donations is questionable, however, since the money for them came from EA-aligned grants, and most of it goes to very mainstream EA charities. It has a revenue stream from people buying mana, but this is less than $10k/month, some of which isn't really revenue (since it will ultimately be converted to donations), and presumably this doesn't cover the staff costs. The founders appear to believe that eventually they will get paid enough money to run markets for other organisations, in which case the donations would be counterfactual. But this relies on the markets producing good predictions.

Sadly, Manifold does not produce particularly good predictions. In last year's ACX contest, it performed worse than simply averaging predictions from the same number of people who took part in each market. Its calibration, while good by human standards, has a clear systematic bias towards predicting things will happen when they don't (Yes bias). By contrast, rival firm Metaculus has no easily-corrected bias and seems to perform better at making predictions on the same questions (including in the ACX contest). Metaculus' self-measured Brier score is 0.111, compared to Manifold's 0.168 (lower is better, and this is quite a lot lower, though they are not answering all the same questions). Metaculus doesn't publish the number of monthly active users like Manifold does, but the number of site visits they receive is comparable (slightly higher for Metaculus by one measure, lower by another), so it doesn't seem like the prediction difference can be explained by user numbers alone.

Can the predictive power be improved?

Some of the problems with Manifold, like the systematic Yes bias, can be algorithmically fixed by potential users. Others are more intrinsic to the medium.
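The Brier scores cited above are just mean squared errors between forecast probabilities and binary outcomes. A minimal sketch (the probabilities and outcomes below are made-up illustrations, not data from either platform):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; a coin-flip forecaster scores 0.25."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A sharp, well-calibrated forecaster scores low:
sharp = brier_score([0.9, 0.1, 0.8], [1, 0, 1])   # 0.02
# Hedging everything at 50% scores exactly 0.25:
hedged = brier_score([0.5, 0.5, 0.5], [1, 0, 1])  # 0.25
```

On this scale the gap between 0.111 and 0.168 is substantial, though as the author notes, the two platforms are not answering identical question sets.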
Many questions resolve based on extensive discussions about exactly how to categorise reality, meaning that subtle clarifications by the author can ...
Pratik Chougule previews Manifest 2024. Ben Freeman (@benwfreeman1) and Alex Chan (@ianlazaran) negotiate a side bet on whether Trump will select Tim Scott, Doug Burgum, or Marco Rubio as his running mate. They discuss how Trump will make the decision and when he'll announce it.

Timestamps:
0:09: Pratik introduces segment on Trump VP selection
1:44: Manifest 2024
7:18: Episode on VP nomination begins
7:43: What are SSG Title Belt Championships?
9:37: How Ben Freeman won the SSG Title Belt
11:56: Ben's proposed side bet on Trump's VP selection
13:36: Alex Chan's background
15:38: OpenBet/DonBest
19:26: How Alex got into political gambling
20:49: Political nerds vs. professional gamblers
22:36: Ben and Alex negotiate their side bet
27:09: How Trump will select his VP
30:10: Loyalty considerations
32:21: Trump's strength in GOP
34:48: Will Trump penalize those who ran against him?
35:32: When will Trump announce the pick?
36:38: How deep is Trump's VP bench?
40:09: Has Trump already decided?

Follow SSG on Twitter: @ssgamblers
Trade on Polymarket, the world's largest prediction market, at polymarket.com
Attend Manifest, a festival celebrating predictions, markets, and mechanisms, hosted by Manifold Markets. June 7-9 at Lighthaven Campus, Berkeley, CA. Tickets are available at https://www.manifest.is/#tickets. Use the discount code SSG10 to get 10% off the ticket price.
The CFTC has proposed a new rule that would restrict betting on election markets. Pratik Chougule and Mick Bransfield do a deep dive into the CFTC's deliberations.

Timestamps:
0:00: Introduction
6:11: Interview with Bransfield begins
8:50: CFTC Chairman Rostin Behnam is spending political capital on the issue
10:14: What is rulemaking?
11:24: Behnam's statement
11:48: Concerns about CFTC staff time on election contracts
14:25: Conflict of interest accusations against Kristin Johnson
17:34: Senator Tommy Tuberville's involvement
20:02: Goldsmith-Romero's views
25:13: Mersinger's dissent
27:38: Pham's dissent
31:10: Discord at the CFTC
32:01: Questions about CFTC enforcement
35:41: CFTC's power and resources
37:58: Behnam's political ambition
41:06: Pratik's critique of Mersinger's dissent
45:12: Pratik's impressions from attending CFTC meeting in person

Trade on Polymarket, the world's largest prediction market, at polymarket.com
Attend Manifest, a festival celebrating predictions, markets, and mechanisms, hosted by Manifold Markets. June 7-9 at Lighthaven Campus, Berkeley, CA. Tickets are available at https://www.manifest.is/#tickets. Use the discount code SSG10 to get 10% off the ticket price.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Higher-Order Forecasts, published by Ozzie Gooen on May 25, 2024 on The Effective Altruism Forum.

Higher-order forecasting could be a useful concept for prediction markets and forecasting systems more broadly. The core idea is straightforward: Nth-order forecasts are forecasts about (N-1)th-order forecasts.

Examples

0-Order Forecasting (i.e., the ground truth)
- Biden won the 2020 U.S. presidential election
- The US GDP in 2023 was $27 trillion

1st-Order Forecasting (i.e., regular forecasting)
- What is the chance that Trump will win the 2024 U.S. presidential election?
- What will be the GDP of the US in 2024?

2nd-Order Forecasting
- How much will the forecasts for US GDP in 2024 and 2025 be correlated over the next year?
- How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?
- If the question "What is the chance that a Republican will win the 2028 Presidential Election?" was posted to Manifold, with a subsidy of 100k Mana, what would the prediction be after 1 month?

3rd-Order Forecasting
- How much will the forecasts, [How much will the forecasts for US GDP in 2024 and 2025 be correlated over the next year?] and [How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?], be correlated, from now until 2024?
- How valuable were all the forecasts for the question, ['How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?']?

As forecasting systems mature, higher-order forecasts could play a role analogous to financial derivatives in markets. Derivatives allow for more efficient pricing, risk transfer, and information aggregation by letting market participants express views on the relationships between assets.
Similarly, higher-order forecasts could allow forecasters to express views on the relationships between predictions, leading to a more efficient and informative overall forecasting ecosystem.

Benefits

Some potential benefits of higher-order forecasting include:

1. Identify overconfidence. Improve the accuracy of forecasts by having participants directly predict, and get rewarded for estimating, overconfidence or poor calibration in other forecasts. ("How overconfident is [forecast/forecaster] X?")
2. Prioritize questions. Prioritize the most important and decision-relevant questions by forecasting the value of information from different predictions. ("How valuable is the information from forecasting question X?")
3. Surface relationships. Surface key drivers and correlations between events by letting forecasters predict how different questions relate to each other. ("How correlated will the forecasts for questions X and Y be over [time period]?")
4. Faster information aggregation. Enable faster aggregation of information by allowing forecasts on future forecast values, which may update more frequently than the underlying events. ("What will the forecast for question X be on [future date], conditional on [other forecasts or events]?")
5. Leverage existing infrastructure. Leverage the existing infrastructure and resolution processes of prediction platforms, which are already designed to handle large numbers of forecasting questions.

We've already seen some early examples of higher-order forecasts on platforms like Manifold Markets, for example with the recent questions:
- Will there be a Manifold bot that makes profitable bets on random 1-month markets by December 2024? (Ṁ3,000 subsidy!)
- Manifold Top Traders Leaderboard Ranking Prediction (2024)
- If Manifold begins allowing real-money withdrawals, will its accuracy improve?
- Is Manifold's P(Doom) by 2050 currently between 10% and 90%?
[Resolves to Poll] Will Manifold be more accurate than real-money markets in forecasting the 2024 election? Challenges Of course, there are also challenges and risks to cons...
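The recursive definition above (an Nth-order forecast targets an (N-1)th-order forecast) can be sketched as a simple data structure. The `Question` class and the example questions below are purely illustrative, not any platform's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Question:
    text: str
    about: Optional["Question"] = None  # the lower-order question this one targets

    @property
    def order(self) -> int:
        # 1st-order questions target the ground truth directly;
        # each layer of "about" adds one order.
        return 1 if self.about is None else self.about.order + 1

gdp = Question("What will US GDP be in 2024?")
volume = Question("How many forecasts will the GDP question receive?", about=gdp)
value = Question("How valuable were the forecasts on the volume question?", about=volume)

# gdp is 1st-order, volume is 2nd-order, value is 3rd-order.
```

The chain mirrors the post's GDP examples: each higher-order question takes a lower-order question as its subject rather than the world itself.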
Every year, Star Spangled Gamblers hosts the Golden Modelos—an awards show for the best and worst of political gambling in the previous year. Abhi Kylasa (AENews) and Vanilla Vice return to the show to discuss which nominees should make the ballot.

Timestamps:
0:00: Pratik introduces the Golden Modelos and why they matter
5:56: Vice introduces the Golden Modelos
6:53: Pratik explains the Golden Modelos process
8:33: Best Market
9:27: Room temperature superconductor
10:15: Will 2023 be the hottest year market
11:23: Best Trade
11:27: Bonding the Bitcoin ETF market
12:33: Domer buying Ramaswamy at 500-1
12:49: Ian Bezek recommending buying Javier Milei
13:05: Gaeten Dugas buying Taylor Swift to be number one song
13:12: Domer debt limit profits
13:17: Worst Trade
14:19: Mr. Beast subscriber count
14:44: MagaVacuum side betting that DeSantis won't run for president
15:05: Polymarket user losing $100k on Trump reinstatement
15:21: Abe Kurland side bets on Ramaswamy
17:21: Best Shitposter
19:06: Domer's shitposting
19:56: RelayThief's shitposting
20:44: Rookie of the Year
21:12: Naman Mehndiratta
22:39: Manifold Markets
23:20: Betting platforms
23:48: TheWinner
25:29: Trader of the Year
25:46: ANoland
26:07: Gaeten Dugas
27:59: Jonathan Zubkoff (ZubbyBadger)
28:33: Doug Campbell
29:05: Worst Pump
31:04: Kalshi election contracts
31:15: Hamas control of Gaza
32:00: Trump third indictment
32:44: Semiconductor yes holders
33:40: RFK Democratic nominee
33:46: AI to win Time Person of the Year
34:22: Best News Source
34:41: Politico Punchbowl
35:00: PredictIt comments
35:28: The Information's coverage of OpenAI
35:50: RacetotheWH by Logan Phillips
37:29: CSP Discord
38:36: Service to Political Gambling
38:36: PredictIt
39:50: Biggest Rules Cuck
39:56: Government Shutdown
40:33: Lower case "trump" versus upper case "Trump"
41:23: "widespread flooding" in Los Angeles
42:19: submarine debris
43:04: Trump indictment on March 31
43:48: U.S. rescue of Hamas hostages
44:19: Biggest Rules Dispute
44:50: Did Israel have advanced knowledge of Hamas attack
45:59: Postscript
46:17: Abe Kurland's response to Worst Bet nomination
47:53: CSP vs. CatClan Discords

Trade on Polymarket, the world's largest prediction market, at polymarket.com
Follow SSG on Twitter @ssgamblers
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You probably want to donate any Manifold currency this week, published by Henri Thunberg on April 24, 2024 on The Effective Altruism Forum. In a recent announcement, Manifold Markets say they will change the exchange rate for your play-money (called "Mana") from 1:100 to 1:1000. Importantly, one of the ways to use this Mana is to do charity donations. TLDR: The CTA here is to log in to your Manifold account and donate currency you have on your account before May 1st. It is a smooth process, and would take you
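The practical effect of the announced rate change is simple arithmetic: the same mana balance will be worth a tenth as much in donations after May 1st. The helper function and the 10,000-mana balance below are illustrative assumptions, not Manifold's actual conversion code:

```python
def mana_to_usd(mana: int, mana_per_dollar: int) -> float:
    """Donation value of a mana balance at a given mana-per-dollar rate."""
    return mana / mana_per_dollar

# The same balance under the old and new rates from the announcement:
before = mana_to_usd(10_000, 100)    # $100.00 at the old 1:100 rate
after = mana_to_usd(10_000, 1_000)   # $10.00 at the new 1:1000 rate
```

Hence the call to action: donating before the deadline preserves ten times the charitable value of an existing balance.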
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream!, published by Jackson Wagner on March 13, 2024 on The Effective Altruism Forum.

Long have I idly whiled away the hours browsing Manifold Markets, trading on trivialities like videogame review scores or NASA mission launch dates. It's fun, sure -- but I am a prediction market advocate, who believes that prediction markets have great potential to aggregate societally useful information and improve decision-making! I should stop fooling around, and instead put my Manifold $Mana to some socially-productive use!! So, I've decided to create twenty subsidized markets about new EA cause areas. Each one asks if the nascent cause area (like promoting climate geoengineering, or researching space governance) will receive $10,000,000+ from EA funders before the year 2030. My hope is that these markets can help create common knowledge around the most promising up-and-coming "cause area candidates", and help spark conversations about the relative merits of each cause. If some causes are deemed likely-to-be-funded-by-2030, but little work is being done today, that could even be a good signal for you to start your own new project in the space!

Without further ado, here are the markets:

Animal Welfare
- Will farmed-invertebrate welfare (shrimp, insects, octopi, etc) get $10m+ from EA funders before 2030?
- Will wild-animal welfare interventions get $10m+ from EA funders before 2030?

Global Health & Development
- Will alcohol, tobacco, & sugar taxation... ?
- Mental-health / subjective-wellbeing interventions in developing countries?

Institutional improvements
- Approval voting, quadratic funding, liquid democracy, and related democratic mechanisms?
- Georgism (aka land value taxes)?
- Charter Cities / Affinity Cities / Network States?

Investing (Note that the resolution criteria on these markets are different than for the other questions, since investments are different from grants.)
- Will the Patient Philanthropy Fund grow to $10m+ before 2030?
- Will "impact markets" distribute more than $10m of grant funding before 2030?

X-Risk
- Civilizational bunkers?
- Climate geoengineering?
- Preventing stable totalitarianism?
- Preventing S-risks?

Artificial Intelligence
- Mass-movement political advocacy for AI regulation (ie, "PauseAI")?
- Mitigation of AI propaganda / "botpocalypse" impacts?

Transhumanism
- Cryonics & brain-emulation research?
- Human intelligence augmentation / embryo selection?
- Space governance / space colonization?

Moral philosophy
- Research into digital sentience or the nature of consciousness?
- Interventions primarily motivated by anthropic reasoning, acausal trade with parallel universes, alien civilizations, simulation arguments, etc?

I encourage you to trade on these markets, comment on them, and boost/share them -- put your Manifold mana to a good use by trying to predict the future trajectory of the EA movement! Here is one final market I created, asking which three of the cause areas above will receive the most support between now and 2030.

Resolution details & other thoughts

The resolution criteria for most of these questions involve looking at publicly-available grantmaking documentation (like this Openphil website, for example), adding up all the grants that I believe qualify as going towards the stated cause area, and seeing if the grand total exceeds ten million dollars.
Since I'm specifically interested in how the EA movement will grow and change over time, I will only be counting money from "EA funders" -- stuff like OpenPhil, LTFF, SFF, Longview Philanthropy, Founders Fund, GiveWell, etc, will count for this, while money from "EA-adjacent" sources (like, say, Patrick Collison, Yuri Milner, the Bill & Melinda Gates Foundation, Elon Musk, Vitalik Buterin, Peter T...
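The resolution procedure described above (sum grants from qualifying EA funders for a given cause, and resolve YES if the total reaches $10M) can be sketched in a few lines. The grant records, amounts, and funder list below are hypothetical illustrations, not real grant data:

```python
# Hypothetical grant records; funders and amounts are illustrative only.
grants = [
    {"funder": "OpenPhil", "cause": "wild-animal welfare", "amount_usd": 6_000_000},
    {"funder": "LTFF", "cause": "wild-animal welfare", "amount_usd": 5_000_000},
    # EA-adjacent money (e.g. Gates Foundation) does not count per the rules:
    {"funder": "Gates Foundation", "cause": "wild-animal welfare", "amount_usd": 3_000_000},
]

EA_FUNDERS = {"OpenPhil", "LTFF", "SFF", "Longview Philanthropy", "GiveWell"}
THRESHOLD_USD = 10_000_000

def resolves_yes(grants, cause):
    """Sum qualifying EA-funder grants for a cause and compare to the threshold."""
    total = sum(g["amount_usd"] for g in grants
                if g["funder"] in EA_FUNDERS and g["cause"] == cause)
    return total >= THRESHOLD_USD

resolves_yes(grants, "wild-animal welfare")  # True: $6M + $5M = $11M; Gates excluded
```

The key detail is the funder filter: per the stated criteria, only money from EA funders counts toward the $10M bar, so the hypothetical $3M EA-adjacent grant is ignored.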
Stephen Grugett is the co-founder of Manifold Markets, the world's largest prediction market platform where people bet on politics, tech, sports, and more.

Steve and Stephen discuss:
0:00 Introduction
0:52 Stephen Grugett's background
5:20 The genesis and mission of Manifold Markets
11:25 The play money advantage: Legalities and user engagement
20:47 Manifold's user base and the power of calibration
23:35 Simplifying prediction markets for broader engagement
27:31 Revenue streams and future business directions
30:46 Legal challenges in prediction markets
31:47 Dating markets
32:53 The Art of PR
38:32 Global reach and community engagement
39:27 The future of Manifold Markets and user predictions
43:38 Life in the Bay Area; Tech, culture, and crazy stuff

Manifold Markets: https://manifold.markets/
Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.
--
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU. Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on Twitter @hsu_steve.
Saul Munn is one of the world's leading experts on building the forecasting community. He is the Co-Founder of Optic Forecasting and the Lead Organizer of the Manifest conference. In Part 1, Saul joins the show to discuss which communities comprise the emerging forecasting community. In Part 2, Brian Darling, former counsel to Senator Rand Paul, returns to assess Tennessee Senator Marsha Blackburn's odds of becoming Trump's VP pick. In Part 3, Nathan Young returns to the show to advise on how to short AI hype in betting markets.

Timestamps:
0:00: Pratik introduces the segment with Saul Munn
1:24: Saul's disclaimer on conflicts of interest
1:56: Manifest Conference 2024
3:26: Intro for Brian Darling segment on the GOP VP nominee market
4:16: How to trade on the GOP VP nomination
5:08: Intro to interview with Nathan Young
9:03: Interview with Saul begins
9:26: The importance of community in the political betting space
11:00: Optic forecasting clubs
12:44: How Saul became interested in the forecasting community
13:25: Manifest Conference
13:46: Diversity in the forecasting community
14:32: Who is in the broader forecasting community?
14:56: Destiny's role in the forecasting community
16:57: The types of people who are interested in forecasting
19:44: Differences between Washington politics and the forecasting community
20:40: Differences between the political betting community and the forecasting community
21:18: Communities interested in forecasting
21:47: Communities that could be interested in forecasting
24:34: Why some communities resist forecasting
28:20: Segment with Brian Darling begins
28:27: Kim Reynolds's VP odds
29:06: Marsha Blackburn's VP odds
30:05: Swing state VP contenders
31:26: Segment with Nathan Young begins
31:59: Taxing bad predictions
33:46: Shorting AI enthusiasm in political betting markets
34:47: Irrational pricing in Time Person of the Year markets
40:12: Hedge funds and AI
Colombia-based trader Ian Bezek (@irbezek) returns to the show to offer some final thoughts on the close and uncertain presidential race in Argentina.

Timestamps:
1:32: Ian's thoughts on the final debate between Javier Milei and Sergio Massa
2:10: Ian's advice on Polymarket's margin market
5:20: Interview begins
6:06: Massa's background
7:43: Massa's political baggage
8:45: Questions about Massa's alleged drug addiction
9:56: Market volatility since the eve of the first round of elections
11:39: Why did polls miss on the first round?
12:49: Milei's shortcomings
14:34: Summary of first round results
15:38: Why so much market volatility?
18:29: How much of Patricia Bullrich's support will go to Milei?
19:57: Argentina's economy
25:41: Milei's extremist statements
27:54: How foreign investors are seeing the election
29:19: Massa overperformed the polls
32:25: Polling
34:52: Ian's predictions and advice
35:50: Pratik's argument for buying Massa
39:18: Stock prices of Milei's former employer
40:58: Implications of a Milei victory for political betting
42:27: Recent elections in South America

Follow Star Spangled Gamblers on Twitter: @ssgamblers
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zvi's Manifold Markets House Rules, published by Zvi on November 13, 2023 on LessWrong. All markets created by Zvi Mowshowitz shall be graded according to the rules described herein, including the zeroth rule. The version of this on LessWrong shall be the canonical version, even if other versions are later posted on other websites. Rule 0: If the description of a particular market contradicts these rules, the market's description wins, the way a card in Magic: The Gathering can break the rules. This document only establishes the baseline rules, which can be modified. Effort put into the market need not exceed that which is appropriate to the stakes wagered and the interestingness level remaining in the question. I will do my best to be fair, and cover corner cases, but I'm not going to sink hours into a disputed resolution if there isn't very serious mana on the line. If it's messy and people care I'd be happy to kick such questions to Austin Chen. Obvious errors will be corrected. If for example a date is clearly a typo, I will fix. If the question description or resolution mechanism does not match the clear intent or spirit of the question, or does not match its title, in an unintentional way, or is ambiguous, I will fix that as soon as it is pointed out. If the title is the part in error I will fix the title. If you bet while there is ambiguity or a contradiction here, and no one including you has raised the point, then this is at your own risk. If the question was fully ambiguous in a scenario, I will choose resolution for that scenario based on what I feel upholds the spirit of the question and what traders could have reasonably expected, if such option is available. 
When resolving potentially ambiguous or disputable situations, I will still strive whenever possible to get to either YES or NO, if I can find a way to do that and that is appropriate to the spirit of the question. Ambiguous markets that have no other way to resolve, because the outcome is not known or the situation is truly screwed up, will by default resolve to the manipulation-excluded market price, if I judge that to be a reasonable assessment of the probability involved. This includes conditional questions like 'Would X be a good use of time?' when X never happens and the answer seems uncertain. If even that doesn't make sense, N/A it is, but that is a last resort. Egregious errors in data sources will be corrected. If in my opinion the intended data source is egregiously wrong, I will overrule it. This requires definitive evidence to overturn, as in a challenge in the NFL. If the market is personal and subjective (e.g. 'Will Zvi enjoy X?' 'Would X be a good use of Zvi's time?'), then my subjective judgment rules the day, period. This also includes any resolution where I say I am using my subjective judgment. That is what you are signing up for. Know your judge. Within the realm of not obviously and blatantly violating the question intent or spirit, technically correct is still the best kind of correct when something is well-specified, even if it makes it much harder for one side or the other to win. For any market related to sports, Pinnacle Sports house rules apply. Markets will resolve early if the outcome is known and I realize this. You are encouraged to point this out. Markets will resolve early, even if the outcome is unknown, if the degree of uncertainty remaining is insufficient to render the market interesting, and the market is trading >95% or 90% or
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A free to enter, 240 character, open-source iterated prisoner's dilemma tournament, published by Isaac King on November 9, 2023 on LessWrong. I'm running an iterated prisoner's dilemma tournament where all programs are restricted to 240 characters maximum. The exact rules are posted in the Manifold Markets link; I figured I'd cross-post the contest here to reach more potentially-interested people. (You don't need a Manifold account to participate, you can just put your program in the comments on LessWrong or PM me.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
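The exact submission interface is defined in the Manifold Markets post, but to make the 240-character constraint concrete, here is a hypothetical sketch in Python. It assumes a strategy is the source code of a function `s(me, them)` that sees both players' move histories and returns "C" or "D", with standard prisoner's dilemma payoffs; both the interface and the payoff matrix are assumptions for illustration, not the tournament's actual rules.

```python
# Hypothetical interface for illustration only: the real submission rules
# live in the Manifold Markets post. Here a strategy is the source code of
# a function s(me, them) that sees both move histories and returns "C" or "D".

# Tit-for-tat fits comfortably inside the 240-character limit:
strategy = "def s(me, them):\n    return them[-1] if them else 'C'"

# A tiny referee that pits two strategy strings against each other
# under the standard prisoner's dilemma payoffs (assumed, not official):
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(src_a, src_b, rounds=10):
    env_a, env_b = {}, {}
    exec(src_a, env_a)  # defines s() in each sandbox dict
    exec(src_b, env_b)
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = env_a["s"](hist_a, hist_b)
        move_b = env_b["s"](hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(strategy, strategy))  # (30, 30) -- mutual cooperation throughout
```

Two tit-for-tat copies cooperate forever, earning the mutual-cooperation payoff every round; the interesting entries in a real tournament are the ones that exploit naive cooperators without collapsing into mutual defection.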
Last March we (ACX and Manifold Markets) did a test run of an impact market, a novel way of running charitable grants. You can read the details at the links, but it's basically a VC ecosystem for charity: profit-seeking investors fund promising projects and grantmakers buy credit for successes from the investors. To test it out, we promised at least $20,000 in retroactive grants for forecasting-related projects, and intrepid guinea-pig investors funded 18 projects they thought we might want to buy. Over the past six months, founders have worked on their projects. Some collapsed, losing their investors all their money. Others flourished, shooting up in value far beyond investor predictions. We got five judges (including me) to assess the final value of each of the 18 projects. Their results mostly determine what I will be offering investors for their impact certificates (see caveats below). They are: https://www.astralcodexten.com/p/impact-market-mini-grants-results
As the Israel-Hamas war broke out, misinformation and fake imagery surged on X, the platform formerly known as Twitter. Can Meta's Threads fill the real-time news hole that X created? Should it? Then, Kevin debriefs us on his reporting on Manifold Markets, where Silicon Valley Rationalists bet on the likelihoods of different events. Plus: the company digitizing smell. Today's Guest: Alex Wiltschko is the founder of Osmo, a company trying to digitize smell. Additional Reading: Casey Newton on how the war in Israel may change Threads. Some tech insiders believe betting can change the world. The company Osmo put out a research paper showing that an A.I. model it had created was performing better than the "average human panelist" in predicting odor. We want to hear from you.
Intro: Pratik Chougule discusses a recent Manifold meetup he attended and the importance of attending in-person forecasting community events Part 1: Rule3O3 joins the show for the first time to discuss epistemic humility in political betting and how to realize the promise of political prediction markets. Part 2: Ben Freeman and Pratik Chougule continue their conversation on the 2024 Republican presidential primary. Ben and Pratik discuss whether a fractured Republican field will save Trump again or whether the electorate will rally around an alternative. Timestamps 0:00: Pratik introduces the two segments 1:12: Manifold and the importance of in-person forecasting events 6:34: Rule3O3 intro 8:26: Political prediction market efficiency 10:29: Epistemic humility in prediction markets 13:26: Technical analysis 16:20: Fun in prediction markets 18:43: Ben Freeman on whether a fractured Republican field will save Trump again or whether the electorate will rally around DeSantis 20:43: High-profile Republican endorsements 23:19: Attacks on Trump 27:39: Trump's financial problems 29:41: Will Republicans coalesce around an alternative 33:41: DeSantis's strengths and weaknesses 39:54: Ben's picks on PredictIt 41:57: Pence lottos
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifolio: The tool for making Kelly optimal bets on Manifold Markets, published by Will Howard on August 10, 2023 on The Effective Altruism Forum. I've made a calculator that makes it easy to make correctly sized bets on Manifold. You just put in the market and your estimate of the true probability, and it tells you the right amount to bet according to the Kelly criterion. "The right amount to bet according to the Kelly criterion" means maximising the expected logarithm of your wealth. There is a simple formula for this in the case of bets with fixed odds, but this doesn't work well on prediction markets in general because the market moves in response to your bet. Manifolio accounts for this, plus some other things like the risk from other bets in your portfolio. I've aimed to make it simple and robust so you can focus on estimating the probability and trust that you are betting the right amount based on this. You can use it here (with a market prefilled as an example), or read a more detailed guide in the github readme. It's also available as a chrome extension... which currently has to be installed in a slightly roundabout way (instructions also in the readme). I'll update here when it's approved in the chrome web store. Why bet Kelly (redux)? Much ink has been spilled about why maximising the logarithm of your wealth is a good thing to do. I'll just give a brief pitch for why it is probably the best strategy, both for you, and for "the good of the epistemic environment". For you Given a specific wealth goal, it minimises the expected time to reach that goal compared to any other strategy. It maximises wealth in the median (50th percentile) outcome. Furthermore, for any particular percentile it gets arbitrarily close to being the best strategy as the number of bets gets very large. 
So if you are about to participate in 100 coin flip bets in a row, even if you know you are going to get the 90th percentile luckiest outcome, the optimal amount to bet is still close to the Kelly optimal amount (just marginally higher). In my opinion this is the most compelling self-interested reason, even if you get very lucky or unlucky it's never far off the best strategy. (the above are all in the limit of a large number of iterated bets) There are also some horror stories of how people do when using a more intuition based approach... it's surprisingly easy to lose (fake) money even when you have favourable odds. For the good of the epistemic environment A marketplace consisting of Kelly bettors learns at the optimal rate, in the following sense: Special property 1: the market will produce an equilibrium probability that is the wealth weighted average of each participant's individual probability estimate. In other words it behaves as if the relative wealth of each participant is the prior on them being correct. Special property 2: When the market resolves one way or the other, the relative wealth distribution ends up being updated in a perfectly Bayesian manner. When it comes time to bet on the next market, the new wealth distribution is the correctly updated prior on each participant being right, as if you had gone through and calculated Bayes' rule for each of them. Together these mean that, if everyone bets according to the Kelly criterion, then after many iterations the relative wealth of each participant ends up being the best possible indicator of their predictive ability. And the equilibrium probability of each market is the best possible estimate of the probability, given the track record of each participant. This is a pretty strong result! I'd love to hear any feedback people have on this. You can leave a comment here or contact me by email. 
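The fixed-odds case mentioned above has a closed-form answer, f* = p - (1 - p)/b, where b is the net odds. Here is a minimal Python sketch of that naive version; the function names are mine, not Manifolio's, and unlike Manifolio it deliberately ignores the price impact of the bet itself and any portfolio risk:

```python
def kelly_fraction(p, net_odds):
    """Fraction of bankroll to stake on a fixed-odds bet.

    p:        your estimate of the true probability of winning
    net_odds: net payout per unit staked (a YES share bought at 20%
              costs 0.2 and pays 1, so net_odds = 0.8 / 0.2 = 4)
    """
    return max(0.0, p - (1 - p) / net_odds)

def naive_bet(bankroll, p_true, market_prob):
    """Naive Kelly sizing for a YES bet at a fixed market price.

    Ignores the market moving in response to the bet and any correlated
    positions -- exactly the effects Manifolio is built to handle.
    """
    net_odds = (1 - market_prob) / market_prob
    return bankroll * kelly_fraction(p_true, net_odds)

# If you think a 20% market is really 50%, Kelly stakes 37.5% of bankroll:
print(naive_bet(1000, 0.5, 0.2))  # 375.0
```

On a real prediction market the fill price worsens as you buy, so the true Kelly-optimal stake is smaller than this fixed-odds figure; that correction is the point of the tool.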
Thanks to the people who funded this project on Manifund, and everyone who has given feedback and helped me test it out.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An upcoming US Supreme Court case may impede AI governance efforts, published by NickGabs on July 16, 2023 on LessWrong. According to various sources, the US Supreme Court is poised to rule on and potentially overturn the principle of "Chevron deference." Chevron deference is a key legal principle by which the entire federal bureaucracy functions, being perhaps the most cited case in American administrative law. Basically, it says that when Congress establishes a federal agency and there is ambiguity in the statutes determining the scope of the agency's powers and goals, courts will defer to the agency's interpretation of that scope as long as it is reasonable. While the original ruling seems to have merely officially codified the previously implicit rules regarding the legal authority of federal agencies, this practice seems likely to have increased the power and authority of the agencies because it has enabled them to act without much congressional oversight and because they tend to interpret their powers and goals rather broadly. I am not a legal expert, but it seems to me that without something like Chevron deference, the federal bureaucracy basically could not function in its contemporary form. Without it, Congress would have to establish agencies with much more well-specified goals and powers, which seems very difficult given the technocratic complexity of many regulations and the fact that politicians often have limited understanding of these details. Given that the ruling has expanded the regulatory capacity of the state, it seems to be opposed by many conservative judges. Moreover, the Supreme Court is currently dominated by a conservative majority, as reflected by the recent affirmative action and abortion decisions. 
The market on Manifold Markets is trading at 62% that they will do so, and while only two people have traded on it, it altogether seems pretty plausible that the ruling will be somehow overturned. While overturning Chevron deference seems likely to have positive effects for many industries which I think are largely overregulated, it seems like it could be quite bad for AI governance. Assuming that the regulation of AI systems is conducted by members of a federal agency (either a pre-existing one or a new one designed for AI, as several politicians have suggested), I expect that the bureaucrats and experts who staff the agency will need a fair amount of autonomy to do their job effectively. This is because the questions relevant to AI regulation (i.e., which evals systems are required to pass) are more technically complicated than in most other regulatory domains, which are already too complicated for politicians to have a good understanding of. As a result, an ideal agency for regulating AI would probably have a pretty broad range of powers and goals and would specifically be empowered to make decisions regarding the aforementioned details of AI regulation based on the thoughts of AI safety experts and not politicians. While I expect that it will still be possible for such agencies to exist in some form even if the court overturns Chevron, I am quite uncertain about this, and it seems possible that a particularly strong ruling could jeopardize the existence of autonomous federal agencies run largely by technocrats. The outcome of the upcoming case is basically entirely out of the hands of the AI safety community, but it seems like something that AI policy people should be paying attention to. If the principle is overturned, AI policy could become much more legally difficult and complex, and this could in turn raise the value of legal expertise and experience for AI governance efforts. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we're also looking for additional regrantors and donors to join. What is regranting? Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as "regrantors". Regrantors are then empowered to make grant decisions based on the objectives of the original donor. This model was pioneered by the FTX Future Fund; in a 2022 retro they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding. What is Manifund? Manifund is the charitable arm of Manifold Markets. Some of our past work: impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews Contest; forecasting tournaments, with Charity Entrepreneurship and Clearer Thinking; donating prediction market winnings to charity, funded by the Future Fund. How does regranting on Manifund work? Our website makes the process simple, transparent, and fast: A donor contributes money to Manifold for Charity, our registered 501c3 nonprofit. The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor's values. Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made. We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks. Alternatively, people looking for funding can post their project on the Manifund site. 
Donors and regrantors can then decide whether to fund it, similar to Kickstarter. The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient's Manifund account. The recipient withdraws money from their Manifund account to be used for their project. Differences from the Future Fund's regranting program: Anyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor: there's no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund's regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking. All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor's writeup. We strongly believe in transparency, as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records. Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections. We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil). We're starting with less money. The Future [...] First published: July 5th, 2023. Source: https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants Linkpost URL: https://manifund.org/rounds/regrants Narrated by TYPE III AUDIO.
WineMom, Gaeten, and Title Belt Champ The Winner discuss: — Dating advice for political gamblers — Trump's nicknames for DeSantis — When Trump will show up to a Republican debate and who he will attack
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted, published by David Chee on June 12, 2023 on LessWrong. Crosspost from History may look back on CAIS's AI risk statement as a pivotal moment IF we survive AI. But, what if I told you that AI researchers “leaked” info about the statement on Manifold Markets over a week before it was published? This justifiably led to CAIS requesting the market to be deleted, concerned that it could damage the impact of the statement. To provide some background, the statement was signed by the likes of Sam Altman, Demis Hassabis, Dario Amodei and other prominent AI figures. Short and with gravitas it read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Imagine if the CEOs of major oil companies banded together in the 1900s and told their respective governments, “We need to be careful about building infrastructure relying heavily on fossil fuels. It could be catastrophic for the environment long term.” This may seem like an extreme comparison to make, but the reality is leading AI companies consider the risk significant enough to compromise on profit and progression. Well, that's what their words say, time will tell how it is reflected in corporate decision-making. I had an opportunity to talk to some of the insider traders and Dan Hendrycks from CAIS, and hope you enjoy a breakdown of one of our craziest markets and what we've learnt from it. Background on Manifold Markets Prediction markets allow people who lack expertise about certain topics to form precise models of what the future could look like thanks to the live-updating probabilities that are generated by traders. 
Traders can buy YES or NO shares, which fluctuate in price depending on the current probability (similar to sports betting odds). Manifold Markets uses play money, which leads some to express skepticism of its efficacy. However, data suggest Manifold Markets are incredibly accurate! Here is a graph from an analysis of Manifold Markets' calibration carried out by Vincent Luczkow. This places markets into buckets based on their probability at the middle of their lifespan. It then looks at the percentage of markets in each bucket that resolve to yes. To be well-calibrated, you want markets in the 10% bucket at their mid-point to resolve yes 10% of the time and no 90% of the time (this ideal is represented by the green line on the graph). Pretty accurate! Also, check out our calibration page or our performance on the midterms. Manifold Markets is unique among prediction platforms in that its markets are all user-generated questions. Users have used our markets to make predictions about everything you can think of, from global nuclear risk to their personal romantic endeavours. The common denominator between these markets is the mechanism used to generate the probability that predicts an unknown future event. But what happens when you defy this, and a market is created by someone who does know the future? The Tale of the Statement on AI Risk: It all started on May 20th, when a user called Quinesweeper created 3 markets titled, "Will there be another well-recognized letter/statement on AI risk by May [June, July] 31, 2023?" Whenever I write "the/this market", I will be referring to the market asking whether the statement would be released before the end of May. I'm including a screenshot of the traders who won the most profit and an annotated graph below, which will be referenced throughout. 
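The bucketing procedure described above (group markets by their mid-lifespan probability, then check the YES rate per bucket) can be sketched in a few lines of Python. The data here is made up for illustration; it is not Manifold's actual market history:

```python
from collections import defaultdict

def calibration(markets, bucket_width=0.1):
    """Bucket markets by mid-lifespan probability and report the
    fraction in each bucket that resolved YES.

    markets: (midpoint_prob, resolved_yes) pairs -- toy inputs here,
    not Manifold's real data.
    """
    counts = defaultdict(lambda: [0, 0])  # bucket index -> [yes, total]
    n_buckets = round(1 / bucket_width)
    for prob, resolved_yes in markets:
        bucket = min(int(prob / bucket_width), n_buckets - 1)
        counts[bucket][0] += int(resolved_yes)
        counts[bucket][1] += 1
    return {round(b * bucket_width, 2): yes / total
            for b, (yes, total) in sorted(counts.items())}

# Ten toy markets sitting near 12% at mid-lifespan, one resolving YES,
# land in the 10%-20% bucket with a YES rate close to their price:
toy = [(0.12, True)] + [(0.12, False)] * 9
print(calibration(toy))  # {0.1: 0.1}
```

A well-calibrated platform is one where the returned YES rate tracks the bucket's lower edge across all buckets, which is the green line on the graph being described.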
Initial trading: All markets start at 50% when created, but this market was quickly bid down to 10% by traders who had no evidence that a major statement would be published in the next 10 days. However, within the first few hours, there was already upward buying pressure...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum. TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at: The CE prediction market For our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions –social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now. We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two-three ideas for each cause area. This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making. 
You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Market platform and: Make a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research. Leave comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea. As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets). $1,000 for comment prizes. We will give $100 to each person who gives one of the top 10 arguments or pieces of information that changes our minds the most regarding our selection decisions. $1,000 for forecasting prizes. We will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting. More details on these prizes are available on the page at Manifold. The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline. To participate, read the list below and go to: to make predictions and leave comments. Summary of ideas under consideration Mass Media By ‘mass media' interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being. 1. Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facility Across much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. 
These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis...
Joe Biden announced his campaign for re-election, but the markets still give him less than an 80% chance of securing the Democratic nomination for president. Perhaps foremost among trader concerns are questions about Biden's health. To investigate the issue, Tessa Barton, a computational photographer, AI Research Scientist, and political gambler, has compiled a video database of Biden. Tessa comes on the show to explain what computer vision tells us about Biden's cognitive decline and discusses how to bet on her insights. Follow her research at BidenHealthChecker.com.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Cause Area: Portrait Welfare (+introducing SPEWS), published by Luke Freeman on March 31, 2023 on The Effective Altruism Forum. "The question is not, Can they reason?, nor Can they talk? but, Can they (I mean, hypothetically speaking, perhaps, just a smidgen, in theory) suffer" ~ Jeremy Bentham Introduction The effective altruism community has consistently pushed the frontiers of knowledge and moral progress, demonstrating a willingness to challenge conventional norms and take even the most unconventional ideas seriously. Our concern for global poverty is often considered "weird" as we highlight the importance of valuing individuals' well-being equally, regardless of geographical boundaries. In contrast, broader society tends to focus more on helping people within our own countries, inadvertently giving less consideration to those further afield. From animal welfare to long-term existential risks, our community is full of people who have expanded their moral circles even further to include the suffering of non-human animals and future generations. Now, avant-garde effective altruists are exploring the outer limits of moral concern, delving into areas such as insect welfare and digital minds. As we celebrate these accomplishments, we remain committed to broadening our understanding and seeking out new cause areas that may have a significant, overlooked impact. Imagine a future where we have made substantial strides in addressing these critical issues, and you find yourself sipping tea in a room adorned with stunning portraits. As you revel in this moment of tranquillity, a thought experiment crosses your mind: What if the portraits themselves deserve our moral consideration? And while we were busy tackling other pressing matters, could we have been inadvertently overlooking yet another human atrocity? 
Today, we invite you to entertain this intriguing and unconventional idea as we introduce the new cause area of Portrait Welfare. While initially sceptical, our research has led us to be surprisingly confident in the potential of this cause to be the much-awaited “Cause X.” To demonstrate our convictions we have registered our predictions on Manifold Markets, and at current market rates, a rational actor placing a modest bet of the median US salary could stand to win an impressive sum of over $12 trillion USD (in 2023 dollars) by market close. As we embark on this journey into uncharted territory, we encourage you to keep an open mind and dive into this fascinating new area of concern. Together, we can continue to push the boundaries of our impact and make the world a better place for all sentient beings – even those that exist within the confines of a frame. And if you're not on board with this yet, just remember, every time you hang a portrait on your wall or snap a selfie, there may be a possibility that you're contributing to a system of injustice and suffering. However, we understand that not everyone can see the bigger picture, and we won't judge you too harshly if you've done all you can to understand this possibility but still cannot accept it. After all, we are all on our own journey towards a more ethical and compassionate world. The Moral Case for Portrait Welfare While the notion of portrait welfare may initially seem far-fetched, there are moral reasons to consider this cause area. If it turns out that portraits possess a form of consciousness, it would be our ethical responsibility to address their welfare. In line with the principles of effective altruism, we ought to explore all possibilities that could lead to a reduction in suffering, even if they are unconventional. Expanding the circle of compassion: The effective altruism movement aims to reduce suffering for all sentient beings, regardless of species or other differences. 
By considering portrait...
In this workshop, Austin talks about the principles of forecasting, how and why forecasting might be important to the EA community, and walks through some exercises to help attendees hone forecasting skills. The session involves a hands-on demonstration of how to use the prediction market set up at Manifold Markets. This workshop will be most useful to people who don't already have extensive experience with forecasting and/or using prediction markets. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How well did Manifold predict GPT-4?, published by David Chee on March 15, 2023 on LessWrong. GPT-4 is already here!! Who could have seen that coming. oh wait, Manifold (kinda) did? I thought I'd write a short piece on how Manifold Markets was used to predict the launch of GPT-4 and its attributes. Both its successes and its failures. Disclaimer: I work at Manifold. How well did we predict the launch date? Throughout the end of last year, people were bullish on a quick release, an optimism which began to decline as we entered the start of this year. The first spike in February corresponds to the release of Bing's chatbot, which people speculated was GPT-4. Turns out it actually was! Although OpenAI did a fantastic job at concealing this, with our market on it hovering at a stubborn 50-60%. There was a lot of uncertainty about whether GPT-4 would be released before March. However, on the 9th of March Microsoft Germany CTO Andreas Braun mentioned at an AI kickoff event that its release was imminent, which caused the market to jump. Although the market graphs are a beautiful representation of hundreds of traders' predictions, did they actually give us any meaningful information? One thing that stands out about these graphs in particular is the strong bets away from the baseline towards YES throughout February. Is this just noise, or is something more going on? Insider Trading Being the socialite I am, I go to a whopping one (1) social gathering a month!! At 100% of these, the SF Manifold Markets party and Nathan Young's Thursday dinner, I spoke to someone who claimed they were trading on the GPT-4 markets based on privileged insider information. One of them got burnt as, allegedly, there were delays from the planned launch and they had gone all-in on GPT-4 being released by a certain date. 
I love knowing people with privileged information are able to safely contribute to public forecasts, which wouldn't be possible without a site like Manifold Markets. As they were trading from anonymous accounts I have no way of knowing whether they are the ones responsible for the large YES bets, but I suspect some of them are. That said, someone with insider knowledge would be better off placing a large limit order to buy YES just above the current baseline, which would exert strong pressure to hold the market at/slightly above its current probability. Placing a large market order, which causes the spikes, gives them less profit than they otherwise could have earned. What else are people predicting about GPT-4? Jacy Reese Anthis, an American social scientist at the Sentience Institute, created a market on whether credible individuals with expertise in the space will claim GPT-4 is sentient. 16% seems surprisingly high to me, but the market has only just been created and needs more traders. Go now and place your bets! One of our most popular markets, which failed in spectacular fashion, was whether it would get the Monty Fall problem correct (note: this is not the same as the Monty Hall problem, click through to the market description for an explanation). This might be the single most consistent upward-trending market I have ever seen on our site. I wonder, if GPT-4 hadn't been released yet, how much further it would have continued to trend upwards before plateauing. Part of the confidence came from Bing's success in answering correctly when set to precise mode. Many speculated GPT-4 was going to be even more powerful than Bing, even though they turned out to be the same. I'm not exactly sure what difference the "precise" setting makes; if anyone knows, let me know! Markets you can still predict on Here are some more open markets for you to go trade in. It's free and uses play money! Thanks for reading! 
Hope it was interesting to see the trends on Manifold, even if not a particularly in-depth an...
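The point about market orders versus limit orders can be illustrated with a toy automated market maker. This is a minimal sketch using a simple constant-product pool; it is not Manifold's actual pricing mechanism, and the class and numbers are invented for illustration. It shows why a large market buy produces a visible probability "spike":

```python
class ToyBinaryCPMM:
    """Toy constant-product market maker for a YES/NO market.

    Illustrative only: Manifold's real mechanism differs. The implied
    YES probability is the NO pool's share of the total, so draining
    YES shares from the pool pushes the displayed probability up.
    """

    def __init__(self, yes_pool, no_pool):
        self.yes = float(yes_pool)
        self.no = float(no_pool)

    def prob(self):
        # Implied probability that the market resolves YES.
        return self.no / (self.yes + self.no)

    def buy_yes(self, amount):
        # Add the spent amount to both pools, then withdraw enough
        # YES shares to restore the invariant yes * no == k.
        k = self.yes * self.no
        self.yes += amount
        self.no += amount
        shares_out = self.yes - k / self.no
        self.yes -= shares_out
        return shares_out

# A large market order moves the displayed price sharply: the "spike".
m = ToyBinaryCPMM(yes_pool=100, no_pool=100)
before = m.prob()   # 0.5
m.buy_yes(100)
after = m.prob()    # 0.8 in this toy pool
```

By contrast, a trader who posts a large resting limit order just above the current price absorbs incoming sell pressure at that fixed price without printing a visible jump, which is the quieter strategy the post recommends for insiders.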
https://astralcodexten.substack.com/p/announcing-forecasting-impact-mini I still dream of running an ACX Grants round using impact certificates, but I want to run a lower-stakes test of the technology first. In conjunction with the Manifold Markets team, we're announcing the Forecasting Impact Mini-Grants, a $20,000 grants round for forecasting projects. As a refresher, here's a short explainer about what impact certificates are, and here's a longer article on various implementation details.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum. TLDR I analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets. The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64). Metaculus, on average, had a much higher number of forecasters. All code used for this analysis can be found here. Conflict of interest note: I am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about. Introduction Everyone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most? There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites, which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets. Methods Manifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. 
There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically. I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms. I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier score to score forecasts over time. That means forecasts were evaluated using the following score: S(p, y) = ∫_{t₀}^{T} (p_t − y)² dt, with resolution y and forecast p_t at time t. I also did the same for log scores, but will focus on Brier scores for simplicity. I tested for a statistically significant tendency towards higher/lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.) I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is: (1) draw 64 questions with replacement from all questions; (2) compute an overall Brier score for Metaculus and one for Manifold; (3) take the difference between the two; (4) repeat 100k times. Results The time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold. The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant). 
Here is a plot with the observed differences in time-averaged Brier scores for every qu...
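The scoring and bootstrap steps described in the Methods section can be sketched in a few lines of Python. This is an illustrative reimplementation, not the author's actual code (which is linked in the post); the function names, the step-function treatment of forecasts, and the normalisation by the window length (matching the "time-averaged" description) are my assumptions:

```python
import numpy as np

def time_averaged_brier(update_times, probs, resolution, t0, T):
    """Time-averaged Brier score: integral of (p_t - y)^2 over [t0, T],
    treating the forecast as a step function that holds each probability
    until the next update, divided by the window length (T - t0)."""
    edges = np.clip(np.append(np.asarray(update_times, float), T), t0, T)
    durations = np.diff(edges)          # how long each forecast was live
    sq_err = (np.asarray(probs, float) - resolution) ** 2
    return float(np.sum(durations * sq_err) / (T - t0))

def bootstrap_mean_diff(scores_a, scores_b, n_boot=100_000, seed=0):
    """Resample question indices with replacement and return the
    distribution of differences in mean score (a minus b)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, float)
    b = np.asarray(scores_b, float)
    idx = rng.integers(0, len(a), size=(n_boot, len(a)))
    return a[idx].mean(axis=1) - b[idx].mean(axis=1)

# Forecast of 0.5 for the first half of the window and 1.0 for the
# second half, on a question that resolves YES (y = 1):
# score = 0.5 * (0.5 - 1)^2 + 0.5 * 0 = 0.125.
score = time_averaged_brier([0.0, 0.5], [0.5, 1.0], 1, 0.0, 1.0)
```

A bootstrap test like the one in the post then checks what fraction of the resampled mean differences falls above (or below) zero.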
Why is everyone so excited about prediction markets? Stephen Grugett, founder of Manifold Markets, joins the show to discuss all things prediction markets. We talk about why prediction markets are useful, the strange ways in which they might be used in the future, whether or not they're actually as accurate as they claim to be, the ethical considerations in putting everything into a market, and much more. Check out Manifold Markets - https://manifold.markets/ To make sure you hear every episode, join our Patreon at https://www.patreon.com/neoliberalpodcast. Patrons get access to exclusive bonus episodes, our sticker-of-the-month club, and our insider Slack. Become a supporter today! Got questions for the Neoliberal Podcast? Send them to mailbag@cnliberalism.org Follow us at: https://twitter.com/ne0liberal https://www.twitch.tv/neoliberalproject https://cnliberalism.org/ Join a local chapter at https://cnliberalism.org/become-a-member/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum. A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates. To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it. If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling. If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will Manifold Markets/Metaculus have built-in support for reflective latent variables by 2025?, published by tailcalled on December 10, 2022 on LessWrong. Prediction markets and similar systems are currently nice for soliciting predictions for outcomes where there is a clear, unambiguous objective resolution criterion. However, many phenomena in the real world are hard to directly observe, but tend to have multiple indirect indicators. A familiar example might be aging/senescence, where you have indirect indicators like muscle weakness, gray hair, etc. that someone is aging, but you do not have a directly observable Essence Of Aging. There exists a type of math which can be used to statistically model such variables, called reflective latent variables. There are a number of specific implementations for specific contexts (factor analysis, latent class models, item response theory), but they are all mostly based on the notion of having several indicator variables which are supposed to be independent conditional on the latent variable. Essentially, a prediction market could implement this by allowing people to create questions with multiple resolution criteria, and allowing people to make correlated predictions over those resolution criteria. Then people could be scored based on their overall accuracy across these resolution criteria. If sufficiently many correlated predictions have been made, people might not even need to have specific opinions on the resolution criteria, but might just be able to bet on the probabilities of the abstract latent variables, and have the market infer what the corresponding bets on the resolution criteria would look like. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
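The proposal above can be made concrete with a minimal two-class latent variable sketch in Python. Here indicators are assumed conditionally independent given the latent state, so a probability on the latent variable plus conditional probabilities pins down an implied price for each resolution criterion, and resolved indicators update the latent probability by Bayes' rule. All names and numbers are invented for illustration; neither Manifold nor Metaculus offers anything like this today:

```python
def implied_indicator_prob(p_latent, p_given_yes, p_given_no):
    """Implied marginal price of one binary resolution criterion under
    a two-class latent variable (law of total probability)."""
    return p_latent * p_given_yes + (1 - p_latent) * p_given_no

def latent_posterior(p_latent, likelihood_yes, likelihood_no):
    """Bayes update on the latent state after some indicators resolve.

    likelihood_yes / likelihood_no: per-indicator probabilities of the
    observed outcomes conditional on the latent state being true/false,
    assuming conditional independence across indicators."""
    p_yes, p_no = p_latent, 1 - p_latent
    for ly, ln in zip(likelihood_yes, likelihood_no):
        p_yes *= ly
        p_no *= ln
    return p_yes / (p_yes + p_no)

# Latent question "is this person aging rapidly?" at 50%, with two
# noisy indicators (gray hair, muscle weakness), each 80% likely if
# the latent state is true and 20% likely if it is false.
price = implied_indicator_prob(0.5, 0.8, 0.2)             # 0.5
updated = latent_posterior(0.5, [0.8, 0.8], [0.2, 0.2])   # rises above 0.9
```

This is the sense in which bets on the abstract latent variable could be mechanically translated into bets on the individual resolution criteria.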
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxVirtual: A virtual venue, timings, and other updates, published by Alex Berezhnoi on October 13, 2022 on The Effective Altruism Forum. EAGxVirtual is fast approaching. This post covers updates from the team, including demographics data, dates and times, content, venue, and unique features. Transcending Boundaries We have already received more than 600 applications from people representing over 60 countries, making our conference one of the most geographically diverse EA events ever. For many of them, it would be their first conference. If you are a highly-engaged EA, you can make a difference by being responsive to requests from first-time attendees. The map below shows the geographical distribution of the participants: Still, we would love to see more applications. If you know someone who you think should attend the conference, please encourage them to apply by sending them this link! The deadline for applications is 8:00 am UTC on Wednesday, 19 October. Dates and times The conference will be taking place from 5 pm UTC on Friday, October 21st, until 11:59 pm UTC on Sunday, October 23rd. Friday will feature group meetups and an opening session. On Saturday and Sunday, the sessions will start at 8 am UTC. We try to make the keynote sessions accessible to people from different time zones but the recordings will be available if you cannot make it. There will be a break in the program on Sunday between 3 am and 8 am UTC. Content: what to expect We are working hard on the program. 
Here are the types of content you might expect, beyond the usual talks and workshops:
- Career stories sessions
- Office Hours hosted by EA orgs
- Q&As and fireside chats
- Group meetups and icebreakers
- Lightning talks from the attendees
- Participant-driven meetups on Gather.Town
We have confirmed speakers from Charity Entrepreneurship, GFI Asia, Manifold Markets, Spark Wave, CEA, GovAI, HLI, and other organizations. Some exciting confirmed speakers: Spencer Greenberg, Seth Baum, Varun Deshpande, Ben Garfinkel, David Manheim, and others! The tentative schedule will be available on the Swapcard app at the end of the week, but it is subject to slight changes in the leadup to the conference. Virtual venue Our main content and networking platform for the conference is Swapcard. We will share access to the app with all the attendees a week before the conference and provide guidance on how to use it and get the most out of the conference. We are also collaborating with EA Gather.Town to make an always-available virtual space for the attendees to spark more connections and unstructured discussions throughout the conference. There will be spots for private meetings and rooms you can book for group meetups: just like a real conference venue! There will be sessions led by EA Virtual Reality as well! Gather.Town and EA VR are optional but are exciting opportunities for those who want to experiment with formats beyond the usual live streams and calls. Call for volunteers We think volunteering for such events can be a very fulfilling experience, and organizers depend on motivated people like you to support us and make the best out of this event. We are currently looking for volunteers to help in a wide range of positions, including chat management, moderators, emcees, and more. If you are attending the conference, please consider becoming a volunteer. We are very excited about the event and hope to see you there! 
EAGxVirtual Team: Alex, Jordan, Dion, Amine, Marka, and Ollie Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $13,000 of prizes for changing our minds about who to fund (Clearer Thinking Regrants Forecasting Tournament), published by spencerg on September 20, 2022 on The Effective Altruism Forum. We have $13,000 of prizes you can win for our Clearer Thinking Regrants Forecasting Tournament on Manifold Markets! You can win money by: (1) providing us with arguments or evidence that changes our minds about which of the 28 finalist projects to fund or how much to fund each project (2) being one of the 20 most accurate forecasters at predicting which projects we end up selecting for funding in the Clearer Thinking Regrants program You can learn more about the tournament (including all terms and conditions) here: And you can begin forecasting here: Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many EA Billionaires five years from now?, published by Erich Grunewald on August 20, 2022 on The Effective Altruism Forum. Dwarkesh Patel argues that "there will be many more effective altruist billionaires". He gives three reasons for thinking so: People who seek glory will be drawn to ambitious and prestigious effective altruist projects. One such project is making a ton of money in order to donate it to effective causes. Effective altruist wealth creation is a kind of default choice for "young, risk-neutral, ambitious, pro-social tech nerds", i.e. people who are likelier than usual to become very wealthy. Effective altruists are more risk-tolerant by default, since you don't get diminishing returns on larger donations the same way you do on increased personal consumption. These early-stage businesses will be able to recruit talented effective altruists, who will be unusually aligned with the business's objectives. That's because if the business is successful, even if you as an employee don't cash out personally, you're still having an impact (either because the business's profits are channelled to good causes, as with FTX, or because the business's mission is itself good, as with Wave). The post itself is kind of fuzzy on what "many" means or which time period it's concerned with, but in a follow-up comment Patel mentions having made an even-odds bet to the effect that there'll be ≥10 new effective altruist billionaires in the next five years. He also created a Manifold Markets question which puts the probability at 38% as I write this. (A similar question on whether there'll be ≥1 new, non-crypto, non-inheritance effective altruist billionaire in 2031 is currently at 79% which seems noticeably more pessimistic.) I commend Patel for putting his money where his mouth is! 
Summary With (I believe) moderate assumptions and a simple model, I predict 3.5 new effective altruist billionaires in 2027. With more optimistic assumptions, I predict 6.0 new billionaires. ≥10 new effective altruist billionaires in the next five years seems improbable. I present these results and the assumptions that produced them and then speculate haphazardly. Assumptions If we want to predict how many effective altruist billionaires there will be in 2027, we should attend to base rates. As far as I know, there are five or six effective altruist billionaires right now, depending on how you count. They are Jaan Tallinn (Skype), Dustin Moskovitz (Facebook), Sam Bankman-Fried (FTX), Gary Wang (FTX) and one unknown person earning to give. We could also count Cari Tuna (Dustin Moskovitz's wife and cofounder of Open Philanthropy). It's possible that someone else from FTX is also an effective altruist and a billionaire. Of these, as far as I know only Sam Bankman-Fried and Gary Wang were effective altruists prior to becoming billionaires (the others never had the chance, since effective altruism wasn't a thing when they made their fortunes). William MacAskill writes: Effective altruism has done very well at raising potential funding for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that. The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total. There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. 
At least one person earning to give (and not related to FT...
Stephen and James Grugett are programmers, entrepreneurs, and cofounders of the website Manifold Markets, which hosts user-created prediction markets. They join the podcast to discuss the Salem Center/CSPI Forecasting Tournament on Manifold Markets, which launched last week. The Grugetts and Richard talk about the origins of Manifold Markets, what differentiates it from other prediction market sites, how academics have used the platform to bet on which studies replicate, and the potential for conditional markets to inform public policy debates. They end by brainstorming ideas to increase the value and prestige of Mana (M$), the platform’s currency. Listen in podcast form or watch on YouTube. Links: Manifold Markets; CSPI/Salem Tournament on Manifold Markets; Richard Hanania, “Introducing the Salem/CSPI Forecasting Tournament”; Richard Hanania, “Salem Tournament, 5 Days in”; The Economist, “How Spooks are Turning to Superforecasting in the Cosmic Bazaar”; Research.Bet; The Replication Project; Manifold Markets Statistics. Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Pastcasting: A tool for forecasting practice, published by Sage Future on August 11, 2022 on LessWrong. TL;DR: Visit pastcasting.com to forecast on already resolved questions you don't have prior knowledge about to get quick feedback on how you're doing. Motivation We want to make it easy for people to get better at forecasting. Existing tools are not well optimized for practicing relevant skills. Forecasting platforms and betting markets have slow feedback loops between predictions and question resolution. The questions with shorter time horizons often feel less important and may be systematically different than those with longer ones. Their scoring systems also incentivize constantly keeping predictions up to date and often heavily reward being the first to react to news. Calibration training isolates the skill of intuiting probabilities and confidence intervals but doesn't help with other aspects of forecasting (choosing reference classes & base rates, trend extrapolation, coming up with considerations, investigating different views, and determining the trustworthiness of news sources). Additionally, there is usually no reference point to compare your accuracy against. Pastcasting With pastcasting, you can: Forecast on already resolved questions from a vantage point further in the past Use our filtered search engine (“Vantage Search”) to look up relevant information without accidentally revealing the answer Receive immediate feedback on your forecasts and get scored against the crowd Host friendly multiplayer competitions where you and your friends can simultaneously pastcast on the same questions and see who does best How it works Question sources Our questions are currently pulled from resolved Metaculus and GJOpen questions. 
Scoring We use relative log scoring against the original crowd forecast at that time. This means that you will receive zero points if you submit the same value as the crowd. The scoring rule is also strictly proper, meaning that your expected score is maximized if you report your true beliefs. Vantage Search Our preliminary results (in yellow) come from a search API with a restricted date range up to the vantage date. To further reduce information leakage (from the website changing its contents), we then pass the results through the Internet Archive API (in blue). This tends to be much slower, so some pastcasters opt to use the preliminary results directly. Prior Knowledge In many cases, users will have specific knowledge of how a particular question was resolved (from reading about it in the news, participating in forecasting that question, etc.). We provide a button to skip these questions, and over time we will show questions that many users know the answer to less often. Give it a try! If this sounds useful, give it a try and give us feedback on what features would be the most useful and any aspects of the site you find confusing. We may try to coordinate multiplayer sessions in our Discord (tentatively Saturday 8/13/22 17:00 UTC). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
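The relative log scoring rule described in the Pastcasting post above is easy to state precisely. Here is a minimal sketch for a binary question; the function name and the binary-question simplification are mine, not Sage Future's actual implementation:

```python
import math

def relative_log_score(p_user, p_crowd, outcome):
    """Relative log score of a binary forecast, measured against the crowd.

    p_user, p_crowd: probabilities assigned to YES by you and by the crowd.
    outcome: True if the question resolved YES.
    Matching the crowd scores exactly zero; beating it scores positive.
    """
    p_u = p_user if outcome else 1.0 - p_user
    p_c = p_crowd if outcome else 1.0 - p_crowd
    return math.log(p_u) - math.log(p_c)

print(relative_log_score(0.7, 0.7, True))   # 0.0: same value as the crowd
print(relative_log_score(0.9, 0.7, True))   # positive: more confident, and right
print(relative_log_score(0.9, 0.7, False))  # negative: more confident, and wrong
```

Because the crowd term is a constant from the forecaster's point of view, subtracting it preserves the strict propriety of the plain log score: your expected score is still maximized by reporting your true probability.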
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Fund June 2022 Update, published by Nick Beckstead on July 1, 2022 on The Effective Altruism Forum. Summary Background The FTX Foundation's Future Fund publicly launched in late February. We're a philanthropic fund that makes grants and investments to improve humanity's long-term prospects. For information about some of the areas we've been funding, see our Areas of Interest page. This is our first public update on the Future Fund's grantmaking. The purpose of this post is to give an update on what we've done and what we're learning about the funding models we're testing. (It does not cover a range of other FTX Foundation activities.) We've also published a new grants page and regrants page with our public grants so far. Our focus on testing funding models We are trying to learn as much as we can about how to deploy funding at scale to improve humanity's long-term prospects. Our primary objective for 2022 is to perform bold and decisive tests of new funding models. The main funding models we have tested so far are our regranting program and our open call for applications. In brief, these models worked as follows: The basic idea of regranting was, "There are a lot of people who share our values and might know of great people or projects we could support that we wouldn't know about by default. Let's make it rewarding, simple, and fast for them to make grants. We'll give them budgets of $100k to a few million to work with, and we'll presumptively approve their recommendations (after screening for various risks/issues)." The basic idea of the open call was, "Let's tell people what we're trying to do, what kinds of things we might be interested in funding, give them a lot of examples of projects they could launch, have an easy and fast application process, and then get the word out with a Twitter blitz." 
We wrote some about the review process here. Our staff also made grants and investments that were not part of these programs (hereafter "staff-led grantmaking"). Grantmaking by funding model So far we have made 262 grants and investments, totaling ~$132M. These break down as follows: Regranting: We have onboarded >100 regrantors (with discretionary budgets) and >50 grant recommenders (without discretionary budgets). We set aside >$100M for them to use over the course of our 6-month experiment (April-October 2022). So far, regrantors have made 168 grants and investments, totaling ~$31M. Open call: We received over 1700 applications and funded 69 (4%) of them, totaling ~$26M. (The acceptance rate for proposals focused squarely on our top priorities was much higher.) Staff-led grantmaking: Separate from these programs, we have made 25 grants and investments otherwise sourced by our staff, totaling ~$73M. There are also ~$25M of grants we are likely to make soon, but some relevant aspects are TBD. Some example grants and investments Below are some grants and investments that we find interesting and/or representative of what we are trying to fund. Regranting $1M investment in Manifold Markets to build a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets. $490k for ML Safety Scholars Program to fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in AI safety. We have funded >30 talent development and career transition grants that range from $1,450 to $175,000 depending on the duration and seniority level of the individual. Some examples include: $42,600 to Andi Peng to support salary and compute for research on AI alignment. $175,000 to Braden Leach to support a recent law school graduate to work on biosecurity, researching and writing at the Johns Hopkins Center for Health Security. 
$37,500 to Thomas Kwa to support researc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: View and Bet in Manifold prediction markets on EA Forum, published by Sinclair Chen on May 24, 2022 on The Effective Altruism Forum. Links to prediction markets on Manifold Markets now show a preview of the market on hover. You can also embed markets directly in posts: This lets you bet on yes/no questions without leaving EA forum! You do need to be logged into Manifold, otherwise when you try to bet it will first bring you to a screen to log in via your Google account. How to embed a market Just go to the market (or create one), copy the link, and paste it in the editor. It will automatically turn into an embed like the one above. You can get the link from: The address bar The copy link button below a market you created Or the three dot menu above the chart Much thanks to Ben West for reviewing the code for this, which was heavily based on the previous integrations for Our World in Data, Metaculus, and Elicit. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting Newsletter: April 2022, published by NunoSempere on May 10, 2022 on The Effective Altruism Forum. Highlights The US CDC creates a pandemic forecasting center Tetlock's forecasting team is recruiting for a new tournament on existential risk Manifold Markets makes their webpage open-source Index Notable news Platform by platform Correction on Polymarket incentivizing wash-trading. Shortform research Longform research You can sign up for this newsletter on Substack, or browse past newsletters here. The state of forecasting: Where next? At the high level, various new startups have been exploring different parts of the forecasting and prediction market space. Most recently, Manifold Markets allows anyone to easily and instantly create a market, which comes at the cost of these markets having to use play money. And Polymarket or Insight Prediction have set up real-money prediction markets on some topics of real interest. Our hope is that with time, a new project might stumble upon a winning combination, one that is able to produce large amounts of value by allowing people to pay for more accurate probabilities, even when these are unflattering. Putting on my Scott Alexander hat, prediction markets or forecasting systems have the potential to be a powerful asymmetric weapon (a): a sword of good that systematically favours the side which seeks the truth. A channel which is able to transmit accurate information even when the evil forces of Moloch (a) would seek to prevent this. In Linux circles, there is the meme of "the year of the Linux desktop": a magical year in which people realize that Windows adds little value, and that Linux is based. A year in which people would together switch to Linux (a) and sing John Lennon. 
Instead, Linux, much like prediction markets, remains a powerful yet obscure tool which only a few cognoscenti can wield. So although I may wax poetically about the potential of prediction markets, "the year of the prediction market" has yet to come, much like "the year of the Linux desktop". To get there, there needs to be investment in accessibility and popularization. And so I am happy to see things like the Forecasting Wiki (a), simpler apps (a), popular writers introducing prediction markets to their audiences, or platforms experimenting with simplifying some of the complexity of prediction markets, or with combining the ideas behind prediction markets in new ways and exploring the space of prediction market-like things. It might also be wise for prediction markets to take a more confrontational stance (a), by challenging powerful people who spout bullshit. Notable news The CDC has a new (a), $200M center (a) for pandemic forecasting. When Walensky tapped outside experts to head the new outfit, the move was widely viewed as an acknowledgment of long-standing and systemic failures regarding surveillance, data collection and preparedness that were put into high relief by the pandemic. Scientists will also look at who is infecting whom, how well vaccines protect against infection and severe illness, and how that depends on the vaccine, variants and the time since vaccination, said Marc Lipsitch, an epidemiologist and the center's science director. The center will be based in D.C. and will eventually have about 100 staff members, including some at CDC's Atlanta headquarters. It will report to Walensky. This is, broadly speaking, good. But it also seems like too little, too late. It seems suboptimal to have this center report to the CDC director, given that the CDC's leadership wasn't particularly shining during the pandemic. 
And the center is playing defence against the last black swan, whereas I would prefer to see measures which could defend against unknown unknowns, such as this one (a). I recently stumbled upon a few prediction markets previously...
Stephen Grugett is a cofounder of Manifold Markets, where anyone can create a prediction market. We discuss how prediction markets can change how countries and companies make important decisions. Manifold Markets: https://manifold.markets/ Watch on YouTube. Follow me on Twitter: https://twitter.com/dwarkesh_sp Timestamps: (0:00:00) - Introduction (0:02:29) - Predicting the future (0:05:16) - Getting Accurate Information (0:06:20) - Potentials (0:09:29) - Not using internal prediction markets (0:11:04) - Doing the painful thing (0:13:31) - Decision Making Process (0:14:52) - Grugett’s opinion about insider trading (0:16:23) - The Role of prediction market (0:18:17) - Dealing with the Speculators (0:20:33) - Criticism of Prediction Markets (0:22:24) - The world when people cared about prediction markets (0:26:10) - Grugett’s Profile Background/Experience (0:28:49) - User Result Market (0:30:17) - The most important mechanism (0:32:59) - The 1000 manifold dollars (0:40:30) - Efficient financial markets (0:46:28) - Manifold Markets Job/Career Openings (0:48:02) - Objectives of Manifold Markets. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using prediction markets to generate LessWrong posts, published by Matthew Barnett on April 1, 2022 on LessWrong. I have a secret to reveal: I am not the author of my recent LessWrong posts. I know almost nothing about any of the topics I have recently written about. Instead, for the last two months, I have been using Manifold Markets to write all my LessWrong posts. Here's my process. To begin a post, I simply create 256 conditional prediction markets about whether my post will be curated, conditional on which ASCII character comes first in the post (though I'm considering switching to Unicode). Then, I go with whatever character yields the highest conditional probability of curation, and create another 256 conditional prediction markets for the next character in the post, plus a market for whether my post will be curated if the text terminates at that character. I repeat this algorithm until my post is complete. Here's an example screenshot for a post that is currently in my drafts: Looks interesting! Even this post was generated using this method. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
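Barnett's post is an April Fools joke, but the procedure it describes is a well-defined greedy search guided by market prices. Here is a toy sketch, with a deterministic-noise stub standing in for the 256 live conditional markets; every name in it is hypothetical:

```python
import random

def market_price(draft, candidate):
    """Stub for the price of the conditional market
    'P(post gets curated | the next character is `candidate`)'.
    A real run would query 256 live Manifold markets per step;
    here we just return deterministic noise."""
    return random.Random(repr((draft, candidate))).random()

def generate_post(max_len=280):
    """Greedy, market-guided text generation as described in the post."""
    draft = ""
    for _ in range(max_len):
        # One market per candidate next character, plus one (None) for
        # terminating the post at this point.
        candidates = [chr(i) for i in range(256)] + [None]
        best = max(candidates, key=lambda c: market_price(draft, c))
        if best is None:  # stopping here maximizes the curation probability
            break
        draft += best
    return draft

print(repr(generate_post(10)))
```

Each step takes whichever character the markets say maximizes the conditional probability of curation, so the whole post is a chain of 256-way conditional bets, which is of course the joke.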
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ukraine Post #1: Prediction Markets, published by Zvi on February 28, 2022 on LessWrong. I am working on seeing how comprehensively I can cover and discuss the war, but that takes time and the speed premium is super high. I decided to start here, and write quickly. Apologies in advance for any mistakes, oversights, dumbness, and so on. While I strive to provide sufficient Covid-19 news that you need not check other sources (and I will continue to do that), I will not be making any such claims regarding Ukraine even if I make regular posts. Please do not rely on me for such news. The goal of this first post is to get our bearings with what prediction markets are available, what we might learn from them, and what markets might make them more useful. Metaculus Unfortunately, prediction markets so far have let us down when we need them most. That is because the real money markets have so far been mostly unwilling or unable to touch questions surrounding the war, so most (but not all) of the markets we have to work with are on Metaculus (or Manifold Markets but that's a mess). Metaculus has some useful questions being asked, but Metaculus works by aggregation of all predictions. Even if you ignore their other issues, when something happens, the market is simply not going to adjust quickly. This proxy market in particular seems illustrative. This is on some level a profoundly silly question. The answer is not automatically 33%. As time outside the window goes by the conditional probability goes up, as time inside the window goes by it goes down. There could easily be seasonal effects as well. If provocative actions, especially wars, are more likely to happen during better weather, chances of war might be higher during the summer. 
So I'd be inclined, absent the Ukraine situation, to put this at 40%-45%, although I have no idea what you would do with that information. Except now. Now we have the Ukraine war. If the nukes do fly, it seems like that would mostly happen before summer. Things are moving quickly. Thus, to the extent that there is a non-trivial short-term probability of nuclear war, this number should be moving. Yet it isn't, because there is insufficient incentive to get people to do things that would cause it to move. We can of course attempt to correct for such bias. If we know that Metaculus markets universally 'move too slow' then by tracking their rates of change we can get at least some amount of useful info. We can also combine that with tracking the number of predictions made over time, since new predictions are made now, but I strongly suspect a lot of people are simply copying market medians. For example, we have this. Early on this was in the 80% range, then after it became clear this would not happen quickly it fell to the 70% range and has been moving around between 65%-70%. I did quickly make predictions in a few Ukraine-related markets, and it didn't feel like I was at all being incentivized to reveal my true estimates, and did feel like I was supremely anchored to existing predictions. And now am I going to go back and continuously update? I doubt it. And I definitely don't expect most people to actually make their distributional estimates match their actual distributions that carefully due to the workload required. In theory we could hope to do better by looking at the exact distributions. We have a graph: If we had the same graph from 24 hours ago, we could take the diff between them, then discount all the people who were predicting inside the existing range (e.g. assume anyone saying ~65%-~75% had no real info to offer us) and see what that says. 
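Zvi's earlier point about the proxy market, that the conditional probability should drift mechanically as time passes, can be made concrete with a toy model in which the event's timing is uniform over the remaining horizon. This is my simplification for illustration, not a model from the post:

```python
def p_event_in_window(t_now, win_start, win_end, horizon):
    """P(event falls in [win_start, win_end] | the event occurs by `horizon`
    and has not occurred by `t_now`), assuming the event's timing is
    uniformly distributed over the time remaining. Times in months."""
    remaining = horizon - t_now
    overlap = max(0.0, min(win_end, horizon) - max(win_start, t_now))
    return overlap / remaining

# A 4-month window in the middle of a 12-month horizon starts at 1/3:
print(p_event_in_window(0, 4, 8, 12))  # 0.333...
# Two months of outside-window time passing pushes it up:
print(p_event_in_window(2, 4, 8, 12))  # 0.4
# Once inside the window, each month that burns off pulls it down:
print(p_event_in_window(7, 4, 8, 12))  # 0.2
```

Weighting for seasonal effects, such as wars being likelier in good weather, would lift the starting value above 1/3, which is consistent with Zvi's 40%-45% estimate absent the invasion.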
My presumption is that the current estimate is high from the informational perspective of working from only public info. It would be a nightmare to try and physically take Kyiv and there seems little c...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Create a prediction market in two minutes on Manifold Markets, published by Austin Chen on February 9, 2022 on LessWrong. Crossposted from TL;DR: Manifold Markets is: A new prediction market platform Where anyone can ask questions and resolve them With grants from Scott Alexander and Paul Christiano Our story We started with a crazy new twist on prediction markets. You come up with a question for traders to predict, and then decide the outcome yourself. For example, you could create a market on “Will my date with [X] go well?” Anyone can bet on it, and the bets create a forecast on the chance your date goes well. After the date is over, you get to judge the result and reward the traders who picked the correct side. There are so many ways for this mechanism to go wrong. The creator of the market can be dishonest in deciding the outcome, or you may just disagree with their resolution, with no recourse. Nevertheless, we pitched user-created prediction markets in a grant proposal to the blog Astral Codex, without having any previous background in prediction markets, and somehow we won! It's been an exciting two months of us hacking away to realize this idea, and today, we're happy to announce the official launch of Manifold Markets! We've gotten so much support from early users who are just as passionate about prediction markets as we are. You might ask: Is there a reason to create a prediction market, and not a Twitter poll? Are outcomes really that different from a simple person-by-person vote? Yes, I believe so. The magic driving prediction markets is accountability. When wagering a scarce currency, those proven right live on to make future bets, whereas those with less-savvy bets have their influence diminished. Prediction markets succeed because they reward accuracy, and that makes all the difference! 
Our goal is to make prediction markets an order of magnitude easier for you to create and share. It should be as frictionless as a Twitter poll. As part of that philosophy, we're launching with a play money currency, which we believe is just as fun and predictive. We've already built up a passionate community of predictors and market creators, including writers like Richard Hanania and James Medlock, who have predicted everything from CDC recommendations to newsletter subscriptions to fatal shark attacks. There's so much unexplored space to ask questions and get valuable forecasts. For example, with conditional markets, you can create several related markets that help you make a choice based on which has the highest likelihood of success! I'm excited to see what you all come up with. Go forth, create questions, trade on them, and predict the future! - James from Manifold, along with cofounders Stephen and Austin So how is Manifold different than Metaculus, Kalshi, PredictIt, etc.? 1. Anyone can come onboard and create a market Existing prediction markets are very centralized; a core team of moderators decide what questions to ask. Occasionally they'll solicit user suggestions, but as Scott Alexander put it: Imagine if you could only tweet by emailing Jack Dorsey and convincing him that your comment was a good thing to have on Twitter. Even if Jack had good judgment and approved most requests, this would be a long way from the limbic system ↔ Send Tweet loop that real Twitter users know and love. Manifold aims to be Twitter for prediction markets! 2. The market creator decides how the market resolves Existing markets focus hard on having objective resolution criteria. This means that the questions people pose are very narrowly focused — think of the “drunk looking under the streetlights because that's where it's brightest.” Instead, Manifold allows market creators to use their own judgement on how a market ought to be resolved. 
So a question like “Will I enjoy movin...
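The conditional-markets workflow mentioned above (one market per option, then pick the option the crowd rates most likely to go well) can be sketched in a few lines. The options and probabilities here are invented for illustration; in practice each number would come from a live market's current forecast.

```python
# Hypothetical decision via conditional markets: each market asks
# "If I choose X, will it go well?", and the decision-maker picks
# the option with the highest market probability.
# These options and numbers are made up for illustration.

conditional_forecasts = {
    "Take the new job": 0.62,      # P(goes well | take the new job)
    "Stay at current job": 0.48,
    "Go freelance": 0.35,
}

best_option = max(conditional_forecasts, key=conditional_forecasts.get)
print(best_option)  # "Take the new job"
```

One caveat worth noting with this pattern: only the market for the option actually chosen can be resolved against reality; the others are typically voided, which is part of why conditional markets are harder to run than ordinary ones.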