Podcasts about hypermind

  • 11 PODCASTS
  • 21 EPISODES
  • 36m AVG DURATION
  • ? INFREQUENT EPISODES
  • Aug 30, 2023 LATEST

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about hypermind

Latest podcast episodes about hypermind

The Nonlinear Library
EA - Language models surprised us by Ajeya

The Nonlinear Library

Play Episode Listen Later Aug 30, 2023 11:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language models surprised us, published by Ajeya on August 30, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025. Kelsey Piper co-drafted this post. Thanks also to Isabel Juniewicz for research help. If you read media coverage of ChatGPT - which called it 'breathtaking', 'dazzling', 'astounding' - you'd get the sense that large language models (LLMs) took the world completely by surprise. Is that impression accurate? Actually, yes. There are a few different ways to attempt to measure the question "Were experts surprised by the pace of LLM progress?" but they broadly point to the same answer: ML researchers, superforecasters, and most others were all surprised by the progress in large language models in 2022 and 2023. Competitions to forecast difficult ML benchmarks ML benchmarks are sets of problems which can be objectively graded, allowing relatively precise comparison across different models. We have data from forecasting competitions done in 2021 and 2022 on two of the most comprehensive and difficult ML benchmarks: the MMLU benchmark and the MATH benchmark. First, what are these benchmarks? The MMLU dataset consists of multiple choice questions in a variety of subjects collected from sources like GRE practice tests and AP tests. It was intended to test subject matter knowledge in a wide variety of professional domains. MMLU questions are legitimately quite difficult: the average person would probably struggle to solve them. At the time of its introduction in September 2020, most models only performed close to random chance on MMLU (~25%), while GPT-3 performed significantly better than chance at 44%. The benchmark was designed to be harder than any that had come before it, and the authors described their motivation as closing the gap between performance on benchmarks and "true language understanding": Natural Language Processing (NLP) models have achieved superhuman performance on a number of recently proposed benchmarks. However, these models are still well below human level performance for language understanding as a whole, suggesting a disconnect between our benchmarks and the actual capabilities of these models. Meanwhile, the MATH dataset consists of free-response questions taken from math contests aimed at the best high school math students in the country. Most college-educated adults would get well under half of these problems right (the authors used computer science undergraduates as human subjects, and their performance ranged from 40% to 90%). At the time of its introduction in January 2021, the best model achieved only about ~7% accuracy on MATH. The authors say: We find that accuracy remains low even for the best models. Furthermore, unlike for most other text-based datasets, we find that accuracy is increasing very slowly with model size. If trends continue, then we will need algorithmic improvements, rather than just scale, to make substantial progress on MATH. 
So, these are both hard benchmarks - the problems are difficult for humans, the best models got low performance when the benchmarks were introduced, and the authors seemed to imply it would take a while for performance to get really good. In mid-2021, ML professor Jacob Steinhardt ran a contest with superforecasters at Hypermind to predict progress on MATH and MMLU. Superforecasters massively undershot reality in both cases. They predicted that performance on MMLU would improve moderately from 44% in 2021 to 57% by June 2022. The actual performance was 68%, which s...
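
To put the forecast miss described above in rough numbers, here is a minimal Python sketch using only the figures quoted in this description (GPT-3's 44% at MMLU's introduction, the 57% superforecaster prediction for June 2022, and the 68% actual result); the comparison is illustrative and not part of the original post.

```python
# Illustrative comparison of forecast vs. outcome for MMLU, using the
# figures quoted in the episode description above (not official data).
baseline_2020 = 0.44       # GPT-3 at MMLU's introduction
forecast_jun_2022 = 0.57   # superforecaster prediction for June 2022
actual_jun_2022 = 0.68     # reported state of the art in June 2022

predicted_gain = forecast_jun_2022 - baseline_2020
actual_gain = actual_jun_2022 - baseline_2020

print(f"Predicted gain: {predicted_gain:.0%}, actual gain: {actual_gain:.0%}")
print(f"Forecast captured {predicted_gain / actual_gain:.0%} of the realized progress")
```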

The Nonlinear Library
LW - AI Forecasting: Two Years In by jsteinhardt

The Nonlinear Library

Play Episode Listen Later Aug 20, 2023 19:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting: Two Years In, published by jsteinhardt on August 20, 2023 on LessWrong. Two years ago, I commissioned forecasts for state-of-the-art performance on several popular ML benchmarks. Forecasters were asked to predict state-of-the-art performance on June 30th of 2022, 2023, 2024, and 2025. While there were four benchmarks total, the two most notable were MATH (a dataset of free-response math contest problems) and MMLU (a dataset of multiple-choice exams from the high school to post-graduate level). One year ago, I evaluated the first set of forecasts. Forecasters did poorly and underestimated progress, with the true performance lying in the far right tail of their predicted distributions. Anecdotally, experts I talked to (including myself) also underestimated progress. As a result of this, I decided to join the fray and registered my own forecasts for MATH and MMLU last July. June 30, 2023 has now passed, so we can resolve the forecasts and evaluate my own performance as well as that of other forecasters, including both AI experts and generalist "superforecasters". I'll evaluate the original forecasters that I commissioned through Hypermind, the crowd forecasting platform Metaculus, and participants in the XPT forecasting competition organized by Karger et al. (2023), which was stratified into AI experts and superforecasters. Overall, here is how I would summarize the results: Metaculus and I did the best and were both well-calibrated, with the Metaculus crowd forecast doing slightly better than me. The AI experts from Karger et al. did the next best. They had similar medians to me but were (probably) overconfident in the tails. The superforecasters from Karger et al. did the next best. They (probably) systematically underpredicted progress. The forecasters from Hypermind did the worst. They underpredicted progress significantly on MMLU. Interestingly, this is a reverse of my impressions from last year, where even though forecasters underpredicted progress, I thought of experts as underpredicting progress even more. In this case, it seems the experts did pretty well and better than generalist forecasters. What accounts for the difference? Some may be selection effects (experts who try to register forecasts are more likely to be correct). But I'd guess some is also effort: the expert "forecasts" I had in mind last year were from informal hallway conversations, while this year they were formal quantitative predictions with some (small) monetary incentive to be correct. In general, I think we should trust expert predictions more in this setting (relative to their informal statements), and I'm now somewhat more optimistic that experts can give accurate forecasts given a bit of training and the correct incentives. In the rest of the post, I'll first dive into everyone's forecasts and evaluate each in turn. Then, I'll consider my own forecast in detail, evaluating not just the final answer but the reasoning I used (which was preregistered and can be found here). My forecasts, and others As a reminder, forecasts are specified as probability distributions over some (hopefully unambiguously) resolvable future outcome. In this case the outcome was the highest credibly claimed benchmark accuracy by any ML system on the MATH and MMLU benchmarks as of June 30, 2023. 
My forecasts from July 17, 2022 are displayed below as probability density functions, as well as cumulative distribution functions and the actual result: MATH (result: 69.6%, Lightman et al., 2023) and MMLU (result: 86.4%, GPT-4). Orange is my own forecast, while green is the crowd forecast of Metaculus on the same date. For MATH, the true result was at my 41st percentile, while for MMLU it was at my 66th percentile. I slightly overestimated progress on MATH and underestimated MMLU, but both were within my range of e...
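
The description above reports where each realized score landed in the author's forecast distribution (the 41st and 66th percentiles). Below is a minimal sketch of that kind of percentile check, assuming a normal forecast distribution with made-up parameters rather than the post's actual hand-specified distributions.

```python
from statistics import NormalDist

# Hypothetical forecast: a normal distribution over June 2023 MMLU accuracy.
# The mean and stddev here are invented for illustration; the post used
# richer, hand-specified distributions, not a simple normal.
forecast = NormalDist(mu=0.82, sigma=0.06)
actual = 0.864  # MMLU result quoted in the description (GPT-4)

percentile = forecast.cdf(actual)
print(f"Realized outcome sits at the {percentile:.0%} point of the forecast CDF")
```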

Demystifying Science
Grand Unified Theory of Consciousness - Dr. Ogi Ogas, Harvard University

Demystifying Science

Play Episode Listen Later May 25, 2023 157:12


Dr. Ogi Ogas is a theorist, author, and visiting scholar at the Harvard Graduate School of Education, where he serves as Project Head for the Individual Mastery Project. Ogas boasts of being in possession of the end theory of consciousness, and he makes a rock-solid case for it. You be the judge. Our conversation begins with an introduction to the problems of consciousness and walks through a topological framework for the emergence of increased complexity in biological awareness, culminating in rare cases of self-consciousness, perhaps best exemplified by our own species. We then consider the future evolution of consciousness as humans reach outward for the stars. Tell us what you think! Support both the podcast and Ogi when you buy his books here: https://amzn.to/438xf88 Ogi Ogas: https://www.ogiogas.com/ Support the scientific revolution by joining our Patreon: https://bit.ly/3lcAasB Tell us what you think in the comments or on our Discord: https://discord.gg/MJzKT8CQub (00:00:00) Go! (00:04:17) An exploration of consciousness (00:10:06) Who is Ogi Ogas? (00:16:07) Two Kinds of Dynamics (00:19:25) The Theory (00:23:10) Collective Activity (00:25:01) Three Laws of Consciousness (00:36:13) Types of Consciousness (00:42:03) Four Stages of Thinking (00:55:40) The Road to Hyperminds (01:03:08) Point Minds to Superminds (01:10:19) The Birth of Civilization (01:15:45) Representation of External Concepts (01:24:44) Consciousness is not the end game (01:34:27) The Metropolis Principle (01:42:04) Tyranny of the Hypermind (01:54:14) A question bigger than why (01:57:46) Death is the end of Dynamics (02:03:04) Extraterrestrials (02:13:47) A Mind for Time (02:20:04) Tragedy of the Schism #consciousness #science #mind Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities. - Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671

The Wealth Witch Podcast
S3E28 - The Yin and Yang of Your Business.

The Wealth Witch Podcast

Play Episode Listen Later Feb 27, 2023 8:16


In the first of the Hypermind series, Leah answers your questions about your business. How does Yin and Yang relate to your business? Does your business have the balance needed to remain in the flow that keeps it successful? WANT IN OR WANT TO KNOW MORE ABOUT LEAH'S NEW OFFERING - The Moral Hedonist? Click here: https://theleahsteele.kartra.com/page/moralhedonist ***Want MORE House of Wealth? Join Leah's exclusive community, The House of Wealth Collective. It is the exclusive home of Leah's highly coveted monthly wealth forecasts, and includes weekly wealth clearing, a monthly wealth activation with Leah, The Weekly Wealth Drip - content to expand your wealth consciousness, weekly journaling/reflection prompts, wealth expansion tips, first access to Leah's programs, exclusive offers, loyalty discounts, and MORE! Click here to get access NOW! https://theleahsteele.kartra.com/page/HOWC Make sure you have joined The Revolutionary Wealth Facebook group to remain connected with Leah. This group is where she shares all her free forward-facing content! Want to know how to get further connected to all of the things in LeahLand? Find Leah on Social Media: Instagram: www.instagram.com/theleahsteele Facebook: https://www.facebook.com/leahsteeleofficial YouTube: www.youtube.com/leahsteele Get your daily dose of Leah - REAL, RAW & UNCENSORED - by joining her FREE Telegram Channel HERE! For more information on Leah and her current offerings, visit her website: www.theleahsteele.com See omnystudio.com/listener for privacy information.

Can I get that software in blue?
Episode 19 | Dr. Emile Servan-Schreiber, MD @ Hypermind | Crowd-Forecasting/Collective Intelligence

Can I get that software in blue?

Play Episode Listen Later Nov 7, 2022 84:15


Episode #19 of "Can I get that software in blue?", a podcast by and for people engaged in technology sales. If you are in the technology presales, solution architecture, sales, support or professional services career paths then this show is for you! Your hosts Steve Mayzak and Chad Tindel are joined by Dr. Emile Servan-Schreiber, Managing Director at Hypermind where he has pioneered many business applications of prediction markets and collective intelligence. He holds degrees in Computer Science, Applied Mathematics and a Ph.D. in Cognitive Psychology from Carnegie Mellon. Emile goes very deep into his research in the science of Collective Intelligence and how people can become smarter by borrowing the brains of others in their networks and describes the platforms his company Hypermind has built to help companies and governments use prediction markets to solve problems facing their organization. Contact us on Twitter or LinkedIn to suggest companies or tech news articles worthy of the podcast! Our website: https://softwareinblue.com Twitter: https://twitter.com/softwareinblue LinkedIn: https://www.linkedin.com/showcase/softwareinblue Make sure to subscribe or follow us to get notified about our upcoming episodes: Youtube: https://www.youtube.com/channel/UC8qfPUKO_rPmtvuB4nV87rg Apple Podcasts: https://podcasts.apple.com/us/podcast/can-i-get-that-software-in-blue/id1561899125 Spotify: https://open.spotify.com/show/25r9ckggqIv6rGU8ca0WP2 Stitcher: https://www.stitcher.com/podcast/can-i-get-that-software-in-blue Links mentioned in the episode: Emile's Company, Hypermind: https://www.hypermind.com Hypermind Prediction Contests: https://predict.hypermind.com/ Google CEO uses crowd intelligence: https://www.cnbc.com/2022/07/31/google-ceo-to-employees-productivity-and-focus-must-improve.html

The Nonlinear Library
LW - Forecasting ML Benchmarks in 2023 by jsteinhardt

The Nonlinear Library

Play Episode Listen Later Jul 19, 2022 23:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting ML Benchmarks in 2023, published by jsteinhardt on July 18, 2022 on LessWrong. Thanks to Collin Burns, Ruiqi Zhong, Cassidy Laidlaw, Jean-Stanislas Denain, and Erik Jones, who generated most of the considerations discussed in this post. Previously, I evaluated the accuracy of forecasts about performance on the MATH and MMLU (Massive Multitask) datasets. I argued that most people, including myself, significantly underestimated the rate of progress, and encouraged ML researchers to make forecasts for the next year in order to become more calibrated. In that spirit, I'll offer my own forecasts for state-of-the-art performance on MATH and MMLU. Following the corresponding Metaculus questions, I'll forecast accuracy as of June 30, 2023. My forecasts are based on a one-hour exercise I performed with my research group, where we brainstormed considerations, looked up relevant information, formed initial forecasts, discussed, and then made updated forecasts. It was fairly easy to devote one group meeting to this, and I'd encourage other research groups to do the same. Below, I'll describe my reasoning for the MATH and MMLU forecasts in turn. I'll review relevant background info, describe the key considerations we brainstormed, analyze those considerations, and then give my bottom-line forecast. MATH Background Metaculus does a good job of describing the MATH dataset and corresponding forecasting question: The MATH dataset is a dataset of challenging high school mathematics problems constructed by Hendrycks et al. (2021). Hypermind forecasters were commissioned to predict state-of-the-art performance on June 30, 2022, '23, '24, and '25. The 2022 result of 50.3% was significantly outside forecasters' prediction intervals, so we're seeing what the updated forecasts are for 2023, '24, and '25. What will be state-of-the-art performance on the MATH dataset in the following years? These questions should resolve identically to the Hypermind forecasts: "These questions resolve as the highest performance achieved on MATH by June 30 in the following years by an eligible model. Eligible models may use scratch space before outputting an answer (if desired) and may be trained in any way that does not use the test set (few-shot, fine tuned, etc.). The model need not be publicly released, as long as the resulting performance itself is reported in a published paper (on arxiv or a major ML conference) or through an official communication channel of an industry lab (e.g. claimed in a research blog post on the OpenAI blog, or a press release). In case of ambiguity, the question will resolve according to Jacob Steinhardt's expert judgement." It's perhaps a bit sketchy for me to be both making and resolving the forecast, but I expect in most cases the answer will be unambiguous. Key Considerations Below I list key considerations generated during our brainstorming: Why did Minerva do well on MATH? Is it easy to scale up those methods? Is there other low-hanging fruit? What kinds of errors is Minerva making? Do they seem easy or hard to fix? Minerva was trained on arXiv and other sources of technical writing. How much additional such data could be generated? Are there other methods that could lead to improvement on mathematical reasoning?
Possibilities: self-supervised learning, verifiers, data retrieval Base rates: What has been the historical rate of progress on MATH? Base rates: How does progress typically occur on machine learning datasets (especially NLP datasets)? If there is a sudden large improvement, does that typically continue, or level off? How much will people work on improving MATH performance? Analyzing Key Considerations Why did Minerva do well? How much low-hanging fruit is there? Minerva incorporated several changes that improved performance relative to previou...
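
One of the brainstormed considerations above is the historical rate of progress on MATH. Below is a toy base-rate extrapolation in log-odds space, using only the 6.9% (2021) and 50.3% (2022) accuracies quoted on this page; this is not the post's methodology, just an illustration of naive trend-following.

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# MATH accuracy quoted on this page: ~6.9% in mid-2021, 50.3% by June 2022.
# Toy extrapolation: assume one more year of the same jump in log-odds.
# This ignores the leveling-off the post discusses, so it likely overshoots.
acc_2021, acc_2022 = 0.069, 0.503
yearly_jump = logit(acc_2022) - logit(acc_2021)
acc_2023_naive = sigmoid(logit(acc_2022) + yearly_jump)
print(f"Naive log-odds extrapolation for June 2023: {acc_2023_naive:.1%}")
```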

The Nonlinear Library
LW - AI Forecasting: One Year In by jsteinhardt

The Nonlinear Library

Play Episode Listen Later Jul 4, 2022 10:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting: One Year In, published by jsteinhardt on July 4, 2022 on LessWrong. Last August, my research group created a forecasting contest to predict AI progress on four benchmarks. Forecasters were asked to predict state-of-the-art performance (SOTA) on each benchmark for June 30th 2022, 2023, 2024, and 2025. It's now past June 30th, so we can evaluate the performance of the forecasters so far. Forecasters were asked to provide probability distributions, so we can evaluate both their point estimates and their coverage (whether the true result was within their credible intervals). I'll dive into the data in detail below, but my high-level takeaways were that: Forecasters' predictions were not very good in general: two out of four forecasts were outside the 90% credible intervals. However, they were better than my personal predictions, and I suspect better than the median prediction of ML researchers (if the latter had been preregistered). Specifically, progress on ML benchmarks happened significantly faster than forecasters expected. But forecasters predicted faster progress than I did personally, and my sense is that I expect somewhat faster progress than the median ML researcher does. Progress on a robustness benchmark was slower than expected, and was the only benchmark to fall short of forecaster predictions. This is somewhat worrying, as it suggests that machine learning capabilities are progressing quickly, while safety properties are progressing slowly. Below I'll review the tasks and competition format, then go through the results. Forecasting Tasks and Overall Predictions As a reminder, the four benchmarks were: MATH, a mathematics problem-solving dataset; MMLU, a test of specialized subject knowledge using high school, college, and professional multiple choice exams; Something Something v2, a video recognition dataset; and CIFAR-10 robust accuracy, a measure of adversarially robust vision performance. Forecasters were asked to predict performance on each of these. Each forecasting question had a $5000 prize pool (distributed across the four years). There were also two questions about compute usage by different countries and organizations, but I'll ignore those here. Forecasters themselves were recruited with the platform Hypermind. You can read more details in the initial blog post from last August, but in brief, professional forecasters make money by providing accurate probabilistic forecasts about future events, and are typically paid according to a proper scoring rule that incentivizes calibration. They apply a wide range of techniques such as base rates, reference classes, trend extrapolation, examining and aggregating different expert views, thinking about possible surprises, etc. (see my class notes for more details). Here is what the forecasters' point estimates were for each of the four questions (based on Hypermind's dashboard): Expert performance is approximated as 90%. The 2021 datapoint represents the SOTA in August 2021, when the predictions were made. For June 2022, forecasters predicted 12.7% on MATH, 57.1% on MMLU (the multiple-choice dataset), 70.4% on adversarial CIFAR-10, and 73.0% on Something Something v2.
At the time, I described being surprised by the 2025 prediction for the MATH dataset, which predicted over 50% performance, especially given that 2021 accuracy was only 6.9% and most humans would be below 50%. Here are the actual results, as of today: MATH: 50.3% (vs. 12.7% predicted) MMLU: 67.5% (vs. 57.1% predicted) Adversarial CIFAR-10: 66.6% (vs. 70.4% predicted) Something Something v2: 75.3% (vs. 73.0% predicted) MATH and MMLU progressed much faster than predicted. Something Something v2 progressed somewhat faster than predicted. In contrast, Adversarial CIFAR-10 progressed somewhat slower than predicted. Overall, progress on...
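
A quick way to restate the prediction-versus-outcome gaps listed above: the sketch below compares the quoted point estimates with the quoted June 2022 results. The post itself scored full probability distributions and credible intervals, which this does not attempt.

```python
# Point-estimate errors for the four benchmarks, using the predicted and
# actual June 2022 numbers quoted in the episode description above.
# The post evaluated full distributions; this only compares point estimates.
results = {
    "MATH": (0.127, 0.503),
    "MMLU": (0.571, 0.675),
    "Adversarial CIFAR-10": (0.704, 0.666),
    "Something Something v2": (0.730, 0.753),
}

for name, (predicted, actual) in results.items():
    direction = "under" if predicted < actual else "over"
    print(f"{name}: {direction}predicted by {abs(actual - predicted):.1%}")
```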

The Nonlinear Library
LW - Features that make a report especially helpful to me by lukeprog

The Nonlinear Library

Play Episode Listen Later Apr 14, 2022 4:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Features that make a report especially helpful to me, published by lukeprog on April 14, 2022 on LessWrong. Cross-post from EA Forum, follow-up to EA needs consultancies. Below is a list of features that make a report on some research question more helpful to me, along with a list of examples. I wrote this post for the benefit of individuals and organizations from whom I might commission reports on specific research questions, but others might find it useful as well. Much of what's below is probably true for Open Philanthropy in general, but I've written it in my own voice so that I don't need to try to represent Open Philanthropy as a whole. For many projects, some of the features below are not applicable, or not feasible, or (most often) not worth the cost, especially time-cost. But if present, these features make a report more helpful and action-informing to me: The strongest forms of evidence available on the question were generated/collected. This is central but often highly constrained, e.g. we generally can't run randomized trials in geopolitics, and major companies won't share much proprietary data. But before commissioning a report, I'd typically want to know what the strongest evidence that could in theory be collected is, and how much that might cost to gather or produce. Thoughtful cost-benefit analysis, where relevant. Strong reasoning transparency throughout, of this particular type. In most cases this might be the most important feature I'm looking for, especially given that many research questions don't lend themselves to more than 1-3 types of evidence anyway, and all of them are weak. In many cases, especially when I don't have much prior context and trust built up with the producers of a report, I would like to pay for a report to be pretty "extreme" about reasoning transparency, e.g. possibly: a footnote or endnote indicating what kind of support nearly every substantive claim has, including lengthy blockquotes of the relevant passages from primary sources (as in a GiveWell intervention report[1]). explicit probabilities (from authors, experts, or superforecasters) provided for dozens or hundreds of claims and forecasts throughout the report, to indicate degrees of confidence. (Most people don't have experience giving plausibly-calibrated explicit probabilities for claims, but I'll often be willing to provide funding for explicit probabilities about some of a report's claims to be provided by companies that specialize in doing that, e.g. Good Judgment, Metaculus, or Hypermind.) Lots of appendices that lay out more detailed reasoning and evidence for claims that are argued more briefly in the main text of the report, a la my animal consciousness report, which is 83% appendices and endnotes (by word count). Authors and other major contributors who have undergone special training in calibration and forecasting,[2] e.g. from Hubbard and Good Judgment. This should help contributors to a report to "speak our language" of calibrated probabilities and general Bayesianism, and perhaps improve the accuracy/calibration of the claims in the report itself. I'm typically happy to pay for this training for people working on a project I've commissioned. External reviews of the ~final report, including possibly from experts with different relevant specializations and differing/opposed object-level views. 
This should be fairly straightforward with sufficient honoraria for reviewers, and sufficient time spent identifying appropriate experts. Some of the strongest examples of ideal reports of this type that I've seen are: GiveWell's intervention/program reports[3] and top charity reviews.[4] David Roodman's evidence reviews, e.g. on microfinance, alcohol taxes, and the effects of incarceration on crime (most of these were written for Open Philanthropy). Other example...
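
The report features described above lean heavily on explicit probabilities so that calibration can be checked later. Below is a minimal sketch of such a check, bucketing stated claim probabilities against eventual resolutions; all of the claim data here is invented for illustration.

```python
from collections import defaultdict

# Hypothetical (stated probability, resolved true?) pairs from a report.
claims = [(0.9, True), (0.9, True), (0.9, False), (0.6, True),
          (0.6, False), (0.6, True), (0.2, False), (0.2, False)]

# Group claims by their stated probability.
buckets = defaultdict(list)
for p, outcome in claims:
    buckets[p].append(outcome)

# Compare stated confidence with how often claims in that bucket came true.
for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"Stated {p:.0%} -> resolved true {freq:.0%} of {len(outcomes)} claims")
```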

The Nonlinear Library
EA - Features that make a report especially helpful to me by lukeprog

The Nonlinear Library

Play Episode Listen Later Apr 12, 2022 4:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Features that make a report especially helpful to me, published by lukeprog on April 12, 2022 on The Effective Altruism Forum. Follow-up to: EA needs consultancies Below is a list of features that make a report on some research question more helpful to me, along with a list of examples. I wrote this post for the benefit of individuals and organizations from whom I might commission reports on specific research questions, but others might find it useful as well. Much of what's below is probably true for Open Philanthropy in general, but I've written it in my own voice so that I don't need to try to represent Open Philanthropy as a whole. For many projects, some of the features below are not applicable, or not feasible, or (most often) not worth the cost, especially time-cost. But if present, these features make a report more helpful and action-informing to me: The strongest forms of evidence available on the question were generated/collected. This is central but often highly constrained, e.g. we generally can't run randomized trials in geopolitics, and major companies won't share much proprietary data. But before commissioning a report, I'd typically want to know what the strongest evidence that could in theory be collected is, and how much that might cost to gather or produce. Thoughtful cost-benefit analysis, where relevant. Strong reasoning transparency throughout, of this particular type. In most cases this might be the most important feature I'm looking for, especially given that many research questions don't lend themselves to more than 1-3 types of evidence anyway, and all of them are weak. In many cases, especially when I don't have much prior context and trust built up with the producers of a report, I would like to pay for a report to be pretty "extreme" about reasoning transparency, e.g. possibly: a footnote or endnote indicating what kind of support nearly every substantive claim has, including lengthy blockquotes of the relevant passages from primary sources (as in a GiveWell intervention report[1]). explicit probabilities (from authors, experts, or superforecasters) provided for dozens or hundreds of claims and forecasts throughout the report, to indicate degrees of confidence. (Most people don't have experience giving plausibly-calibrated explicit probabilities for claims, but I'll often be willing to provide funding for explicit probabilities about some of a report's claims to be provided by companies that specialize in doing that, e.g. Good Judgment, Metaculus, or Hypermind.) Lots of appendices that lay out more detailed reasoning and evidence for claims that are argued more briefly in the main text of the report, a la my animal consciousness report, which is 83% appendices and endnotes (by word count). Authors and other major contributors who have undergone special training in calibration and forecasting,[2] e.g. from Hubbard and Good Judgment. This should help contributors to a report to "speak our language" of calibrated probabilities and general Bayesianism, and perhaps improve the accuracy/calibration of the claims in the report itself. I'm typically happy to pay for this training for people working on a project I've commissioned. External reviews of the ~final report, including possibly from experts with different relevant specializations and differing/opposed object-level views. 
This should be fairly straightforward with sufficient honoraria for reviewers, and sufficient time spent identifying appropriate experts. Some of the strongest examples of ideal reports of this type that I've seen are: GiveWell's intervention/program reports[3] and top charity reviews.[4] David Roodman's evidence reviews, e.g. on microfinance, alcohol taxes, and the effects of incarceration on crime (most of these were written for Open Philanthropy). Other examples inclu...

The Nonlinear Library
EA - Prediction Markets in The Corporate Setting by NunoSempere

The Nonlinear Library

Play Episode Listen Later Dec 31, 2021 57:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prediction Markets in The Corporate Setting, published by NunoSempere on December 31, 2021 on The Effective Altruism Forum. What follows is a report that Misha Yagudin, Nuño Sempere, and Eli Lifland wrote back in October 2021 for Upstart, an AI lending platform that was interested in exploring forecasting methods in general and prediction markets in particular. We believe that the report is of interest to EA as it relates to the institutional decision-making cause area and because it might inform EA organizations about which forecasting methods, if any, to use. In addition, the report covers a large number of connected facts about prediction markets and forecasting systems which might be of interest to people interested in the topic. Note that since this report was written, Google has started a new internal prediction market. Note also that this report mostly concerns company-internal prediction markets, rather than external prediction markets or forecasting platforms, such as Hypermind or Metaculus. However, one might think that the concerns we raise still apply to these. Executive Summary We reviewed the academic consensus on and corporate track record of prediction markets. We are much more sure about the fact that prediction markets fail to gain adoption than about any particular explanation of why this is. The academic consensus seems to overstate their benefits and promisingness. Lack of good tech, the difficulty of writing good and informative questions, and social disruptiveness are likely to be among the reasons contributing to their failure. We don't recommend adopting company-internal prediction markets for these reasons. We see room for exceptions: using them in limited contexts or delegating external macroeconomic questions to them. We survey some alternatives to prediction markets. Generally, weighing their pros and cons, we prefer these alternatives. Introduction This section: Defines prediction markets Outlines their value proposition What are prediction markets Prediction markets are markets in which contracts are traded that have some value if an event happens, and no value if an event doesn't happen. For example, a share of "Democrat" in a prediction market on the winner of the 2024 US presidential election will pay $1 if the winner of the 2024 election is a Democrat, and $0 if the winner is not. Prices in prediction markets can be interpreted as probabilities. For example, the expected value of a "Democrat" contract in the previous market is $1⋅p+$0⋅(1−p), where p is the chance that a Democrat will win. To the extent that the market is efficient, one expects the expected value of a contract to be equal to its current value. So if one observes a contract price of $0.54, one can deduce the expected probability by setting $0.54=$1⋅p+$0⋅(1−p), and thus p=0.54=54%. It is also in this sense that one says that "the market as a whole expects" Democrats to win with 54% probability. One might expect markets not to be efficient, for instance after remembering Keynes' adage that "markets can remain irrational longer than you can remain solvent." And we do see inefficiencies in modern prediction markets, sometimes glaring. However, note that, unlike the stock market, prediction markets have hard deadlines after which the market comes in contact with reality and gets resolved.
Besides binary prediction markets, there are also markets with multiple options—e.g., "Who will win the 2024 election?", with multiple contracts, only one of which will pay out in the end—or markets that pay out proportionally to some yet-unknown number—e.g., "How many Senate seats will Republicans control after the 2022 elections?", which pays out proportionally to the number of seats. Prediction markets thrive in some niches: Bookmakers offer odds and take bets on notable sports events. M...
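
The description above walks through reading a contract price as a probability (the $0.54 "Democrat" example). Below is a minimal sketch of that arithmetic, assuming an efficient market and a $1-or-nothing contract as in the example.

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Read a prediction-market contract price as a probability.

    Assumes an efficient market and a contract paying `payout` if the
    event happens and 0 otherwise, as in the description's example."""
    return price / payout

def expected_value(p: float, payout: float = 1.0) -> float:
    # E[contract] = payout * p + 0 * (1 - p)
    return payout * p

# The $0.54 "Democrat" contract from the description:
p = implied_probability(0.54)
print(f"Implied probability: {p:.0%}")                    # 54%
print(f"Expected value at that p: ${expected_value(p):.2f}")  # $0.54
```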

The Nonlinear Library: EA Forum Top Posts
Introducing Metaforecast: A Forecast Aggregator and Search Tool by NunoSempere, Ozzie Gooen

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 12, 2021 7:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Metaforecast: A Forecast Aggregator and Search Tool, published by NunoSempere and Ozzie Gooen on the Effective Altruism Forum.

Introduction. The last few years have seen a proliferation of forecasting platforms. These platforms differ in many ways and provide different experiences, filters, and incentives for forecasters. Some platforms, like Metaculus and Hypermind, use volunteers with prizes; others, like PredictIt and Smarkets, are formal betting markets. Forecasting is a public good, providing information to the public. While the diversity among platforms has been great for experimentation, it also fragments information, making the outputs of forecasting far less useful. For instance, different platforms ask similar questions using different wordings. The questions may or may not be organized, and the outputs may be distributions, odds, or probabilities. Fortunately, most of these platforms either have APIs or can be scraped. We've experimented with pulling their data to put together a listing of most of the active forecasting questions and most of their current estimates in a coherent and more easily accessible platform.

Metaforecast. Metaforecast is a free & simple app that shows predictions and summaries from 10+ forecasting platforms. It shows simple summaries of the key information: just the immediate forecasts, no history. Data is fetched daily. There's a simple string search, and you can open the advanced options for some configurability. Currently, across all of the indexed platforms, we track ~2100 active forecasting questions, ~1200 (~55%) of which are on Metaculus. There are also 17,000 public models from Guesstimate. One obvious issue that arose was the challenge of comparing questions among platforms. Some questions have results that seem more reputable than others. Obviously a Metaculus question with 2000 predictions seems more robust than one with 3 predictions, but less obvious is how a Metaculus question with 3 predictions compares to one from Good Judgement Superforecasters, where the number of forecasters is not clear, or to estimates from a Smarkets question with £1,000 traded. We believe that this is an area that deserves substantial research and design experimentation. In the meantime, we use a star rating system. We created a function that estimates reputability as "stars" on a 1-5 scale using the forecasting platform, the forecast count, and, for prediction markets, the liquidity. The estimation came from volunteers acquainted with the various forecasting platforms. We're very curious to hear feedback here, both on what the function should be and on how best to explain and show the results. Metaforecast is being treated as an experimental endeavor of QURI. We've spent a few weeks on it so far, after developing technologies and skill sets that made it fairly straightforward. We're currently expecting to support it for at least a year and to provide minor updates. We're curious to see what interest is like and will respond accordingly. Metaforecast is being led by Nuño Sempere, with support from Ozzie Gooen, who also wrote much of this post.

Select Search Screenshots: "Charity" and "GiveWell".

Data Sources. For each platform, the information used in Metaforecast and the robustness rating:
- Metaculus: active questions only; the current aggregate is shown for binary questions, but not for continuous questions. 2 stars if a question has fewer than 100 forecasts, 3 stars when between 101 and 300, 4 stars if over 300.
- Foretell (CSET): all active questions. 1 star if a question has fewer than 100 forecasts, 2 stars if it has more.
- Hypermind: questions on various dashboards. 3 stars.
- Good Judgement: various superforecaster dashboards (you can see them here and here). 4 stars.
- Good Judgement Open: all active questions. 2 stars if a question has fewer than 100 forecasts, 3 stars if it has more.
- Smarkets: only take the polit...
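To illustrate the star-rating heuristic described above, here is a small Python sketch that applies the per-platform rules from the Data Sources list; the star_rating function name and the default for unlisted platforms are assumptions, not Metaforecast's actual implementation.

    # Illustrative sketch of the 1-5 star reputability heuristic described above.
    # Thresholds follow the per-platform rules listed in the Data Sources section.
    def star_rating(platform: str, forecast_count: int = 0) -> int:
        if platform == "Metaculus":
            if forecast_count < 100:
                return 2
            return 3 if forecast_count <= 300 else 4
        if platform == "Foretell (CSET)":
            return 1 if forecast_count < 100 else 2
        if platform == "Hypermind":
            return 3
        if platform == "Good Judgement":
            return 4
        if platform == "Good Judgement Open":
            return 2 if forecast_count < 100 else 3
        return 1  # assumed conservative default for platforms not listed above

    print(star_rating("Metaculus", forecast_count=2000))  # -> 4

A full implementation would also fold in liquidity for prediction markets, as the post notes, but those thresholds are not given in the excerpt above.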

SMART TECH
SMART TECH for Tuesday, April 27, 2021

SMART TECH

Play Episode Listen Later Apr 26, 2021 42:51


On Tuesday, April 27, 2021, SMART TECH welcomes Émile Servan-Schreiber (Director, Hypermind), Anne-Cécile Descaillot (founder, Gapianne), Alexis Himeros (founder, Son du désir), Christel Bony (founder, Sextech For Good), and Hervé Le Jouan (founding president, Privowny.app).

ARENI Global: In Conversation
The Power of Collective Intelligence - In Conversation with Emile Servan Schreiber

ARENI Global: In Conversation

Play Episode Listen Later Nov 17, 2020 70:35


This week we meet French researcher Emile Servan-Schreiber to discuss the power of collective intelligence and the different ways to harness, measure, and maximise the efficiency of our joined brains. As we navigate our way through the "wisdom of crowds", we also explore how women and diversity can help build better and smarter organisations. Dr Emile Servan-Schreiber holds a PhD in cognitive psychology from Carnegie Mellon University in the US, as well as undergraduate degrees in mathematics and computing. He is the managing director of Hypermind, which operates in both France and the US and creates prediction markets.

Spiritual Smackdown
The Key to Alignment with Amy Elizabeth

Spiritual Smackdown

Play Episode Listen Later Oct 29, 2020 42:22


This week we are thrilled to be joined by our soul sister, Amy Elizabeth, the incredible founder of Align by Design. For the last two and a half years, we have risen side by side with Amy. We have served as each other's coaches, held space for each other in the Hypermind mastermind, and we've witnessed all of the ways in which Amy has risen to the badass woman that you see today. In January, Amy's life was flipped upside down by an unexpected divorce. Instead of allowing the winds of change to crater her life and business, Amy chose to surrender to the universe from a place of strength. She kept her heart open and her eyes on what was important: her abundant vision for the future and the safety of her three young children. Throughout the experience, Amy's business has experienced incredible growth. Amy attributes her success this year to authentically and consciously living in alignment. In this conversation we delve into what it truly looks like to choose alignment in a challenging moment. Get insight on how to show up with authenticity, while still creating boundaries for your privacy. Examine how choosing to be in alignment does not mean committing to one single set of emotions or intentions. And learn how to empower yourself by choosing your alignment day by day, moment by moment. Learn more about Amy and Align by Design! Visit www.alignedbydesignhd.com. Follow on Instagram @alignbydesign. Listen to the podcast: https://podcasts.apple.com/us/podcast/align-by-design/id1482224007. Have you joined our private Facebook Group, Wild Femmes? Our incredible community is dedicated to helping you rise to the woman that you're meant to be! Questions? Visit us on Instagram @forthewildfemme and shoot us a DM!

Spiritual Smackdown
Take Care of the Human with Dr. Rachel Yan

Spiritual Smackdown

Play Episode Listen Later May 14, 2020 44:53


This episode is a must-listen for the female mind-body-soul entrepreneur! Get ready for a mindblowing conversation, because our Hypermind sister Dr. Rachel Yan is joining us today. Dr. Rachel Yan, DC, NTP, RWP, is a leader in the fields of holistic nutrition and functional medicine, and this week she's here to set you on the path toward having a body that can keep up with your dreams. As women, we have expanded into leadership roles that have been historically held by men, and so often we've done this by blazing forward with masculine energy. Time and time again we've seen this lead to burnout, stress, and exhaustion. In this episode, Dr. Yan helps you understand why, and encourages you to not simply see yourself as 'a miniature man', but to connect with and nurture your feminine body's true biological needs. You'll learn the true definition of functional medicine, and how vastly it differs from the emergency health care that currently exists in our societies. Dr. Yan explains the importance of hormones, and how something as simple as an iron deficiency can drastically limit your brain functionality. Finally, explore Dr. Yan's dream for what she calls 'abundance health care', a future where we will be able to work with doctors to achieve not just baseline health, but optimal health. To learn more about Dr. Yan's incredible programs, visit www.precisionempoweredhealth.com and follow her on Instagram and Facebook. Have you joined our private Facebook Group, Wild Femmes? Our incredible community is dedicated to helping you rise to the woman that you're meant to be. Plus, email us at rise@forthewildfemme.com or DM us @forthewildfemme on Instagram to learn more about our programs and services!

しがないラジオ
sp.11a [Guest: shoya140] A MITOU Super Creator Who Makes a Happy Mind Visible

しがないラジオ

Play Episode Listen Later Nov 26, 2017 88:46


We talked with guest Shoya Ishimaru about his activity-recognition research, the MITOU Program, eye tracking, winning an app contest, the pendulum between engineering and research, and more.
[Show Notes]
- Shoya Ishimaru - Google Scholar Citations
- Visualizing the mind!? The perspective of a MITOU Super Creator who developed a "mind thermometer" | German Research Center for Artificial Intelligence (DFKI), Shoya Ishimaru | CAREER HACK
- ep.6 The Philosophy of a Happy Engineer | しがないラジオ
- MITOU Program portal page: IPA (Information-technology Promotion Agency, Japan)
- CeBIT 2017: The intelligent school book supports pupils using innovative sensor technology - DFKI
- Apple acquires German company specializing in AR and eye tracking - The Verge
- チコクイイワケロボ (a "lateness excuse" robot app)
- ドットインストール (Dotinstall) - a beginner-oriented programming learning site built around three-minute videos
- Spam and junk mail rankings [SPAM MUSEUM]
- "I spent about six months at Paperboy&co." - shoya.io
- "takram design engineering | The Pendulum of Design Innovation" (Contemporary Architects Concept Series 18) | takram design engineering | Books | Amazon
- Thoughts on engineering and R&D - 人間とウェブの未来 (The Future of Humans and the Web)
Release information is posted on Twitter at @shiganaiRadio. Please tweet your feedback with the hashtag #しがないラジオ! If you tweet your impressions, topics you'd like us to cover, or things we could improve, it will help us make future episodes better, so we look forward to plenty of feedback.
[Hosts] gami @jumpei_ikegami, zuckey @zuckey_17
[Guest] Shoya Ishimaru @shoya140
[Equipment] Blue Micro Yeti USB 2.0 microphone