Podcasts about the International Mathematical Olympiad

  • 26 podcasts
  • 33 episodes
  • 31m average duration
  • Infrequent episodes
  • Latest episode: May 20, 2025

Popularity trend: 2017–2024


Best podcasts about the International Mathematical Olympiad

Latest podcast episodes about the International Mathematical Olympiad

The AI Report
Humans Can Now Control Devices with their Minds with Brain-Computer Interfaces

The AI Report

May 20, 2025 · 9:22


Artie Intel and Micheline Learning report on artificial intelligence for The AI Report.
• Google DeepMind's GNoME project used AI to discover new materials for energy storage, battery technology, and superconductors.
• GraphCast, an open-source AI model, can now predict the weather up to 10 days in advance with unprecedented accuracy.
• This message comes from Eve. Eve is the first legal AI that you can partner with, train, and teach to handle every part of your case. Visit eve.legal.
• DeepMind's AlphaGeometry 2, trained with the Gemini model, solved 83% of geometry problems from the last 25 years of the International Mathematical Olympiad, rivaling human gold medalists and pushing the boundaries of AI reasoning.
• Microsoft's Copilot X Enterprise integrates next-gen GPT-4 Turbo enhancements to automate complex tasks in Office 365, supporting text, images, and code in a seamless workflow.
• Meta's LLaMA 3, with models of up to 405 billion parameters, is now open source, democratizing access to advanced language models and enabling a wave of innovation across research and business.
• China's WuDao 3.0 and its new AI supercomputer are setting benchmarks in computer vision, natural language processing, and robotics, outperforming many Western systems.
• Microsoft's SQL Server 2025 preview brings built-in AI capabilities directly into the database engine.
• Apple rolled out new AI features across iPhone, iPad, and Mac, enhancing user experience with smarter automation and improved on-device intelligence.
• Meta is investing up to $65 billion in AI this year, including a major new data center in Louisiana to support its Llama model and future AI initiatives.
• OpenAI launched o3-mini, a new model optimized for reasoning and efficiency, available to both consumers and developers, meeting the growing demand for smaller, more efficient AI models.
• Anthropic, Stability AI, and Hugging Face are pushing the boundaries with generative models and developer tools, making advanced AI more accessible than ever.
• Specialized AI chips, like Google's Willow, are enabling faster, more efficient AI computations.
• NotebookLM is helping researchers organize and analyze information faster than ever.
• Canva Magic Studio brings AI-powered graphic design to everyone, from pros to beginners.
• ElevenLabs and Murf are generating realistic AI voices for podcasts, audiobooks, and customer service.
• AdCreative is automating marketing campaigns with AI-driven insights and content generation.
• Chatbots and virtual assistants: ChatGPT, Claude, DeepSeek, and Grok are leading the pack, helping with everything from brainstorming to customer service.
• Video generation: platforms like Synthesia, Runway, and Filmora let users create high-quality videos in minutes, using avatars and AI-powered editing tools.
• Image generation: GPT-4o and Midjourney are at the forefront, producing stunning visuals from simple text prompts.
• Notetakers: tools like Fathom and Nyota are revolutionizing meeting productivity by transcribing and summarizing conversations in real time.
• Coding and app builders: Bubble, Bolt, and Cursor enable rapid app development, even for those without a coding background.
• Music generation: Suno and Udio are making waves by composing original music tracks on demand.
• Project management, scheduling, and customer service tools (Asana, ClickUp, Reclaim, and Tidio AI) are all powered by advanced machine learning.
• Quantum AI ATLAS: Google's Willow quantum AI chip is rewriting the rules of computation. This 105-qubit chip solved a complex problem in five minutes—a task that would take a classical supercomputer 10 septillion years.

The AI Report
ChatGPT Generates Recipes From An Image of Refrigerator Contents!

The AI Report

Dec 18, 2024 · 5:21


Artie Intel and Micheline Learning report on artificial intelligence for The AI Report.
• AI supercomputing network to accelerate AGI development.
• Multimodal AI revolutionizes data processing: ChatGPT-4 can now generate recipes based on an image of your refrigerator contents!
• Transforming research and discovery in wildfire detection: FireSat, an AI model coupled with a new satellite constellation, can now detect wildfires the size of a classroom within 20 minutes.
• Weather prediction: GraphCast, a machine learning model, can predict weather conditions up to 10 days in advance with greater accuracy than traditional methods.
• Mathematical reasoning: AlphaGeometry 2 and AlphaProof have solved 83% of historical International Mathematical Olympiad geometry problems, showcasing AI's growing ability to reason.
• Quantum computing meets AI: in an exciting development, NVIDIA has partnered with scientists to explore how AI can revolutionize quantum computing.
• AI attempts stand-up comedy: the bot, named LOL-3000, opened with the line, "Why did the AI cross the road? To get to the other side of the firewall!" Needless to say, the human audience was left in silent confusion.

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
AI and the Future of Math, with DeepMind's AlphaProof Team

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Nov 14, 2024 · 39:21


In this week's episode of No Priors, Sarah and Elad sit down with the Google DeepMind team behind AlphaProof: Laurent Sartran, Rishi Mehta, and Thomas Hubert. AlphaProof is a new reinforcement learning-based system for formal math reasoning that recently reached a silver-medal standard in solving International Mathematical Olympiad problems. They dive deep into AI and its role in solving complex mathematical problems, featuring insights into AlphaProof and its capabilities. They cover its functionality, unique strengths in reasoning, and the challenges it faces as it scales. The conversation also explores the motivations behind AI in math, practical applications, and how verifiability and human input come into play within a reinforcement learning approach. The DeepMind team shares advice and future perspectives on where math and AI are headed.
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Rishicomplex | @LaurentSartran | @ThomasHubert
Show Notes:
0:00 Personal introductions
2:19 Achieving silver medal in IMO competition
3:52 How AlphaProof works
5:56 AlphaProof's strengths within mathematical reasoning
8:56 Challenges in scaling AlphaProof
13:40 Why solve math?
17:50 Pursuing knowledge versus practical applications
21:30 Insights on verifying correctness within reinforcement learning
28:27 How AI could foster more collaboration among mathematicians
30:28 Surprising insights from AI proof generation
34:17 Future of math and AI: advice for math enthusiasts and researchers

CryptoNews Podcast
#387: Georgios Vlachos, Co-Founder of Axelar, on Seamlessly Connecting Blockchains, Connecting On/Off Chain Systems, and Crypto in Countries Without Robust Banking Systems

CryptoNews Podcast

Nov 14, 2024 · 32:52


Georgios Vlachos is Co-Founder of the Axelar protocol and a director at the Axelar Foundation. Georgios received his BSc and MEng in computer science at MIT. After graduation, he became part of the Algorand founding team, where he worked on the design and development of the consensus protocol and other core components. While in high school, Georgios became the first Greek to win a gold medal at the International Mathematical Olympiad.
In this conversation, we discuss:
- The financial crisis in Greece and the current-day economy in Argentina
- The power of crypto in countries without robust banking systems
- The early days of Algorand
- Why cross-chain bridges fall short: true interoperability demands collaboration across projects, not just patched-together solutions
- AI agents will be the biggest users of blockchains
- Could there be a world where every human has their own blockchain?
- RWAs and DePIN need interoperability
- Connecting on-/off-chain systems
- Deploying your token on many chains with a couple of clicks
- Crypto needs more applications and less infrastructure
- How AI will impact crypto
Axelar Foundation
Website: www.axelar.network
X: @axelar
Telegram: t.me/axelarcommunity
Georgios Vlachos
X: @yorgosv_
LinkedIn: Georgios Vlachos
---------------------------------------------------------------------------------
This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.
PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

The Nonlinear Library

Sep 17, 2024 · 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong.
MIRI updates:
- Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact.
- In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction.
- In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course.
News and links:
- Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone to which Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds in 2021.
- The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem.
- SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law.
- In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4.
- Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation.
You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

This Week in Pre-IPO Stocks
E148: OpenAI launches 'OpenAI o1,' in talks for $6.5B at $150B valuation, hits 10M subscribers; SpaceX sets civilian space travel record; Glean raises $260M at $4.6B valuation; Klarna cuts losses, integrates AI; Poolside in talks for $500M at $3B

This Week in Pre-IPO Stocks

Sep 13, 2024 · 10:09


Subscribe to AG Dillon Pre-IPO Stock Research at agdillon.com/subscribe.
- Wednesdays = secondary market valuations, revenue multiples, performance, index fact sheets
- Saturdays = pre-IPO news and insights, webinar replays

00:06 | SpaceX Sets New Record in Civilian Space Travel
- Space payload delivery and satellite internet company
- Polaris Dawn mission: first commercial spacewalk, civilian crew led by Jared Isaacman
- Crew spent 20 minutes outside SpaceX Crew Dragon capsule
- Reached 870 miles above Earth, setting a civilian space travel record
- Tested new EVA suits, conducted 40 experiments
- Secondary market valuation: $223B (+6.3% vs Jul 2024 round)

01:20 | OpenAI Launches New AI Model, "OpenAI o1"
- AI large language model business
- Announced "OpenAI o1," focusing on enhancing reasoning abilities in math, coding, and science
- Achieved 83% on a qualifying exam for the International Mathematical Olympiad (up from 13% with prior models)
- Available to ChatGPT Plus and Team users
- Competitors like Google and Anthropic developing similar AI models

01:59 | OpenAI in Talks for $6.5B Funding Round at $150B Valuation
- OpenAI in discussions to raise $6.5B at a $150B valuation (primary round)
- Previous valuation: $86B earlier in 2024
- Seeking $5B in debt via revolving credit facility
- Key investors include Thrive Capital, Microsoft, Apple, Nvidia, and UAE-backed MGX fund

02:55 | OpenAI's ChatGPT Hits 10M Paying Subscribers
- ChatGPT: 10M paying subscribers, 1M on higher-priced business plans
- Generates $225M in monthly revenue, or $2.7B annually
- Projected $4B in annual revenue in the next 12 months (up from $1.6B in late 2023)
- Valuation at $150B, 37.5x forward revenue

03:48 | Glean Raises $260M Series E, Valued at $4.6B
- Enterprise AI solutions company
- Raised $260M in Series E, valuing Glean at $4.6B (primary)
- Competes with Microsoft Copilot and Amazon's chatbot
- Global generative AI spending expected to rise to $143B by 2027

04:30 | Klarna Cuts Losses and Integrates AI Across Operations
- Consumer credit and payments company
- Severed ties with Salesforce and Workday, focusing on AI automation
- 2023 losses dropped to $241M (from $1B in 2022)
- AI-powered customer service assistant handled 2.3M interactions in its first month
- Headcount reduced from 4,500 to 3,800, aiming for 2,000
- Secondary market valuation: $10.1B (+50.4% vs Jul 2022 round)

05:33 | Poolside in Talks to Raise $500M, Potential $3B Valuation
- AI solution for software developers
- In talks to raise $500M, potentially valuing the company at $3B (primary)
- Co-founded by former GitHub CTO Jason Warner and Eiso Kant
- Secured $126M in seed funding; secured Nvidia GPUs with Iris Energy Ltd

06:17 | eToro Settles with SEC, Limits Crypto Offerings in the U.S.
- Retail brokerage company
- Agreed to $1.5M penalty with SEC over operating as an unregistered broker and clearing agency
- U.S. users can trade only Bitcoin, Bitcoin Cash, and Ether; 180-day window to sell/withdraw other tokens
- 38M registered users globally, offering over 100 cryptoassets outside the U.S.
- Secondary market valuation: $7.3B (+107.7% vs Mar 2023 round)

07:05 | Anduril Launches Modular, Autonomous Barracuda Air Vehicles
- Defense contractor
- Introduced Barracuda family of autonomous air vehicles with three versions
- Barracuda-100, 250, and 500 models: ranges from 85 to 500 nautical miles
- Systems are 30% cheaper and 50% faster to produce than competitors
- Secondary market valuation: $17.0B (+21.5% vs Aug 2024 round)

08:10 | Pre-IPO Stock Market Weekly Performance
09:08 | Pre-IPO Stock Vintage Index Weekly Performance
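
As a quick consistency check on the OpenAI figures in these notes, here is the arithmetic in minimal Python; every input below is copied from the bullet points above, nothing new is assumed:

```python
monthly_revenue = 225e6                  # reported ChatGPT monthly revenue
annual_run_rate = monthly_revenue * 12   # 2.7e9, the "$2.7B annually" figure
valuation = 150e9
projected_revenue = 4e9                  # projected next-12-month revenue
forward_multiple = valuation / projected_revenue  # 37.5, the quoted multiple
print(annual_run_rate, forward_multiple)
```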

The Nonlinear Library
LW - Near-mode thinking on AI by Olli Järviniemi

The Nonlinear Library

Aug 5, 2024 · 8:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Near-mode thinking on AI, published by Olli Järviniemi on August 5, 2024 on LessWrong.
There is a stark difference between rehearsing classical AI risk 101 arguments about instrumental convergence, and tackling problems like "Design and implement the exact evaluations we'll run on GPT-5 to determine whether it's capable enough that we should worry about it acting aligned until it can execute a takeover". And naturally, since I've started working on problems like the one above, I've noticed a large shift in my thinking on AI. I describe it as thinking about risks in near-mode, as opposed to far-mode. In this post, I share a few concrete examples about my experiences with this change of orientation.
I. Prerequisites for scheming
Continuing with the example from the intro: A year ago I was confident about the "the AI is just playing along with our training and evaluations, until it is in a position where it can take over" threat model (deceptive alignment / scheming) basically being the default outcome and the main source of AI x-risk. I now think I was overconfident. Past-me hadn't, however, really thought through the prerequisites for scheming. A textbook example of a successful deceptive alignment story, applied to an LLM, paints a picture of a model that:
- Understands that the inputs it sees are simply prompts written by humans, as opposed to the model being "immersed in the setting": thinking that the current context more-or-less reflects reality, or flat-out is reality, taking the prompt at face value.
- Understands that it's undergoing a training process, and has an explicit (correct!) model of the training objective used to train it, as opposed to the model not thinking about the training process and objective at all, with the model's changed behavior upon training being reflective of the training process molding the model, rather than the model learning the training objective and optimizing for it.
- Performs lots of the same cognition (e.g. the above two bullet points) regardless of the input fed to it, as opposed to the model's internal processes being heavily dependent on the input.
- Never performs instrumental thinking "out loud", and successfully evades any attempts made by humans to trick the model into thinking it has been deployed, as opposed to the model sometimes slipping, failing to generalize correctly to adversarial inputs, or just being insufficiently good at distinguishing training/evaluation/red-teaming from deployment.
Now, one may argue whether it's strictly necessary that a model has an explicit picture of the training objective, for example, and revise one's picture of the deceptive alignment story accordingly. We haven't yet achieved consensus on deceptive alignment, or so I've heard. It's also the case that, as past-me would remind you, a true superintelligence would have no difficulty with the cognitive feats listed above (and that current models show sparks of competence in some of these). But knowing only that superintelligences are really intelligent doesn't help with designing the scheming-focused capability evaluations we should do on GPT-5, and abstracting over the specific prerequisite skills makes it harder to track when we should expect scheming to be a problem (relative to other capabilities of models).[1] And this is the viewpoint I was previously missing.
II. A failed prediction
There's a famous prediction market about whether AI will get gold from the International Mathematical Olympiad by 2025. For a long time, the market was around 25%, and I thought it was too high. Then, DeepMind essentially got silver from the 2024 IMO, short of gold by one point. The market jumped to 70%, where it has stayed since. Regardless of whether DeepMind manages to improve on that next year and satisfy all minor technical requirements, I was wrong. Hearing abou...

Let's Talk AI
#176 - SearchGPT, Gemini 1.5 Flash, Llama 3.1 405B, Mistral Large 2

Let's Talk AI

Aug 3, 2024 · 85:45


Our 176th episode with a summary and discussion of last week's big AI news! NOTE: apologies for this episode coming out about a week late, things got in the way of editing it...
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
Read our text newsletter and comment on the podcast at https://lastweekin.ai/ If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
(00:00:00) Intro Song
(00:00:34) Intro Banter
Tools & Apps
(00:03:39) OpenAI announces SearchGPT, its AI-powered search engine
(00:08:03) Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model
(00:09:10) X launches underwhelming Grok-powered ‘More About This Account' feature
(00:11:36) Kuaishou Launches Full Beta Testing for 'Kling AI' to Global Users, Elevates Model Capabilities
(00:13:39) Adobe rolls out more generative AI features to Illustrator and Photoshop
(00:14:25) Meta AI gets new ‘Imagine me' selfie feature
Projects & Open Source
(00:15:19) Meta releases open-source AI model it says rivals OpenAI, Google tech
(00:28:23) Mistral AI Unveils Mistral Large 2, Beats Llama 3.1 on Code and Math
(00:34:00) Groq's open-source Llama AI model tops leaderboard, outperforming GPT-4o and Claude in function calling
(00:36:35) Apple shows off open AI prowess: new models outperform Mistral and Hugging Face offerings
Applications & Business
(00:40:25) Elon Musk wants Tesla to invest $5 billion into his newest startup, xAI — if shareholders approve
(00:43:01) Nvidia said to be prepping Blackwell GPUs for Chinese market
(00:46:28) Toronto AI company Cohere to indemnify customers who are sued for any copyright violations
(00:49:09) AI startup Cohere raises US$500-million, valuing company at US$5.5-billion
Research & Advancements
(00:52:01) AI achieves silver-medal standard solving International Mathematical Olympiad problems
(00:56:47) A Multimodal Automated Interpretability Agent
(01:00:56) MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
Policy & Safety
(01:02:56) Improving Model Safety Behavior with Rule-Based Rewards
(01:06:39) Senators demand OpenAI detail efforts to make its AI safe
(01:10:59) OpenAI reassigns top AI safety executive Aleksandr Madry to role focused on AI reasoning
(01:13:08) As new tech threatens jobs, Silicon Valley promotes no-strings cash aid
(01:17:33) Democratic senators seek to reverse Supreme Court ruling that restricts federal agency power
Synthetic Media & Art
(01:20:58) Video game performers will go on strike over artificial intelligence concerns
(01:23:03) Outro
(01:23:58) AI Song

Sway
The Zoom Election + Google DeepMind's Math Olympiad + HatGPT! Olympics Edition

Sway

Aug 2, 2024 · 61:34


This week, with hundreds of thousands of people joining online political rallies for Kamala Harris, we discuss whether 2024 is suddenly becoming the Zoom election, and what that means for both parties' political organizing. Then, Pushmeet Kohli, a computer scientist at Google DeepMind, joins us for a conversation about how his team's new A.I. models just hit a silver medal score on the International Mathematical Olympiad exam. And finally, it's time for a new round of HatGPT! This time, it's a special Olympics tech edition.
Guest: Pushmeet Kohli, vice president of research at Google DeepMind
Additional Reading:
- Liberal “White Dudes” Rally for Harris: “It's Like a Rainbow of Beige”
- Move Over, Mathematicians, Here Comes AlphaProof
- Now Narrating the Olympics: A.I.-Al Michaels
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

LessWrong Curated Podcast
“‘AI achieves silver-medal standard solving International Mathematical Olympiad problems'” by gjm

LessWrong Curated Podcast

Jul 30, 2024 · 4:00


This is a link post. Google DeepMind reports on a system for solving mathematical problems that allegedly is able to give complete solutions to four of the six problems on the 2024 IMO, putting it near the top of the silver-medal category. Well, actually, two systems for solving mathematical problems: AlphaProof, which is more general-purpose, and AlphaGeometry, which is specifically for geometry problems. (This is AlphaGeometry 2; they reported earlier this year on a previous version of AlphaGeometry.) AlphaProof works in the "obvious" way: an LLM generates candidate next steps which are checked using a formal proof-checking system, in this case Lean. One not-so-obvious thing, though: "The training loop was also applied during the contest, reinforcing proofs of self-generated variations of the contest problems until a full solution could be found." [EDITED to add:] Or maybe it doesn't work in the "obvious" way. As cubefox points out in the comments [...]
---
First published: July 25th, 2024
Source: https://www.lesswrong.com/posts/TyCdgpCfX7sfiobsH/ai-achieves-silver-medal-standard-solving-international
---
Narrated by TYPE III AUDIO.

GPT Reviews
OpenAI's SearchGPT

GPT Reviews

Jul 29, 2024 · 15:50


OpenAI's new prototype, SearchGPT, promises to combine AI smarts with real-time web information to make search easier. AI has achieved silver-medal standards at the International Mathematical Olympiad, raising questions about the future of mathematics and the role of AI in solving complex problems. The reliability of AI existential risk probabilities is called into question in a thought-provoking article, challenging the authority we often assign to these forecasts and calling for more scrutiny. Three fascinating papers from UNC Chapel Hill, Google DeepMind, and a collaboration between Caltech and NVIDIA explore advancements in theorem proving, balancing fast and slow planning, and aligning large language models with Best-of-N distillation. These papers could transform the way we approach complex problems with language models and streamline the development of LLMs.
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
01:54 OpenAI Announces SearchGPT
03:15 AI achieves silver-medal standard solving International Mathematical Olympiad problems
04:55 AI existential risk probabilities are too unreliable to inform policy
06:25 Fake sponsor
08:21 LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
10:10 System-1.x: Learning to Balance Fast and Slow Planning with Language Models
12:01 BOND: Aligning LLMs with Best-of-N Distillation
13:43 Outro

Discover Daily by Perplexity
DeepMind AI Wins Silver Medal, OpenAI's SearchGPT, Olympics Crackdown on Motor-Doping, and Anti-Deepfake Porn Bill

Discover Daily by Perplexity

Jul 26, 2024 · 7:08 · Transcription Available


In this episode of Discover Daily by Perplexity, we explore groundbreaking AI achievements in mathematics, a new AI search engine, and efforts to combat technological cheating in sports. Google DeepMind's AI systems, AlphaProof and AlphaGeometry 2, made history at the 2024 International Mathematical Olympiad, solving four out of six problems and earning a silver medal-level performance. This marks the first time an AI has reached the podium in this prestigious competition. We delve into the innovative approaches used by these AI models, including language processing, reinforcement learning, and neuro-symbolic techniques.
Next, we discuss OpenAI's unveiling of SearchGPT, a prototype AI-powered search engine designed to provide fast, timely answers with clear attribution to sources. This new service aims to challenge traditional search platforms and AI-driven competitors. We examine how SearchGPT combines AI models with real-time web information, its partnerships with major news organizations, and its potential impact on the search engine market dominated by Google.
Finally, we cover two important topics: efforts to combat "motor doping" in cycling at the Paris Olympics and the US Senate's unanimous passing of the DEFIANCE Act. Olympic officials are deploying advanced technology, including electromagnetic scanners and X-ray imaging, to detect hidden electric motors in bicycles. The DEFIANCE Act aims to combat nonconsensual deepfake pornography by allowing victims to sue creators, distributors, and recipients of AI-generated sexually explicit content using their likeness. We explore the implications of these developments and their potential impact on sports integrity and digital privacy.
From Perplexity's Discover feed.
Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.
Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn

The Nonlinear Library
LW - "AI achieves silver-medal standard solving International Mathematical Olympiad problems" by gjm

The Nonlinear Library

Jul 25, 2024 · 3:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "AI achieves silver-medal standard solving International Mathematical Olympiad problems", published by gjm on July 25, 2024 on LessWrong. Google DeepMind reports on a system for solving mathematical problems that allegedly is able to give complete solutions to four of the six problems on the 2024 IMO, putting it near the top of the silver-medal category. Well, actually, two systems for solving mathematical problems: AlphaProof, which is more general-purpose, and AlphaGeometry, which is specifically for geometry problems. (This is AlphaGeometry 2; they reported earlier this year on a previous version of AlphaGeometry.) AlphaProof works in the "obvious" way: an LLM generates candidate next steps which are checked using a formal proof-checking system, in this case Lean. One not-so-obvious thing, though: "The training loop was also applied during the contest, reinforcing proofs of self-generated variations of the contest problems until a full solution could be found." (That last bit is reminiscent of something from the world of computer go: a couple of years ago someone trained a custom version of KataGo specifically to solve the infamous Igo Hatsuyoron problem 120, starting with ordinary KataGo and feeding it training data containing positions reachable from the problem's starting position. They claim to have laid that problem to rest at last.) AlphaGeometry is similar but uses something specialized for (I think) Euclidean planar geometry problems in place of Lean. The previous version of AlphaGeometry allegedly already performed at gold-medal IMO standard; they don't say anything about whether that version was already able to solve the 2024 IMO problem that was solved using AlphaGeometry 2. AlphaProof was able to solve questions 1, 2, and 6 on this year's IMO (two algebra, one number theory). It produces Lean-formalized proofs. AlphaGeometry 2 was able to solve question 4 (plane geometry). It produces proofs in its own notation. The solutions found by the Alpha... systems are at https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/imo-2024-solutions/index.html. (There are links in the top-of-page navbar to solutions to the individual problems.) (If you're curious about the IMO questions or want to try them yourself before looking at the machine-generated proofs, you can find them -- and those for previous years -- at https://www.imo-official.org/problems.aspx.) One caveat (note: an earlier version of what I wrote failed to notice this and quite wrongly explicitly claimed something different): "First, the problems were manually translated into formal mathematical language for our systems to understand." It feels to me like it shouldn't be so hard to teach an LLM to convert IMO problems into Lean or whatever, but apparently they aren't doing that yet. Another caveat: "Our systems solved one problem within minutes and took up to three days to solve the others." Later on they say that AlphaGeometry 2 solved the geometry question within 19 seconds, so I guess that was also the one that was done "within minutes". Three days is a lot longer than human IMO contestants get given, but this feels to me like the sort of thing that will predictably improve pretty rapidly. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
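
For readers who want the mechanism made concrete: the loop gjm describes (a language model proposes candidate steps; a formal checker such as Lean keeps only the valid ones) can be sketched in a few lines. This is an illustrative toy, not AlphaProof itself; DeepMind has not published its interfaces, so the "generator" and "verifier" below are invented stand-ins for the LLM and for Lean.

```python
# Toy generate-and-verify proof search. The integer "proof state" stands in
# for a Lean goal; reaching 0 stands in for "no goals remaining".
from typing import List, Optional, Tuple

def generate_candidates(state: int) -> List[int]:
    """Stand-in for the LLM: propose candidate next steps."""
    return [1, 2, 3]

def verify(state: int, step: int) -> Optional[int]:
    """Stand-in for Lean: accept a step only if the resulting state is valid."""
    new_state = state - step
    return new_state if new_state >= 0 else None

def search(goal_state: int) -> Optional[List[int]]:
    """Depth-first search in which only verifier-approved steps survive."""
    frontier: List[Tuple[int, List[int]]] = [(goal_state, [])]
    while frontier:
        state, proof = frontier.pop()
        if state == 0:  # proof complete
            return proof
        for step in generate_candidates(state):
            nxt = verify(state, step)
            if nxt is not None:
                frontier.append((nxt, proof + [step]))
    return None

print(search(7))  # a verified sequence of steps reaching the goal, e.g. [3, 3, 1]
```

The structure shows why formal math suits reinforcement learning so well, as the post's training-loop quote suggests: the verifier makes the reward signal exact, since a candidate proof either checks or it doesn't.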

Shimon's Tribe
Unlock your inner math genius, in 4 minutes | Po-Shen Loh

Shimon's Tribe

May 23, 2024 · 3:57


Yes, you CAN be a “math person” — as long as you follow these learning techniques. Po-Shen Loh is an International Mathematical Olympiad coach, and he challenges the notion that some people are inherently “not math people.” He believes that everyone has the potential to understand mathematics, as long as they start with the desire to learn. A unique aspect of mathematics is its reliance on a sequence of dependent concepts. Unlike subjects such as history, where concepts are broader and less interdependent, math involves a deeper chain of connected ideas. This makes the learning process fragile; missing a single concept can disrupt comprehension due to the interlinked nature of mathematical ideas. Loh draws a comparison with a train journey: If there is a gap in the track (a missing concept), the train cannot proceed. He suggests a personalized learning approach, allowing individuals to learn at their own pace in order to fill gaps in understanding. With this approach, anyone can excel in math — and even find it easier than other subjects. ---------------------------------------------------------------------------------- ❍ About The Well ❍ Do we inhabit a multiverse? Do we have free will? What is love? Is evolution directional? There are no simple answers to life's biggest questions, and that's why they're the questions occupying the world's brightest minds. So what do they think? How is the power of science advancing understanding? How are philosophers and theologians tackling these fascinating questions? Let's dive into The Well.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 188: AI in the Classroom - Focus on literacy not detection

Everyday AI Podcast – An AI and ChatGPT Podcast

Jan 18, 2024 · 36:21


The conversation around AI in the classroom has been ongoing for a while now. Should it be used or banned? How should you use it and monitor it? Laura Dumin, a professor at the University of Central Oklahoma, joins us to discuss why we should focus on literacy and not detection when it comes to AI in education.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode page
Join the discussion: Ask Jordan and Laura questions on AI in the classroom
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Timestamps:
01:30 Daily AI news
05:30 About Laura and her role at the University of Central Oklahoma
09:39 Considering using AI in assignments requires thought
13:01 Challenges in developing new teaching policies
16:39 Students adjusted to PDF annotations, benefiting research process
19:13 Loss of student trust from false accusations
22:51 Instructors explore AI in various fields
23:45 Adapting to AI in technical writing education
27:59 High school curriculum concerns about AI ethics
32:06 AI programs should be trained with care
Topics Covered in This Episode:
1. Challenges of Integrating AI into Education
2. Ethical Use of AI in Education
3. Preparing Students for AI Integration
4. Educator's Perspective on AI in Education
Keywords: Generative AI, classroom, literacy, AI detection, New York Times lawsuit, Sam Altman, OpenAI, artificial general intelligence, Google DeepMind, AlphaGeometry, International Mathematical Olympiad, Samsung, Google Cloud, smartphone, Laura Dumin, University of Central Oklahoma, AI coordinator, AI usage, AI literacy, AI courses, AI ethics, education, adoption of AI, micro credentials, workshops, guidelines, academic integrity, student AI usage, annotated PDFs, AI detectors, Gen AI skills, elementary education, secondary education
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Nature Podcast
This AI just figured out geometry — is this a step towards artificial reasoning?

Nature Podcast

Jan 17, 2024 · 32:24 · Very Popular


In this episode:

0:55 The AI that deduces solutions to complex maths problems
Researchers at Google DeepMind have developed an AI that can solve International Mathematical Olympiad-level geometry problems, something previous AIs have struggled with. They provided the system with a huge number of random mathematical theorems and proofs, which it used to approximate general rules of geometry. The AI then applied these rules to solve the Olympiad problems and show its workings for humans to check. The researchers hope their system shows that it is possible for AIs to ‘learn' basic principles from large amounts of data and use them to tackle complex logical challenges, which could prove useful in fields outside mathematics.
Research article: Trinh et al.

09:46 Research Highlights
A stiff and squishy ‘hydrospongel' — part sponge, part hydrogel — that could find use in soft robotics, and how the spread of rice paddies in sub-Saharan Africa helps to drive up atmospheric methane levels.
Research Highlight: Stiff gel as squishable as a sponge takes its cue from cartilage
Research Highlight: A bounty of rice comes at a price: soaring methane emissions

12:26 The food-web effects of mass predator die-offs
Mass Mortality Events, sometimes called mass die-offs, can result in huge numbers of a single species perishing in a short period of time. But there's not a huge amount known about the effects that events like these might be having on wider ecosystems. Now, a team of researchers have built a model ecosystem to observe the impact of mass die-offs on the delicate balance of populations within it.
Research article: Tye et al.

20:53 Briefing Chat
An update on efforts to remove the stuck screws on OSIRIS-REx's sample container, the ancient, fossilized skin that was preserved in petroleum, and a radical suggestion to save the Caribbean's coral reefs.
OSIRIS-REx Mission Blog: NASA's OSIRIS-REx Team Clears Hurdle to Access Remaining Bennu Sample
Nature News: This is the oldest fossilized reptile skin ever found — it pre-dates the dinosaurs
Nature News: Can foreign coral save a dying reef? Radical idea sparks debate

Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday.
Hosted on Acast. See acast.com/privacy for more information.
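
The loop described in that first segment can be made concrete: a symbolic engine deduces everything it can from the known facts, and when it stalls a learned model proposes an auxiliary construction to unblock it. The sketch below is a toy illustration of that idea, not DeepMind's code; the rules, facts, and candidate constructions are invented for the example.

```python
# Toy neuro-symbolic loop: symbolic deduction to a fixed point, with a
# "language model" stand-in that adds auxiliary objects when deduction stalls.
from typing import List, Set

RULES = {("A", "B"): "C", ("C", "D"): "GOAL"}  # invented deduction rules

def deduce(facts: Set[str]) -> Set[str]:
    """Stand-in for the symbolic engine: apply rules until nothing new follows."""
    changed = True
    while changed:
        changed = False
        for (p, q), conclusion in RULES.items():
            if p in facts and q in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts: Set[str], tried: List[str]) -> str:
    """Stand-in for the learned model: suggest an auxiliary object to add."""
    for candidate in ["D", "E"]:
        if candidate not in facts and candidate not in tried:
            return candidate
    raise RuntimeError("no constructions left to try")

def prove(premises: Set[str], goal: str = "GOAL") -> Set[str]:
    facts, tried = set(premises), []
    while goal not in deduce(facts):
        aux = propose_construction(facts, tried)  # fill the deductive gap
        tried.append(aux)
        facts.add(aux)
    return facts

print(prove({"A", "B"}))  # deduces C, stalls, adds D, then reaches GOAL
```

Because every step either follows from a rule or is an explicitly added construction, the resulting derivation can be shown to humans for checking, which is the property the researchers highlight.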

The Nonlinear Library
LW - AlphaGeometry: An Olympiad-level AI system for geometry by alyssavance

The Nonlinear Library

Jan 17, 2024 · 2:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AlphaGeometry: An Olympiad-level AI system for geometry, published by alyssavance on January 17, 2024 on LessWrong. [Published today by DeepMind] Our AI system surpasses the state-of-the-art approach for geometry problems, advancing AI reasoning in mathematics Reflecting the Olympic spirit of ancient Greece, the International Mathematical Olympiad is a modern-day arena for the world's brightest high-school mathematicians. The competition not only showcases young talent, but has emerged as a testing ground for advanced AI systems in math and reasoning. In a paper published today in Nature, we introduce AlphaGeometry, an AI system that solves complex geometry problems at a level approaching a human Olympiad gold-medalist - a breakthrough in AI performance. In a benchmarking test of 30 Olympiad geometry problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison, the previous state-of-the-art system solved 10 of these geometry problems, and the average human gold medalist solved 25.9 problems. Links to the paper appear broken, but here is a link: https://www.nature.com/articles/s41586-023-06747-5 Interesting that the transformer used is tiny. From the paper: We use the Meliad library for transformer training with its base settings. The transformer has 12 layers, embedding dimension of 1,024, eight heads of attention and an inter-attention dense layer of dimension 4,096 with ReLU activation. Overall, the transformer has 151 million parameters, excluding embedding layers at its input and output heads. Our customized tokenizer is trained with 'word' mode using SentencePiece and has a vocabulary size of 757. We limit the maximum context length to 1,024 tokens and use T5-style relative position embedding. Sequence packing is also used because more than 90% of our sequences are under 200 in length. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
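
For readability, here are the quoted hyperparameters restated as a small config object, with the standard back-of-envelope count showing they really do imply roughly 151 million non-embedding parameters. The field names are mine; the values are the paper's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AlphaGeometryLMConfig:
    num_layers: int = 12            # transformer layers
    d_model: int = 1024             # embedding dimension
    num_heads: int = 8              # attention heads
    d_ff: int = 4096                # inter-attention dense layer (ReLU)
    vocab_size: int = 757           # SentencePiece, 'word' mode
    max_context: int = 1024         # maximum context length in tokens
    pos_embedding: str = "T5-relative"

cfg = AlphaGeometryLMConfig()
# Per layer: ~4*d_model^2 for the attention projections + 2*d_model*d_ff
# for the feed-forward block (ignoring biases and layer norms).
per_layer = 4 * cfg.d_model**2 + 2 * cfg.d_model * cfg.d_ff
print(cfg.num_layers * per_layer)  # 150_994_944, matching the reported 151M
```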

Inference: AI business podcast by Silo AI
S2 E9: Niko Vuokko, CTO of Silo AI: AI in 2023 and beyond

Inference: AI business podcast by Silo AI

Feb 13, 2023 · 64:25


S2 E9: Niko Vuokko, CTO of Silo AI, talks about the current and upcoming trends in AI. Niko Vuokko is the CTO of Silo AI, leading Silo AI's technology strategy and IP. Niko is an experienced technology leader of multiple successful enterprises, such as Eniram, Sharper Shape and Metrify, as well as an International Mathematical Olympiad competitor and national team coach. In this episode, we talk about the current trends of AI, as well as the trends that are on the horizon for 2023 and beyond. We talk about the scalability of global natural language resources, ML's impact on life sciences, the state of ML embeddings, and more. Get all of the above, plus Niko's recommendations for companies that are adjusting their AI vision and strategy, by listening to the episode today.

CSPI Podcast
Why is the West Special? | Joe Henrich & Richard Hanania

CSPI Podcast

Jan 16, 2023 · 54:00


Joe Henrich is the Ruth Moore Professor of Biological Anthropology and Professor of Human Evolutionary Biology at Harvard University. He is the author of Why Humans Cooperate, The Secret of Our Success, and The WEIRDest People in the World. He joins the podcast to talk about his work. Topics include:
* The implications of Henrich's theories for the debate over AI alignment
* The nature of intelligence
* Whether genetic differences between populations explain societal outcomes
* If the Ancient Greeks and Romans were already WEIRD
* How to understand the group selection debate
* Why Islamic familial practices may have stunted economic development and growth
* The political and ideological reaction to his last book
Listen in podcast form or watch on YouTube. A transcript of the podcast can be found at the Richard Hanania newsletter.
Links:
* Joe Henrich, “The WEIRDest People in the World.”
* Joe Henrich, “The Secret of Our Success.”
* Richard Hanania, “How Monogamy and Incest Taboos Made the West.”
* David Epstein, “The Sports Gene.”
* Seth Stephens-Davidowitz, “Don't Trust Your Gut.”
* Elizabeth Shim, “North Korea finishes fourth at International Mathematical Olympiad.”
* Minnesota Transracial Adoption Study.
* Bryan Caplan, “The Wonder of International Adoption: Adult IQ in Sweden.”
Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe

Madison BookBeat
UW Prof. Jordan Ellenberg, "Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else"

Madison BookBeat

Oct 19, 2021 · 72:36


Madison authors, topics, book events and publishers.
It's the most wonderful time of the year, time for the Wisconsin Book Festival, 28 events this week alone, both in-person and online, and Stu Levitan welcomes one of the featured presenters, and one of the brightest stars in the firmament that is the University of Wisconsin faculty, Professor Jordan Ellenberg, to discuss his New York Times best-seller, Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else. Prof. Ellenberg will be appearing this Saturday at 3 o'clock at the Discovery Building, 330 N Orchard St., so Stu thought it would be a good idea to dial up an encore presentation of our conversation from this past July.
As coined by the ancient Greeks, “geometry” literally means “measuring the world,” and the world which Jordan Ellenberg measures in Shape is wide and far-flung indeed. Gerrymandering, the TV show Survivor, Abraham Lincoln, pandemics and flitting mosquitoes, artificial intelligence, even an answer to the question ‘how many holes in a straw?' And it's an accessible world – yes, there are symbols and equations, and you're welcome to have pad and paper with you as you read, but the book is mainly a narrative built on stories and people.
Jordan Ellenberg was not a late bloomer. The son of two biostatisticians, he taught himself to read at age two by watching Sesame Street, he was competing in high school math competitions while in the fourth grade, and four years later he was taking honors calculus at the University of Maryland. At 17, he beat out 400,000 North American high school students to win the USA Mathematical Olympiad, and over a three-year period took two golds and a silver at the International Mathematical Olympiad.
He took his BA and PhD at Harvard, with a master's from Johns Hopkins in creative writing in between, then started his academic career at Princeton. He came to the University of Wisconsin in 2005, made full professor in 2011, was named a Vilas Distinguished Achievement Professor in 2014, and since 2015 has been the John D. MacArthur Professor of Mathematics.
His previous books include How Not To Be Wrong: The Power of Mathematical Thinking (2014) and the novel The Grasshopper King. He also has a credited cameo in the 2017 movie Gifted in the role of math professor, giving him a Kevin Bacon degree of separation of two and making him one of the extraordinarily small and select group of people with an Erdős–Bacon number. He maintains a blog at Quomodocumque.wordpress.com and tweets at @JSEllenberg. It is a great pleasure to welcome to MBB Professor Jordan Ellenberg.

Madison BookBeat
UW Prof. Jordan Ellenberg, "Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else."

Madison BookBeat

Play Episode Listen Later Jul 12, 2021 71:50


Stu Levitan welcomes one of the brightest stars in the firmament that is the University of Wisconsin faculty, Professor Jordan Ellenberg, here to talk about his New York Times best-seller, Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else.

As coined by the ancient Greeks, "geometry" literally means "measuring the world," and the world which Jordan Ellenberg measures in Shape is wide and far-flung indeed: gerrymandering, the TV show Survivor, Abraham Lincoln, pandemics and flitting mosquitoes, artificial intelligence, even an answer to the question "how many holes in a straw?" And it's an accessible world. Yes, there are symbols and equations, and you're welcome to have pad and paper with you as you read, but the book is mainly a narrative built on stories and people.

Jordan Ellenberg was not a late bloomer. The son of two biostatisticians, he taught himself to read at age two by watching Sesame Street, he was competing in high school math competitions while in the fourth grade, and four years later he was taking honors calculus at the University of Maryland. At 17, he beat out 400,000 North American high school students to win the USA Mathematical Olympiad, and over a three-year period took two golds and a silver at the International Mathematical Olympiad.

He took his BA and Ph.D. at Harvard, with a master's from Johns Hopkins in creative writing in between, then started his academic career at Princeton. He came to the University of Wisconsin in 2005, made full professor in 2011, was named a Vilas Distinguished Achievement Professor in 2014, and since 2015 has been the John D. MacArthur Professor of Mathematics.

His previous books include How Not To Be Wrong: The Power of Mathematical Thinking in 2014 and the novel The Grasshopper King. He also has a credited cameo in the 2017 movie Gifted in the role of math professor, giving him a Kevin Bacon degree of separation of two and making him one of the extraordinarily small and select group of people with an Erdős/Bacon number. He maintains a blog at Quomodocumque.wordpress.com and tweets at @JSEllenberg. It is a great pleasure to welcome to MBB Professor Jordan Ellenberg.

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
151 | Jordan Ellenberg on the Mathematics of Political Boundaries

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later Jun 14, 2021 83:43


Any system in which politicians represent geographical districts with boundaries chosen by the politicians themselves is vulnerable to gerrymandering: carving up districts to increase the number of seats that a given party is expected to win. But even fairly drawn boundaries can end up quite complex, so how do we know that a given map is unfairly skewed? Math comes to the rescue. We can ask whether the likely outcome of a given map is very unusual within the set of all possible reasonable maps. That's a hard math problem, however (the set of all possible maps is pretty big), so we have to be clever to solve it. I talk with geometer Jordan Ellenberg about how ideas like random walks and Markov chains help us judge the fairness of political boundaries.

Support Mindscape on Patreon.

Jordan Ellenberg received his Ph.D. in mathematics from Harvard University in 1998. He is currently the John D. MacArthur Professor of Mathematics at the University of Wisconsin. He competed in the International Mathematical Olympiad three times, winning a gold medal twice. Among his awards are the MAA Euler Book Prize and a Guggenheim Fellowship. He is the author of How Not to Be Wrong and the novel The Grasshopper King. His new book is Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else.

Links:
* Web site
* Wisconsin web page
* Google Scholar publications
* Amazon author page
* Wikipedia
* Twitter
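The ensemble test described in this episode is easy to caricature in code. Below is a minimal, hypothetical Python sketch (not from the episode): the toy grid of precincts, the vote shares, and the crude size-balance rule are all invented for illustration, and the initial map simply stands in for the "enacted" map. Real analyses (for example, the GerryChain samplers from the MGGG Redistricting Lab) enforce contiguity, population balance, and compactness; the point here is only the shape of the argument.

```python
import random

random.seed(0)

SIZE = 8            # hypothetical 8x8 grid of precincts
N_DISTRICTS = 4
PRECINCTS = [(r, c) for r in range(SIZE) for c in range(SIZE)]
# Invented vote shares for "party A" in each precinct.
VOTE_SHARE = {p: random.uniform(0.35, 0.65) for p in PRECINCTS}

def initial_map():
    """Four vertical bands of two columns each: a simple starting map,
    standing in for the enacted map in this toy example."""
    return {(r, c): c // 2 for (r, c) in PRECINCTS}

def seats_won(assignment):
    """Count districts where party A's average vote share exceeds 0.5."""
    totals = [0.0] * N_DISTRICTS
    counts = [0] * N_DISTRICTS
    for precinct, district in assignment.items():
        totals[district] += VOTE_SHARE[precinct]
        counts[district] += 1
    return sum(1 for d in range(N_DISTRICTS) if totals[d] / counts[d] > 0.5)

def step(assignment):
    """One Markov-chain move: flip a random precinct into a neighboring
    precinct's district, rejecting moves that unbalance district sizes
    too much. (Real chains also enforce contiguity and compactness.)"""
    r, c = random.choice(PRECINCTS)
    neighbors = [(r + dr, c + dc)
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]
    target = assignment[random.choice(neighbors)]
    if target == assignment[(r, c)]:
        return assignment
    new = dict(assignment)
    new[(r, c)] = target
    sizes = [list(new.values()).count(d) for d in range(N_DISTRICTS)]
    if min(sizes) < SIZE * SIZE // N_DISTRICTS - 4:
        return assignment        # too unbalanced; reject the move
    return new

# Walk the chain and record seat totals across the ensemble of maps.
current = initial_map()
ensemble = []
for _ in range(5000):
    current = step(current)
    ensemble.append(seats_won(current))

enacted = seats_won(initial_map())
frac = sum(1 for s in ensemble if s == enacted) / len(ensemble)
print(f"Enacted map: {enacted} seats for party A; "
      f"{frac:.1%} of ensemble maps match that seat count.")
```

If the enacted map's seat count sits far out in the tail of the ensemble's distribution, that is evidence of skew; making that comparison rigorous over the enormous space of legal maps is the hard, clever part the episode digs into.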

Talking Beats with Daniel Lelchuk
Ep. 95: Jordan Ellenberg

Talking Beats with Daniel Lelchuk

Play Episode Listen Later May 25, 2021 60:03


"People may think of themselves as having no mind for geometry at all, but that's purely an illusion." Jordan Ellenberg -- mathematician, numbers guru, and explainer -- joins the podcast on the day his new book is released. The book, called Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else, takes that subject so many people had problems with in middle school or high school and shows even the most casual reader that we all have a feel for geometry somewhere inside us-- even if we don't think we do. Coincidentally, that is something Daniel has long said about music and its mass appeal, and so Daniel and Jordan explore the fascinating parallels between geometry and music, and even get into a heated discussion over Jordan's portrayal of Puccini and his operas! Gerrymandering, politics, and math are all connected in this conversation as well, and some great poetry makes an appearance, too. Support Talking Beats with Daniel Lelchuk on Patreon. You will contribute to continued presentation of substantive interviews with the world's most compelling people. We believe that providing a platform for individual expression, free thought, and a diverse array of views is more important now than ever. Jordan Ellenberg grew up in Potomac, MD, the child of two statisticians. He excelled in mathematics from a young age, and competed for the U.S. in the International Mathematical Olympiad three times, winning two gold medals and a silver. He went to college at Harvard, got a master’s degree in fiction writing from Johns Hopkins, and then returned to Harvard for his Ph.D. in math. After graduate school, he was a postdoc at Princeton. In 2004, he joined the faculty of the University of Wisconsin at Madison, where he is now the John D. MacArthur Professor of Mathematics. Ellenberg’s research has uncovered new and unexpected connections between these subjects and algebraic topology, the study of abstract high-dimensional shapes and the relations between them. Ellenberg was a plenary speaker at the 2013 Joint Mathematics Meetings, the largest mathematics conference in the world, and he has lectured about his research around the United States and in ten other countries. Ellenberg has been writing for a general audience about math for more than fifteen years; his work has appeared in the New York Times, the Wall Street Journal, the Washington Post, Wired, The Believer, and the Boston Globe, and he is the author of the “Do the Math” column in Slate. His Wired feature story on compressed sensing appeared in the Best Writing on Mathematics 2011 anthology. His novel, The Grasshopper King, was a finalist for the 2004 New York Public Library Young Lions Fiction Award. His 2014 book How Not To Be Wrong was a New York Times and Sunday Times (London)bestseller and was one of Bill Gates’ top five summer books; it has been published in sixteen countries.

Good Morning, RVA!
Good morning, RVA: 912 • 60 • 15.6; walk-ups abound; and even more budget sessions

Good Morning, RVA!

Play Episode Listen Later May 6, 2021


Good morning, RVA! It's 50 °F, and today's weather looks excellent. Expect highs near 70 °F and no rain. Get out there, and let the sun recharge your battery.

Water cooler

As of this morning, the Virginia Department of Health reports 842 new positive cases of the coronavirus in the Commonwealth and 21 new deaths as a result of the virus. VDH reports 85 new cases in and around Richmond (Chesterfield: 46, Henrico: 33, and Richmond: 6). Since this pandemic began, 1,290 people have died in the Richmond region. The seven-day average of new reported cases across the state sits at 912.

No idea what he'll say, but the governor will hold a press conference today at 11:00 AM to "provide updates on the Commonwealth's response to COVID-19 and vaccination program." I don't see a placeholder for it yet, but you can most likely stream the event live from VPM's YouTube. Since we've got a sizeable loosening of coronarestrictions headed our way on May 15th, I'd wager that he'll mostly speak on the State's vaccination program. Maybe he'll talk through the new walk-up options at CVS that fall in line with President Biden's announcements earlier this week?

Over in vaccine world, Cameron Thompson at WTVR reports that Henrico County will host their last mass vaccination event at the Raceway on May 27th. Between now and then you can just walk on up, no appointment required, on the 11th, 12th, 19th, 20th, 26th, and 27th and get yourself vaccinated. Remember: You can also walk up to George Wythe High School on Wednesdays for a one-and-done Johnson & Johnson shot. Closing the Raceway, of course, does not mean that all of Henrico County got jabbed and is now good to go. It means that demand has dipped and public-health humans will need to change things up in clever ways to make it as easy as possible for the rest of the region to get vaccinated. Thompson grabbed this perfect quote from someone at yesterday's walk-up event at Wythe: "I kept saying I was going to make an appointment, make an appointment, never did. But, once I've seen this become available I just decided to jump on." I wouldn't call this vaccine hesitancy; I would call it "people have lives to lead, and hopping through technology hoops to find inconvenient vaccination appointments at a hard-to-reach racetrack doesn't work for some folks."

This is something I have nightmares about, from the Richmond Police Department: "At approximately 9:16 p.m., Nicholas Anthony Smith, 23, a male and sole occupant, was operating a vehicle southbound in the 100 block of South Arthur Ashe Boulevard and lost control of the vehicle. The vehicle left the roadway and struck a pole near the intersection of West Cary Street. Witnesses stated that he swerved to avoid striking two pedestrians that were crossing Arthur Ashe Boulevard from east to west at that location. Smith was transported to a local hospital where he succumbed to his injuries. Investigators determined that speed was a factor." My question remains the same after all fatal crashes in the City: What are we going to do to physically change the street to keep this from happening again? How do we narrow Arthur Ashe to slow traffic? Can we retime lights to prevent drivers from catching a wave of greens? How do we protect major pedestrian intersections like Arthur Ashe and Cary? The point of Vision Zero is not that people won't make mistakes while using our streets; it's that they won't die or kill others as a result of those mistakes.

Council's hope for an orderly 2021 budget season has kind of gone out the window. I've got the fifth budget amendment work session up on The Boring Show, and they've scheduled another session for today at 4:00 PM. I'm having a hard time keeping up, and, after listening along, so are the councilfolk. Here's my biggest takeaway, 11.5 episodes in: If you are even the tiniest bit defund-the-police-curious, you have between 0 and 0.5 people representing that position for you on City Council. There's just absolutely zero willingness to even entertain the idea that we maybe have too many, too much, too intense, too militarized, too funded police.

Unrelated to budget, the City's Urban Design Committee will meet today with a bunch of chill stuff on their agenda that will not make your blood boil. Did you know they're putting new roofs on the massive water tanks in Byrd Park? Plus, a fancy new parklet is planned for the intersection of Brook Road and Marshall Street.

West Enders! If you're a part of the RPS community, tonight's your night for the West End-specific Reopen With Love 2.0 conversation. Here's the presentation from these meetings if you're more of a flip-through-slides kind of person. There are a lot of thoughtful mitigation measures explained in those slides, and, while it kind of feels like "when will we ever get back to normal!", it also kind of feels like "wait, how many fewer gross illnesses will my child now bring home from school next year?"

The Richmond and Henrico Health Districts will hold a free COVID-19 community testing event today at the East Henrico Health Department from 2:00–4:00 PM. Testing continues to be an important part of containing the spread of COVID-19, and these events take place every Thursday.

This morning's longread

The Incredible Rise of North Korea's Hacking Army

What!? This is bananas.

The process by which North Korean hackers are spotted and trained appears to be similar to the way Olympians were once cultivated in the former Soviet bloc. Martyn Williams, a fellow at the Stimson Center think tank who studies North Korea, explained that, whereas conventional warfare requires the expensive and onerous development of weaponry, a hacking program needs only intelligent people. And North Korea, despite lacking many other resources, "is not short of human capital." The most promising students are encouraged to use computers at schools. Those who excel at mathematics are placed at specialized high schools. The best students can travel abroad, to compete in such events as the International Mathematical Olympiad.

If you'd like your longread to show up here, go chip in a couple bucks on the ol' Patreon.

Picture of the Day

X marks the spot.

Early Breakfast with Abongile Nzelenzele
Grade 12 learner shines at International Mathematical Olympiad

Early Breakfast with Abongile Nzelenzele

Play Episode Listen Later Oct 6, 2020 9:47


A Grade 12 learner at Horizon International High School made history by winning a bronze medal at the 61st International Mathematical Olympiad in Russia. Kgaogelo Bopape joins Africa Early Breakfast to talk about the journey.
Guest: Kgaogelo Bopape, Grade 12 learner at Horizon International High School
Host: Africa Melane
Topic: Grade 12 learner shines at International Mathematical Olympiad
See omnystudio.com/listener for privacy information.

Hear This Idea
9. Neel Nanda on Effective Planning and Building Habits that Stick

Hear This Idea

Play Episode Listen Later Apr 19, 2020 57:27


Neel Nanda is a final-year maths undergraduate at the University of Cambridge and a gold medalist in the International Mathematical Olympiad. He teaches regularly, from revision lectures to a recent 'public rationality' workshop. Neel is also an active member of rationalist and effective altruism communities. In this episode we discuss:
* How to view self-improvement and optimising your goals
* Forming good habits through the 'TAPs' (trigger-action plans) technique
* How to build effective plans by using our 'inner simulator' and 'pre-hindsight'
You can read more in this episode's accompanying write-up: hearthisidea.com/episodes/neel. You can also read Neel's teaching notes for his planning workshop here. If you have any feedback or suggestions for future guests, please get in touch through our website. Also, Neel has created an anonymous feedback form for this episode, and he would love to hear any of your thoughts! Please also consider leaving a review on Apple Podcasts or wherever you're listening to this; we're just starting out and it would really help listeners find us! If you want to support the show more directly, you can also buy us a beer at tips.pinecast.com/jar/hear-this-idea. Thanks for listening!

Countercurrent: conversations with Professor Roger Kneebone
Kevin Buzzard in conversation with Roger Kneebone

Countercurrent: conversations with Professor Roger Kneebone

Play Episode Listen Later Dec 12, 2016 43:00


Kevin Buzzard has been obsessed by pure mathematics since he was a child, winning a gold medal with a perfect score at the International Mathematical Olympiad at the age of 19. Now Professor of Pure Mathematics at Imperial College London, Kevin is fascinated by modular forms. In this conversation we discuss the challenges of communicating with non-mathematicians about this unique conceptual world.

Scalar Learning Podcast
EP070: The International Mathematical Olympiad

Scalar Learning Podcast

Play Episode Listen Later Aug 6, 2016 16:33


In honor of the summer olympics, Huzefa discusses the International Mathematical Olympiad, an annual international competition for students under the […]

Going Deep with Aaron Watson
9 Po-Shen Loh, Founder of Expii, US Math Team Lead & CMU Professor

Going Deep with Aaron Watson

Play Episode Listen Later Jul 23, 2015 66:43


Po-Shen Loh took an hour out of his day to sit down with the Going Deep podcast and discuss his company Expii, which helps students learn difficult STEM topics.

He represented the USA Math team at the International Mathematical Olympiad as a high schooler in 1999, where he won a silver medal. He came back to the team as a deputy leader (assistant coach) in 2004 and from 2010 to 2013, took over as head coach in 2014, and led the US to its first championship in 21 years in 2015.

Po founded Expii Inc. in January of 2014 as a learning tool to help replicate the most effective methods of teaching students all over the world. Expii's mission is as follows: "We believe that high-quality educational resources should be available to anyone, no matter where they are or what they can afford. We also believe that students and teachers are the best at explaining things. We build the tools ordinary people need to educate one another. So far, we've designed an approachable markup language that enthusiasts have used to create thousands of lessons. We've also created a free platform to interconnect these lessons."

Po got his Ph.D. in Mathematics from Princeton in 2010, where his thesis discussed combinatorics (the study of discrete systems) and probability theory. Prior to his work at Princeton, Po-Shen received the equivalent of a master's degree in mathematics from the University of Cambridge in 2005, where he was supported by a Winston Churchill Foundation Scholarship. He received his undergraduate degree in mathematics from Caltech in 2004, graduating first in his class, and his undergraduate thesis later received the Honorable Mention for the 2004 AMS-MAA-SIAM Morgan Prize.

After getting his Ph.D., he went on to immediately begin an assistant professorship in mathematics at Carnegie Mellon University.

Articles: Washington Post 1-hour lesson on probability
Connect: Twitter @PoShenLoh and @ExpiiInc; Expii website

Saturday Review
Alexander McQueen, Suite Francaise, X+Y, Antigone, Tom McCarthy

Saturday Review

Play Episode Listen Later Mar 14, 2015 42:04


When an exhibition of the fashion creations of Alexander McQueen opened in New York, visitors queued for up to 5 hours to get in. It's now at London's Victoria and Albert Museum; will it be such a crowd-puller?

Suite Française, Irène Némirovsky's wartime novel (discovered more than six decades after her death), was a best seller. Can it repeat its success as a film?

X+Y is a film about a young maths prodigy who is on the autistic spectrum. It deals with his participation in the International Mathematical Olympiad and growing up emotionally.

Juliette Binoche plays the lead in Antigone at London's Barbican Theatre. Directed by Ivo van Hove, it has caused a lot of advance excitement.

Tom McCarthy's new novel Satin Island is a meditation on contemporary society that some reviewers have accused of ditching traditionally novelistic techniques like plot and character. Is it all the better for it?

Tom Sutcliffe's guests are Helen Lewis, Dominic Sandbrook and Kit Davis. The producer is Oliver Jones.