Podcasts about Central limit theorem

  • 34 PODCASTS
  • 55 EPISODES
  • 35m AVG DURATION
  • INFREQUENT EPISODES
  • Feb 7, 2025 LATEST
Central limit theorem

POPULARITY

[Popularity trend by year, 2017–2024]


Best podcasts about Central limit theorem

Latest podcast episodes about Central limit theorem

Klaviyo Data Science Podcast
Klaviyo Data Science Podcast EP 56 | Evaluating AI Models: A Seminar (feat. Evan Miller)

Klaviyo Data Science Podcast

Play Episode Listen Later Feb 7, 2025 45:29


This month, the Klaviyo Data Science Podcast welcomes Evan Miller to deliver a seminar on his recently published paper, Adding Error Bars to Evals: A Statistical Approach to Language Model Evaluations! This episode is a mix of a live seminar Evan gave to the team at Klaviyo and an interview we conducted with him afterward. Suppose you're trying to understand the performance of an AI model — maybe one you built or fine-tuned and are comparing to state-of-the-art models, maybe one you're considering loading up and using for a project you're about to start. If you look at the literature today, you can get a sense of the model's average performance on an evaluation or set of tasks. But often, that's unfortunately the extent of what it's possible to learn — there is much less emphasis placed on the variability or uncertainty inherent to those estimates. And as anyone who's worked with a statistical model in the past can affirm, variability is a huge part of why you might choose to use or discard a model. This seminar explores how to best compute, summarize, and display estimates of variability for AI models. Listen along to hear about topics like:
• Why the Central Limit Theorem you learned about in Stats 101 is still relevant to the most advanced AI models developed today
• How to think about complications of classic assumptions, such as measurement error or clustering, in the AI landscape
• When to do a sample size calculation for your AI model, and how to do it
About Evan Miller: You may already know our guest Evan Miller from his fantastic blog, which includes his celebrated A/B testing posts, such as "How not to run an A/B test." You may also have used his A/B testing tools, such as the sample size calculator. Evan currently works as a research scientist at Anthropic.
About Anthropic: Per Anthropic's website, "Anthropic is an AI safety and research company based in San Francisco. Our interdisciplinary team has experience across ML, physics, policy, and product. Together, we generate research and create reliable, beneficial AI systems." You can find more information about Anthropic, including links to their social media accounts, on the company website.
Special thanks to Chris Murphy at Klaviyo for organizing this seminar and making this episode possible! For the full show notes, including who's who, see the Medium writeup.
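The paper's core prescription translates into a few lines of code. A minimal sketch (not Evan's actual code; the per-question scores below are simulated) of a CLT-based error bar for an eval score:

```python
import numpy as np

# Hypothetical per-question results (1 = correct, 0 = incorrect) from one eval run.
rng = np.random.default_rng(0)
scores = rng.binomial(1, 0.72, size=500).astype(float)

n = len(scores)
mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)  # CLT: the mean is approximately normal with this SE

# 95% confidence interval for the model's true average score on this eval
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"accuracy = {mean:.3f} +/- {1.96 * se:.3f} (95% CI: [{lo:.3f}, {hi:.3f}])")
```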

StarTalk Radio
Our Mathematical Universe with Grant Sanderson (3Blue1Brown)

StarTalk Radio

Play Episode Listen Later Sep 24, 2024 56:56


Is math discovered or invented? Neil deGrasse Tyson & Chuck Nice explore information theory, talking to aliens with prime numbers, Mandelbrot sets, and why math is often called the "language of the universe" with Grant Sanderson, the math educator behind the YouTube channel 3Blue1Brown. NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/our-mathematical-universe-with-grant-sanderson-3blue1brown/
Thanks to our Patrons Dr. Satish, Susan Kleiner, Harrison Phillips, Mark A, Rebeca Fuchs, Aaron Ciarla, Joe Reyna, David Grech, Fida Vuori, Paul A Hansen, Imran Yusufzai, CharlieVictor, Bob Cowles, Ryan Lyum, MunMun, Samuel Barnett, John DesMarteau, and Mary Anne Sanford for supporting us this week. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.

Wildlife By The Numbers
Wildlife By The Numbers Episode 3 Sample Size Needs

Wildlife By The Numbers

Play Episode Listen Later Jul 25, 2024 27:46


Shifting focus to sample size determination, Matt, Grant, and Randy explore the challenges and considerations in choosing appropriate sample sizes for reliable ecological research. They discuss trade-offs and budget constraints, and introduce the concept of power analysis for enhancing the reliability of ecological studies.
Quotes from this episode:
"In this podcast, we're going to talk about sample size needs. How many samples does a person need to collect to get a representative sample of the population? So it leads us back to this whole representativeness idea. If a person samples too few, then there's a very good chance that person is going to include a disproportionate number of outliers, oddballs, or anomalies in the sample."
"...in the earlier episode we said, if all the plants have the same number of tomatoes we would just have to sample one of them. That was an invariant population. But we also spoke to that some plants had 100 tomatoes and some had none. And so we have extreme variability."
"... (the amount) of uncertainty you're willing to deal with, and how much imprecision you're willing to deal with, really drives your sample size needs... You've got to take both of those things into consideration. How variable is my population and then how certain do I want to be? How much error am I willing to accept in my final estimate?"
Episode music: Shapeshifter by Mr Smith, licensed under an Attribution 4.0 International License.
https://creativecommons.org/licenses/by/4.0/
https://freemusicarchive.org/music/mr-smith/studio-city/shapeshifter/
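The trade-off the hosts describe has a standard formula behind it: to estimate a mean to within a margin of error E with roughly 95% confidence, you need about n = (1.96 · σ / E)² samples. A small sketch with hypothetical tomato-plant numbers (not from the episode):

```python
import math

def sample_size_for_mean(sigma, margin_of_error, z=1.96):
    """Samples needed so a 95% CI on the mean is +/- margin_of_error."""
    return math.ceil((z * sigma / margin_of_error) ** 2)

# Hypothetical, highly variable tomato-plant population (sd ~ 30 tomatoes per plant).
print(sample_size_for_mean(sigma=30, margin_of_error=5))   # ~139 plants
print(sample_size_for_mean(sigma=30, margin_of_error=10))  # ~35 plants
```

Halving the tolerated error roughly quadruples the required sample size, which is exactly the budget tension the episode discusses.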

Data Science Interview Prep
Central Limit Theorem

Data Science Interview Prep

Play Episode Listen Later May 3, 2023 8:24


Mastering the Central Limit Theorem in data science interviews helps you shine with solid statistical prowess! Make sure you brush up on this common statistical concept before your next interview. Become a Paid Subscriber for access to our full library: https://podcasters.spotify.com/pod/show/data-science-interview/subscribe
If you enjoy our podcast, please consider becoming a premium member on either Patreon ($5 donation) or Spotify ($2.99 donation). Your donation goes directly to supporting this channel and the human labor that goes into each of these episodes. For each episode, we do research and fact-check our content to make sure that you get the best information possible, even on cutting-edge topics. Becoming a premium member also gives you access to our locked episodes, which include helpful content such as:
- NLP
- Deep Learning
- Recurrent Neural Networks
- Imbalanced Data
- The Bias-Variance Tradeoff
- Transformers in NLP
- Self-Attention in NLP
- Distributions
- Statistics
- A/B Testing
.... and so much more!
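Before an interview, it helps to have seen the theorem in action rather than just memorized it. A minimal simulation sketch (not from the episode): sample means drawn from a heavily skewed population behave exactly as the CLT predicts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Population: exponential (heavily right-skewed, nothing like a normal).
population = rng.exponential(scale=2.0, size=100_000)

# Distribution of the mean of n draws, repeated many times.
n, reps = 50, 10_000
sample_means = rng.choice(population, size=(reps, n)).mean(axis=1)

# CLT prediction: mean ~ population mean, sd ~ population sd / sqrt(n)
print(f"mean of means: {sample_means.mean():.3f} (population mean {population.mean():.3f})")
print(f"sd of means:   {sample_means.std():.3f} (CLT predicts {population.std()/np.sqrt(n):.3f})")
```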

The Nonlinear Library
LW - Why Are Maximum Entropy Distributions So Ubiquitous? by johnswentworth

The Nonlinear Library

Play Episode Listen Later Apr 6, 2023 15:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Are Maximum Entropy Distributions So Ubiquitous?, published by johnswentworth on April 5, 2023 on LessWrong.
If we measure the distribution of particle velocities in a thin gas, we'll find that they're roughly normally distributed. Specifically, the probability density of velocity v will be proportional to $e^{-\frac{1}{2}mv^2/(k_B T)}$ - or, written differently, $e^{-E(v)/(k_B T)}$, where $E(v)$ is the kinetic energy of a particle of the gas with velocity v, $T$ is temperature, and $k_B$ is Boltzmann's constant. The latter form, $e^{-E/(k_B T)}$, generalizes even beyond thin gasses - indeed, it generalizes even to solids, fluids, and plasmas. It applies to the concentrations of chemical species in equilibrium solutions, or the concentrations of ions around an electrode. It applies to light emitted from hot objects. Roughly speaking, it applies to microscopic states in basically any physical system in thermal equilibrium where quantum effects aren't significant. It's called the Boltzmann distribution; it's a common sub-case of a more general class of relatively elegant distributions called maximum entropy distributions.
Even more generally, maximum entropy distributions show up remarkably often. The normal distribution is another good example: you might think of normal distributions mostly showing up when we add up lots of independent things (thanks to the Central Limit Theorem), but then what about particle velocities in a gas? Sure, there are conceptually lots of little things combining together to produce gas particle velocities, but it's not literally a bunch of numbers adding together; the Central Limit Theorem doesn't directly apply. Point is: normal distributions show up surprisingly often, even when we're not adding together lots of numbers. Same story with lots of other maximum entropy distributions - Poisson, geometric/exponential, uniform, Dirichlet. Most of the usual named distributions in a statistical library are either maximum entropy distributions or near relatives. Like the normal distribution, they show up surprisingly often. What's up with that? Why this particular class of distributions?
If you have a Bayesian background, there's kind of a puzzle here. Usually we think of probability distributions as epistemic states, descriptions of our own uncertainty. Probabilities live “in the mind”. But here we have a class of distributions which are out there “in the territory”: we look at the energies of individual particles in a gas or plasma or whatever, and find that they have not just any distribution, but a relatively “nice” distribution, something simple. Why? What makes a distribution like that appear, not just in our own models, but out in the territory?
What Exactly Is A Maximum Entropy Distribution?
Before we dive into why maximum entropy distributions are so ubiquitous, let's be explicit about what maximum entropy distributions are. Any (finite) probability distribution has some information-theoretic entropy, the “amount of information” conveyed by a sample from the distribution, given by Shannon's formula: $-\sum_i p_i \log(p_i)$. As the name suggests, a maximum entropy distribution is the distribution with the highest entropy, subject to some constraints. Different constraints yield different maximum entropy distributions.
Conceptually: if a distribution has maximum entropy, then we gain the largest possible amount of information by observing a sample from the distribution. On the flip side, that means we know as little as possible about the sample before observing it. Maximum entropy = maximum uncertainty. With that in mind, you can probably guess one maximum entropy distribution: what's the maximum entropy distribution over a finite number of outcomes (e.g. heads/tails, or 1/2/3/4/5/6), without any additional constraints? (Think about that for a moment if you want.) Intuitively, the “most unce...
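For the record, the closing puzzle has a one-line answer via Lagrange multipliers: with no constraints beyond normalization, the maximum entropy distribution over n outcomes is the uniform distribution. A sketch:

```latex
% Maximize  -\sum_i p_i \log p_i  subject to  \sum_i p_i = 1:
\mathcal{L} = -\sum_{i=1}^{n} p_i \log p_i + \lambda \Big( \sum_{i=1}^{n} p_i - 1 \Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = -\log p_i - 1 + \lambda = 0
\;\Rightarrow\; p_i = e^{\lambda - 1},
% the same constant for every i, so normalization forces p_i = 1/n.
```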

The Cartesian Cafe
Greg Yang | Large N Limits: Random Matrices & Neural Networks

The Cartesian Cafe

Play Episode Listen Later Jan 4, 2023 181:27 Very Popular


Greg Yang is a mathematician and AI researcher at Microsoft Research who for the past several years has done incredibly original theoretical work in the understanding of large artificial neural networks. Greg received his bachelor's in mathematics from Harvard University in 2018 and while there won the Hoopes prize for best undergraduate thesis. He also received an Honorable Mention for the Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student in 2018 and was an invited speaker at the International Congress of Chinese Mathematicians in 2019.
In this episode, we get a sample of Greg's work, which goes under the name "Tensor Programs" and currently spans five highly technical papers. The route chosen to compress Tensor Programs into the scope of a conversational video is to place its main concepts under the umbrella of one larger, central, and time-tested idea: that of taking a large N limit. This occurs most famously in the Law of Large Numbers and the Central Limit Theorem, which then play a fundamental role in the branch of mathematics known as Random Matrix Theory (RMT). We review this foundational material and then show how Tensor Programs (TP) generalizes this classical work, offering new proofs of RMT results. We conclude with the applications of Tensor Programs to a (rare!) rigorous theory of neural networks.
Patreon: https://www.patreon.com/timothynguyen
Part I. Introduction
00:00:00 : Biography
00:02:45 : Harvard hiatus 1: Becoming a DJ
00:07:40 : I really want to make AGI happen (back in 2012)
00:09:09 : Impressions of Harvard math
00:17:33 : Harvard hiatus 2: Math autodidact
00:22:05 : Friendship with Shing-Tung Yau
00:24:06 : Landing a job at Microsoft Research: Two Fields Medalists are all you need
00:26:13 : Technical intro: The Big Picture
00:28:12 : Whiteboard outline
Part II. Classical Probability Theory
00:37:03 : Law of Large Numbers
00:45:23 : Tensor Programs Preview
00:47:26 : Central Limit Theorem
00:56:55 : Proof of CLT: Moment method
1:00:20 : Moment method explicit computations
Part III. Random Matrix Theory
1:12:46 : Setup
1:16:55 : Moment method for RMT
1:21:21 : Wigner semicircle law
Part IV. Tensor Programs
1:31:03 : Segue using RMT
1:44:22 : TP punchline for RMT
1:46:22 : The Master Theorem (the key result of TP)
1:55:04 : Corollary: Reproof of RMT results
1:56:52 : General definition of a tensor program
Part V. Neural Networks and Machine Learning
2:09:05 : Feed forward neural network (3 layers) example
2:19:16 : Neural network Gaussian Process
2:23:59 : Many distinct large N limits for neural networks
2:27:24 : abc parametrizations (Note: "a" is absorbed into "c" here): variance and learning rate scalings
2:36:54 : Geometry of space of abc parametrizations
2:39:41 : Kernel regime
2:41:32 : Neural tangent kernel
2:43:35 : (No) feature learning
2:48:42 : Maximal feature learning
2:52:33 : Current problems with deep learning
2:55:02 : Hyperparameter transfer (muP)
3:00:31 : Wrap up
Further Reading: Tensor Programs I, II, III, IV, V by Greg Yang and coauthors.
Twitter: @iamtimnguyen
Webpage: http://www.timothynguyen.org
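As a taste of the Random Matrix Theory segment, the Wigner semicircle law (1:21:21) can be checked numerically in a few lines. A minimal sketch, not code from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Wigner matrix: symmetric, with i.i.d. standard normal entries.
A = rng.standard_normal((N, N))
W = (A + A.T) / np.sqrt(2 * N)   # scale so the spectrum converges to [-2, 2]

eigs = np.linalg.eigvalsh(W)

# Semicircle density on [-2, 2]: rho(x) = sqrt(4 - x^2) / (2*pi)
hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(4 - centers**2) / (2 * np.pi)
print(f"max |empirical - semicircle| = {np.abs(hist - rho).max():.3f}")
```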

Flirting with Models
David Sun - Expectancy Hacking (S5E5)

Flirting with Models

Play Episode Listen Later Jun 27, 2022 49:55


Today I speak with David Sun, a retail trader who started his own hedge fund. Coming from a non-traditional background, David takes a non-traditional approach in his investment mandates.  Focused on selling options to capture the volatility risk premium, David believes that markets are ultimately efficient and therefore foregoes using any sort of active signal.  Instead, he focuses on explicitly controlling his win size relative to his loss size, and then choosing a strategy with a win rate that bumps him into positive expectancy.  By then maximizing the number of “at bats,” he lets the Central Limit Theorem take care of the rest.  It's an approach he calls “expectancy hacking.” We discuss this approach in both theory and practice, addressing issues such as trading costs and slippage drag, as well as both sequence and event risk.  David's approach is certainly non-traditional, but highlights some unique concepts of how traders may be able to architect a payoff profile around a risk premium. Please enjoy my episode with David Sun.
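A hedged sketch of the arithmetic behind "expectancy hacking" (all numbers invented for illustration): fix the win and loss sizes, pick a win rate high enough for positive expectancy, then rely on averaging over many trades to make realized results concentrate around it.

```python
import numpy as np

# Per-trade payoff: win +w with probability p, lose -l with probability 1-p.
p, w, l = 0.80, 1.0, 3.0
expectancy = p * w - (1 - p) * l          # 0.80*1 - 0.20*3 = +0.20 per trade
print(f"expectancy per trade: {expectancy:+.2f}")

# CLT in action: average P/L over 500 trades concentrates around +0.20.
rng = np.random.default_rng(1)
trades = rng.choice([w, -l], size=(10_000, 500), p=[p, 1 - p])
avg = trades.mean(axis=1)
print(f"mean of 500-trade averages: {avg.mean():+.3f}, sd: {avg.std():.3f}")
```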

Naked Data Science
Central Limit Theorem in Plain English

Naked Data Science

Play Episode Listen Later Jun 20, 2022 38:44


We are trying out a different format in this episode. Nima gave me a topic, which is the Central Limit Theorem. I spent an hour learning about it, and then we have a little chat. You will hear why we are doing this in the episode. And if you like this format, please send us an email at hello [at] nds.show. That helps us decide whether we are going to make more episodes like this in the future. Meanwhile, if you are not a data scientist yet but want to become one, you should really attend our webinar. We will demystify the transition into data science. We will show you the most effective way to build your skills. And we will advise you on the four possible options you can take to go from where you are to landing a data science job in as little as 9 months. Find out more here.

The tastytrade network
The Skinny on Options: Abstract Applications - June 6, 2022 - How Do We Know The Probabilities Will Work Out Over Time?

The tastytrade network

Play Episode Listen Later Jun 6, 2022 24:27


At tastytrade, we anchor our approach on statistical advantages and high probabilities. But how can we be sure that the probabilities will actually work out in our favor over time? If we turn to the relationship between the Law of Large Numbers and the Central Limit Theorem, we find the answer to that question. Did you catch our recent show on how to determine Delta/Theta levels for your portfolio?
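A minimal simulation of the idea (not from the show): a single trade with a 70% probability of profit is unpredictable, but the realized win rate over many occurrences converges to 0.70 (Law of Large Numbers), and the size of its fluctuations is exactly what the CLT predicts.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.70  # hypothetical probability of profit on each trade

for n in (10, 100, 1_000, 10_000):
    win_rates = rng.binomial(n, p, size=100_000) / n   # realized win rates
    # CLT: win rate ~ Normal(p, sqrt(p*(1-p)/n)) for large n
    print(f"n={n:>6}: mean {win_rates.mean():.4f}, sd {win_rates.std():.4f} "
          f"(CLT predicts {np.sqrt(p*(1-p)/n):.4f})")
```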

The Michael Sartain Podcast
Tai Lopez - The Michael Sartain Podcast

The Michael Sartain Podcast

Play Episode Listen Later Jun 1, 2022 211:46


Tai Lopez (IG: @TaiLopez) is an investor, partner, and advisor to over 20 multi-million dollar businesses. Tai has created some of the most successful advertisements in the history of YouTube. He's currently the owner of RadioShack, Pier 1, Ralph & Russo and Stein Mart. Tai has a book club and hosts a podcast, The Tai Lopez Show. He has interviewed Kobe Bryant, Hillary Clinton, Rihanna, Jordan Belfort, Steven Spielberg, Mark Cuban, Chris Paul, Caron Butler & more. Tai also owns the largest book shipping club in the world, Mentor Box. He's currently launching the Original Garage NFT. Michael's Men of Action program is a Master's course dedicated to helping people elevate their social lives by building elite social circles and becoming higher status. Click the link below to learn more: https://go.moamentoring.com/i/2
Subscribe on Youtube: https://www.youtube.com/user/MichaelSartain
Listen on Apple Podcast: https://podcasts.apple.com/us/podcast/the-michael-sartain-podcast/id1579791157
Listen on Spotify: https://open.spotify.com/show/2faAYwvDD9Bvkpwv6umlPO?si=8Q3ak9HnSlKjuChsTXr6YQ&dl_branch=1
Filmed at Sticky Paws Studios: https://m.youtube.com/channel/UComrBVcqGLDs3Ue-yWAft8w
0:00 Intro 1:24 Dr David Buss 2:30 Stephen Pinker 2:42 ***Tai's Father 5:21 Tai's family 7:02 **My grandpa killed a guy with a hammer 8:02 *Prof Martin D Burkenroad 10:11 Robert Frost 11:17 Ernesto Lopez stories 13:28 Narcissistic Personality Disorder 15:01 *A lot of people think I'm a narcissist 16:45 Confessions of a Sociopath 17:08 *Evolutionary Psychology 18:49 Tai's Book List, Central Limit Theorem 20:09 The One Thing by Gary Keller 27:03 The Happiness Hypothesis by Jonathan Haidt 28:57 Civilization and Its Discontents by Sigmund Freud 31:42 Everything is genetic 32:30 Racism 34:16 Satoshi Kanazawa 36:54 Not much you can do to make your kids smarter 39:26 Parental Investment Hypothesis 41:13 Dr Buss, hypotheses are falsifiable but not falsified 42:17 Evolutionary Psychology textbook by Dr David Buss 47:18 Why are you not celebrating her leaving? 49:08 **Hugh Hefner, friends with exes 51:36 Nine months isn't a struggle 54:25 *The best man with women in the world 55:50 Allan Nation and other mentors 59:59 ***Here in my garage video, paid ads 1:03:27 **I think they all copied you 1:06:51 I was hypnotizing people to read books 1:09:10 **Amber Heard 1:12:36 *How do you choose which video to promote? 1:15:40 Crypto pizza video 1:19:00 Steps to hypnotize 1:22:38 MOAMentoring.com 1:23:34 Controversial people, OJ Simpson, Kim Jong Il 1:25:40 *Jordan Belfort, dealing with haters 1:29:04 The donkey, the man and his son 1:32:17 Brad Lea Podcast 1:34:23 Dan Bilzerian at my bday 1:35:05 Multiple streams of income, 60% - 30% - 10% 1:37:42 ***Crypto cult 1:39:17 Creating a digital product 1:43:49 ***I may have created the most millionaires 1:45:57 Vegetables and Dessert 1:49:13 Two pizzas for 100 bitcoin 1:50:50 Opinion on Crypto 1:52:30 ***Pseudo-smart people 1:56:37 ***95% of tokens will be worthless 1:59:12 Greater sucker theory 2:01:18 Terra Luna Project 2:05:25 Bitcoin Course 2:07:56 Stick to the name brand stuff 2:10:01 Meeting Dr. Buss 2:11:04 ***Stupid People 2:13:54 Wealth to poverty disparity 2:18:06 Dr. Alex Mehr, RadioShack token 2:22:36 Centralized versus decentralized exchanges 2:27:12 Slippage, three way trades 2:27:45 Moved to Virginia 2:29:39 **Stalkers breaking into my house 2:31:59 I've never met a billionaire who I would've traded for his life 2:14:13 Living in Vegas 2:38:58 ***Bored Apes 2:40:20 Original garage NFT 2:45:11 Dating Tai Lopez 2:48:39 Marriage 2:50:22 She almost divorced him for being my friend 2:53:09 **You killed somebody 2:57:45 The best guy in the world at social skills, Drago 3:02:08 **Colombian drug dealers 3:05:53 Not good looking 3:08:20 Women are fickle 3:13:37 Rihanna 3:22:47 Feminism and the patriarchy 3:29:02 Closing Program 3:30:00 Outro

The Nonlinear Library
AF - Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4] by Dan Hendrycks

The Nonlinear Library

Play Episode Listen Later May 30, 2022 44:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4], published by Dan Hendrycks on May 30, 2022 on The AI Alignment Forum. This is the fourth post in a sequence of posts that describe our models for Pragmatic AI Safety. We argued in our last post that the overall AI safety community ought to pursue multiple well-reasoned research directions at once. In this post, we will describe two essential properties of the kinds of research that we believe are most important. First, we want research to be able to tractably produce tail impact. We will discuss how tail impact is created in general, as well as the fact that certain kinds of asymptotic reasoning exclude valuable lines of research and bias towards many forms of less tractable research. Second, we want research to avoid creating capabilities externalities: the danger that some safety approaches produce by way of the fact that they may speed up AGI timelines. It may at first appear that capabilities are the price we must pay for more tractable research, but we argue here and in the next post that these are easily avoidable in over a dozen lines of research.
Strategies for Tail Impact
It's not immediately obvious how to have an impact. In the second post in this sequence, we argued that research ability and impact are tail-distributed, so most of the value will come from the small amount of research in the tails. In addition, trends such as scaling laws may make it appear that there isn't a way to “make a dent” in AI's development. It is natural to fear that the research collective will wash out individual impact. In this section, we will discuss high-level strategies for producing large or decisive changes and describe how they can be applied to AI safety.
Processes that generate long tails and step changes
Any researcher attempting to make serious progress will try to maximize their probability of being in the tail of research ability. It's therefore useful to understand some general mechanisms that tend to lead to tail impacts. The mechanisms below are not the only ones: others include thresholds (e.g. tipping points and critical mass). We will describe three processes for generating tail impacts: multiplicative processes, preferential attachment, and the edge of chaos.
Multiplicative processes
Sometimes forces are additive, where additional resources, effort, or expenditure in any one variable can be expected to drive the overall system forward in a linear way. In cases like this, the Central Limit Theorem often holds, and we should expect that outcomes will be normally distributed–in these cases one variable tends not to dominate. However, sometimes variables are multiplicative or interact nonlinearly: if one variable is close to zero, increasing other factors will not make much of a difference. In multiplicative scenarios, outcomes will be dominated by the combinations of variables where each of the variables is relatively high. For example, adding three normally distributed variables together will produce another normal distribution with a higher variance; multiplying them together will produce a long-tailed distribution.
As a concrete example, consider the impact of an individual researcher with respect to the variables that impact their work: time, drive, GPUs, collaborators, collaborator efficiency, taste/instincts/tendencies, cognitive ability, and creativity/the number of plausible concrete ideas to explore. In many cases, these variables can interact nonlinearly. For example, it doesn't matter if a researcher has fantastic research taste and cognitive ability if they have no time to pursue their ideas. This kind of process will produce long tails, since it is hard for people to get all of the many different factors right (this is also the case in startups). The impl...
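The sum-versus-product contrast described above is easy to verify numerically. A minimal sketch with made-up positive "success factors" (not from the post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ten positive factors per outcome (time, drive, taste, ...), i.i.d. and bounded.
factors = rng.uniform(0.2, 1.8, size=(100_000, 10))

sums = factors.sum(axis=1)        # additive: CLT -> approximately normal
products = factors.prod(axis=1)   # multiplicative: long right tail

print(f"sums:     skew {stats.skew(sums):+.2f}, "
      f"top 1% / median = {np.quantile(sums, 0.99) / np.median(sums):.2f}")
print(f"products: skew {stats.skew(products):+.2f}, "
      f"top 1% / median = {np.quantile(products, 0.99) / np.median(products):.2f}")
```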

Common Science Podcast
Ep. 49 Free Will: Does it exist?

Common Science Podcast

Play Episode Listen Later Mar 29, 2022 56:32


Dré, Lauren, and Aidan ask: What's free will? Does it exist? And more.
Website & Newsletter | https://commonscientists.com
Support Us | https://patreon.com/commonscientists
REFERENCES
Unconscious brain activity shapes our decisions | https://www.nationalgeographic.com/science/article/unconscious-brain-activity-shapes-our-decisions
William James | https://en.wikipedia.org/wiki/William_James
Compatibilism | https://plato.stanford.edu/entries/compatibilism/
Paul in the Bible | free will and the devil, struggling to do what intended passage
Predestination: Calvinism | https://en.wikipedia.org/wiki/Calvinism
The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating | https://doi.org/10.1111%2Fj.1467-9280.2008.02045.x
Daniel Dennett on Consciousness and Free Will | https://www.youtube.com/watch?v=R-Nj_rEqkyQ
Just-world hypothesis | https://en.wikipedia.org/wiki/Just-world_hypothesis
How to play mafia | https://www.instructables.com/How-To-Play-Mafia-with-And-Without-Cards/
Complex system | https://en.wikipedia.org/wiki/Complex_system
Emergence | https://en.wikipedia.org/wiki/Emergence
Building the periodic table | Dmitri Mendeleev | https://www.khanacademy.org/humanities/big-history-project/stars-and-elements/knowing-stars-elements/a/dmitri-mendeleev
The Music of Life | Denis Noble | http://www.musicoflife.website/
Brain tumour causes uncontrollable paedophilia | https://www.newscientist.com/article/dn2943-brain-tumour-causes-uncontrollable-paedophilia/
Arthur Schopenhauer | https://en.wikipedia.org/wiki/Arthur_Schopenhauer
Sam Harris | https://www.samharris.org/
Central Limit Theorem (video) | https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/what-is-sampling-distribution/v/central-limit-theorem
PODCAST INFO
Podcast Website | https://commonscientists.com/common-science/
Apple Podcasts | https://apple.co/2KDjQCK
Spotify | https://spoti.fi/3pTK821
FOLLOW US
Instagram | https://www.instagram.com/commonscientists/
Twitter | https://twitter.com/commscientists
TAGS #Storytelling #Science #Society #Culture #Learning

Investment Terms
Investment Term For The Day - Central Limit Theorem

Investment Terms

Play Episode Listen Later Dec 15, 2021 2:03


In probability theory, the central limit theorem states that the distribution of the sample mean approximates a normal distribution as the sample size becomes larger, assuming that all samples are identical in size, and regardless of the population's actual distribution shape. The CLT is a statistical premise that, given a sufficiently large sample size from a population with a finite level of variance, the mean of all sampled variables from the same population will be approximately equal to the mean of the whole population. Furthermore, these sample means approximate a normal distribution whose variance is roughly the population variance divided by the sample size, while, by the law of large numbers, the sample mean itself converges on the population mean as the sample size grows. Although this concept was first developed by Abraham de Moivre in 1733, it was not given its name until 1920, when the Hungarian mathematician George Pólya dubbed it the central limit theorem.
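Stated precisely, in the classical i.i.d. form (note it is the sampling distribution of the mean, not of the individual observations, that becomes normal):

```latex
% Classical (Lindeberg-Levy) CLT: X_1, X_2, ... i.i.d. with mean mu and
% finite variance sigma^2; the standardized sample mean converges in
% distribution to a standard normal:
\frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \;\xrightarrow{\;d\;}\; \mathcal{N}(0, 1),
\qquad \bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i .
```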

intuitions behind Data Science
Central Limit Theorem

intuitions behind Data Science

Play Episode Listen Later Dec 4, 2021 Very Popular


A quick introduction to the central limit theorem and why it helps data analysis.

The tastytrade network
The Skinny On Options Math - November 17, 2021 - Why Log Returns

The tastytrade network

Play Episode Listen Later Nov 17, 2021 20:05


For a trader who decides to dip their toes into financial mathematics, one of the most confusing features can be the focus on log-returns, rather than profits and losses or even simple returns.  Jacob joins Tom and Tony to explore what the logarithm does and how it sets up the Black-Scholes Model through the Efficient Market Hypothesis and the Central Limit Theorem.
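The CLT connection the episode leans on comes from the fact that log returns, unlike simple returns, add across periods. A sketch of the chain of reasoning:

```latex
% Log returns telescope across time:
\log \frac{P_T}{P_0} \;=\; \sum_{t=1}^{T} \log \frac{P_t}{P_{t-1}} \;=\; \sum_{t=1}^{T} r_t .
% If the per-period log returns r_t are roughly i.i.d. with finite variance
% (an efficient-market-style assumption), the CLT makes their sum approximately
% normal, so the price P_T is approximately lognormal -- the Black-Scholes setup.
```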

The tastytrade network
The Skinny on Options: Abstract Applications - August 9, 2021 - Stable Paretian Distributions

The tastytrade network

Play Episode Listen Later Aug 9, 2021 17:47


As high probability traders, we rely on the characteristics of the Normal Distribution to analyze our positions and make daily decisions. And given both the Law of Large Numbers and Central Limit Theorem, the universal assumption of normality makes a lot of sense. But a case could also be made that financial markets are so unique that their distributions shouldn't put a cap on variance, but instead allow it to effectively be infinitely large. This is what Eugene Fama proposed back in 1963 with the Stable Paretian Hypothesis, and given the empirical data pointing to more of a Leptokurtic Distribution resulting from the equity market, a case could be made that it is more accurate than the more widely accepted Normal Distribution.
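To see the contrast concretely, one can sample a stable Paretian law with alpha < 2 (infinite variance) next to a normal (the alpha = 2 case). A minimal sketch, assuming SciPy's levy_stable implementation of the stable family:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

normal = rng.standard_normal(n)
# Stable Paretian law with alpha = 1.7 < 2: infinite variance, fat tails.
stable = stats.levy_stable.rvs(alpha=1.7, beta=0.0, size=n, random_state=rng)

# Tail probabilities: the stable law keeps producing extreme moves where
# the normal assigns essentially zero probability.
for k in (3, 5, 10):
    print(f"P(|X| > {k:2d}): normal {np.mean(np.abs(normal) > k):.1e}, "
          f"stable {np.mean(np.abs(stable) > k):.1e}")
```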

The tastytrade network
Market Measures - May 7, 2021 - Central Limit Theorem and Option P/L

The tastytrade network

Play Episode Listen Later May 7, 2021 34:04


Option P/L distributions are highly non-normal, which makes it difficult to form concrete statistical expectations on a trade-by-trade basis. What happens when we apply the CLT to these data? Join Tony, Scott, and Julia as they discuss the Central Limit Theorem and how it applies to option P/L distributions.
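A rough simulation of the segment's question (all numbers invented): per-trade P/L from a short-premium strategy is heavily skewed, but the average over batches of trades is far closer to normal, which is the CLT at work.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def trade_pl(size):
    """Hypothetical short-option P/L: keep a $50 credit 90% of the time, lose $400 otherwise."""
    return np.where(rng.random(size) < 0.90, 50.0, -400.0)

single = trade_pl(100_000)
batched = trade_pl((100_000, 100)).mean(axis=1)   # average P/L per 100-trade batch

print(f"single trades:   skew {stats.skew(single):+.2f}")
print(f"100-trade means: skew {stats.skew(batched):+.2f}  (closer to 0 = more normal)")
```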

The Get Up Girl
Upper Limits will kill your DREAM

The Get Up Girl

Play Episode Listen Later Apr 16, 2021 25:09


More than 10 years ago, I woke up the day I was supposed to go to Tony Robbins and wanted to change my mind. My mom and I had registered for Tony Robbins' UPW (Unleash the Power Within), and I hit an upper limit and was looking for any and all reasons NOT TO GO. I was using the tools against myself. I thought that "it was a sign" that I shouldn't go. Well... we chose to go, and thank God we did. My life got better. This episode is all about how those dang Upper Limits can kill your dreams and how to be aware of them when they show up and disguise themselves as "a tool". They are not a tool to help you... they are a sign of an upper limit.
In this episode:
You will discover what an upper limit can look like.
I explain how we can use the tools we learn against ourselves and our dreams.
You will learn how we talk ourselves out of creating more for our lives when we hit an upper limit.
SIGN UP >>>>> The Morning Edge "Morning Routine" Workshop: Saturday, April 24th
Ready to LEVEL UP your habits? Join my FREE 30 Day Challenge on Instagram. Text: CHALLENGE to (323) 524-9857 to receive a daily morning text to get you up and out of bed.
If you enjoyed this episode, make sure and give us a five star rating and leave us a review on iTunes, Podcast Addict, Podchaser and Castbox.
Resources:
The Get Up Girl
Joanna Vargas on Instagram
Live Fully Academy on IG!
Joanna Vargas on Facebook
TikTok @joannavargasofficial
Join my monthly online academy: LIVE FULLY ACADEMY
Operation Underground Railroad – OURRescue.org
Learn more: Dance Your Life
TEXT: BUSINESS to (323) 524-9857 to get on my VIP list for my next upcoming business coaching group!

Bundle Buddies
Episode 8 - Jimmy and the Pulsating Mass, Central Limit Theorem, Ripped Pants at Work w/ Erik Blood & Joe Garber

Bundle Buddies

Play Episode Listen Later Nov 3, 2020 99:30


Episode 8! Erik Blood & Joe Garber are our guests! They host the "It Was Murder Podcast," where they watch every episode and movie of Hart to Hart! @itwasmurderpod on Twitter.
Erik is an accomplished musician and producer; check out his website for TUNES and to HIRE HIM. Joe is an amazing animator and illustrator; check out HIS website to see his work and to HIRE HIM. Point being, they are both insanely talented and you should hire them or buy their work.
This week's cause is Shout Your Abortion, a movement working to normalize abortion through art, media, and community events all over the country. Donate here. As always, if you send proof of donation to bundlebuddiespodcast@gmail.com we will shout you out on the pod!
This week we played...
Ripped Pants at Work
Central Limit Theorem
Jimmy and the Pulsating Mass
About the podcast...
Welcome to Bundle Buddies: we are playing through the ENTIRE itch.io Bundle for Racial Justice and Equality.
What is the itch.io Bundle for Racial Justice and Equality? In June 2020, in response to the massive social movement following George Floyd's murder, indie gaming marketplace/community itch.io put together a game/media bundle with all proceeds going to support organizations that are working directly with those affected by racial injustice. When all was said and done, the bundle included 1,741 items from 840+ creators. It raised $8,153,803.03, split 50/50 between the NAACP Legal Defense and Educational Fund and the Community Bail Fund. A truly incredible amount, and a testament to people's desire to see justice enacted.
If you are one of those people who supported this worthy cause, you have likely heard of the hits included in the bundle, but there is a LOT of stuff. Some great, some insane, and some bad. It's tough to know where to start, or if a game is worth it. So we're here! Each week we randomly dip into a few of the 1,365 games from the bundle and share our thoughts. In the spirit of this bundle, every episode we highlight a new cause and donate. If you donate, we'll give you a shout out on the show.
Tune in and play along, or just tune in to listen to two video game enthusiasts play some very wonderful and weird games... and also some bad ones.
Theme Song: Neoishiki by Role Music
Hosted & Produced by: Eric T Roth & Alex Honnet

MBA8150 Business Analytics
WK06 Lecture: Ch8, Sampling Methods and the Central Limit Theorem.

MBA8150 Business Analytics

Play Episode Listen Later Jun 16, 2020 12:19


Dr. Jerz's lecture on sampling methods and the Central Limit Theorem.

MBA8150 Business Analytics
WK06 Excel: Ch8, Model for Sampling Methods and the Central Limit Theorem.

MBA8150 Business Analytics

Play Episode Listen Later Jun 16, 2020 6:37


Dr. Jerz shows how to use his Excel model for sampling methods.

Significant Statistics
Intro to Inference and the Central Limit Theorem

Significant Statistics

Play Episode Listen Later Mar 28, 2020 26:11


Audio-only version of the Intro to Inference and the Central Limit Theorem concept video. For more info: https://blogs.lt.vt.edu/jmrussell/topics/ --- Support this podcast: https://anchor.fm/john-russell10/support

Linear Digressions
The Normal Distribution and the Central Limit Theorem

Linear Digressions

Play Episode Listen Later Dec 9, 2018 27:11


When you think about it, it’s pretty amazing that we can draw conclusions about huge populations, even the whole world, based on datasets that are comparatively very small (a few thousand, or a few hundred, or even sometimes a few dozen). That’s the power of statistics, though. This episode is kind of a two-for-one but we’re excited about it—first we’ll talk about the Normal or Gaussian distribution, which is maybe the most famous probability distribution function out there, and then turn to the Central Limit Theorem, which is one of the foundational tenets of statistics and the real reason why the Normal distribution is so important.

AfterMath - The Math Citadel
005 - How to Annoy a Mathematician 1

AfterMath - The Math Citadel

Play Episode Listen Later Jan 19, 2018 20:03


AfterMath Episode 5, in which Rachel and Jason air some math grievances. Just what does probability zero mean? Do we really need a symbol for the word "let"? Can a theorem truly be broken? Listen in to find out. Visit www.themathcitadel.com for our latest research articles, technical commentary, book reviews, and more. As mentioned in this episode, check out our post on The Central Limit Theorem at http://bit.ly/2BeZkgX . We welcome feedback and suggestions for discussion topics. Reach out to us on Twitter at @MathCitadel. Be sure to like this track and follow us if you like what you hear. We are always working to provide more content. Business inquiries: www.themathcitadel.com/contact

Business Statistics - Undergraduate
Ch08 Lecture: Sampling Methods and the Central Limit Theorem.

Business Statistics - Undergraduate

Play Episode Listen Later Nov 3, 2016 12:19


Dr. Jerz's lecture on sampling methods and the Central Limit Theorem.

Business Statistics - Undergraduate
Ch08 Excel: Model for Sampling Methods and the Central Limit Theorem.

Business Statistics - Undergraduate

Play Episode Listen Later Nov 3, 2016 6:37


Dr. Jerz shows how to use his Excel model for sampling methods.

Data Skeptic
[MINI] The Central Limit Theorem

Data Skeptic

Play Episode Listen Later Oct 16, 2015 13:07


The central limit theorem is an important statistical result which states that typically, the mean of a large enough set of independent trials is approximately normally distributed. This episode explores how this might be used to determine whether an Amazon parrot like Yoshi produces more or less waste than an African grey, under the assumption that the individual distributions are not normal.
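A minimal two-sample sketch of the episode's setup (all data invented): even if individual waste amounts are skewed and non-normal, the CLT makes each sample mean approximately normal, so a z-statistic on the difference of means is meaningful.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily waste (grams): skewed, non-normal individual distributions.
amazon = rng.gamma(shape=2.0, scale=15.0, size=40)   # Yoshi's species
grey   = rng.gamma(shape=2.0, scale=18.0, size=40)   # African grey

diff = amazon.mean() - grey.mean()
se = np.sqrt(amazon.var(ddof=1) / len(amazon) + grey.var(ddof=1) / len(grey))
print(f"difference of means: {diff:+.1f} g, z = {diff/se:+.2f}")  # |z| > 1.96 -> significant at 5%
```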

Probabilistic Systems Analysis and Applied Probability
Lecture 20: Central Limit Theorem

Probabilistic Systems Analysis and Applied Probability

Play Episode Listen Later Jun 29, 2015 51:22


In this lecture, the professor discussed the central limit theorem, the normal approximation, the 1/2 correction for the binomial approximation, and the De Moivre–Laplace central limit theorem.
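For reference, the "1/2 correction" is the continuity correction used when a discrete binomial is approximated by a continuous normal:

```latex
% Continuity correction: for X ~ Binomial(n, p) and integer k,
P(X \le k) \;\approx\; \Phi\!\left( \frac{k + \tfrac{1}{2} - np}{\sqrt{np(1-p)}} \right),
% where Phi is the standard normal CDF; the +1/2 accounts for the probability
% mass of the integer k being spread over the interval [k - 1/2, k + 1/2].
```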

Topics in Mathematics with Applications in Finance
Lecture 3: Probability Theory

Topics in Mathematics with Applications in Finance

Play Episode Listen Later Jun 22, 2015 78:25


This lecture is a review of the probability theory needed for the course, including random variables, probability distributions, and the Central Limit Theorem.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03
A variance decomposition and a Central Limit Theorem for empirical losses associated with resampling designs

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03

Play Episode Listen Later Nov 1, 2014


The mean prediction error of a classification or regression procedure can be estimated using resampling designs such as the cross-validation design. We decompose the variance of such an estimator associated with an arbitrary resampling procedure into a small linear combination of covariances between elementary estimators, each of which is a regular parameter as described in the theory of $U$-statistics. The enumerative combinatorics of the occurrence frequencies of these covariances govern the linear combination's coefficients and, therefore, the variance's large-scale behavior. We study the variance of incomplete $U$-statistics associated with kernels which are partly but not entirely symmetric. This leads to asymptotic statements for the prediction error's estimator, under general non-empirical conditions on the resampling design. In particular, we show that the resampling based estimator of the average prediction error is asymptotically normally distributed under a general and easily verifiable condition. Likewise, we give a sufficient criterion for consistency. We thus develop a new approach to understanding small-variance designs as they have recently appeared in the literature. We exhibit the $U$-statistics which estimate these variances. We present a case from linear regression where the covariances between the elementary estimators can be computed analytically. We illustrate our theory by computing estimators of the studied quantities in an artificial data example.

MATH 202: Introduction to Statistics - sc
L17. Central limit theorem intro 1

MATH 202: Introduction to Statistics - sc

Play Episode Listen Later Apr 6, 2014 12:57


Goes with the handout. Print the handout and follow along with this podcast.

MATH 202: Introduction to Statistics - sc
L17.HANDOUT- central limit theorem intro notes- means

MATH 202: Introduction to Statistics - sc

Play Episode Listen Later Apr 6, 2014


The distribution of sample means

MATH 202: Introduction to Statistics - sc
L17. Central limit theorem IV

MATH 202: Introduction to Statistics - sc

Play Episode Listen Later Jul 22, 2013 8:18


central limit theorem

MATH 202: Introduction to Statistics - sc
Homework-central limit theorem-and sampling methods-answers attached

MATH 202: Introduction to Statistics - sc

Play Episode Listen Later Sep 14, 2011


MATH 202: Introduction to Statistics - sc
Lesson 18-rehearsal for exam on normal distributions, central limit theorem, and design of experiments

MATH 202: Introduction to Statistics - sc

Play Episode Listen Later Aug 2, 2011 37:48


Statistics
Central Limit Theorem

Statistics

Play Episode Listen Later Aug 21, 2010 9:48


Séminaires de probabilités et statistiques (SAMM, 2009-2010)
09 - Log-periodogram regression on non-Fourier frequencies sets (Mohamed Boutahar (GREQAM, Université de Marseille-Luminy))

Séminaires de probabilités et statistiques (SAMM, 2009-2010)

Play Episode Listen Later Oct 15, 2009 64:00


Abstract: In the log-periodogram regression, the Fourier frequencies are used to define the estimator of the long memory parameter. Moreover, the number of frequencies considered depends on the sample size through the condition as . However, a rigorous asymptotic semiparametric theory to give a satisfactory choice for m is still lacking. The main objective of this paper is to fill this gap. We define a non-Fourier log-periodogram estimator by performing an OLS regression, in which non-Fourier frequencies independent of the sample size n are used. We show that this new estimator is consistent and asymptotically normal if and without imposing the rate condition . Based on the rate of convergence in the Central Limit Theorem, a moderate , say, is sufficient to obtain a reliable confidence interval for . You can listen to the talk while viewing the PowerPoint slides at this link: http://epn.univ-paris1.fr/modules/UFR27semSAMOS/SeminaireSAMM_20091016_Boutahar/SeminaireSAMM_20091016_Boutahar.html
Listen to the talk: audio available in mp3 format. Duration: 1h04.

Limit theorems and applications (SAMSOS, 2008)
02 - Rates of convergence for minimal distances in the central limit theorem under projective criteria - Emmanuel RIO

Limit theorems and applications (SAMSOS, 2008)

Play Episode Listen Later Jan 12, 2008 43:11


In this paper, we give estimates of ideal or minimal distances between the distribution of the normalized partial sum and the limiting Gaussian distribution for stationary martingale difference sequences or stationary sequences satisfying projective criteria. Applications to functions of linear processes and to functions of expanding maps of the interval are given. This is a joint paper with J. Dedecker (Paris 6) and F. Merlevède (Paris 6). Emmanuel RIO, Université de Versailles.
Listen to the talk: audio available in mp3 format. Duration: 44 min.

Limit theorems and applications (SAMSOS, 2008)
03 - Central limit theorem for sampled sums of dependent random variables - Clémentine PRIEUR

Limit theorems and applications (SAMSOS, 2008)

Play Episode Listen Later Jan 4, 2008 32:15


We prove a central limit theorem for linear triangular arrays under weak dependence conditions [1,3,4]. Our result is then applied to the study of dependent random variables sampled by a $Z$-valued transient random walk. This extends the results obtained by Guillotin-Plantard & Schneider [2]. An application to parametric estimation by random sampling is also provided. References: [1] Dedecker J., Doukhan P., Lang G., Leon J.R., Louhichi S. and Prieur C. (2007). Weak dependence: With Examples and Applications. Lect. Notes in Stat. 190. Springer, XIV. [2] N. Guillotin-Plantard and D. Schneider (2003). Limit theorems for sampled dynamical systems. Stochastics and Dynamics 3, 4, p. 477-497. [3] M. Peligrad and S. Utev (1997). Central limit theorem for linear processes. Ann. Probab. 25, 1, p. 443-456. [4] S. A. Utev (1991). Sums of random variables with $\varphi$-mixing. Siberian Advances in Mathematics 1, 3, p. 124-155. Clémentine PRIEUR, Université de Toulouse 1.
Associated document: presentation slides: http://epi.univ-paris1.fr/servlet/com.univ.collaboratif.utils.LectureFichiergw?CODE_FICHIER=1207750339872 (pdf)
Listen to the talk: audio available in mp3 format. Duration: 33 min.

Conference Stochastic Dynamics (SAMOS, 2007)
05 - The Korteweg-de Vries equation with multiplicative noise : existence of solutions and random modulation of solitons - Arnaud DEBUSSCHE

Conference Stochastic Dynamics (SAMOS, 2007)

Play Episode Listen Later May 21, 2007 44:38


In this work, we consider the Korteweg-de Vries equation perturbed by a random force of white noise type, additive or multiplicative. In a series of works, in collaboration with Y. Tsutsumi, we have studied existence and uniqueness in the additive case for very irregular noises. These use the functional framework introduced by J. Bourgain. We use similar tools to prove existence and uniqueness for a multiplicative noise. We are not able to consider irregular noises and have to assume that the driving Wiener process has paths in or . However, contrary to the additive case, we are able to treat spatially homogeneous noises. Then, we try to understand the effect of a small noise with amplitude on the propagation of a soliton. We prove that, on a time scale proportional to , a solution initially equal to the soliton but perturbed by a noise of the type above remains close to a soliton with modulated speed and position. The modulated speed and position are semi-martingales, and we write the stochastic equations they satisfy. We prove also that a Central Limit Theorem holds, so that, on the time scale described above, the solutions can formally be written as the sum of the modulated soliton and a Gaussian remainder term of order . In the multiplicative case, we can go further. We prove that the Gaussian part converges in distribution to a stationary process. Also, the equations for the modulation parameters allow us to give a justification for the phenomenon called "soliton diffusion" observed in numerical simulations: the averaged soliton decays like . We obtain . This is a joint work with A. de Bouard. Arnaud DEBUSSCHE, ENS Cachan.
Associated document: presentation slides: http://epi.univ-paris1.fr/servlet/com.univ.collaboratif.utils.LectureFichiergw?CODE_FICHIER=1182789894119 (pdf)
Audio available in mp3 format. Duration: 45 min.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
A Note on Teaching Binomial Confidence Intervals

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03

Play Episode Listen Later Jan 1, 1997


For constructing confidence intervals for a binomial proportion $p$, Simon (1996, Teaching Statistics) advocates teaching one of two large-sample alternatives to the usual $z$-intervals $\hat{p} \pm 1.96 \times S.E.(\hat{p})$, where $S.E.(\hat{p}) = \sqrt{\hat{p}(1 - \hat{p})/n}$. His recommendation is based on the comparison of the closeness of the achieved coverage of each system of intervals to their nominal level. This teaching note shows that a different alternative to $z$-intervals, called $q$-intervals, is strongly preferred to either method recommended by Simon. First, $q$-intervals are more easily motivated than even $z$-intervals because they require only a straightforward application of the Central Limit Theorem (without the need to estimate the variance of $\hat{p}$ and to justify that this perturbation does not affect the normal limiting distribution). Second, $q$-intervals do not involve ad-hoc continuity corrections as do the proposals in Simon. Third, $q$-intervals have substantially superior achieved coverage than either system recommended by Simon.