Podcasts about Bayes' Rule

  • Podcasts: 9
  • Episodes: 15
  • Average duration: 45m
  • Episode frequency: infrequent
  • Latest episode: May 31, 2023

POPULARITY (chart spanning 2017–2024)


Best podcasts about Bayes' Rule

Latest podcast episodes about Bayes' Rule

The Nonlinear Library
LW - To Predict What Happens, Ask What Happens by Zvi

May 31, 2023 • 14:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: To Predict What Happens, Ask What Happens, published by Zvi on May 31, 2023 on LessWrong. When predicting conditional probability of catastrophe from loss of human control over AGI, there are many distinct cruxes. This essay does not attempt a complete case, or the most generally convincing case, or addressing the most common cruxes. Instead these are my best guesses for potentially mind-changing, armor-piercing questions people could ask themselves if they broadly accept many concepts like power seeking being a key existential risk, that default development paths are likely catastrophic and that AI could defeat all of us combined, have read and thought hard about alignment difficulties, yet think the odds of catastrophe are not so high. In addition to this entry, I attempt an incomplete extended list of cruxes here, an attempted taxonomy of paths through developing AGI and potentially losing control here, and an attempted taxonomy of styles of alignment here, while leaving to the future or others for now a taxonomy of alignment difficulties. Apologies in advance if some questions seem insulting or you rightfully answer with 'no, I am not making that mistake.' I don't know a way around that. Here are the questions up front: What happens? To what extent will humanity seek to avoid catastrophe? How much will humans willingly give up, including control? You know people and companies and nations are dumb and make dumb mistakes constantly, and mostly take symbolic actions or gesture at things rather than act strategically, and you've taken that into account, right? What would count as a catastrophe? Are you consistently tracking what you mean by alignment? Would 'human-strength' alignment be sufficient? If we figure out how to robustly align our AGIs, will we choose to and be able to make and keep them that way? Would we keep control? How much hope is there that a misaligned AGI would choose to preserve humanity once it no longer needed us? Are you factoring in unknown difficulties and surprises large and small that always arise, and in which direction do they point? Are you treating doom as only happening through specified detailed logical paths, which if they break down mean it's going to be fine? Are you properly propagating your updates, and anticipating future updates? Are you counting on in-distribution heuristics to work out of distribution? Are you using instincts and heuristics rather than looking at mechanics, forming a model, doing math, using Bayes Rule? Is normalcy bias, hopeful thinking, avoidance of implications or social cognition subtly influencing your analysis? Are you unconsciously modeling after media? What happens? If you think first about 'will there be doom?' or 'will there be a catastrophe?' you are directly invoking hopeful or fearful thinking, shifting focus towards wrong questions, and concocting arbitrary ways for scenarios to end well or badly. Including expecting humans to, when in danger, act in ways humans don't act. Instead, ask: What happens? What affordances would this AGI have? What happens to the culture? What posts get written on Marginal Revolution? What other questions jump to mind? Then ask whether what happens was catastrophic. To what extent will humanity seek to avoid catastrophe? I often observe an importantly incorrect model here.
Take the part of your model that explains why: We aren't doing better global coordination to slow AI capabilities. We frequently fail or muddle through when facing dangers. So many have willingly sided with what seem like obviously evil causes, often freely, also often under pressure or for advantage. Most people have no idea what is going on most of the time, and often huge things are happening that for long periods no one notices, or brings to general attention. Then notice these dynamics do not st...

The Nonlinear Library
EA - A summary of every "Highlights from the Sequences" post by Akash

Jul 17, 2022 • 24:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A summary of every "Highlights from the Sequences" post, published by Akash on July 15, 2022 on The Effective Altruism Forum. 1 I recently finished reading Highlights from the Sequences, 49 essays from The Sequences that were compiled by the LessWrong team. Since moving to Berkeley several months ago, I've heard many people talking about posts from The Sequences. A lot of my friends and colleagues commonly reference biases, have a respect for Bayes Rule, and say things like “absence of evidence is evidence of absence!” So, I was impressed that the Highlights were not merely a refresher of things I had already absorbed through the social waters. There was plenty of new material, and there were also plenty of moments when a concept became much crisper in my head. It's one thing to know that dissent is hard, and it's another thing to internalize that lonely dissent doesn't feel like going to school dressed in black— it feels like going to school wearing a clown suit. 2 As I read, I wrote a few sentences summarizing each post. I mostly did this to improve my own comprehension/memory. You should treat the summaries as "here's what Akash took away from this post" as opposed to "here's an actual summary of what Eliezer said." Note that the summaries are not meant to replace the posts. Read them here. 3 Here are my notes on each post, in order. I also plan to post a reflection on some of my favorite posts. Thinking Better on Purpose One difference between humans and mice is that humans are able to think about thinking. Mice brains and human brains both have flaws. But a human brain is a lens that can understand its own flaws. This is powerful and this is extremely rare in the animal kingdom. Epistemic rationality is about building beliefs that correspond to reality (accuracy). Instrumental rationality is about steering the future toward outcomes we desire (winning). Sometimes, people have debates about whether things are “rational.” These often seem like debates over the definitions of words. We should not debate over whether something is “rational”— we should debate over whether something leads us to more accurate beliefs or leads us to steer the world more successfully. If it helps, every time Eliezer says “rationality”, you should replace it with “foozal” and then strive for what is foozal. An 8-year-old will fail a calculus test because there are a lot of ways to get the wrong answer and very few ways to get the right answer. When humans pursue goals, there are often many ways of pursuing goals ineffectively and only a few ways of pursuing them effectively. We rarely do seemingly-obvious things, like: Ask what we are trying to achieve Track progress Reflect on which strategies have worked or haven't worked for us in the past Seek out strategies that have worked or haven't worked for others Test several different hypotheses for how to achieve goals Ask for help Experiment with alternative ways of studying, writing, working, or socializing Luke Skywalker tried to lift a spaceship once. Then, he gave up. Instantly. He told Yoda it was impossible. And then he walked away. And the audience sat and nodded, because this is typical for humans. “People wouldn't try for more than five minutes before giving up if the fate of humanity were at stake.” If a model can explain any possible outcome, it is not a useful model.
Sometimes, we encounter a situation, and we're like “huh. something doesn't seem right here”. But then we push this away and explain the situation in terms of our existing models. Instead, we should pay attention to this feeling. We should resist the impulse to go through mental gymnastics to explain a surprising situation with our existing models. Either our model is wrong, or the story isn't true. Sometimes, we investigate our beliefs because we are “suppos...
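
The "absence of evidence is evidence of absence" line quoted above is a direct consequence of Bayes' Rule. One standard way to write it out (my rendering, not text from Akash's post):

```latex
% Posterior on hypothesis H after the evidence E fails to appear:
P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\, P(H)}{P(\neg E)}
              \;=\; \frac{\bigl(1 - P(E \mid H)\bigr)\, P(H)}{1 - P(E)}
```

If H makes E more likely than it is on average, so that P(E | H) > P(E), the fraction is less than one and P(H | ¬E) < P(H): failing to observe the evidence has to lower your credence in H, even if only slightly.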

The Nonlinear Library
LW - A summary of every "Highlights from the Sequences" post by Akash

Jul 16, 2022 • 24:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A summary of every "Highlights from the Sequences" post, published by Akash on July 15, 2022 on LessWrong. 1 I recently finished reading Highlights from the Sequences, 49 essays from The Sequences that were compiled by the LessWrong team. Since moving to Berkeley several months ago, I've heard many people talking about posts from The Sequences. A lot of my friends and colleagues commonly reference biases, have a respect for Bayes Rule, and say things like “absence of evidence is evidence of absence!” So, I was impressed that the Highlights were not merely a refresher of things I had already absorbed through the social waters. There was plenty of new material, and there were also plenty of moments when a concept became much crisper in my head. It's one thing to know that dissent is hard, and it's another thing to internalize that lonely dissent doesn't feel like going to school dressed in black— it feels like going to school wearing a clown suit. 2 As I read, I wrote a few sentences summarizing each post. I mostly did this to improve my own comprehension/memory. You should treat the summaries as "here's what Akash took away from this post" as opposed to "here's an actual summary of what Eliezer said." Note that the summaries are not meant to replace the posts. Read them here. 3 Here are my notes on each post, in order. I also plan to post a reflection on some of my favorite posts. Thinking Better on Purpose One difference between humans and mice is that humans are able to think about thinking. Mice brains and human brains both have flaws. But a human brain is a lens that can understand its own flaws. This is powerful and this is extremely rare in the animal kingdom. Epistemic rationality is about building beliefs that correspond to reality (accuracy). Instrumental rationality is about steering the future toward outcomes we desire (winning). Sometimes, people have debates about whether things are “rational.” These often seem like debates over the definitions of words. We should not debate over whether something is “rational”— we should debate over whether something leads us to more accurate beliefs or leads us to steer the world more successfully. If it helps, every time Eliezer says “rationality”, you should replace it with “foozal” and then strive for what is foozal. An 8-year-old will fail a calculus test because there are a lot of ways to get the wrong answer and very few ways to get the right answer. When humans pursue goals, there are often many ways of pursuing goals ineffectively and only a few ways of pursuing them effectively. We rarely do seemingly-obvious things, like: Ask what we are trying to achieve Track progress Reflect on which strategies have worked or haven't worked for us in the past Seek out strategies that have worked or haven't worked for others Test several different hypotheses for how to achieve goals Ask for help Experiment with alternative ways of studying, writing, working, or socializing Luke Skywalker tried to lift a spaceship once. Then, he gave up. Instantly. He told Yoda it was impossible. And then he walked away. And the audience sat and nodded, because this is typical for humans. “People wouldn't try for more than five minutes before giving up if the fate of humanity were at stake.” If a model can explain any possible outcome, it is not a useful model.
Sometimes, we encounter a situation, and we're like “huh. something doesn't seem right here”. But then we push this away and explain the situation in terms of our existing models. Instead, we should pay attention to this feeling. We should resist the impulse to go through mental gymnastics to explain a surprising situation with our existing models. Either our model is wrong, or the story isn't true. Sometimes, we investigate our beliefs because we are “supposed to.” A good rati...

Embedded
271: Shell Scripts for the Soul (Repeat)

Aug 13, 2021 • 74:10


Alex Glow filled our heads with project ideas. Alex is the Resident Hardware Nerd at Hackster.io. Her page is glowascii and you might want to see Archimedes the AI robot owl and the Hardware 101 channel. They have many sponsored contests including BadgeLove. You can find her on Twitter at @glowascii. Lightning round led us to many possibles: If you were building an IoT stuffed animal, what would you use? Mycroft and Snips are what is inside Archimedes. If you were building a camera to monitor a 3d printer, what would you use? For her M3D Micro Printer, Alex would use the Raspberry Pi based OctoPi to monitor it. If you were going to a classroom of 2nd graders, what boards would you take? The BBC Micro:bit (based on Code Bug) or some LittleBits kits (the Star Wars Droid Inventor Kit and Korg Synth Kit are on Amazon; those are Embedded affiliate links, btw). If you were going to make a car-sized fighting robot, what dev system would you use? The Open Source Novena DIY Laptop, initially designed by Bunnie Huang. There were more software and hardware kits to explore: Google DIY AI, Arduino Maker1000, Raspberry Pi, Chirp.io. For your amusement, Floppotron plays Bohemian Rhapsody. Alex gave a shout out to her first hackerspace, All Hands Active. Ableton is audio workstation and sequencer software. Alex recommends Women's Audio Mission as a good way to learn audio production and recording if you are in the San Francisco area. There is an Interplanetary File System and Alex worked on a portable printer console for it. Elecia is always willing to talk about Ty the typing robot and/or narwhals teaching Bayes Rule. She recommended the book There Are No Electrons: Electronics for Earthlings by Kenn Amdahl.

More or Less: Behind the Stats
Bayes: the clergyman whose maths changed the world

May 2, 2021 • 8:58


Bayes’ Rule has been used in AI, genetic studies, translating foreign languages and even cracking the Enigma Code in the Second World War. We find out about Thomas Bayes - the 18th century English statistician and clergyman whose work was largely forgotten until the 20th century.
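
For reference, the rule the episode is about fits in one line; this is the standard modern statement, not a quotation from the programme:

```latex
% Bayes' Rule: update the probability of A after seeing evidence B.
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)},
\qquad
P(B) \;=\; P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)
```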

The Local Maximum
Ep. 137 - Bayesian Covid Tests, Topological Computation, and Electoral Systems

Sep 22, 2020 • 33:27


Today's show features callers and emails from the Local Maximum audience. One listener learned about Bayesian inference and published a manuscript on Bayes Rule applied to Covid-19 Testing. Another listener introduced me to a topological approach to representing numbers in computer science. A third listener had some points to make about the Electoral College.
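
The episode doesn't walk through the listener's manuscript on air, but the core of any Bayesian treatment of test results is a one-line application of Bayes' Rule. A minimal sketch, with made-up sensitivity, specificity, and prevalence values (assumptions for illustration, not figures from the show):

```python
def posterior_positive(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(infected | positive test) via Bayes' Rule."""
    p_pos_given_inf = sensitivity            # P(+ | infected)
    p_pos_given_not = 1.0 - specificity      # P(+ | not infected), the false-positive rate
    p_pos = p_pos_given_inf * prevalence + p_pos_given_not * (1.0 - prevalence)
    return p_pos_given_inf * prevalence / p_pos

if __name__ == "__main__":
    # Assumed numbers: 1% prevalence, 80% sensitivity, 98% specificity.
    print(round(posterior_positive(0.01, 0.80, 0.98), 3))  # ~0.288
```

With those assumed numbers, even a positive result from a fairly accurate test leaves the posterior under 30%, which is exactly the kind of counterintuitive output that makes the Bayesian framing of testing useful.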

Greater Than Code
186: The Universe Makes it Happen with Emily Gorcenski

Jun 10, 2020 • 65:39


Emily has been on GTC before! Check out her previous episode: 037: Failure Mode (https://www.greaterthancode.com/failure-mode)

01:08 - Emily’s Superpower: Hunting Nazis and Data Science
  • Emily’s Motivation to Do This Work – Fighting Back
  • What is the question behind the question?
  • Data Science + SCIENCE

14:55 - Being Willing to Be Wrong / Failure and Learning
  • Methods for Determining You’re Going to be Tracking Wrong Things
  • Simpson’s Paradox (https://www.britannica.com/topic/Simpsons-paradox#:~:text=Simpson's%20paradox%2C%20also%20called%20Yule,one%20or%20more%20other%20variables.)
  • Means Are a Lie
  • Detecting Nonlinearities

34:49 - Cybernetics (https://en.wikipedia.org/wiki/Cybernetics)
  • 093: BOOK CLUB! Cybernetic Revolutionaries with Eden Medina (https://www.greaterthancode.com/cybernetic-revolutionaries)

38:43 - The COVID-19 Pandemic Crisis and The Need For Systematic Restructuring
  • The Problem with Testing
  • Conditional Probability (https://en.wikipedia.org/wiki/Conditional_probability)
  • Bayes Rule (https://towardsdatascience.com/what-is-bayes-rule-bb6598d8a2fd?gi=2073fb276511)
  • Ventilator Production

47:16 - Nuance, Power, and Authority
  • High-Reliability Organizations
  • Devolution
  • The Safety Anarchist (https://www.amazon.com/Safety-Anarchist-innovation-bureaucracy-compliance/dp/1138300462)
  • The Difference Between: Innovation / A Work-Around / A Shortcut / A Non-Compliant
  • Deciphering Good Actors & Bad Actors

This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode). To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Special Guest: Emily Gorcenski.
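
Several of the topics above (conditional probability, "Means Are a Lie", Simpson's Paradox) come down to the same warning: aggregates can mislead. A small sketch of Simpson's paradox with invented, textbook-style numbers, not data discussed in the episode:

```python
# Treatment A wins within every group but loses once the groups are pooled,
# because the two treatments were applied to very different group sizes.
groups = {
    #          treatment A (successes, trials)  treatment B (successes, trials)
    "mild":   ((81, 87),                        (234, 270)),
    "severe": ((192, 263),                      (55, 80)),
}

totals = {"A": [0, 0], "B": [0, 0]}
for name, ((sa, na), (sb, nb)) in groups.items():
    print(f"{name}: A={sa / na:.2f}  B={sb / nb:.2f}")   # A beats B in each group
    totals["A"][0] += sa; totals["A"][1] += na
    totals["B"][0] += sb; totals["B"][1] += nb

for label, (s, n) in totals.items():
    print(f"overall {label}: {s / n:.2f}")               # A=0.78, B=0.83 -- the ordering flips
```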

Embedded
271: Shell Scripts for the Soul

Dec 20, 2018 • 74:11


Alex Glow (@glowascii) filled our heads with project ideas. Alex is the Resident Hardware Nerd at Hackster.io. Her page is glowascii and you might want to see Archimedes the AI robot owl and the Hardware 101 channel. They have many sponsored contests including BadgeLove. Lightning round led us to many possibles: If you were building an IoT stuffed animal, what would you use? Mycroft and Snips are what is inside Archimedes. If you were building a camera to monitor a 3d printer, what would you use? For her M3D Micro Printer, Alex would use the Raspberry Pi based OctoPi to monitor it. If you were going to a classroom of 2nd graders, what boards would you take? The BBC Micro:bit (based on Code Bug) or some LittleBits kits (the Star Wars Droid Inventor Kit and Korg Synth Kit are on Amazon; those are Embedded affiliate links, btw). If you were going to make a car-sized fighting robot, what dev system would you use? The Open Source Novena DIY Laptop, initially designed by Bunnie Huang. There were more software and hardware kits to explore: Google DIY AI, Arduino Maker1000, Raspberry Pi, Chirp.io. For your amusement, Floppotron plays Bohemian Rhapsody. Alex gave a shout out to her first hackerspace, All Hands Active. Ableton is audio workstation and sequencer software. Alex recommends Women’s Audio Mission as a good way to learn audio production and recording if you are in the San Francisco area. There is an Interplanetary File System and Alex worked on a portable printer console for it. Elecia is always willing to talk about Ty the typing robot and/or narwhals teaching Bayes Rule. She recommended the book There Are No Electrons: Electronics for Earthlings by Kenn Amdahl.

stopGOstop » sound collage – field recording – sound art – john wanzel

A sound collage featuring Space 1999, Vice President Wallace, Triangles, Gravity, and Bayes Rule alongside field recordings and digital signal manipulation. stopGOstop is produced by John Wanzel. Don’t forget to subscribe to the podcast via RSS, or search for …

Advanced Manufacturing Now
Everything you ever wanted to know about CELDi

Apr 13, 2017 • 12:28


Randolph Bradley, Technical Fellow in Supply Chain Management at Boeing, explains the Center for Excellence in Logistics and Distribution's reason for being, and Boeing’s involvement with the industry-academia partnership. Topics covered include: energy use in the supply chain; demand forecasting using Bayes’ Rule (also used to crack Germany's Enigma code during WWII); and insights gained from MHI’s US Roadmap for Material Handling and Logistics.
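
The episode does not detail Boeing's actual forecasting models, so purely as an illustration of the idea, here is a minimal sketch of one common conjugate setup for Bayesian demand forecasting: a Gamma prior over the rate of Poisson-distributed demand. The prior parameters and demand counts below are assumptions.

```python
# Monthly demand ~ Poisson(rate); rate ~ Gamma(a, b) with mean a / b.
def update_gamma_poisson(a: float, b: float, observed_counts):
    """Conjugate update: posterior is Gamma(a + sum(counts), b + number of periods)."""
    return a + sum(observed_counts), b + len(observed_counts)

prior_a, prior_b = 2.0, 1.0                 # prior mean demand = 2 units per period
post_a, post_b = update_gamma_poisson(prior_a, prior_b, [3, 5, 4, 6])
print(post_a / post_b)                      # posterior mean demand = 20 / 5 = 4.0
```

Each new period of observed demand tightens the posterior, which is the basic mechanism by which a Bayesian forecast blends prior expectations with incoming data.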

Embedded
167: All Aliens Are Shiny

Sep 7, 2016 • 61:15


Chris and Elecia chat about Bayes Rule, aliens, bit-banging, VGA, and unit testing. Elecia is working on A Narwhal's Guide to Bayes' Rule.

  • ACM has a code of software engineering ethics
  • Toads have trackers (NPR story)
  • An introduction to bit-banging SPI (Arduino, WS2812)
  • We talked to James Grenning extensively about testing on 30: Eventually Lightning Strikes (and about his excellent book Test Driven Development for Embedded C). We spoke with James again on 109: Resurrection of Extreme Programming.
  • We also talked about unit testing with Mark Vandervoord on 103: Tentacles of the Kraken.
  • A neat TED Talk involving octo-copters, still four short of a dodecahedracopter.
  • A neat Z80-based very minimal computer kit

LISA: Laboratory for Interdisciplinary Statistical Analysis - Short Courses
Bayesian Methods for Regression in R by Nels Johnson

Mar 13, 2012 • 115:32


An outline for questions I hope to answer:

What is Bayes’ Rule? (lecture portion)
  ► What is the likelihood?
  ► What is the prior distribution?
  ► How should I choose it?
  ► Why use a conjugate prior?
  ► What is a subjective versus objective prior?
  ► What is the posterior distribution?
  ► How do I use it to make statistical inference?
  ► How is this inference different from frequentist/classical inference?
  ► What computational tools do I need in order to make inference?

How can I use R to do regression in a Bayesian paradigm? (computer portion)
  ► What libraries in R support Bayesian analysis?
  ► How do I use some of these libraries?
  ► How do I interpret the output?
  ► How do I produce diagnostic plots?
  ► What common topics do these libraries not support?
  ► How can I do them myself?
  ► How can LISA help me?
  ► What resources are available to help me with Bayesian methods in R?

Course files available here: www.lisa.stat.vt.edu/?q=node/3382.
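
The course itself works in R; as a language-neutral sketch of the lecture portion's core idea (likelihood plus prior gives posterior), here is a conjugate Bayesian linear regression in Python. The known noise variance and zero-mean Gaussian prior are assumptions chosen for illustration, not taken from the course materials.

```python
import numpy as np

def bayes_linreg(X, y, noise_var=1.0, prior_var=10.0):
    """Posterior mean and covariance of the coefficients under a N(0, prior_var*I) prior."""
    d = X.shape[1]
    precision = X.T @ X / noise_var + np.eye(d) / prior_var   # posterior precision
    cov = np.linalg.inv(precision)                            # posterior covariance
    mean = cov @ X.T @ y / noise_var                          # posterior mean
    return mean, cov

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])       # intercept + one feature
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.0, size=50) # true coefficients (1, 2)

mean, cov = bayes_linreg(X, y)
print(mean)                    # posterior mean should land near [1, 2]
print(np.sqrt(np.diag(cov)))   # posterior standard deviations quantify uncertainty
```

The posterior standard deviations are what replace classical standard errors in this paradigm, which is the contrast the course outline highlights.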

LISA: Laboratory for Interdisciplinary Statistical Analysis - Short Courses
Bayesian Methods for Regression in R by Nels Johnson

Mar 14, 2011 • 96:40


We’ll discuss some basic concepts and vocabulary in Bayesian statistics, such as the likelihood, prior, and posterior distributions, and how they relate to Bayes’ Rule. R statistical software will be used to discuss how parameter estimation and inference change in a Bayesian paradigm versus in a classical paradigm, with a particular focus on applications using regression. Course files available here: www.lisa.stat.vt.edu/?q=node/1784.
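
As a minimal illustration of how a point estimate changes between the classical and Bayesian paradigms, here is a Beta-Binomial sketch (in Python rather than the R used in the course); the prior and data are invented for illustration.

```python
# Beta(2, 2) prior on a success probability, then observe 7 successes in 10 trials.
alpha, beta_params = 2.0, 2.0
successes, trials = 7, 10

post_alpha = alpha + successes                   # conjugate update
post_beta = beta_params + (trials - successes)

mle = successes / trials                                  # classical estimate: 0.70
posterior_mean = post_alpha / (post_alpha + post_beta)    # 9 / 14 ~= 0.64
print(mle, posterior_mean)                                # the prior shrinks the estimate toward 0.5
```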
