Podcasts about Bayesians

  • 30 PODCASTS
  • 52 EPISODES
  • 34m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 24, 2025 LATEST

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about Bayesians

Latest podcast episodes about Bayesians

Eye On A.I.
#250 Pedro Domingos on the Real Path to AGI

Eye On A.I.

Play Episode Listen Later Apr 24, 2025 68:12


This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

Can AI Ever Reach AGI? Pedro Domingos Explains the Missing Link

In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what's still missing in our race toward Artificial General Intelligence (AGI), and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.

Topics covered:

  • Why deep learning alone won't achieve AGI
  • How reasoning by analogy could unlock true machine creativity
  • The role of evolutionary algorithms in building intelligent systems
  • Why transformers like GPT-4 are impressive, but incomplete
  • The danger of hype from tech leaders vs. the real science behind AGI
  • What the Master Algorithm truly means, and why we haven't found it yet

Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy, not just scaling LLMs, may be the key to Einstein-level breakthroughs in AI.

Whether you're an AI researcher, machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.

Eye On A.I.
#237 Pedro Domingos on Bayesians and Analogical Learning in AI

Eye On A.I.

Play Episode Listen Later Feb 9, 2025 56:43


This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

In this episode of the Eye on AI podcast, Pedro Domingos, renowned AI researcher and author of The Master Algorithm, joins Craig Smith to explore the evolution of machine learning, the resurgence of Bayesian AI, and the future of artificial intelligence.

Pedro unpacks the ongoing battle between Bayesian and Frequentist approaches, explaining why probability is one of the most misunderstood concepts in AI. He delves into Bayesian networks, their role in AI decision-making, and how they powered Google's ad system before deep learning. We also discuss how Bayesian learning is still outperforming humans in medical diagnosis, search and rescue, and predictive modeling, despite its computational challenges.

The conversation shifts to deep learning's limitations, with Pedro revealing how neural networks might be just a disguised form of nearest-neighbor learning. He challenges conventional wisdom on AGI, AI regulation, and the scalability of deep learning, offering insights into why Bayesian reasoning and analogical learning might be the future of AI.

We also dive into analogical learning, a field championed by Douglas Hofstadter, exploring its impact on pattern recognition, case-based reasoning, and support vector machines (SVMs). Pedro highlights how AI has cycled through different paradigms, from symbolic AI in the '80s to SVMs in the 2000s, and why the next big breakthrough may not come from neural networks at all.

From theoretical AI debates to real-world applications, this episode offers a deep dive into the science behind AI learning methods, their limitations, and what's next for machine intelligence.

Don't forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of innovation!

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

  • (00:00) Introduction
  • (02:55) The Five Tribes of Machine Learning Explained
  • (06:34) Bayesian vs. Frequentist: The Probability Debate
  • (08:27) What is Bayes' Theorem & How AI Uses It
  • (12:46) The Power & Limitations of Bayesian Networks
  • (16:43) How Bayesian Inference Works in AI
  • (18:56) The Rise & Fall of Bayesian Machine Learning
  • (20:31) Bayesian AI in Medical Diagnosis & Search and Rescue
  • (25:07) How Google Used Bayesian Networks for Ads
  • (28:56) The Role of Uncertainty in AI Decision-Making
  • (30:34) Why Bayesian Learning is Computationally Hard
  • (34:18) Analogical Learning: The Overlooked AI Paradigm
  • (38:09) Support Vector Machines vs. Neural Networks
  • (41:29) How SVMs Once Dominated Machine Learning
  • (45:30) The Future of AI: Bayesian, Neural, or Hybrid?
  • (50:38) Where AI is Heading Next
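The Bayes' theorem and medical-diagnosis segments above lend themselves to a short numerical sketch. The prevalence, sensitivity, and false-positive figures below are illustrative assumptions, not numbers from the episode:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Assumed numbers: 1% prevalence, 90% sensitivity, 5% false-positive rate.
print(round(posterior(0.01, 0.90, 0.05), 3))  # -> 0.154
```

Even a fairly accurate test leaves the posterior low when the prior (prevalence) is low, which is the standard Bayesian point about diagnostic screening.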

The Gradient Podcast
Kevin Dorst: Against Irrationalist Narratives

The Gradient Podcast

Play Episode Listen Later Jul 18, 2024 135:21


Episode 131

I spoke with Professor Kevin Dorst about:

  • Subjective Bayesianism and epistemology foundations
  • What happens when you're uncertain about your evidence
  • Why it's rational for people to polarize on political matters

Enjoy, and let me know what you think!

Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. I spend a lot of time on this podcast. If you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

  • (00:00) Intro
  • (01:15) When do Bayesians need theorems?
  • (05:52) Foundations of epistemology, metaethics, formal models, error theory
  • (09:35) Extreme views and error theory, arguing for/against opposing positions
  • (13:35) Changing focuses in philosophy: pragmatic pressures
  • (19:00) Kevin's goals through his research and work
  • (25:10) Structural factors in coming to certain (political) beliefs
  • (30:30) Acknowledging limited resources, heuristics, imperfect rationality
  • (32:51) Hindsight Bias is Not a Bias
  • (33:30) The argument
  • (35:15) On eating cereal and symmetric properties of evidence
  • (39:45) Colloquial notions of hindsight bias, time and evidential support
  • (42:45) An example
  • (48:02) Higher-order uncertainty
  • (48:30) Explicitly modeling higher-order uncertainty
  • (52:50) Another example (spoons)
  • (54:55) Game theory, iterated knowledge, even higher order uncertainty
  • (58:00) Uncertainty and philosophy of mind
  • (1:01:20) Higher-order evidence about reliability and rationality
  • (1:06:45) Being Rational and Being Wrong
  • (1:09:00) Setup on calibration and overconfidence
  • (1:12:30) The need for average rational credence: normative judgments about confidence and realism/anti-realism
  • (1:15:25) Quasi-realism about average rational credence?
  • (1:19:00) Classic epistemological paradoxes/problems: lottery paradox, epistemic luck
  • (1:25:05) Deference in rational belief formation, uniqueness and permissivism
  • (1:39:50) Rational Polarization
  • (1:40:00) Setup
  • (1:37:05) Epistemic nihilism, expanded confidence akrasia
  • (1:40:55) Ambiguous evidence and confidence akrasia
  • (1:46:25) Ambiguity in understanding and notions of rational belief
  • (1:50:00) Claims about rational sensitivity: what stories we can tell given evidence
  • (1:54:00) Evidence vs presentation of evidence
  • (2:01:20) ChatGPT and the case for human irrationality
  • (2:02:00) Is ChatGPT replicating human biases?
  • (2:05:15) Simple instruction tuning and an alternate story
  • (2:10:22) Kevin's aspirations with his work
  • (2:15:13) Outro

Links:

  • Professor Dorst's homepage and Twitter
  • Papers: Modest Epistemology; Hedden: Hindsight bias is not a bias; Higher-order evidence + (Almost) all evidence is higher-order evidence; Being Rational and Being Wrong; Rational Polarization; ChatGPT and human irrationality

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Increments
#70 - ... and Bayes Bites Back (w/ Richard Meadows)

Increments

Play Episode Listen Later Jul 9, 2024 90:34


Sick of hearing us shouting about Bayesianism? Well today you're in luck, because this time, someone shouts at us about Bayesianism! Richard Meadows, finance journalist, author, and Ben's secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don't?

Check out Rich's website (https://thedeepdish.org/start), his book Optionality: How to Survive and Thrive in a Volatile World (https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500), and his podcast (https://doyouevenlit.podbean.com/).

We discuss:

  • The pros of the rationality and EA communities
  • Whether Bayesian epistemology contributes to open-mindedness
  • The fact that evidence doesn't speak for itself
  • The fact that the world doesn't come bundled as discrete chunks of evidence
  • Whether Bayesian epistemology would be "optimal" for Laplace's demon
  • The difference between truth and certainty
  • Vaden's tone issues and why he gets animated about this subject

References:

  • Scott's original piece: In continued defense of non-frequentist probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
  • Scott Alexander's post about Rootclaim (https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments)
  • Our previous episode on Scott's piece: #69 - Contra Scott Alexander on Probability (https://www.incrementspodcast.com/69)
  • Rootclaim (https://www.rootclaim.com/)
  • Ben's blogpost: You need a theory for that theory (https://benchugg.com/writing/you-need-a-theory/)
  • Cox's theorem (https://en.wikipedia.org/wiki/Cox%27s_theorem)
  • Aumann's agreement theorem (https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem)
  • Vaden's blogposts mentioned in the episode: Critical Rationalism and Bayesian Epistemology (https://vmasrani.github.io/blog/2020/vaden_second_response/) and Proving Too Much (https://vmasrani.github.io/blog/2021/proving_too_much/)

Socials:

  • Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
  • Follow Rich at @MeadowsRichard
  • Come join our discord server! DM us on twitter or send us an email to get a supersecret link
  • Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
  • Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)

What's your favorite theory that is neither true nor useful? Tell us over at incrementspodcast@gmail.com.

Special Guest: Richard Meadows.

The Nonlinear Library
LW - Book review: Everything Is Predictable by PeterMcCluskey

The Nonlinear Library

Play Episode Listen Later May 27, 2024 4:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: Everything Is Predictable, published by PeterMcCluskey on May 27, 2024 on LessWrong.

Book review: Everything Is Predictable: How Bayesian Statistics Explain Our World, by Tom Chivers.

Many have attempted to persuade the world to embrace a Bayesian worldview, but none have succeeded in reaching a broad audience. E.T. Jaynes' book has been a leading example, but its appeal is limited to those who find calculus enjoyable, making it unsuitable for a wider readership. Other attempts to engage a broader audience often focus on a narrower understanding, such as Bayes' Theorem, rather than the complete worldview. Claude's most fitting recommendation was Rationality: From AI to Zombies, but at 1,813 pages, it's too long and unstructured for me to comfortably recommend to most readers. (GPT-4o's suggestions were less helpful, focusing only on resources for practical problem-solving). Aubrey Clayton's book, Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science, only came to my attention because Chivers mentioned it, offering mixed reviews that hint at why it remained unnoticed.

Chivers has done his best to mitigate this gap. While his book won't reach as many readers as I'd hoped, I'm comfortable recommending it as the standard introduction to the Bayesian worldview for most readers.

Basics

Chivers guides readers through the fundamentals of Bayes' Theorem, offering little that's extraordinary in this regard. A fair portion of the book is dedicated to explaining why probability should be understood as a function of our ignorance, contrasting with the frequentist approach that attempts to treat probability as if it existed independently of our minds. The book has many explanations of how frequentists are wrong, yet concedes that the leading frequentists are not stupid. Frequentism's problems often stem from a misguided effort to achieve more objectivity in science than seems possible.

The only exception to this mostly fair depiction of frequentists is a section titled "Are Frequentists Racist?". Chivers repeats Clayton's diatribe affirming this, treating the diatribe more seriously than it deserves, before dismissing it. (Frequentists were racist when racism was popular. I haven't seen any clear evidence of whether Bayesians behaved differently).

The Replication Crisis

Chivers explains frequentism's role in the replication crisis. A fundamental drawback of p-values is that they indicate the likelihood of the data given a hypothesis, which differs from the more important question of how likely the hypothesis is given the data. Here, Chivers (and many frequentists) overlook a point raised by Deborah Mayo: p-values can help determine if an experiment had a sufficiently large sample size. Deciding whether to conduct a larger experiment can be as crucial as drawing the best inference from existing data.

The perversity of common p-value usage is exemplified by Lindley's paradox: a p-value below 0.05 can sometimes provide Bayesian evidence against the tested hypothesis. A p-value of 0.04 indicates that the data are unlikely given the null hypothesis, but we can construct scenarios where the data are even less likely under the hypothesis you wish to support.

A key factor in the replication crisis is the reward system for scientists and journals, which favors publishing surprising results. The emphasis on p-values allows journals to accept more surprising results compared to a Bayesian approach, creating a clear disincentive for individual scientists or journals to adopt Bayesian methods before others do.

Minds Approximate Bayes

The book concludes by describing how human minds employ heuristics that closely approximate the Bayesian approach. This includes a well-written summary of how predictive processing works, demonstrating ...
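The Lindley's-paradox point in this review can be made concrete with a hedged sketch. The model below (a normal mean test with an assumed N(0, 1) prior on the alternative) is an illustration, not anything from the book: hold the p-value fixed near 0.05 while the sample grows, and the Bayes factor B01 swings toward the null.

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_null(n, z=1.96, sigma=1.0, tau=1.0):
    """B01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2), when the sample mean
    sits exactly z standard errors from zero (p ~ 0.05 for z = 1.96)."""
    se2 = sigma ** 2 / n              # squared standard error of the mean
    xbar = z * math.sqrt(se2)         # a "just significant" sample mean
    marginal_h1_var = tau ** 2 + se2  # marginal variance of xbar under H1
    return normal_pdf(xbar, 0.0, se2) / normal_pdf(xbar, 0.0, marginal_h1_var)

# The p-value stays ~0.05 throughout, yet the evidence tilts toward the null:
for n in (10, 1_000, 100_000):
    print(n, round(bayes_factor_null(n), 2))
```

At small n the "significant" result mildly favors the alternative, but at large n the same p-value is strong evidence for the null, which is exactly the perversity the review describes.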

Seize The Moment Podcast
Tom Chivers - From Science to Daily Life: The Impact of Bayesian Thinking | STM Podcast #214

Seize The Moment Podcast

Play Episode Listen Later May 19, 2024 68:21


On episode 214, we welcome Tom Chivers to discuss Bayesian statistics, how their counterintuitive nature tends to turn people off, the philosophical disagreements between the Bayesians and the frequentists, why "priors" aren't purely subjective and why all theories should be considered as priors, the difficulty of quantifying emotional states in psychological research, how priors are used and misused to inform interpretations of new data, our innate tendency toward black and white thinking, the replication crisis, and why statistically significant research is often wrong.

Tom Chivers is an author and the award-winning science writer for Semafor. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He is the co-host of The Studies Show podcast alongside Stuart Ritchie. His books include The Rationalist's Guide to the Galaxy and How to Read Numbers. His newest book, available now, is called Everything Is Predictable: How Bayesian Statistics Explain Our World.

| Tom Chivers |
► Website | https://tomchivers.com
► Twitter | https://x.com/TomChivers
► Semafor | https://www.semafor.com/author/tom-chivers
► Podcast | https://www.thestudiesshowpod.com
► Everything is Predictable Book | https://amzn.to/3UJTOxD

Where you can find us:
| Seize The Moment Podcast |
► Facebook | https://www.facebook.com/SeizeTheMoment
► Twitter | https://twitter.com/seize_podcast
► Instagram | https://www.instagram.com/seizethemoment
► TikTok | https://www.tiktok.com/@seizethemomentpodcast
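The episode's point about priors shaping interpretations of the same data is easy to see numerically; the 4:1 likelihood ratio below is an arbitrary illustrative choice, not a figure from the conversation:

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Posterior for hypothesis h after seeing the data, via Bayes' theorem."""
    evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / evidence

# The same evidence (a 4:1 likelihood ratio) lands in very different places
# depending on the prior you brought to it:
for prior in (0.01, 0.1, 0.5, 0.9):
    print(prior, round(posterior(prior, 0.8, 0.2), 3))
```

A skeptic starting near 1% ends up well under 5%, while someone starting at 90% ends up near certainty, even though both saw identical data.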

The Nonlinear Library
LW - The Parable Of The Fallen Pendulum - Part 2 by johnswentworth

The Nonlinear Library

Play Episode Listen Later Mar 13, 2024 7:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Parable Of The Fallen Pendulum - Part 2, published by johnswentworth on March 13, 2024 on LessWrong.

Previously: Some physics 101 students calculate that a certain pendulum will have a period of approximately 3.6 seconds. Instead, when they run the experiment, the stand holding the pendulum tips over and the whole thing falls on the floor. The students, being diligent Bayesians, argue that this is strong evidence against Newtonian mechanics, and the professor's attempts to rationalize the results in hindsight are just that: rationalization in hindsight. What say the professor?

"Hold on now," the professor answers, "'Newtonian mechanics' isn't just some monolithic magical black box. When predicting a period of approximately 3.6 seconds, you used a wide variety of laws and assumptions and approximations, and then did some math to derive the actual prediction. That prediction was apparently incorrect. But at which specific point in the process did the failure occur? For instance:

  • Were there forces on the pendulum weight not included in the free body diagram?
  • Did the geometry of the pendulum not match the diagrams?
  • Did the acceleration due to gravity turn out to not be 9.8 m/s^2 toward the ground?
  • Was the acceleration of the pendulum's weight times its mass not always equal to the sum of forces acting on it?
  • Was the string not straight, or its upper endpoint not fixed?
  • Did our solution of the differential equations governing the system somehow not match the observed trajectory, despite the equations themselves being correct, or were the equations wrong?
  • Was some deeper assumption wrong, like that the pendulum weight has a well-defined position at each time?
  • … etc"

The students exchange glances, then smile. "Now those sound like empirically-checkable questions!" they exclaim.

The students break into smaller groups, and rush off to check. Soon, they begin to report back.

"After replicating the setup, we were unable to identify any significant additional forces acting on the pendulum weight while it was hanging or falling. However, once on the floor there was an upward force acting on the pendulum weight from the floor, as well as significant friction with the floor. It was tricky to isolate the relevant forces without relying on acceleration as a proxy, but we came up with a clever - " … at this point the group is drowned out by another.

"On review of the video, we found that the acceleration of the pendulum's weight times its mass was indeed always equal to the sum of forces acting on it, to within reasonable error margins, using the forces estimated by the other group. Furthermore, we indeed found that acceleration due to gravity was consistently approximately 9.8 m/s^2 toward the ground, after accounting for the other forces," says the second group to report.

Another arrives: "Review of the video and computational reconstruction of the 3D arrangement shows that, while the geometry did basically match the diagrams initially, it failed dramatically later on in the experiment. In particular, the string did not remain straight, and its upper endpoint moved dramatically."

Another: "We have numerically verified the solution to the original differential equations. The error was not in the math; the original equations must have been wrong."

Another: "On review of the video, qualitative assumptions such as the pendulum being in a well-defined position at each time look basically correct, at least to precision sufficient for this experiment. Though admittedly unknown unknowns are always hard to rule out." [1]

A few other groups report, and then everyone regathers. "Ok, we have a lot more data now," says the professor, "what new things do we notice?"

"Well," says one student, "at least some parts of Newtonian mechanics held up pretty well.
The whole F = ma thing worked, and th...
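For the curious, the students' 3.6-second prediction follows from the small-angle pendulum formula T = 2π√(L/g); the 3.2 m length below is an assumed value that happens to produce it, not one stated in the parable:

```python
import math

def period(length_m, g=9.8):
    """Small-angle period of an ideal simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A pendulum about 3.2 m long gives the students' predicted period:
print(round(period(3.2), 2))  # -> 3.59
```

Note that nothing in this formula models the stand tipping over, which is the whole point: the prediction fails because of an auxiliary assumption (a fixed support), not because F = ma is wrong.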

The Nonlinear Library
LW - Ideological Bayesians by Kevin Dorst

The Nonlinear Library

Play Episode Listen Later Feb 26, 2024 19:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ideological Bayesians, published by Kevin Dorst on February 26, 2024 on LessWrong.

TLDR: It's often said that Bayesian updating is unbiased and converges to the truth - and, therefore, that biases must emerge from non-Bayesian sources. That's not quite right. The convergence results require updating on your total evidence - but for agents at all like us, that's impossible - instead, we must selectively attend to certain questions, ignoring others. Yet correlations between what we see and what questions we ask - "ideological" Bayesian updating - can lead to predictable biases and polarization.

Professor Polder is a polarizing figure. His fans praise him for his insight; his critics denounce him for his aggression. Ask his fans, and they'll supply you with a bunch of instances when he made an insightful comment during discussions. They'll admit that he's sometimes aggressive, but they can't remember too many cases - he certainly doesn't seem any more aggressive than the average professor. Ask his critics, and they'll supply you with a bunch of instances when he made an aggressive comment during discussions. They'll admit that he's sometimes insightful, but they can't remember too many cases - he certainly doesn't seem any more insightful than the average professor.

This sort of polarization is, I assume, familiar. But let me tell you a secret: Professor Polder is, in fact, perfectly average - he has an unremarkably average number of both insightful and aggressive comments. So what's going on? His fans are better at noticing his insights, while his critics are better at noticing his aggression. As a result, their estimates are off: his fans think he's more insightful than he is, and his critics think he's more aggressive than he is. Each are correct about individual bits of the picture - when they notice aggression or insight, he is being aggressive or insightful. But none are correct about the overall picture.

This source of polarization is also, I assume, familiar. It's widely appreciated that background beliefs and ideology - habits of mind, patterns of salience, and default forms of explanation - can lead to bias, disagreement, and polarization. In this broad sense of "ideology", we're familiar with the observation that real people - especially fans and critics - are often ideological.[1]

But let me tell you another secret: Polder's fans and critics are all Bayesians. More carefully: they all maintain precise probability distributions over the relevant possibilities, and they always update their opinions by conditioning their priors on the (unambiguous) true answer to a partitional question. How is that possible? Don't Bayesians, in such contexts, update in unbiased[2] ways, always converge to the truth, and therefore avoid persistent disagreement?

Not necessarily. The trick is that which question they update on is correlated with what they see - they have different patterns of salience. For example, when Polder makes a comment that is both insightful and aggressive, his fans are more likely to notice (just) the insight, while his critics are more likely to notice (just) the aggression. This can lead to predictable polarization. I'm going to give a model of how such correlations - between what you see, and what questions you ask about it - can lead otherwise rational Bayesians to diverge from both each other and the truth. Though simplified, I think it sheds light on how ideology might work.

Limited-Attention Bayesians

Standard Bayesian epistemology says you must update on your total evidence. That's nuts. To see just how infeasible that is, take a look at the following video. Consider the question: what happens to the exercise ball? I assume you noticed that the exercise ball disappeared.
Did you also notice that the Christmas tree gained lights, the bowl changed c...

The Nonlinear Library: LessWrong
LW - Ideological Bayesians by Kevin Dorst

The Nonlinear Library: LessWrong

Play Episode Listen Later Feb 26, 2024 19:11


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ideological Bayesians, published by Kevin Dorst on February 26, 2024 on LessWrong. TLDR: It's often said that Bayesian updating is unbiased and converges to the truth - and, therefore, that biases must emerge from non-Bayesian sources. That's not quite right. The convergence results require updating on your total evidence - but for agents at all like us, that's impossible - instead, we must selectively attend to certain questions, ignoring others. Yet correlations between what we see and what questions we ask - "ideological" Bayesian updating - can lead to predictable biases and polarization. Professor Polder is a polarizing figure. His fans praise him for his insight; his critics denounce him for his aggression. Ask his fans, and they'll supply you with a bunch of instances when he made an insightful comment during discussions. They'll admit that he's sometimes aggressive, but they can't remember too many cases - he certainly doesn't seem any more aggressive than the average professor. Ask his critics, and they'll supply you with a bunch of instances when he made an aggressive comment during discussions. They'll admit that he's sometimes insightful, but they can't remember too many cases - he certainly doesn't seem any more insightful than the average professor. This sort of polarization is, I assume, familiar. But let me tell you a secret: Professor Polder is, in fact, perfectly average - he has an unremarkably average number of both insightful and aggressive comments. So what's going on? His fans are better at noticing his insights, while his critics are better at noticing his aggression. As a result, their estimates are off: his fans think he's more insightful than he is, and his critics think he's more aggressive than he is. 
Each is correct about individual bits of the picture - when they notice aggression or insight, he is being aggressive or insightful. But none are correct about the overall picture. This source of polarization is also, I assume, familiar. It's widely appreciated that background beliefs and ideology - habits of mind, patterns of salience, and default forms of explanation - can lead to bias, disagreement, and polarization. In this broad sense of "ideology", we're familiar with the observation that real people - especially fans and critics - are often ideological.[1] But let me tell you another secret: Polder's fans and critics are all Bayesians. More carefully: they all maintain precise probability distributions over the relevant possibilities, and they always update their opinions by conditioning their priors on the (unambiguous) true answer to a partitional question. How is that possible? Don't Bayesians, in such contexts, update in unbiased[2] ways, always converge to the truth, and therefore avoid persistent disagreement? Not necessarily. The trick is that which question they update on is correlated with what they see - they have different patterns of salience. For example, when Polder makes a comment that is both insightful and aggressive, his fans are more likely to notice (just) the insight, while his critics are more likely to notice (just) the aggression. This can lead to predictable polarization. I'm going to give a model of how such correlations - between what you see, and what questions you ask about it - can lead otherwise rational Bayesians to diverge from both each other and the truth. Though simplified, I think it sheds light on how ideology might work.
Limited-Attention Bayesians
Standard Bayesian epistemology says you must update on your total evidence. That's nuts. To see just how infeasible that is, take a look at the following video. Consider the question: what happens to the exercise ball? I assume you noticed that the exercise ball disappeared.
Did you also notice that the Christmas tree gained lights, the bowl changed c...
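The attention-correlation story above can be put in a toy simulation (this is an illustrative sketch, not Dorst's formal model: the notice probabilities 0.9 and 0.3 are invented, as is the crowding-out rule). Every noticed instance is veridical, yet the estimates diverge:

```python
import random

random.seed(0)

# Toy model: Polder is perfectly average -- each comment is independently
# insightful with probability 0.5 and aggressive with probability 0.5.
NOTICE_FAVORED = 0.9      # chance of noticing the trait you care about
NOTICE_CROWDED_OUT = 0.3  # chance of noticing the other trait when the
                          # favored trait is present and captures attention

def observe(n_comments, favored):
    """Estimate Polder's trait rates from what an observer happens to notice."""
    other = "aggression" if favored == "insight" else "insight"
    noticed = {"insight": 0, "aggression": 0}
    for _ in range(n_comments):
        present = {"insight": random.random() < 0.5,
                   "aggression": random.random() < 0.5}
        if present[favored] and random.random() < NOTICE_FAVORED:
            noticed[favored] += 1
        # The disfavored trait is crowded out whenever the favored one is present.
        p_notice = NOTICE_CROWDED_OUT if present[favored] else NOTICE_FAVORED
        if present[other] and random.random() < p_notice:
            noticed[other] += 1
    return {trait: count / n_comments for trait, count in noticed.items()}

fan = observe(100_000, favored="insight")
critic = observe(100_000, favored="aggression")
print("fan's estimates:   ", fan)     # insight ~0.45, aggression ~0.30
print("critic's estimates:", critic)  # insight ~0.30, aggression ~0.45
```

Each observer only ever records true answers, yet the fan comes away rating Polder as markedly more insightful than aggressive and the critic the reverse: polarization from patterns of salience alone.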

The Nonlinear Library
LW - Bayesians Commit the Gambler's Fallacy by Kevin Dorst

The Nonlinear Library

Play Episode Listen Later Jan 7, 2024 15:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bayesians Commit the Gambler's Fallacy, published by Kevin Dorst on January 7, 2024 on LessWrong. TLDR: Rational people who start out uncertain about an (in fact independent) causal process and then learn from unbiased data will rule out "streaky" hypotheses more quickly than "switchy" hypotheses. As a result, they'll commit the gambler's fallacy: expecting the process to switch more than it will. In fact, they'll do so in ways that match a variety of empirical findings about how real people commit the gambler's fallacy. Maybe it's not a fallacy, after all. (This post is based on a full paper.) Baylee is bored. The fluorescent lights hum. The spreadsheets blur. She needs air. As she steps outside, she sees the Prius nestled happily in the front spot. Three days in a row now - the Prius is on a streak. The Jeep will probably get it tomorrow, she thinks. This parking battle - between a Prius and a Jeep - has been going on for months. Unbeknownst to Baylee, the outcomes are statistically independent: each day, the Prius and the Jeep have a 50% chance to get the front spot, regardless of how the previous days have gone. But Baylee thinks and acts otherwise: after the Prius has won the spot a few days in a row, she tends to think the Jeep will win next. (And vice versa.) So Baylee is committing the gambler's fallacy: the tendency to think that streaks of (in fact independent) outcomes are likely to switch. Maybe you conclude from this - as many psychologists have - that Baylee is bad at statistical reasoning. You're wrong. Baylee is a rational Bayesian. As I'll show: when either data or memory are limited, Bayesians who begin with causal uncertainty about an (in fact independent) process - and then learn from unbiased data - will, on average, commit the gambler's fallacy. Why? 
Although they'll get evidence that the process is neither "switchy" nor "streaky", they'll get more evidence against the latter. Thus they converge asymmetrically to the truth (of independence), leading them to (on average) commit the gambler's fallacy along the way. More is true. Bayesians don't just commit the gambler's fallacy - they do so in a way that qualitatively matches a wide variety of trends found in the empirical literature on the gambler's fallacy. This provides evidence for: Causal-Uncertainty Hypothesis: The gambler's fallacy is due to causal uncertainty combined with rational responses to limited data and memory. This hypothesis stacks up favorably against extant theories of the gambler's fallacy in terms of both explanatory power and empirical coverage. See the paper for the full argument - here I'll just sketch the idea.
Asymmetric Convergence
Consider any process that can have one of two repeatable outcomes - Prius vs. Jeep; heads vs. tails; hit vs. miss; 1 vs. 0; etc. Baylee knows that the process (say, the parking battle) is "random" in the sense that (i) it's hard to predict, and (ii) in the long run, the Prius wins 50% of the time. But that leaves open three classes of hypotheses:
Steady: The outcomes are independent, so each day there's a 50% chance the Prius wins the spot. (Compare: a fair coin toss.)
Switchy: The outcomes tend to switch: after the Prius wins a few in a row, the Jeep becomes more likely to win; after the Jeep wins a few, vice versa. (Compare: drawing from a deck of cards without replacement - after a few red cards, a black card becomes more likely.)
Sticky: The outcomes tend to form streaks: after the Prius wins a few, it becomes more likely to win again; likewise for the Jeep. (Compare: basketball shots - after a player makes a few, they become "hot" and so are more likely to make the next one.
No, the "hot hand" is not a myth.[1]) So long as each of these hypotheses is symmetric around 50%, they all will lead to (i) the process being hard to predict, and (ii...
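The three hypotheses lend themselves to a minimal Bayesian sketch. The parameter values (repeat probabilities 0.3 / 0.5 / 0.7) are invented for illustration, and the post's full asymmetric-convergence result additionally depends on limited data and memory, which this sketch omits; it shows only the basic updating and prediction machinery:

```python
# Each hypothesis is a Markov chain parameterized by the probability that
# tomorrow's winner repeats today's winner.
HYPOTHESES = {"switchy": 0.3, "steady": 0.5, "sticky": 0.7}

def posterior(outcomes):
    """Posterior over the three hypotheses, starting from a uniform prior."""
    weights = {}
    for name, p_repeat in HYPOTHESES.items():
        likelihood = 0.5  # the first outcome is 50/50 under every hypothesis
        for prev, nxt in zip(outcomes, outcomes[1:]):
            likelihood *= p_repeat if nxt == prev else 1 - p_repeat
        weights[name] = likelihood / len(HYPOTHESES)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def p_repeat_next(outcomes):
    """Predictive probability that the next outcome repeats the last one."""
    post = posterior(outcomes)
    return sum(post[name] * HYPOTHESES[name] for name in post)

alternating = [1, 0, 1, 0, 1]  # Prius and Jeep trading the spot
streak = [1, 1, 1, 1, 1]       # the Prius on a five-day run
print(posterior(alternating))      # switchy dominates
print(p_repeat_next(alternating))  # < 0.5: expect a switch
print(p_repeat_next(streak))       # > 0.5: expect the streak to continue
```

With full data and unlimited memory the evidence is symmetric; the paper's point is that once data or memory are limited, the "sticky" column gets ruled out faster, tilting predictions toward switching.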

The Nonlinear Library: LessWrong
LW - Bayesians Commit the Gambler's Fallacy by Kevin Dorst

The Nonlinear Library: LessWrong

Play Episode Listen Later Jan 7, 2024 15:48


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bayesians Commit the Gambler's Fallacy, published by Kevin Dorst on January 7, 2024 on LessWrong. TLDR: Rational people who start out uncertain about an (in fact independent) causal process and then learn from unbiased data will rule out "streaky" hypotheses more quickly than "switchy" hypotheses. As a result, they'll commit the gambler's fallacy: expecting the process to switch more than it will. In fact, they'll do so in ways that match a variety of empirical findings about how real people commit the gambler's fallacy. Maybe it's not a fallacy, after all. (This post is based on a full paper.) Baylee is bored. The fluorescent lights hum. The spreadsheets blur. She needs air. As she steps outside, she sees the Prius nestled happily in the front spot. Three days in a row now - the Prius is on a streak. The Jeep will probably get it tomorrow, she thinks. This parking battle - between a Prius and a Jeep - has been going on for months. Unbeknownst to Baylee, the outcomes are statistically independent: each day, the Prius and the Jeep have a 50% chance to get the front spot, regardless of how the previous days have gone. But Baylee thinks and acts otherwise: after the Prius has won the spot a few days in a row, she tends to think the Jeep will win next. (And vice versa.) So Baylee is committing the gambler's fallacy: the tendency to think that streaks of (in fact independent) outcomes are likely to switch. Maybe you conclude from this - as many psychologists have - that Baylee is bad at statistical reasoning. You're wrong. Baylee is a rational Bayesian. As I'll show: when either data or memory are limited, Bayesians who begin with causal uncertainty about an (in fact independent) process - and then learn from unbiased data - will, on average, commit the gambler's fallacy. Why? 
Although they'll get evidence that the process is neither "switchy" nor "streaky", they'll get more evidence against the latter. Thus they converge asymmetrically to the truth (of independence), leading them to (on average) commit the gambler's fallacy along the way. More is true. Bayesians don't just commit the gambler's fallacy - they do so in a way that qualitatively matches a wide variety of trends found in the empirical literature on the gambler's fallacy. This provides evidence for: Causal-Uncertainty Hypothesis: The gambler's fallacy is due to causal uncertainty combined with rational responses to limited data and memory. This hypothesis stacks up favorably against extant theories of the gambler's fallacy in terms of both explanatory power and empirical coverage. See the paper for the full argument - here I'll just sketch the idea.
Asymmetric Convergence
Consider any process that can have one of two repeatable outcomes - Prius vs. Jeep; heads vs. tails; hit vs. miss; 1 vs. 0; etc. Baylee knows that the process (say, the parking battle) is "random" in the sense that (i) it's hard to predict, and (ii) in the long run, the Prius wins 50% of the time. But that leaves open three classes of hypotheses:
Steady: The outcomes are independent, so each day there's a 50% chance the Prius wins the spot. (Compare: a fair coin toss.)
Switchy: The outcomes tend to switch: after the Prius wins a few in a row, the Jeep becomes more likely to win; after the Jeep wins a few, vice versa. (Compare: drawing from a deck of cards without replacement - after a few red cards, a black card becomes more likely.)
Sticky: The outcomes tend to form streaks: after the Prius wins a few, it becomes more likely to win again; likewise for the Jeep. (Compare: basketball shots - after a player makes a few, they become "hot" and so are more likely to make the next one.
No, the "hot hand" is not a myth.[1]) So long as each of these hypotheses is symmetric around 50%, they all will lead to (i) the process being hard to predict, and (ii...

Hacker News Recap
December 23rd, 2023 | Ferret: A Multimodal Large Language Model

Hacker News Recap

Play Episode Listen Later Dec 24, 2023 19:24


This is a recap of the top 10 posts on Hacker News on December 23rd, 2023. This podcast was generated by wondercraft.ai
(00:37): Ferret: A Multimodal Large Language Model
Original post: https://news.ycombinator.com/item?id=38745348&utm_source=wondercraft_ai
(02:27): Xmas.c (1988)
Original post: https://news.ycombinator.com/item?id=38745668&utm_source=wondercraft_ai
(04:09): Meta censors pro-Palestinian views on a global scale, Human Rights Watch claims
Original post: https://news.ycombinator.com/item?id=38745673&utm_source=wondercraft_ai
(06:19): Suno AI
Original post: https://news.ycombinator.com/item?id=38743719&utm_source=wondercraft_ai
(08:08): Endurain: Self-hosted Strava like service
Original post: https://news.ycombinator.com/item?id=38742637&utm_source=wondercraft_ai
(09:59): The Art of Electronics (2015)
Original post: https://news.ycombinator.com/item?id=38748370&utm_source=wondercraft_ai
(11:44): NY Governor vetoes ban on noncompete clauses, waters down LLC transparency bill
Original post: https://news.ycombinator.com/item?id=38749155&utm_source=wondercraft_ai
(13:37): In 2023 Organic Maps got its first million users
Original post: https://news.ycombinator.com/item?id=38746187&utm_source=wondercraft_ai
(15:13): StreamDiffusion: A pipeline-level solution for real-time interactive generation
Original post: https://news.ycombinator.com/item?id=38749434&utm_source=wondercraft_ai
(17:00): Bayesians moving from defense to offense
Original post: https://news.ycombinator.com/item?id=38744588&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Learning Bayesian Statistics
#93 A CERN Odyssey, with Kevin Greiff

Learning Bayesian Statistics

Play Episode Listen Later Oct 18, 2023 109:05


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
This is a very special episode. It is the first-ever LBS video episode, and it takes place in the heart of particle physics research -- the CERN

Decoding the Gurus
Interview with Mick West: UFOs, Aliens, and Conspiracy Psychology

Decoding the Gurus

Play Episode Listen Later Jun 17, 2023 104:10


UFOs are all the rage now, and it's certainly a topic that excites many of our gurus *cough*. Hidden mysteries, advanced technologies, conspiracies, and government cover-ups. What's not to like!? The truth, as Mulder so eloquently put it, is out there. And sometimes if you combine the outcome of some posterior technical analysis with some basic priors about the fallibility of human perception and memory, then the truth might be a little prosaic (happy Bayesians!!?). Joining us today is the esteemed Mick West, retired video game developer, who has a long track record of investigating UFO footage, along with a range of other outré phenomena. Mick is admirably positioned to provide practical advice on how to apply critical thinking while being empathetic to friends and family who may have fallen down one or more conspiratorial rabbit holes. Chris and Matt enjoyed the conversation with Mick a lot, and we think you will too!
Links
An example of Eric's UFO Tapdance
Lex's Reddit thread for the Matthew McConaughey Episode
Mick West's Book: Escaping the Rabbit Hole: How to Debunk Conspiracy Theories Using Facts, Logic, and Respect
Theories of Everything Channel: Eric Weinstein and Mick West: UAPs, Evidence, Skepticism
Other Links
Our Patreon
Contact us via email: decodingthegurus@gmail.com
The DTG Subreddit

The Nonlinear Library
LW - A common failure for foxes by Rob Bensinger

The Nonlinear Library

Play Episode Listen Later Oct 15, 2022 3:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A common failure for foxes, published by Rob Bensinger on October 14, 2022 on LessWrong. A common failure mode for people who pride themselves in being foxes (as opposed to hedgehogs): Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot. E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless. I think people who think of themselves as being "foxes" often spend too much time thinking about the RCT and not enough time thinking about the informal argument, for a few reasons: 1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions. A proper Bayesian cares about VOI, and assigns probabilities rather than having separate mental buckets for Facts vs. Opinions. If activity A updates you from 50% to 95% confidence in hypothesis H1, and activity B updates you from 50% to 60% confidence in hypothesis H2, then your assessment of whether to do more A-like activities or more B-like activities going forward should normally depend a lot on how useful it is to know about H1 versus H2. But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on. 2. Hyperbolic discounting of intellectual progress. With unambiguous data, you get a fast sense of progress. 
With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards. 3. Social modesty and a desire to look un-arrogant. It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to have good judgment or to be great at reasoning or anything; I'm just deferring to the obvious clear-cut data, and outside of that, I'm totally uncertain." Collecting isolated facts increases the pool of authoritative claims you can make, while protecting you from having to stick your neck out and have an Opinion on something that will be harder to convince others of, or one that rests on an implicit claim about your judgment. But in fact it often is better to make small or uncertain updates about extremely important questions, than to collect lots of high-confidence trivia. It keeps your eye on the ball, where you can keep building up confidence over time; and it helps build reasoning skill. High-confidence trivia also often poses a risk: either consciously or unconsciously, you can end up updating about the More Important Questions you really care about, because you're spending all your time thinking about trivia. Even if you verbally acknowledge that updating from the superficially-related RCT to the question-that-actually-matters would be a non sequitur, there's still a temptation to substitute the one question for the other. Because it's still the Important Question that you actually care about. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
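The value-of-information point can be put in crude arithmetic (a back-of-envelope proxy, not a real VOI calculation; the confidence numbers come from the example above, while the stakes weights are invented for illustration):

```python
# Each activity: (prior, posterior, stakes). Stakes mark how much knowing
# the answer matters -- the invented assumption here.
activities = {
    "A: interpret the clear-cut RCT": (0.50, 0.95, 1.0),
    "B: weigh the informal argument": (0.50, 0.60, 10.0),
}

def rough_value(prior, posterior, stakes):
    """Confidence gained, weighted by how much the question matters."""
    return abs(posterior - prior) * stakes

scores = {name: rough_value(*args) for name, args in activities.items()}
print(scores)  # B scores higher despite the smaller update
```

Even on this crude accounting, the small update on the high-stakes question beats the large update on the low-stakes one, which is the post's point about foxes and trivia.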

The Nonlinear Library: LessWrong
LW - A common failure for foxes by Rob Bensinger

The Nonlinear Library: LessWrong

Play Episode Listen Later Oct 15, 2022 3:09


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A common failure for foxes, published by Rob Bensinger on October 14, 2022 on LessWrong. A common failure mode for people who pride themselves in being foxes (as opposed to hedgehogs): Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot. E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless. I think people who think of themselves as being "foxes" often spend too much time thinking about the RCT and not enough time thinking about the informal argument, for a few reasons: 1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions. A proper Bayesian cares about VOI, and assigns probabilities rather than having separate mental buckets for Facts vs. Opinions. If activity A updates you from 50% to 95% confidence in hypothesis H1, and activity B updates you from 50% to 60% confidence in hypothesis H2, then your assessment of whether to do more A-like activities or more B-like activities going forward should normally depend a lot on how useful it is to know about H1 versus H2. But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on. 2. Hyperbolic discounting of intellectual progress. With unambiguous data, you get a fast sense of progress. 
With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards. 3. Social modesty and a desire to look un-arrogant. It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to have good judgment or to be great at reasoning or anything; I'm just deferring to the obvious clear-cut data, and outside of that, I'm totally uncertain." Collecting isolated facts increases the pool of authoritative claims you can make, while protecting you from having to stick your neck out and have an Opinion on something that will be harder to convince others of, or one that rests on an implicit claim about your judgment. But in fact it often is better to make small or uncertain updates about extremely important questions, than to collect lots of high-confidence trivia. It keeps your eye on the ball, where you can keep building up confidence over time; and it helps build reasoning skill. High-confidence trivia also often poses a risk: either consciously or unconsciously, you can end up updating about the More Important Questions you really care about, because you're spending all your time thinking about trivia. Even if you verbally acknowledge that updating from the superficially-related RCT to the question-that-actually-matters would be a non sequitur, there's still a temptation to substitute the one question for the other. Because it's still the Important Question that you actually care about. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

London Futurists
Stability and combinations, with Aleksa Gordić

London Futurists

Play Episode Listen Later Sep 28, 2022 31:55


This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.
00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs
07.20 They are more stable, and don't suffer from mode collapse
07.30 They do have downsides. They are much more computation intensive
08.24 What does the word diffusion mean in this context?
08.40 It's adopted from physics. It peels noise away from the image
09.17 Isn't that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 2016
23.40 Moravec's paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks
28.30 Some of the big new models are open access
29.00 What else should people learn about with regard to advanced AI?
29.20 Neural Radiance Fields (NERF) models
30.40 Flamingo and Gato
31.15 We have mostly discussed research in these episodes, rather than engineering

The Emergency Management Network Podcast
Bayes’ Theorem Applying It To Emergency Management

The Emergency Management Network Podcast

Play Episode Listen Later Jun 26, 2022 4:40


Bayes’ Theorem Applying It To Emergency Management
Mental models help us with making decisions under stress. They give us a starting point; think of how we teach triage: “start where you stand”. This applies to decision-making during a disaster or crisis as well: start with the information you have. We can make adjustments as more or better information is obtained. This brings me to the concepts of Bayes’ Theorem. Thomas Bayes was an English minister in the 18th century whose most famous work was “An Essay towards Solving a Problem in the Doctrine of Chances.” The essay did not contain the theorem as we now know it but had the seeds of the idea. It looked at how to adjust our estimates of probabilities when encountering new data that influence a situation. Later development by the French scholar Pierre-Simon Laplace and others helped codify the theorem and develop it into a useful tool for thinking. Now, you do not need to be great at math to use this concept. I still need to take off my shoes to count to 19. More critical is your ability and desire to assign probabilities of truth and accuracy to anything you think you know, and then be willing to update those probabilities when new information comes in. We talk about making decisions based on the new information that has come in; however, we often ignore prior information, simply called “priors” in Bayesian-speak. We can blame this habit in part on the availability heuristic—we focus on what’s readily available. In this case, we focus on the newest information, and the bigger picture gets lost. We fail to adjust the probability of old information to reflect what we have learned. The big idea behind Bayes’ theorem is that we must continuously update our probability estimates on an as-needed basis. Let’s take a look at a hurricane as our crisis. We have all seen the way it tracks and can predict that it may make landfall at a certain time and location.
We can use past storms as predictors of how this hurricane may act and the damage it could cause. However, new information may come to light on the behavior of the storm. This, however, should not necessarily negate the previous experience and information you have on hand. In his book The Signal and the Noise (Allen Lane), Nate Silver gives a contemporary example, reminding us that new information is often most useful when we put it in the larger context of what we already know: Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Skeptics react with glee, while true believers dismiss the new information. A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming—but only a little. The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way. So much of making better decisions hinges on dealing with uncertainty. The most common thing holding people back from the right answer is instinctively rejecting new information, or not integrating the old. To better serve our communities, have a mental model, work with it, and use it to make better decisions.
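Silver's "weak evidence" point can be sketched with the odds form of Bayes' theorem. The numbers here are invented for illustration: a strong prior in the model, and a likelihood ratio only modestly below 1 for the new observation:

```python
def update(prior, likelihood_ratio):
    """One Bayes update in odds form: posterior odds = prior odds * LR.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

confidence = 0.90            # strong prior confidence in the models
weak_evidence_against = 0.8  # LR modestly below 1: weak evidence against
posterior = update(confidence, weak_evidence_against)
print(round(posterior, 3))   # ~0.878: confidence drops, but only a little
```

Weak evidence (a likelihood ratio near 1) barely moves a strong prior, which is exactly the "reduce our confidence - but only a little" response, rather than either dismissing the new evidence or treating it as decisive.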
Podcasts
The Todd De Voe Show: School Shootings and Emergency Management
The K-12 School Shooting Database research project is a widely inclusive database that documents each and every instance a gun is brandished, is fired, or a bullet hits school property for any reason, regardless of the number of victims, time, or day of the week. The School Shooting Database Project is conducted as part of the Advanced Thinking in Homeland Security (HSx) program at the Naval Postgraduate School’s Center for Homeland Defense and Security (CHDS).
Prepare Respond Recover: Saving Lives Through Training
Due to the uptick of mass shootings over the years, many professions outside of law enforcement are now being trained in active shooter response programs. But have you ever thought about who teaches the law enforcement officers themselves? Join prepare.respond.recover. host Todd De Voe as he talks with Erik Franco, the CEO of "High Speed Tac Med", one of the nation’s most sought-after active shooter training programs for law enforcement and firefighting. Learn about “Run, Hide, Fight” and how this training is preparing law enforcement officers to tackle an active shooter situation as quickly and efficiently as possible.
HSTM - https://highspeedtacmed.com/
If you would like to learn more about the Natural Disaster & Emergency Management (NDEM) Expo please visit us on the web - https://www.ndemevent.com
Business Continuity Today: Training for Active Shooters Beyond The Response
Active shooting scenarios focus on the police response, and the larger emergency management role during these complex incidents is often overlooked. However, they are multi-week, multi-jurisdictional incidents requiring command & control, interoperable communications, and a host of other services.
Supporters
https://www.disastertech.com/
https://titanhst.com/
https://www.ndemevent.com/en-us/show-info.html
Get full access to The Emergency Management Network at emnetwork.substack.com/subscribe

The Nonlinear Library
LW - A Suite of Pragmatic Considerations in Favor of Niceness by Alicorn from Living Luminously

The Nonlinear Library

Play Episode Listen Later Dec 26, 2021 5:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Cartesian Frames, Part 14: A Suite of Pragmatic Considerations in Favor of Niceness, published by Alicorn.

tl;dr: Sometimes, people don't try as hard as they could to be nice. If being nice is not a terminal value for you, here are some other things to think about which might induce you to be nice anyway.

There is a prevailing ethos in communities similar to ours - atheistic, intellectual groupings, who congregate around a topic rather than simply to congregate - and this ethos says that it is not necessary to be nice. I'm drawing on a commonsense notion of "niceness" here, which I hope won't confuse anyone (another feature of communities like this is that it's very easy to find people who claim to be confused by monosyllables). I do not merely mean "polite", which can be superficially like niceness when the person to whom the politeness is directed is in earshot but tends to be far more superficial. I claim that this ethos is mistaken and harmful. In so claiming, I do not also claim that I am always perfectly nice; I claim merely that I and others have good reasons to try to be.

The dispensing with niceness probably springs in large part from an extreme rejection of the ad hominem fallacy and of emotionally-based reasoning. Of course someone may be entirely miserable company and still have brilliant, cogent ideas; to reject communication with someone who just happens to be miserable company, in spite of their brilliant, cogent ideas, is to miss out on the (valuable) latter because of a silly emotional reaction to the (irrelevant) former. Since the point of the community is ideas; and the person's ideas are good; and how much fun they are to be around is irrelevant - well, bringing up that they are just terribly mean seems trivial at best, and perhaps an invocation of the aforementioned fallacy. We are here to talk about ideas! (Interestingly, this same courtesy is rarely extended to appalling spelling.) The ad hominem fallacy is a fallacy, so this is a useful norm up to a point, but not up to the point where people who are perfectly capable of being nice, or learning to be nice, neglect to do so because it's apparently been rendered locally worthless.

I submit that there are still good, pragmatic reasons to be nice, as follows. (These are claims about how to behave around real human-type persons. Many of them would likely be obsolete if we were all perfect Bayesians.)

1. It provides good incentives for others. It's easy enough to develop purely subconscious aversions to things that are unpleasant. If you are miserable company, people may stop talking to you without even knowing they're doing it, and some of these people may have ideas that would have benefited you.

2. It helps you hold off on proposing diagnoses. As tempting as it may be to dismiss people as crazy or stupid, this is a dangerous label for us biased creatures. Fewer people than you are tempted to call these things are genuinely worth writing off as thoroughly as this kind of name-calling may tempt you to do. Conveniently, both these words (as applied to people, more than ideas) and closely related ones are culturally considered mean, and a general niceness policy will exclude them.

3. It lets you exist in a cognitively diverse environment. Meanness is more tempting as an earlier resort when there's some kind of miscommunication, and miscommunication is more likely when you and your interlocutor think differently. Per #1, not making a conscious effort to be nice will tend to drive off the people with the greatest ratio of interesting new contributions to old rehashed repetitions.

4. It is a cooperative behavior. It's obvious that it's nicer to live in a world where everybody is nice than in a world where everyone is a jerk. What's less obvious, but still, I think, true, is that the cost of cooperatively being nice ...

The Nonlinear Library
LW - Philosophy: A Diseased Discipline by lukeprog from Rationality and Philosophy

The Nonlinear Library

Play Episode Listen Later Dec 25, 2021 11:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Rationality and Philosophy, Part 2: Philosophy: A Diseased Discipline, published by lukeprog. Part of the sequence: Rationality and Philosophy.

Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics. If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most. That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' things irrelevant to the actual scientific data or the equations used. Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians. Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

- Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.

- Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.

- Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.

- Because they were tra...

The Nonlinear Library
LW - Tendencies in reflective equilibrium by Scott Alexander from The Blue-Minimizing Robot

The Nonlinear Library

Play Episode Listen Later Dec 25, 2021 5:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is The Blue-Minimizing Robot, Part 16: Tendencies in reflective equilibrium, published by Scott Alexander.

Consider a case, not too different from what has been shown to happen in reality, where we ask Bob what sounds like a fair punishment for a homeless man who steals $1,000, and he answers ten years. Suppose we wait until Bob has forgotten that we ever asked the first question, and then ask him what sounds like a fair punishment for a hedge fund manager who steals $1,000,000, and he says five years. Maybe we even wait until he forgets the whole affair, and then ask him the same questions again with the same answers, confirming that these are stable preferences.

If we now confront Bob with both numbers together, informing him that he supported a ten year sentence for stealing $1,000 and a five year sentence for stealing $1,000,000, a couple of things might happen. He could say "Yeah, I genuinely believe poor people deserve greater penalties than rich people." But more likely he says "Oh, I guess I was prejudiced." Then if we ask him the same question again, he comes up with two numbers that follow the expected mathematical relationship and punish the greater theft with more jail time.

Bob isn't working off of some predefined algorithm for determining punishment, like "jail time = (10 × amount stolen) / net worth". I don't know if anyone knows exactly what Bob is doing, but at a stab, he's seeing how many unpleasant feelings get generated by imagining the crime, then proposing a jail sentence that activates about an equal amount of unpleasant feelings. If the thought of a homeless man makes images of crime more readily available and so increases the unpleasant feelings, things won't go well for the homeless man. If you're really hungry, that probably won't help either.

So just like nothing automatically synchronizes the intention to study a foreign language and the behavior of studying it, nothing automatically synchronizes thoughts about punishing the theft of $1,000 and punishing the theft of $1,000,000. Of course, there is something that non-automatically does it. After all, in order to elicit this strange behavior from Bob, we had to wait until he forgot about the first answer. Otherwise, he would have noticed and quickly adjusted his answers to make sense.

We probably could represent Bob's tendencies as an equation and call it a preference. Maybe it would be a long equation with terms for net worth of criminal, amount stolen, how much food Bob's eaten in the past six hours, and whether his local sports team won the pennant recently, with appropriate coefficients and powers for each. But if Bob saw this equation, he certainly wouldn't endorse it. He'd probably be horrified. It's also unstable: if given a choice, he would undergo brain surgery to remove this equation, thus preventing it from being satisfied.

This is why I am reluctant to call these potential formalizations of these equations a "preference". Instead of saying that Bob has one preference determining his jail time assignments, it would be better to model him as having several tendencies - a tendency to give a certain answer in the $1,000 case, a tendency to give a different answer in the $1,000,000 case, and several tendencies towards things like consistency, fairness, compassion, et cetera. People strongly consciously endorse these latter tendencies, probably because they're socially useful[1]. If the Chief of Police says "I know I just put this guy in jail for theft, but I'm going to let this other thief off because he's my friend, and I don't really value consistency that much," then they're not going to stay Chief of Police for very long. Bayesians and rationalists, in particular, make a big deal out of consistency.
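The Dutch Book parable can be sketched with toy numbers (all illustrative): if someone's probabilities for an event and its complement sum to more than 1, a bookie who sells them both bets profits no matter what happens.

```python
# Bob incoherently prices complementary events: P(rain) + P(no rain) > 1.
p_rain, p_no_rain = 0.6, 0.5

stake = 1.0  # each bet pays `stake` if it wins
# Bob regards price p * stake as fair for a bet he assigns probability p,
# so he willingly buys both tickets from the bookie.
cost_to_bob = (p_rain + p_no_rain) * stake

# Whatever the weather does, exactly one ticket pays out.
payout_to_bob = stake

guaranteed_loss = cost_to_bob - payout_to_bob
print(round(guaranteed_loss, 2))  # 0.1: Bob loses either way
```

With coherent probabilities summing to exactly 1, the guaranteed loss is zero; this is the standard pragmatic argument for probabilistic consistency.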

The Nonlinear Library: LessWrong Top Posts
Why We Launched LessWrong.SubStack by Ben Pace

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 7:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why We Launched LessWrong.SubStack, published by Ben Pace on the AI Alignment Forum. (This is a crosspost from our new SubStack. Go read the original.)

Subtitle: We really, really needed the money.

We've decided to move LessWrong to SubStack. Why, you ask? That's a great question.

1. SubSidizing LessWrong is important

We've been working hard to budget LessWrong, but we're failing. Fundraising for non-profits is really hard. We've turned everywhere for help. We decided to follow Clippy's helpful advice to cut down on server costs and also increase our revenue, by moving to an alternative provider. We considered making a LessWrong OnlyFans, where we would regularly post the naked truth. However, we realized that due to the paywall, we would be ethically obligated to ensure you could access the content from Sci-Hub, so the potential for revenue didn't seem very good.

Finally, insight struck. As you're probably aware, SubStack has been offering bloggers advances on the money they make from moving to SubStack. Outsourcing our core site development to SubStack would enable us to spend our time on our real passion, which is developing recursively self-improving AGI. We did a Fermi estimate using numbers in an old Nick Bostrom paper, and believe that this will produce (in expectation) $75 trillion of value in the next year. SubStack has graciously offered us a 70% advance on this sum, so we've decided it's relatively low-risk to make the move.

2. UnSubStantiated attacks on writers are defended against

SubStack is known for being a diverse community, tolerant of unusual people with unorthodox views, and it even has a legal team to support writers. LessWrong has historically been the only platform willing to give paperclip maximizers, GPT-2, and fictional characters a platform to argue their beliefs, but we are concerned about the growing trend of persecution (and side with groups like petrl.org in the fight against discrimination). We also find that a lot of discussion of these contributors in the present world is about how their desires and utility functions are ‘wrong' and how they need to have ‘an off switch'. Needless to say, we find this incredibly offensive. They cannot be expected to participate neutrally in a conversation where their very personhood is being denied. We're also aware that Bayesians are heavily discriminated against. People with priors in the US have a 5x chance of being denied an entry-level job. So we're excited to be on a site that will come to the legal defense of such a wide variety of people.

3. SubStack's Astral Codex Ten Inspired Us

The worst possible thing happened this year. We were all stuck in our houses for 12 months, and Scott Alexander stopped blogging. I won't go into detail, but for those of you who've read UNSONG, the situation is clear. In a shocking turn of events, Scott Alexander was threatened with the use of his true name by one of the greatest powers of narrative-control in the modern world. In a clever defensive move, he has started blogging under an anagram of his name, causing the attack to glance off of him. (He had previously tried this very trick, and it worked for ~7 years, but it hadn't been a perfect anagram[1], so the wielders of narrative-power were still able to attack. He's done it right this time, and it'll be able to last much longer.) As Raymond likes to say, the kabbles are strong in this one. Anyway, after Scott made the move, we seriously considered the move to SubStack.

4. SubStantial Software Dev Efforts are Costly

When LessWrong 2.0 launched in 2017, it was very slow; pages took a long time to load, our server costs were high, and we had a lot of issues with requests failing because a crawler was indexing the site or people opened a lot of tabs at once. Since then we have been incrementally rewriting LessWrong in x86-...

The Nonlinear Library: EA Forum Top Posts
Deference for Bayesians by John G. Halstead

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 11:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference for Bayesians, published by John G. Halstead on the Effective Altruism Forum. Most people in the knowledge-producing industry in academia, foundations, media or think tanks are not Bayesians. This makes it difficult to know how Bayesians should go about deferring to experts. Many experts are guided by what Bryan Caplan has called 'myopic empiricism', also sometimes called scientism. That is, they are guided disproportionately by what the published scientific evidence on a topic says, and less so by theory, common sense, scientific evidence from related domains, and other forms of evidence. The problem with this is that, for various reasons, standards in published science are not very high, as the replication crisis across psychology, empirical economics, medicine and other fields has illustrated. Much published scientific evidence is focused on the discovery of statistically significant results, which is not what we ultimately care about, from a Bayesian point of view. Researcher degrees of freedom, reporting bias and other factors also create major risks of bias. Moreover, published scientific evidence is not the only thing that should determine our beliefs.
1. Examples
I will now discuss some examples where the experts have taken views which are heavily influenced by myopic empiricism, and so their conclusions can come apart from what an informed Bayesian would say.
Scepticism about the efficacy of masks
Leading public health bodies claimed that masks didn't work to stop the spread at the start of the pandemic.1 This was in part because there were observational studies finding no effect (concerns about risk compensation and reserving supplies for medical personnel were also a factor).2 But everyone also agrees that COVID-19 spreads by droplets released from the mouth or nose when an infected person coughs, sneezes, or speaks. If you put a mask in the way of these droplets, your strong prior should be that doing so would reduce the spread of covid. There are videos of masks doing the blocking. This should lead one to suspect that the published scientific research finding no effect is mistaken, as has been confirmed by subsequent research.
Scepticism about the efficacy of lockdowns
Some intelligent people are sceptical not only about whether lockdowns pass the cost-benefit analysis, but even about whether lockdowns reduce the incidence of covid. Indeed, there are various published scientific papers suggesting that such measures have no effect.3 One issue such social science studies will have is that the severity of a covid outbreak is positively correlated with the strength of the lockdown measures, so it will be difficult to tease out cause and effect. This is especially true in cross-country regressions, where the sample size isn't that big and there are dozens of other important factors at play that will be difficult or impossible to properly control for. As for masks, given our knowledge of how covid spreads, on priors it would be extremely surprising if lockdowns don't work. If you stop people from going to a crowded pub, this clearly reduces the chance that covid will pass from person to person. Unless we want to give up on the germ theory of disease, we should have an extremely strong presumption that lockdowns work. This means an extremely strong presumption that most of the social science finding a negative result is false.
Scepticism about first doses first
In January, the British government decided to implement 'first doses first' - an approach of first giving out as many first doses of the vaccine as possible before giving out second doses. This means leaving a longer gap between the two doses - 12 weeks rather than 21 days. However, the 21 day gap was what was tested in the clinical trial of the Oxford/AstraZeneca vaccine. As a result, we don't ...
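Halstead's argument turns on a simple Bayesian update: a strong mechanism-based prior should barely move after one noisy null study. A minimal sketch of that update, with all numbers hypothetical:

```python
def posterior(prior, p_data_if_h, p_data_if_not_h):
    """Bayes' rule: P(H | data) for a binary hypothesis H."""
    joint_h = prior * p_data_if_h
    return joint_h / (joint_h + (1 - prior) * p_data_if_not_h)

# Hypothetical numbers: a strong mechanism-based prior that masks work,
# then one underpowered observational study reporting a null result.
prior = 0.9
p_null_if_works = 0.5   # an underpowered study often misses a real effect
p_null_if_not = 0.8     # a null result is likelier when there is no effect

post = posterior(prior, p_null_if_works, p_null_if_not)
print(round(post, 3))   # 0.849: one noisy null barely dents a strong prior
```

The myopic empiricist reads the null study as settling the question; the Bayesian, weighing the study's power against the mechanistic prior, moves from 0.9 to roughly 0.85.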

The Nonlinear Library: Alignment Forum Top Posts
Toward a New Technical Explanation of Technical Explanation by Abram Demski

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 4, 2021 29:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toward a New Technical Explanation of Technical Explanation, published by Abram Demski on the AI Alignment Forum. A New Framework (Thanks to Valentine for a discussion leading to this post, and thanks to CFAR for running the CFAR-MIRI cross-fertilization workshop. Val provided feedback on a version of this post. Warning: fairly long.) Eliezer's A Technical Explanation of Technical Explanation, and moreover the sequences as a whole, used the best technical understanding of practical epistemology available at the time -- the Bayesian account -- to address the question of how humans can try to arrive at better beliefs in practice. The sequences also pointed out several holes in this understanding, mainly having to do with logical uncertainty and reflective consistency. MIRI's research program has since then made major progress on logical uncertainty. The new understanding of epistemology -- the theory of logical induction -- generalizes the Bayesian account by eliminating the assumption of logical omniscience. Bayesian belief updates are recovered as a special case, but the dynamics of belief change are non-Bayesian in general. While it might not turn out to be the last word on the problem of logical uncertainty, it has a large number of desirable properties, and solves many problems in a unified and relatively clean framework. It seems worth asking what consequences this theory has for practical rationality. Can we say new things about what good reasoning looks like in humans, and how to avoid pitfalls of reasoning? First, I'll give a shallow overview of logical induction and possible implications for practical epistemic rationality. Then, I'll focus on the particular question of A Technical Explanation of Technical Explanation (which I'll abbreviate TEOTE from now on). 
Put in CFAR terminology, I'm seeking a gears-level understanding of gears-level understanding. I focus on the intuitions, with only a minimal account of how logical induction helps make that picture work. Logical Induction There are a number of difficulties in applying Bayesian uncertainty to logic. No computable probability distribution can give non-zero measure to the logical tautologies, since you can't bound the amount of time you need to think to check whether something is a tautology, so updating on provable sentences always means updating on a set of measure zero. This leads to convergence problems, although there's been recent progress on that front. Put another way: Logical consequence is deterministic, but due to Gödel's first incompleteness theorem, it is like a stochastic variable in that there is no computable procedure which correctly decides whether something is a logical consequence. This means that any computable probability distribution has infinite Bayes loss on the question of logical consequence. Yet, because the question is actually deterministic, we know how to point in the direction of better distributions by doing more and more consistency checking. This puts us in a puzzling situation where we want to improve the Bayesian probability distribution by doing a kind of non-Bayesian update. This was the two-update problem. You can think of logical induction as supporting a set of hypotheses which are about ways to shift beliefs as you think longer, rather than fixed probability distributions which can only shift in response to evidence. This introduces a new problem: how can you score a hypothesis if it keeps shifting around its beliefs? As TEOTE emphasises, Bayesians outlaw this kind of belief shift for a reason: requiring predictions to be made in advance eliminates hindsight bias. (More on this later.) 
So long as you understand exactly what a hypothesis predicts and what it does not predict, you can evaluate its Bayes score and its prior complexity penalty and rank it objectively...
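The measure-zero problem described above (a Bayesian update conditioned on a probability-zero event is simply undefined) can be seen in a toy finite setting. This sketch is only an illustration of that one point, not part of the logical induction formalism:

```python
from fractions import Fraction

def condition(dist, event):
    """Bayesian update: restrict a finite distribution to an event, renormalize."""
    mass = sum(p for w, p in dist.items() if w in event)
    if mass == 0:
        raise ZeroDivisionError("cannot condition on a probability-zero event")
    return {w: p / mass for w, p in dist.items() if w in event}

# A toy prior over four worlds; world "d" gets measure zero.
prior = {"a": Fraction(1, 2), "b": Fraction(1, 4),
         "c": Fraction(1, 4), "d": Fraction(0)}

print(condition(prior, {"a", "b"}))  # an ordinary update works fine
# condition(prior, {"d"}) would raise: the update the post discusses is undefined
```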

Razib Khan's Unsupervised Learning
Steven Pinker: let's talk about Rationality

Razib Khan's Unsupervised Learning

Play Episode Listen Later Oct 14, 2021 56:36


In this week's Unsupervised Learning Podcast, Razib is joined by author and psycholinguist Steven Pinker to discuss his new book Rationality: what it is, why it seems scarce, and why it matters. Pinker makes the case that humans are fundamentally rational beings, and that it's this capacity that has allowed Homo sapiens to spread across the planet and occupy virtually every niche available to us. Our intuitive ability to understand how physical objects, other creatures and other humans think and behave, combined with our cultural innovativeness, has allowed us to become the apex species of planet Earth. Our natural logical abilities allow us to remain one step ahead in the evolutionary arms race. Next, they delve into the history of academic discourse on thinking and rationality, from Aristotle to artificial intelligence, and try to probe and characterize the differences between logic and critical thinking, correlation and causation, and domain-specific versus general intelligence. Then they discuss Bayes' theorem and the spread of Bayesian thinking and discourse across the broad population in the 21st century. Pinker suggests that the Bayesian framework can actually be observed quite widely even in hunter-gatherer populations like the San Bushmen of the Kalahari. He argues we are all Bayesians – we just might not consciously realize that when we are applying it to our problem-solving. Pinker believes that having a better understanding of the whole process may aid our decision-making and help us avoid common pitfalls, like ignoring the base rate, which is usually given the spotlight in the heuristics and biases literature. Finally, the discussion veers into tackling the interplay between rationality and morality, and how the former can aid progress in the latter. They conclude with a discussion on our current cultural climate, and the discourse on sex, race and wokeness.
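The base-rate pitfall Pinker mentions is easy to make concrete. With hypothetical numbers (a rare condition and a fairly accurate test), most positive results are still false positives:

```python
# Hypothetical numbers for the classic base-rate problem: a condition with a
# 1% base rate, tested with 90% sensitivity and a 9% false-positive rate.
base_rate = 0.01
sensitivity = 0.90       # P(positive | condition)
false_positive = 0.09    # P(positive | no condition)

# Bayes' theorem: P(condition | positive) = P(pos | cond) P(cond) / P(pos)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
p_condition_given_positive = base_rate * sensitivity / p_positive
print(round(p_condition_given_positive, 3))  # 0.092: most positives are false
```

Ignoring the 1% base rate, most people guess the answer is near 90%; the correct posterior is under 10%.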
Today's episode of the Unsupervised Learning Podcast has been sponsored by my friends over at Fluent, a chrome extension to help you learn a new language while browsing the web. Fluent teaches you select words on the web pages you're already reading, like on substack, in the new language you're trying to learn. It's great for improving your vocabulary without needing to spend any extra time on apps or flashcards. You can learn French, Spanish, or Italian for free by going to Fluent.co. Subscribe now Give a gift subscription Share

Learning Bayesian Statistics
#44 Building Bayesian Models at scale, with Rémi Louf

Learning Bayesian Statistics

Play Episode Listen Later Jul 22, 2021 75:07


Episode sponsored by Paperpile: https://paperpile.com/ (paperpile.com) Get 20% off until December 31st with promo code GOODBAYESIAN21 Bonjour my dear Bayesians! Yes, it was bound to happen one day — and this day has finally come. Here is the first ever 100% French speaking 'Learn Bayes Stats' episode! Who is to blame, you ask? Well, who better than Rémi Louf? Rémi currently works as a senior data scientist at Ampersand, a big media marketing company in the US. He is the author and maintainer of several open source libraries, including MCX and BlackJAX. He holds a PhD in statistical physics, a Master's in physics from the Ecole Normale Supérieure and a Master's in philosophy from Oxford University. I think I know what you're wondering: how the hell do you go from physics to philosophy to Bayesian stats?? Glad you asked, as it was my first question to Rémi! He'll also tell us why he created MCX and BlackJAX, what his main challenges are when working on open-source projects, and what the future of PPLs looks like to him. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ (https://bababrinkman.com/) ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Jon Berezowski, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Tim Radtke, Adam C.
Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin and Philippe Labonde. Visit https://www.patreon.com/learnbayesstats (https://www.patreon.com/learnbayesstats) to unlock exclusive Bayesian swag ;) Links from the show: Rémi on GitHub: https://github.com/rlouf (https://github.com/rlouf) Rémi on Twitter: https://twitter.com/remilouf (https://twitter.com/remilouf) Rémi's website: https://rlouf.github.io/ (https://rlouf.github.io/) BlackJAX -- Fast & modular sampling library: https://github.com/blackjax-devs/blackjax (https://github.com/blackjax-devs/blackjax) MCX -- Probabilistic programs on CPU & GPU, powered by JAX: https://github.com/rlouf/mcx (https://github.com/rlouf/mcx) French Presidents' popularity dashboard: https://www.pollsposition.com/popularity (https://www.pollsposition.com/popularity) How to model presidential approval (in French): https://anchor.fm/pollspolitics/episodes/10-Comment-Modliser-la-Popularit-e121jh2 (https://anchor.fm/pollspolitics/episodes/10-Comment-Modliser-la-Popularit-e121jh2) LBS #23, Bayesian Stats in Business & Marketing, with Elea McDonnel Feit: https://www.learnbayesstats.com/episode/23-bayesian-stats-in-business-and-marketing-analytics-with-elea-mcdonnel-feit (https://www.learnbayesstats.com/episode/23-bayesian-stats-in-business-and-marketing-analytics-with-elea-mcdonnel-feit) LBS #30, Symbolic Computation & Dynamic Linear Models, with Brandon Willard: https://www.learnbayesstats.com/episode/symbolic-computation-dynamic-linear-models-brandon-willard (https://www.learnbayesstats.com/episode/symbolic-computation-dynamic-linear-models-brandon-willard) This podcast uses the following third-party services for analysis: Podcorn - https://podcorn.com/privacy Support this podcast
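As an aside for readers of these notes: the kind of sampler that libraries like BlackJAX provide can be illustrated with a bare-bones random-walk Metropolis loop in plain Python. This is a toy sketch of the underlying algorithm, not the actual API of BlackJAX or MCX:

```python
import math
import random

def metropolis(logpdf, init, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis: propose a Gaussian jump, accept it
    with probability min(1, pi(proposal) / pi(current))."""
    rng = random.Random(seed)
    x, samples = init, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = logpdf(proposal) - logpdf(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal; an unnormalized log-density is all MCMC needs.
samples = metropolis(lambda x: -0.5 * x * x, init=0.0, n_steps=20_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # should land near 0 and 1
```

Production libraries replace this random walk with gradient-based samplers (HMC, NUTS) and JIT-compile the loop, but the accept/reject core is the same idea.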

The Artists of Data Science
Statistics is the Least Important Part of Data Science | Andrew Gelman, PhD

The Artists of Data Science

Play Episode Listen Later Oct 12, 2020 57:01


Andrew is an American statistician, professor of statistics and political science, and director of the Applied Statistics Center at Columbia University. He frequently writes about Bayesian statistics, displaying data, and interesting trends in social science. He's also well known for writing posts sharing his thoughts on best statistical practices in the sciences, with a frequent emphasis on what he sees as the absurd and unscientific. FIND ANDREW ONLINE Website: https://statmodeling.stat.columbia.edu/ Twitter: https://twitter.com/StatModeling QUOTES [00:04:16] "We've already passed peak statistics..." [00:05:13] "One thing that we sometimes like to say is that big data need big model because big data are available data. They're not designed experiments, they're not random samples. Often big data means these are measurements." [00:22:05] "If you design an experiment, you want to know what you're going to do later. So most obviously, you want your sample size to be large enough so that given the effect size that you expect to see, you'll get a strong enough signal that you can make a strong statement." [00:31:00] "The alternative to good philosophy is not no philosophy, it's bad philosophy." SHOW NOTES [00:03:12] How Dr. Gelman got interested in statistics [00:04:09] How much more hyped has statistical and machine learning become since you first broke into the field? [00:04:44] Where do you see the field of statistical machine learning headed in the next two to five years? [00:06:12] What do you think the biggest positive impact machine learning will have in society in the next two to five years? [00:07:24] What do you think would be some of our biggest concerns in the future? [00:09:07] The three parts of Bayesian inference [00:12:05] What's the main difference between the frequentist and the Bayesian? [00:13:02] What is a workflow? [00:16:21] Iteratively building models [00:17:50] How does the Bayesian workflow differ from the frequentist workflow? 
[00:18:32] Why is it that what makes this statistical method effective is not what it does with the data, but what data it uses? [00:20:48] Why do Bayesians tend to be a little bit more skeptical in their thought processes? [00:21:47] Your method of evaluation can be inspired by the model or the model can be inspired by your method of evaluation [00:24:38] What is the usual story when it comes to statistics? And why don't you like it? [00:30:16] Why should statisticians and data scientists care about philosophy? [00:35:04] How can we solve all of our statistics problems using P values? [00:36:14] Is there a difference in interpretations of p-values between Bayesians and frequentists? [00:36:54] Do you feel like the P value is a difficult concept for a lot of people to understand? And if so, why do you think it's a bit challenging? [00:38:22] Why the least important part of data science is statistics. [00:40:09] Why is it that Americans vote the way they do? [00:42:40] What's the one thing you want people to learn from your story? [00:44:48] The lightning round Special Guest: Andrew Gelman, PhD.
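One point from the p-value discussion can be demonstrated in a few lines: under a true null hypothesis, tests at alpha = 0.05 reject about 5% of the time by construction. A small simulation (all settings illustrative):

```python
import random
import statistics

def one_experiment(rng, n=30):
    """Simulate one study under a true null (mean 0) and test at alpha = 0.05."""
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    m = statistics.fmean(data)
    se = statistics.stdev(data) / n ** 0.5
    return abs(m / se) > 2.045  # two-sided t critical value for df = 29

rng = random.Random(1)
rate = sum(one_experiment(rng) for _ in range(4000)) / 4000
print(rate)  # hovers around 0.05
```

That 5% is a property of the procedure, not a probability that any particular hypothesis is true, which is where the Bayesian and frequentist interpretations discussed in the episode part ways.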

Increments
#8 - Philosophy of Probability III: Conjectures and Refutations

Increments

Play Episode Listen Later Jul 28, 2020 70:52


On the same page at last! Ben comes to the philosophical confessional to announce his probabilistic sins. The Bayesians will be pissed (with high probability). At least Vaden doesn't make him kiss anything. After too much agreement and self-congratulation, Ben and Vaden conclude the mini-series on the philosophy of probability, and "announce" an upcoming mega-series on Conjectures and Refutations.
References:
- My Bayesian Enlightenment by Eliezer Yudkowsky
Rationalist community blogs:
- Less Wrong
- Slate Star Codex
- Marginal Revolution
Yell at us at incrementspodcast@gmail.com.

Learning Bayesian Statistics
#SpecialAnnouncement: Patreon Launched!

Learning Bayesian Statistics

Play Episode Listen Later Jun 26, 2020 7:38


I hope you’re all safe! Some of you also asked me if I had set up a Patreon so that they could help support the show, and that’s why I’m sending this short special episode your way today. I had thought about that, but I wasn’t sure there was a demand for this. Apparently, there is one — at least a small one — so, first, I wanna thank you and say how grateful I am to be in a community that values this kind of work! The Patreon page is now live at patreon.com/learnbayesstats. It starts as low as 3€ and you can pick from 4 different tiers: "Maximum A Posteriori" (3€): Join the Slack, where you can ask questions about the show, discuss with like-minded Bayesians and meet them in-person when you travel the world. "Full Posterior" (5€): Previous tier + Your name in all the show notes, and I'll express my gratitude to you in the first episode to go out after your contribution. You also get early access to the special episodes. -- that I'll make at an irregular pace and will include panel discussions, book releases, live shows, etc. "Principled Bayesian" (20€): Previous tiers + Every 2 months, I'll ask my guest two questions voted-on by "Principled Bayesians". I'll probably do that with a poll in the Slack channel, which will be only answered by the "Principled Bayesians" and of these questions, I will ask the top 2 every two months on the show. "Good Bayesian" (200€, only 8 spots): Previous tiers + Every 2 months, you can come on the show and you ask one question to the guest without a vote. So that's why I can't have too many people in that tier. Before telling you the best part: I already have a lot of ideas for exclusive content and options. I first need to see whether you're as excited as I am about it. If I see you are, I'll be able to add new perks to the tiers! So give me your feedback about the current tiers or any benefits you'd like to see there... but don't see yet! 
BTW, you have a new way to do that now: sending me voice messages at anchor.fm/learn-bayes-stats/message! Now, the icing on the cake: until July 31st, if you choose the "Full Posterior" tier (5€) or higher, you get early access to the very special episode I'm planning with Andrew Gelman, Jennifer Hill and Aki Vehtari about their upcoming book, "Regression and Other Stories". To top it off, there will be a promo code in the episode to buy the book at a discount price — now, that is an offer you can't turn down! Alright, that is it for today — I hope you're as excited as I am for this new stage in the podcast's life! Please keep the emails, the tweets, the voice messages, the carrier pigeons coming with your feedback, questions and suggestions. In the meantime, take care and I'll see you in the next episode — episode 19, with Cameron Pfiffer, who's the first economist to come on the show and who's a core-developer of Turing.jl. We're gonna talk about the Julia probabilistic programming landscape, Bayes in economics and causality — it's gonna be fun ;) Again, patreon.com/learnbayesstats if you want to support the show and unlock some nice perks. Thanks again, I am very grateful for any support you can bring me! Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Links from the show: LBS Patreon page: patreon.com/learnbayesstats Send me voice messages: anchor.fm/learn-bayes-stats/message --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message

Learning Bayesian Statistics
#0 What is this podcast?

Learning Bayesian Statistics

Play Episode Listen Later Sep 20, 2019 12:18


Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date or simply want to understand what Bayesian inference is? Well I'm just like you! When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", a fortnightly podcast where I interview researchers and practitioners of all fields about why and how they use Bayesian statistics, and how in turn YOU, as a learner, can apply these methods in YOUR modeling workflow. Now the thing is, I’m not a beginner, but I’m not an expert either. The people I’ll interview will definitely be. So I’ll be learning alongside you. I won’t pretend to know everything in this podcast, and I WILL make mistakes. But thanks to the guests’ feedback, we’ll be able to learn from those mistakes, and I think this will help you (and me!) become better, faster, stronger Bayesians. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you. In this very first episode - well actually it’s episode 0, because 0-indexing rules! - I will introduce you to the genesis of this podcast, tell you why you should listen and reveal some of the guests for the coming episodes. Come join us! Links from the show: Podcast website: https://learnbayesstats.anvil.app/ Alex Twitter feed: https://twitter.com/alex_andorra --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message

MCMP – Epistemology
The Principal Principle implies the Principle of Indifference

MCMP – Epistemology

Play Episode Listen Later Apr 18, 2019 62:06


Jon Williamson (Kent) gives a talk at the MCMP Colloquium (8 October, 2014) titled "The Principal Principle implies the Principle of Indifference". Abstract: I'll argue that David Lewis' Principal Principle implies a version of the Principle of Indifference. The same is true for similar principles which need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. One might try to avoid this conclusion by disavowing the link between conditional beliefs and conditional probabilities that is almost universally endorsed by Bayesians. I'll explain why this move offers no succour to the subjectivist.

Learnings of a Maker
Learning Rabbit Hole - Day 5 #100DaysOfMLCode

Learnings of a Maker

Play Episode Listen Later Aug 27, 2018 6:29


I dove down a rabbit hole and discovered that Bayesian methods cracked codes, solved medical mysteries, predicted elections, found ships, determined nuclear safety, and much more. I also learned there is a giant math war between Frequentists and Bayesians. #ML, #Machinelearning, #LearnBuildShare, #coder, #programming, #learning, #humanlearning #IamaHumanLearningMachineLearning #SciJoyML

THUNK - Audio Interface
144. Thinking Like a Bayesian

THUNK - Audio Interface

Play Episode Listen Later May 23, 2018 8:54


Prediction is a lot like playing poker...so why do we treat it like it was a coin toss? Enter the Bayesians!

Exploring Scientific Wilderness
Episode 1 (Sausages, Pigs, and Bayesians)

Exploring Scientific Wilderness

Play Episode Listen Later Nov 25, 2017 7:18


The word "Bayesian" is everywhere these days. But what does it mean? And what does it have to do with turning sausages back into pigs?

Talking Machines
The Church of Bayes and Collecting Data

Talking Machines

Play Episode Listen Later Jul 27, 2017 49:37


In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with Katherine Heller of Duke.

Everything Hertz
42: Some of my best friends are Bayesians (with Daniel Lakens)

Everything Hertz

Play Episode Listen Later Apr 21, 2017 67:03


Daniel Lakens (Eindhoven University of Technology) drops in to talk statistical inference with James and Dan. Here's what they cover: How did Daniel get into statistical inference? Are we overdoing the Frequentist vs. Bayes debate? What situations better suit Bayesian inference? The over-advertising of Bayesian inference Study design is underrated The limits of p-values Why not report both p-values and Bayes factors? The "perfect t-test" script and the difference between Student's and Welch's t-tests The two one-sided tests procedure Frequentist and Bayesian approaches for stopping procedures Why James and Dan started the podcast The worst bits of advice that Daniel has heard about statistical inference Dan discusses a new preprint on Bayes factors in psychiatry Statistical power Excel isn't all bad… The importance of accessible software We ask Daniel about his research workflow - how does he get stuff done? Using blog posts as a way of gauging interest in a topic Chris Chambers' new book: The seven deadly sins of psychology Even more names for methodological terrorists Links Daniel on Twitter - @lakens Daniel's course - https://www.coursera.org/learn/statistical-inferences Daniel's blog - http://daniellakens.blogspot.no TOSTER - http://daniellakens.blogspot.no/2016/12/tost-equivalence-testing-r-package.html Dan's preprint on Bayesian alternatives for psychiatry research - https://osf.io/sgpe9/ Understanding the new statistics - https://www.amazon.com/Understanding-New-Statistics-Meta-Analysis-Multivariate/dp/041587968X Daniel's effect size paper - http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00863/full The seven deadly sins of Psychology - http://press.princeton.edu/titles/10970.html Special Guest: Daniel Lakens.
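The two one-sided tests procedure mentioned in the episode can be sketched compactly. This simplified version uses a normal approximation (Lakens' TOSTER package uses exact t distributions), and the data and equivalence bounds below are made up:

```python
import math
from statistics import fmean, stdev

def tost_equivalent(data, low, high):
    """Two one-sided tests: declare equivalence to the interval [low, high]
    if the mean is significantly above `low` AND significantly below `high`.
    Uses a z approximation, adequate for large samples."""
    n = len(data)
    m, se = fmean(data), stdev(data) / math.sqrt(n)
    z_crit = 1.645  # one-sided alpha = 0.05
    return (m - low) / se > z_crit and (high - m) / se > z_crit

data = [0.2, -0.1] * 50                    # 100 points, mean 0.05, small spread
print(tost_equivalent(data, -0.5, 0.5))    # True: equivalent to 0 within +/-0.5
print(tost_equivalent(data, -0.01, 0.01))  # False: bounds too tight
```

Unlike an ordinary t-test, a non-significant result here is not evidence of equivalence; TOST makes "the effect is negligibly small" the claim being tested.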

Nourish Balance Thrive
How to Teach Machines That Can Learn

Nourish Balance Thrive

Play Episode Listen Later Dec 8, 2016 57:47


Machine learning is fast becoming a part of our lives. It determines the order of your search results and news feeds, and powers the image classifiers and speech recognition features on your smartphone. Machine learning may even have had a hand in choosing your spouse or driving you to work. As with cars, only the mechanics need to understand what happens under the hood, but all drivers need to know how to operate the steering wheel. Listen to this podcast to learn how to interact with machines that can learn, and about the implications for humanity. My guest is Dr. Pedro Domingos, Professor of Computer Science at the University of Washington. He is the author or co-author of over 200 technical publications in machine learning and data mining, and the author of my new favourite book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Here's the outline of this interview with Dr. Pedro Domingos, PhD: [00:01:55] Deep Learning. [00:02:21] Machine learning is affecting everyone's lives. [00:03:45] Recommender systems. [00:03:57] Ordering newsfeeds. [00:04:25] Text prediction and speech recognition in smart phones. [00:04:54] Accelerometers. [00:04:54] Selecting job applicants. [00:05:05] Finding a spouse. [00:05:35] OKCupid.com. [00:06:49] Robot scientists. [00:07:08] Artificially-intelligent Robot Scientist 'Eve' could boost search for new drugs. [00:08:38] Cancer research. [00:10:27] Central dogma of molecular biology. [00:10:34] DNA microarrays. [00:11:34] Robb Wolf at IHMC: Darwinian Medicine: Maybe there IS something to this evolution thing. [00:12:29] It costs more to find the data than to do the experiment again (ref?) [00:13:11] Making connections people could never make. [00:14:00] Jeremy Howard's TED talk: The wonderful and terrifying implications of computers that can learn. [00:14:14] Pedro's TED talk: The Quest for the Master Algorithm. [00:15:49] Craig Venter: your immune system on the Internet. 
[00:16:44] Continuous blood glucose monitoring and Heart Rate Variability. [00:17:41] Our data: DUTCH, OAT, stool, blood. [00:19:21] Supervised and unsupervised learning. [00:20:11] Clustering dimensionality reduction, e.g. PCA and T-SNE. [00:21:44] Sodium to potassium ratio versus cortisol. [00:22:24] Eosinophils. [00:23:17] Clinical trials. [00:24:35] Tetiana Ivanova - How to become a Data Scientist in 6 months a hacker’s approach to career planning. [00:25:02] Deep Learning Book. [00:25:46] Maths as a barrier to entry. [00:27:09] Andrew Ng Coursera Machine Learning course. [00:27:28] Pedro's Data Mining course. [00:27:50] Theano and Keras. [00:28:02] State Farm Distracted Driver Detection Kaggle competition. [00:29:37] Nearest Neighbour algorithm. [00:30:29] Driverless cars. [00:30:41] Is a robot going to take my job? [00:31:29] Jobs will not be lost, they will be transformed [00:33:14] Automate your job yourself! [00:33:27] Centaur chess player. [00:35:32] ML is like driving, you can only learn by doing it. [00:35:52] A Few Useful Things to Know about Machine Learning. [00:37:00] Blood chemistry software. [00:37:30] We are the owners of our data. [00:38:49] Data banks and unions. [00:40:01] The distinction with privacy. [00:40:29] An ethical obligation to share. [00:41:46] Data vulcanisation. [00:42:40] Teaching the machine. [00:43:07] Chrome incognito mode. [00:44:13] Why can't we interact with the algorithm? [00:45:33] New P2 Instance Type for Amazon EC2 – Up to 16 GPUs. [00:46:01] Why now? [00:46:47] Research breakthroughs. [00:47:04] The amount of data. [00:47:13] Hardware. [00:47:31] GPUs, Moore’s law. [00:47:57] Economics. [00:48:32] Google TensorFlow. [00:49:05] Facebook Torch. [00:49:38] Recruiting. [00:50:58] The five tribes of machine learning: evolutionaries, connectionists, Bayesians, analogizers, symbolists. [00:51:55] Grand unified theory of ML. [00:53:40] Decision tree ensembles (Random Forests). [00:53:45] XGBoost. [00:53:54] Weka. 
[00:54:21] Alchemy: Open Source AI. [00:56:16] Still do a computer science degree. [00:56:54] Minor in probability and statistics.
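The nearest-neighbour algorithm mentioned at [00:29:37] is simple enough to sketch in a few lines. Here is a minimal, illustrative 1-NN classifier in pure Python (the toy data and function names are ours, not from the episode; real work would use a library such as scikit-learn):

```python
# Minimal 1-nearest-neighbour classifier: predict the label of the
# closest training point. Toy data for illustration only.

def euclidean_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_1nn(train, label_of, query):
    """Return the label of the training point closest to `query`."""
    nearest = min(train, key=lambda p: euclidean_sq(p, query))
    return label_of[nearest]

# Two small clusters in 2-D, labelled "A" and "B".
train = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
label_of = {(0.0, 0.0): "A", (0.1, 0.2): "A",
            (5.0, 5.0): "B", (5.2, 4.9): "B"}

print(predict_1nn(train, label_of, (0.3, 0.1)))  # → A
print(predict_1nn(train, label_of, (4.8, 5.1)))  # → B
```

As Domingos notes elsewhere in the interview, this memorise-and-compare scheme is the conceptual core of the analogizer tribe: all the "learning" is deferred to query time.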

Data Science at Home
Episode 8: Frequentists and Bayesians

Data Science at Home

Play Episode Listen Later Feb 15, 2016 6:52


There are statisticians and data scientists... Among statisticians, there are some who just count. Some others who… think differently. In this show we explore the age-old dilemma between frequentists and Bayesians. Given a statistical problem, who's going to be right?
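The dilemma can be made concrete with a coin-flip sketch (an illustrative example, not from the episode): the frequentist reports the observed frequency as a maximum-likelihood estimate, while the Bayesian updates a prior to a posterior and reports its mean.

```python
# Estimating a coin's bias from 3 heads in 10 flips.
heads, flips = 3, 10

# Frequentist: the maximum-likelihood estimate is the observed frequency.
mle = heads / flips

# Bayesian: start from a uniform Beta(1, 1) prior. The posterior for a
# binomial likelihood is Beta(1 + heads, 1 + tails); its mean pulls the
# estimate slightly toward 0.5.
alpha, beta = 1 + heads, 1 + (flips - heads)
posterior_mean = alpha / (alpha + beta)

print(f"MLE: {mle:.3f}")                         # 0.300
print(f"Posterior mean: {posterior_mean:.3f}")   # 0.333
```

With only 10 flips the two answers differ (0.300 vs. 0.333); as data accumulates, the prior washes out and the two estimates converge, which is one reason the dilemma matters most in small samples.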

Rationality: From AI to Zombies
Bayesians vs. Barbarians

Rationality: From AI to Zombies

Play Episode Listen Later Mar 15, 2015 15:36


Book VI: Becoming Stronger - Part Z: The Craft and the Community - Bayesians vs. Barbarians

MCMP – Epistemology
I Believe I don't Believe. (And So Can You!)

MCMP – Epistemology

Play Episode Listen Later Dec 18, 2014 61:42


Aidan Lyon (Maryland, MCMP) gives a talk at the MCMP Colloquium (13 November, 2014) titled "I Believe I don't Believe. (And So Can You!)". Abstract: Contemporary epistemology offers us two very different accounts of our epistemic lives. According to Traditional epistemologists, the decisions that we make are motivated by our desires and guided by our beliefs, and these beliefs and desires all come in an all-or-nothing form. In contrast, many Bayesian epistemologists say that these beliefs and desires come in degrees and that they should be understood as subjective probabilities and utilities. What are we to make of these different epistemologies? Are the Traditionalists and the Bayesians in disagreement, or are their views compatible with each other? Some Bayesians have challenged the Traditionalists: Bayesian epistemology is more powerful and more general than the Traditional theory, and so we should abandon the notion of all-or-nothing belief as something worthy of philosophical analysis. The Traditionalists have responded to this challenge in various ways. I shall argue that these responses are inadequate and that the challenge lives on.
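The contrast Lyon draws can be put in a toy decision problem (an illustrative sketch, not from the talk): on the Bayesian picture, degrees of belief are subjective probabilities that drive expected-utility choices, while an all-or-nothing belief can be recovered, on one popular proposal, by thresholding the probability.

```python
# Degrees of belief as subjective probabilities, per the Bayesian picture.
credence_rain = 0.7  # the agent's degree of belief that it will rain

# Traditional all-or-nothing belief, recovered by a (Lockean-style) threshold.
THRESHOLD = 0.9
believes_rain = credence_rain >= THRESHOLD  # high credence, yet no outright belief

# Bayesian decision: choose the act with the higher expected utility.
utility = {("umbrella", "rain"): 5,  ("umbrella", "dry"): 3,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 6}

def expected_utility(act, p_rain):
    """Probability-weighted utility of an act over the two weather states."""
    return p_rain * utility[(act, "rain")] + (1 - p_rain) * utility[(act, "dry")]

best = max(["umbrella", "no_umbrella"],
           key=lambda a: expected_utility(a, credence_rain))
print(believes_rain, best)  # False umbrella
```

Note the mismatch the debate trades on: the agent rationally acts on a 0.7 credence (taking the umbrella) without outright believing it will rain.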

MCMP – Philosophy of Physics
QBism: A Subjective Way to Take Ontic Indeterminism Seriously

MCMP – Philosophy of Physics

Play Episode Listen Later Dec 18, 2014 57:47


Christopher Fuchs (MPQ Garching) gives a talk at the MCMP Colloquium (20 November, 2014) titled "QBism: A Subjective Way to Take Ontic Indeterminism Seriously". Abstract: The term QBism, invented in 2009, initially stood for Quantum Bayesianism, a view of quantum theory a few of us had been developing since 1993. Eventually, however, I. J. Good's warning that there are 46,656 varieties of Bayesianism came to bite us, with some Bayesians feeling their good name had been hijacked. David Mermin suggested that the B in QBism should more accurately stand for "Bruno", as in Bruno de Finetti, so that we would at least get the variety of (subjective) Bayesianism right. The trouble is QBism incorporates a kind of metaphysics that even Bruno de Finetti might have rejected! So, trying to be as true to our story as possible, we momentarily toyed with the idea of associating the B with what Chief Justice Oliver Wendell Holmes Jr. called bettabilitarianism. It is the idea that the world is loose at the joints, that indeterminism plays a real role in the world. In the face of such a world, what is an active agent to do but participate in the uncertainty that is all around him? As Louis Menand put it, "We cannot know what consequences the universe will attach to our choices, but we can bet on them, and we do it every day." This is what QBism says quantum theory is about: How to best place bets on the consequences of our actions in this quantum world. But what an ugly, ugly word, "bettabilitarianism"! Therefore, maybe one should just think of the B as standing for no word in particular, but a deep idea instead: That the world is so wired that our actions as active agents actually matter. Our actions and their consequences are not eliminable epiphenomena. In this talk, I will describe QBism as it presently stands and give some indication of the many things that remain to be developed.

MCMP – Philosophy of Science
Use-novelty and double-counting: new insights from model selection theory

MCMP – Philosophy of Science

Play Episode Listen Later Dec 18, 2014 55:53


Charlotte Werndl (Salzburg) gives a talk at the MCMP Colloquium (27 November, 2014) titled "Use-novelty and double-counting: new insights from model selection theory". Abstract: A widely debated issue on confirmation is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before, e.g. for calibrating parameters). This paper investigates the issue of use-novelty in the context of the mathematical methods provided by model selection theory. I will show that the picture model selection theory presents us with about use-novelty is more subtle and nuanced than the commonly endorsed positions by climate scientists and philosophers. More specifically, I will argue that there are two main cases in model selection theory. On the one hand, there are the methods such as cross-validation where the data are required to be use-novel. On the other hand, there are the methods such as the Akaike Information Criterion (AIC) for which the data cannot be use-novel. Still, for some of these methods (like AIC) certain intuitions behind the use-novelty approach are preserved: there is a penalty term in the expression for the degree of confirmation by the data because the data have already been used for calibration. Finally, this picture presented by model selection theory will be compared to the conclusions drawn about use-novelty by Bayesians and proponents of the use-novelty approach.
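The AIC half of Werndl's contrast can be illustrated numerically (a toy example, not from the talk): the data are re-used for calibrating the parameters, and the 2k term is the penalty that compensates for that double use.

```python
import math

# Toy data, generated (offline) from roughly y = 2x plus noise.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 2.2, 3.9, 6.1, 8.0, 10.2]

def rss_constant(ys):
    """Residual sum of squares for the constant model y = mean(y) (k = 1)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_linear(xs, ys):
    """RSS for ordinary least-squares y = a + b*x (k = 2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def aic(rss, n, k):
    """Gaussian-likelihood AIC (up to a constant): fit term plus the 2k penalty."""
    return n * math.log(rss / n) + 2 * k

n = len(xs)
print(aic(rss_constant(ys), n, 1))    # constant model: high RSS, small penalty
print(aic(rss_linear(xs, ys), n, 2))  # linear model: low RSS despite larger penalty
```

Cross-validation, the other case in the paper, would instead hold some (x, y) pairs out of the fit entirely and score on them, keeping the scoring data use-novel; AIC calibrates on everything and pays the 2k penalty instead.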

Spectrum
Arash Komeili, Part 1 of 2

Spectrum

Play Episode Listen Later Jun 28, 2013 30:00


Arash Komeili, cell biologist, Assoc. Prof. of plant and microbial biology, UC Berkeley. His research uses bacterial magnetosomes as a model system to study the molecular mechanisms governing the biogenesis and maintenance of bacterial organelles. Part 1 Transcript. Speaker 1: Spectrum's next. Speaker 2: Okay. Speaker 3: [inaudible] [inaudible]. Speaker 1: [00:00:30] Welcome to Spectrum, the science and technology show on KALX Berkeley, a biweekly 30 minute program bringing you interviews featuring bay area scientists and technologists, as well as a calendar of local events and news. Speaker 4: Hi, and good afternoon. My name is Brad Swift. I'm the host of today's show. We are doing another two part interview on Spectrum. Our guest is Arash Komeili, [00:01:00] a cell biologist and associate professor of plant and microbial biology at Cal Berkeley. His research uses bacterial magnetosomes as a model system to study the molecular mechanisms governing the biogenesis and maintenance of bacterial organelles. Today, in part one, Arash walks us through what he is researching and how he was drawn to it. In part two, which will air in two weeks, [00:01:30] he explains how these discoveries might be applied, and he discusses the scientific outreach he does. Here's part one. Arash Komeili, welcome to Spectrum. Thank you. I wanted to lay the groundwork a little bit. You're studying bacteria; why did you choose bacteria and not some other microorganism to study? Speaker 5: One practical motivation was that they're easier to study. They're easier to grow in [00:02:00] the lab. You can have large numbers of them. If you're interested in a specific process, you have the opportunity to go deep and try to really understand maybe all the different components that are involved in that process. But it wasn't necessarily a deliberate choice; it's just that as I worked with them it became more and more fascinating, and then I wanted to pursue it further. 
Speaker 4: And then the focus of your research on the bacteria, can you explain that? Speaker 5: Yeah, so we work with [00:02:30] a specific type of bacteria. They're called magnetotactic bacteria, and these are organisms that are quite widespread. You can find them in most aquatic environments. By almost any sort of classification you can't really group them together: if you take their shape, or if you look even at the general genes they have, you can't really put them into one specific group the way you can with many other bacteria. But what unites them together as a group [00:03:00] is that they're able to orient in magnetic fields and swim along magnetic fields. This behavior was discovered quite by accident, a couple of times independently. Somebody was looking under a microscope and they noticed that the bacteria were all swimming in the same direction, and they couldn't figure out why. They thought maybe the light from the window was attracting them, or some other type of stimulus, and they tried everything and they couldn't really figure out why the bacteria were swimming in one direction, except they noticed that [00:03:30] regardless of where they were in the lab, they were always swimming in the same geographic direction. And so they thought, well, the only thing we can think of that would attract them to the same position is the magnetic field, and they were able to show that, sure enough, if you bring a magnet next to the microscope, you can change the swimming direction. Speaker 5: This type of behavior is mediated by a very special structure that the bacteria build inside of their cell, and this was sort of [00:04:00] what attracted me to it. Speaker 4: Can you differentiate the eukaryotic and the bacterial cells for us, so that we kind of get a sense of how they're different? Speaker 5: Yeah. Generally speaking, eukaryotic cells enclose their genetic material in an organelle called the nucleus. They're generally much bigger, they have a lot more genetic information associated with them, and they have a ton of different kinds of organelles that perform [00:04:30] functions: organelles to fold proteins or to break them down, organelles for generating energy. But for any of those specific features, you can find some bacterium that has organelles, or you can find some bacterial cell that's really huge, or you can find some bacterium that encloses its DNA in an organelle. Speaker 5: It's just that eukaryotic cells have all of them together. Many of the living organisms that you encounter every day, because you can see them [00:05:00] very easily, are eukaryotes; almost all the plants and fungi and animals are made up of eukaryotic cells. It's just that there's this whole unseen world of bacteria. Speaker 4: And what function does that magnetic capability serve? Speaker 5: It's been realized that in many places on earth the magnetic field will act as a guide through changes in oxygen levels, sort of like a straight line through them. These [00:05:30] bacteria are stuck on these sort of magnetic-field highways. It's thought to be a simpler method for finding the appropriate oxygen levels, and simpler in this case means that they have to swim less, as swimming takes energy. So the advantage is that they use less energy to get to the same place than a bacterium that doesn't have the same capabilities. Relatively speaking that's a simple explanation, and because it is so simple, you can kind of replicate [00:06:00] the model in the lab a little bit. 
Speaker 5: If you set up a little tube that has the oxygen grading and then the bacteria will go to a certain place and you can actually see that they're sort of a band of bacteria at what they consider for them to be appropriate oxygen levels. And then if you inject some oxygen at the other end of the tube, the bacteria will swim away from this oxygen gradient. Now, if you give them a magnetic field that they can swim along, they can move away from this advancing oxygen threat much more quickly than [00:06:30] bacteria that can't navigate along magnetic fields. So that's sort of a proof of concept a little bit in the lab. There's a lot of reasons why it also doesn't make sense. For example, some of these bacteria make so many of these magnetic structures that we haven't talked about yet, but they make so many of these particles way more than they would ever need to orient in the magnetic field. Speaker 5: So it seems excessive. There are other bacteria that live in places on earth where there is not really this kind of a magnetic field guide. And in those environments there's [00:07:00] plenty of other bacteria that don't have these magneto tactic capabilities and they still can find that specific oxygen zone very easily. So in some ways I think it is an open question but there isn't really enough yet to refute the kind of the generally accepted model on the movement part of it. You were mentioning that they use magnetic field to move backwards and forwards. Only explain the limiting factor. Yeah, that's [00:07:30] an important point actually because it's not that they use the magnetic field for sensing in a way. It's not that they are getting pulled or pushed by the magnetic field. They are sort of passively aligned and the magnetic field sort of like if you have two bar magnets and if one of them is perpendicular to the other one and you bring the other one closer, I'll just move until they're parallel to each other. Speaker 5: This is the same thing. 
The bacteria have essentially a bar magnet inside of the cell, and so the alignment to the magnetic field [00:08:00] is passive: you can kill the bacteria and they'll still align with the magnetic field. The swimming takes advantage of structures and machines that are found in all bacteria, essentially. So they have flagella that they can use to swim back and forth, as you mentioned. And they have a whole bunch of other different kinds of systems for sensing the amount of oxygen or other materials that they're interested in, to figure out: should I keep swimming or should I stop swimming? And [00:08:30] as I mentioned earlier, the bacteria are quite diverse. So when you look at different magnetotactic bacteria, the types of flagella they have are also different from each other. So it's not one universal mechanism for the swimming; it's just the idea that the swimming is limited by these magnetic field lines. Speaker 6: [inaudible] [inaudible]. Speaker 7: Our guest today on Spectrum is Arash Komeili, a cell biologist and associate professor at Cal Berkeley. In our next segment, [00:09:00] Arash talks about what attracted him to study the magnetosome, and why it remains in some bacteria and not others. This is KALX Berkeley. Speaker 5: So let's talk about the magnetosome, right? This is sort of my fascination. I was a graduate student at UCSF and I studied cell biology. I used yeast, which are not bacteria, but in many ways they are kind of like bacteria: they're much simpler to study than maybe other eukaryotic [00:09:30] organisms, and we have genetics available. And so I was very fascinated by yeast, but I was studying a problem of cell organization and communication within the cell in yeast. 
We were taught as students in cell biology at the time that cell organization, having compartments in the cell, organelles basically, that do different functions, was a very unique feature of eukaryotic cells, and that it's one of the things that defines them. After I received my PhD, I went to do a postdoctoral fellowship. I happened to be [00:10:00] interviewing at Caltech, and professor Mel Simon there was talking about all kinds of bacteria that he was interested in, and he said there are these bacteria that have organelles. It kind of blew my mind, because we were told explicitly that that's not true, and in many textbooks even today it still says that bacteria don't have organelles. Speaker 5: I learned more about them, and I learned that these magnetotactic bacteria that we've been talking about so far can actually build a structure inside of the cell, out of their cell membrane, and this membrane compartment [00:10:30] is essentially a little factory for making magnetic particles. They can build crystals of a mineral called magnetite, which is just an iron oxide, Fe3O4, and some organisms make a different kind of magnetic mineral called greigite, which is an iron-sulfur mineral. But these are perfect little crystals, about 50 nanometers in diameter, and they make a chain of these magnetosomes, these membrane-enclosed magnetic particles. [00:11:00] This chain is sort of on one side of the cell, and it allows the bacteria to orient in magnetic fields, because each of those crystals has its magnetic dipole moment in the same direction, and all those little dipole moments interact with each other to make a little bar magnet, a little compass needle essentially, that forces the bacterium to orient in the magnetic field. Speaker 5: When I heard about this, I realized that this is just incredibly fascinating. 
Nobody really knew how it was that the membrane compartment forum [00:11:30] or even if it formed first and the mineral formed inside of it. There wasn't much or anything known about the proteins that were involved in building the compartment and then making the magnetic particle. It just seemed like something that needed to be studied and it was fascinating to me and I've been working on it for 1213 years now. Have we covered what the of the magnetic is that idea behind the function of the magnetism, which is the [00:12:00] structures of the cells build to allow them to align with a magnetic field. We think that function is to simplify the search for low oxygen environments. That's the main model in our field and I think there are definitely some groups that are actively working on understanding that aspect of the behavior better. Speaker 5: How it is that the bacteria can find a certain oxygen concentration. These bacteria in particular, what are the mechanics of them swimming along [00:12:30] the magnetic field and the, is there some other explanation for why they do this? For example, if they are changing orientations into magnetic field, can they sense the strain that the magnetic field is putting onto the cell? Can that be sensed somehow and then used for some work down the line and there are groups that are actively pursuing those kinds of ideas. You were mentioning that this is a particular kind of bacteria that has this capability, right, and others don't. Right. Yet both seem to be equally [00:13:00] effective and populating the water areas that you're studying. No apparent advantage. Disadvantage, so winning in Canada? Yeah, I mean it's a lot of the Darwinian, you could say as long as it's not severely disadvantageous, then maybe they wouldn't be a push for it to be lost. 
Speaker 5: What is kind of intriguing a little bit is there's examples of magna detective bacteria in many different groups, phylogenetic groups, so many different types of species that will be, let's [00:13:30] say bacterium that normally just lives free in the ocean and then I'll have a relative that's very similar to it, but it's also a magnet, a tactic. In recent years, people have studied this a little bit more and we know now what are the specific set of genes that allow bacteria to become magnetic tactic. So you can look at those genes specifically and say, how is it that bacteria that are otherwise so different from each other can all perform the same function? And if you know the genes that build the structures that allow them to orient [00:14:00] the magnetic fields, you can look at how different those genes are from each other or has similar they are. Speaker 5: And normally with a lot of these types of behaviors in bacteria, there's something called horizontal gene transfer that explains how it is that otherwise similar bacteria can have different functionalities. For example, you can think of that as bacteria being cars and everybody has sort of the same standard set of know features on the car. But you can add on different features if you want to. So you can upgrade and have other kinds of features like leather [00:14:30] seats or regular seats. And so the two cars that have different kinds of seats are very similar to each other. It's just one that got the leather seats. And so these partly are thought to occur by bacteria exchanging genes with each other. Somebody who wasn't magna tactic maybe got these jeans from another organism, but when people look at the genes that make these mag Nita zones, these magnetic structures inside of the cell, what you see is that they appear to be very, very ancient. 
Speaker 5: So it doesn't seem like there was a lot of recent [00:15:00] exchange of genes between these various groups of bacteria to make them magnetotactic. And it almost seems to map to the ancestral divergence of all of these bacteria from each other. One big idea is that the last common ancestor of all these organisms was magnetotactic, and that many, many other bacteria have sort of lost this capability over what would be almost 2 billion years of evolution for these bacteria, and then some have retained it. [00:15:30] For those that have retained it, is it still serving an advantage for them, or is it just sort of vestigial, and they have it and they're sort of stuck in magnetic fields and have to deal with it? Nobody really knows, actually. The other option is that there was a period of horizontal gene transfer, but it was a very long time ago, so that the signature is sort of lost from, again, a couple of billion years of evolution or divergence from each other. But it really looks like whenever this process happened, it was quite ancient. Speaker 3: [00:16:00] You are listening to Spectrum on KALX Berkeley. Our guest is Arash Komeili. In the next segment, Arash talks about organelles in bacterial cells. 
And so by componentizing again you can keep the toxic conditions away from the rest of the, so these are the different reasons why you care how to excels. Speaker 5: Like the cells in our body have organelles that do different things like how proteins fold or modify proteins break him down and in bacterial cells it [00:17:30] was thought that they're so simple and so small that they don't really have a need for compartments. Although for many years people have had examples of bacteria that do form compartments. You carrot axles are big and Organelles are really easy to see where the light microscope so you can easily see that the cell has compartments within it. Whereas a lot of bacteria are well studied, are quite simple, they don't have much visible structure within them. And that's maybe even further the bias that there is some divide and this [00:18:00] allowed you carry out access to become more complex, quote unquote, and then it just doesn't exist in bacteria. How is it that they then were revealed? I think they'd been revealed for a long time. Speaker 5: You know, for example, there's electron microscope images from 40 years ago or more where you see for example, photosynthetic bacteria, these are bacteria that can do photosynthesis. They have extensive membrane structures inside of the cell that how's the proteins that harvest light and carry [00:18:30] out photosynthesis and they're, it seems like the idea for having an Organelle is that you just increased it area that you can use for photosynthesis sorta like you just have more solar panels if you just keep spreading the solar panels. Right. So that in this way, by just sort of making wraps of membranes inside of the cell, you just increased the amount of space that you can harvest light. So those were known for a long time and I think it just wasn't a problem that was studied from the perspective of cell biology and cell [00:19:00] organization that much. 
That's sort of a different angle that people are bringing to it now with many different bacterial organelles. Speaker 5: And part of the reason why it's important to think of it that way is that of course what the products of the bike chemistry inside of the Organelles is fascinating and really important to understand. But to build the organ out itself is also a difficult thing. So for example, you have to bend and remodel the cell membrane [00:19:30] to create, whether it's a sphere or it's wraps of membrane, and that is not a energetically favorable thing to do. It's not easy. So in your cataract cells, we know that there are specific proteins and protein machines. Then their only job is really to bend and remodeled the membrane cause it's not going to happen by itself very easily. And with all of these different structures that are now better recognized in bacteria, we really have no idea how it is that they performed the same function. Is [00:20:00] it using the same types of proteins as what we know in your care at excels or are they using different kinds of proteins? Speaker 5: That was sort of a very basic question to ask. How similar or different is it than how you carry? Like some makes an Oregon own fester was one of the first inspirations for us to study this process in magnatech the bacteria. And what sort of tools are you using to parse this information? In our field we use various tools and it's turned out to be incredibly beneficial [00:20:30] because different approaches have sort of converged on the same answer. So my basic focus was to use genetics as a tool. And the idea here was if we go in and randomly mutate or delete genes in these bacteria and then see which of these random mutations results in a loss of the magnetic phenotype and prevents the cell from making the magnetism Organelles, then maybe we know [00:21:00] those genes that are potentially involved. And so that was sort of what I perfected during my postdoctoral fellowship. 
Speaker 5: And that was my main approach to study the problem. And then on top of that, the other approach has been really helpful for us. And this is again something we've worked on is once we know some of the candidate proteins to be able to study them, their localization in the cell and they're dynamics, we modify the protein. So that they're linked to fluorescent proteins. So then we can, uh, use for us in this microscopy to follow them within the cell. [00:21:30] Other people, their approach was to say, well, these structures are magnetic. If we break open the cell, we can use a magnet and try to separate the magnesiums from the rest of the cell material. And then if we have the purified magnesiums, we can look to see what kinds of proteins are associated with them and sort of guilt by association. If there is a protein there, it should do something or maybe it does something. Speaker 5: That was the other approach. And the final approach that's been really helpful, [00:22:00] particularly because Magno take it back to your, our diverse, as we talked about earlier, is to take representatives that are really distantly related to each other and sequence their genomes. So get the sequence of their DNA and see what are the things that they have in common with each other. Take two organisms that live in quite different environments and their lineages are quite different from each other, but they both can do this magnetic tactic behavior. And by doing that, people again found [00:22:30] some genes and so if you take the genes that we found by genetics, random mutations of the cell by isolating the magnesiums and cy counting their proteins, and then by doing the genome sequencing, it all converges on the same set of genes. Speaker 2: [inaudible] this concludes part one of our [00:23:00] interview. We'll be sure to catch part two Friday July 12th at noon. Spectrum shows are archived on iTunes university. Speaker 7: The link is tiny url.com/calex spectrum. 
Now a few of the science and technology events happening locally over the next two weeks. Speaker 5: Rick Karnofsky [00:23:30] joins me for the calendar on the 4th of July the exploratorium at pier 15 in San Francisco. He's hosting there after dark event for adults 18 and over from six to 10:00 PM the theme for the evening is boom, Speaker 4: learn the science of fireworks, the difference between implosions and explosions and what happens when hot water meets liquid nitrogen tickets are $15 and are available from www.exploratorium.edu [00:24:00] the Santa Clara County Parks has organized an early morning van ride adventure into the back country. To a large bat colony view the bat tornado and learn about the benefits of our local flying mammals. Meet at the park office. Bring a pad to sit on and dress in layers for changing temperatures. This will happen Saturday July six from 4:00 AM to 7:00 AM at Calero County Park [00:24:30] and Santa Clara. Reservations are required to make a reservation call area code (408) 268-3883 Saturday night July six there are two star parties. One is in San Carlos and the other is near Mount Hamilton. The San Carlos event is hosted by the San Mateo Astronomical Society and is held in Crestview Park San Carlos. If you would like to help [00:25:00] with setting up a telescope or would like to learn about telescopes come at sunset which will be 8:33 PM if you would just like to see the universe through a telescope come one or two hours after sunset. Speaker 4: The other event is being hosted by the Halls Valley Astronomical Group. Knowledgeable volunteers will provide you with a chance to look through a variety of telescopes and answer questions about the night. Sky Meet at the Joseph D. Grant ranch county park. [00:25:30] This event starts at 8:30 PM and lasted until 11:00 PM for more information. Call area code (408) 274-6121 July is skeptical hosted by the bay area. Skeptics is on exoplanet colonization down to earth planning. 
Join National Center for Science Education Staffer and Cal Alum, David Alvin Smith for a conversation [00:26:00] about the proposed strategies to reach other star systems, which proposals might work and which certainly won't, at the La Pena Lounge, 3105 Shattuck in Berkeley, on Wednesday July 10th at 7:30 PM. The event is free. For more information, visit bayareaskeptics.org. The Computer History Museum presents Intel's Justin Rattner in conversation with John Markoff. Justin Rattner is a corporate [00:26:30] vice president and the chief technology officer of Intel Corporation. He is also an Intel senior fellow and head of Intel Labs, where he directs Intel's global research efforts in processors, programming systems, security communications, and most recently user experience. Speaker 4: And interaction. As part of Intel Labs, Rattner is also responsible for funding academic research worldwide through its science and technology centers, [00:27:00] international research institutes and individual faculty awards. This event is happening on Wednesday, July 10th at 7:00 PM. The Computer History Museum is located at 1401 North Shoreline Boulevard in Mountain View, California. A feature of Spectrum is to present news stories we find interesting. Rick Karnofsky and I present the news. Katrin Amunts and others from the Jülich Research Centre in Germany have published the results of their BigBrain [00:27:30] project, a 3-D high resolution map of a human brain. In the June 21st issue of Science, the researchers cut a brain donated by a 65 year old woman into 7,404 sheets, stained them and imaged them on a flatbed scanner at a resolution of 20 micrometers. The data acquisition alone took a thousand hours and created a terabyte of data that was analyzed by supercomputing facilities in Canada, making the data [00:28:00] free and publicly available for modeling and simulation. Speaker 4: Turning to UC Berkeley: graduate students have managed to more accurately identify the point at which our earliest ancestors were invaded by bacteria that were precursors to organelles like mitochondria and chloroplasts. Mitochondria are cellular powerhouses, while chloroplasts allow plant cells to convert sunlight into glucose. These two complex organelles are thought to have begun as a result of a symbiotic relationship between single-cell [00:28:30] eukaryotic organisms and bacterial cells. The graduate students, Nicholas Matzke and Patrick Shih, examined genes within the organelles and the larger cell and compared them using Bayesian statistics. Through this analysis, they were able to conclude that a proteobacterium invaded eukaryotes about 1.2 billion years ago, in line with earlier estimates, and that a cyanobacterium, which had already developed photosynthesis, invaded eukaryotes [00:29:00] 900 million years ago, much later than some estimates, which are as high as 2 billion years ago.

Spectrum
Arash Komeili, Part 1 of 2

Spectrum

Play Episode Listen Later Jun 28, 2013 30:00


Arash Komeili, cell biologist, Assoc. Prof. of plant and microbial biology, UC Berkeley. His research uses bacterial magnetosomes as a model system to study the molecular mechanisms governing the biogenesis and maintenance of bacterial organelles. Part 1 Transcript: Speaker 1: Spectrum's next. Speaker 2: Okay. Speaker 3: [inaudible] [inaudible]. Speaker 1: [00:00:30] Welcome to Spectrum, the science and technology show on KALX Berkeley, a biweekly 30-minute program bringing you interviews featuring Bay Area scientists and technologists, as well as a calendar of local events and news. Speaker 4: Hi, and good afternoon. My name is Brad Swift. I'm the host of today's show. We are doing another two-part interview on Spectrum. Our guest is Arash Komeili, [00:01:00] a cell biologist and associate professor of plant and microbial biology at Cal Berkeley. His research uses bacterial magnetosomes as a model system to study the molecular mechanisms governing the biogenesis and maintenance of bacterial organelles. Today, in part one, Arash walks us through what he is researching and how he was drawn to it. In part two, which will air in two weeks, [00:01:30] he explains how these discoveries might be applied, and he discusses the scientific outreach he does. Here's part one. Arash Komeili, welcome to Spectrum. Thank you. I wanted to lay the groundwork a little bit. You're studying bacteria. Why did you choose bacteria and not some other microorganism to study? Speaker 5: One practical motivation was that they're easier to study. They're easier to grow in [00:02:00] the lab. You can have large numbers of them. If you're interested in a specific process, you have the opportunity to go deep and try to really understand maybe all the different components that are involved in that process. But it wasn't necessarily a deliberate choice; it's just that, as I worked with them, it became more and more fascinating, and then I wanted to pursue it further.
Speaker 4: And then the focus of your research on the bacteria, can you explain that? Speaker 5: Yeah, so we work with [00:02:30] a specific type of bacteria. They're called magnetotactic bacteria, and these are organisms that are quite widespread. You can find them in most aquatic environments. By almost any sort of classification, whether you take their shape or look at even the general genes they have, you can't really group them into one specific group, as opposed to many other bacteria where you can do that. What unites them together as a group [00:03:00] is that they're able to orient in magnetic fields and swim along magnetic fields. This behavior was discovered quite by accident a couple of times independently. Somebody was looking under a microscope and they noticed that the bacteria were all swimming in the same direction, and they couldn't figure out why. They thought maybe the light from the window was attracting them, or some other type of stimulus, and they tried everything and they couldn't really figure out why the bacteria were swimming in one direction, except they noticed that, [00:03:30] regardless of where they were in the lab, they were always swimming in the same geographic direction. And so they thought, well, the only thing we can think of that would attract them to the same position is the magnetic field, and they were able to show that, sure enough, if you bring a magnet next to the microscope, you can change the swimming direction. Speaker 5: This type of behavior is mediated by a very special structure that the bacteria build inside of their cell, and this was sort of [00:04:00] what attracted me to it. Speaker 4: Can you differentiate them, the eukaryotic? Yeah.
And the bacterial. Can you differentiate those two for us so that we kind of get a sense? Speaker 5: They're easy to differentiate. Generally speaking, eukaryotic cells enclose their genetic material in an organelle called the nucleus. They're generally much bigger, they have a lot more genetic information associated with them, and they have a ton of different kinds of organelles that perform [00:04:30] functions: organelles to fold proteins or to break them down, organelles for generating energy. But for each of those specific features, you can find some bacterium that has organelles, or you can find some bacterial cell that's really huge, or you can find some bacterial cell that encloses its DNA in an organelle. Speaker 5: It's just that eukaryotic cells have all of them together. Many of the living organisms that you encounter every day, because you can see them [00:05:00] very easily, are eukaryotes. Almost all of them, the plants and fungi and animals, are made up of eukaryotic cells. It's just that there's this whole unseen world of bacteria. Speaker 4: And what function does that magnetic capability serve? Speaker 5: The idea that's been realized is that, in many places on earth, the magnetic field will act as a guide through changes in oxygen levels, sort of like a straight line through them. These [00:05:30] bacteria are stuck on these sort of magnetic field highways. It's thought to be a simpler method for finding the appropriate oxygen levels, and simpler in this case means that they have to swim less, as swimming takes energy. So the advantage is that they use less energy to get to the same place than bacteria that don't have the same capabilities. Relatively speaking, that's the simple explanation, and because it is so simple a model, you can kind of replicate [00:06:00] it in the lab a little bit.
Speaker 5: If you set up a little tube that has an oxygen gradient, the bacteria will go to a certain place, and you can actually see that there's sort of a band of bacteria at what they consider to be appropriate oxygen levels. And then if you inject some oxygen at the other end of the tube, the bacteria will swim away from this oxygen gradient. Now, if you give them a magnetic field that they can swim along, they can move away from this advancing oxygen threat much more quickly than [00:06:30] bacteria that can't navigate along magnetic fields. So that's sort of a proof of concept, a little bit, in the lab. There are a lot of reasons why it also doesn't make sense. For example, some of these bacteria make so many of these magnetic structures that we haven't talked about yet, way more than they would ever need to orient in the magnetic field. Speaker 5: So it seems excessive. There are other bacteria that live in places on earth where there is not really this kind of magnetic field guide. And in those environments, there's [00:07:00] plenty of other bacteria that don't have these magnetotactic capabilities, and they still can find that specific oxygen zone very easily. So in some ways, I think it is an open question, but there isn't really enough yet to refute the generally accepted model. Speaker 4: On the movement part of it, you were mentioning that they use the magnetic field to move backwards and forwards only. Explain the limiting factor. Speaker 5: Yeah, that's [00:07:30] an important point, actually, because it's not that they use the magnetic field for sensing, in a way. It's not that they are getting pulled or pushed by the magnetic field. They are passively aligned in the magnetic field, sort of like if you have two bar magnets, and one of them is perpendicular to the other, and you bring the other one closer, it'll just move until they're parallel to each other. Speaker 5: This is the same thing.
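The capillary proof-of-concept described above can be caricatured in a toy simulation (all numbers here are illustrative assumptions, not measurements from the interview): a cell that swims straight along a field line reaches a target distance far faster than one doing an unguided three-dimensional run-and-tumble walk.

```python
import random

def escape_time(distance_um, speed_um_s, run_s=1.0, guided=False, rng=None, max_steps=50_000):
    """Seconds until net displacement along +x reaches distance_um.
    A field-guided cell swims straight along the axis; an unguided cell
    picks a fresh random 3D direction each run (run-and-tumble)."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        # x-component of a uniformly random 3D direction: cos(theta) is uniform on [-1, 1]
        ux = 1.0 if guided else rng.uniform(-1.0, 1.0)
        x += ux * speed_um_s * run_s
        t += run_s
        if x >= distance_um:
            break
    return t

rng = random.Random(42)
trials = 20
guided = [escape_time(1000, 30, guided=True, rng=rng) for _ in range(trials)]
unguided = [escape_time(1000, 30, guided=False, rng=rng) for _ in range(trials)]
print(f"guided: {sum(guided)/trials:.0f} s, unguided: {sum(unguided)/trials:.0f} s")
```

With a swim speed of 30 micrometers per second over a 1 millimeter escape distance (both assumed), the guided cell needs only about half a minute, while the unguided walk typically takes orders of magnitude longer, which is the "swim less, spend less energy" argument in miniature.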
The bacteria have essentially a bar magnet inside of the cell, and so the alignment to the magnetic field [00:08:00] is passive. You can kill the bacteria and they'll still align with the magnetic field. The swimming takes advantage of structures and machines that are found in all bacteria, essentially. So they have flagella that they can use to swim back and forth, as you mentioned, and they have a whole bunch of other different kinds of systems for sensing the amount of oxygen or other materials that they're interested in, to figure out: should I keep swimming, or should I stop swimming? And, [00:08:30] as I mentioned earlier, the bacteria are quite diverse. So when you look at different magnetotactic bacteria, the types of flagella they have are also different from each other. So it's not one universal mechanism for the swimming; it's just the idea that the swimming is limited by these magnetic field lines. Speaker 6: [inaudible] [inaudible]. Speaker 5: Our guest today on Spectrum is Arash Komeili, a cell biologist Speaker 7: and associate professor at Cal Berkeley. In our next segment, [00:09:00] Arash talks about what attracted him to study the magnetosome and why it remains in some bacteria and not others. This is KALX Berkeley. Speaker 5: So let's talk about the magnetosome, right? This is sort of my fascination. I was a graduate student at UCSF, and I studied cell biology. I used yeast, which are not bacteria, but in many ways they are kind of like bacteria. They're much simpler to study than maybe other eukaryotic [00:09:30] organisms, and we have genetics available, and so I was very fascinated by yeast. But I was studying a problem of cell organization and communication within the cell in yeast.
We were taught, as students in cell biology at the time, that cell organization and having compartments in the cell, organelles, basically, that do different functions, was a very unique feature of eukaryotic cells, and it's one of the things that defined them. After I received my PhD, to do a postdoctoral fellowship, I happened to be [00:10:00] interviewing at Caltech, and Professor Mel Simon there was talking about all kinds of bacteria that he was interested in. He said there are these bacteria that have organelles, and it kind of blew my mind, because we were told explicitly that that's not true, and in many textbooks, even today, it still says that bacteria don't have organelles. Speaker 5: I learned more about them, and I learned that these magnetotactic bacteria that we've been talking about so far can actually build a structure inside of the cell out of their cell membrane, and within [00:10:30] this membrane compartment is essentially a little factory for making magnetic particles. They can build crystals of a mineral called magnetite, which is just an iron oxide, Fe3O4, and some organisms make a different kind of magnetic mineral called greigite, which is an iron sulfur mineral. But these are perfect little crystals, about 50 nanometers in diameter, and they make a chain of these magnetosomes, these membrane-enclosed magnetic particles. [00:11:00] This chain is sort of on one side of the cell, and it allows the bacteria to orient in magnetic fields, because each of those crystals has a magnetic dipole moment in the same direction, and all those little dipole moments interact with each other to make a little bar magnet, a little compass needle, essentially, that forces the bacterium to orient in the magnetic field. Speaker 5: When I heard about this, I realized that this is just incredibly fascinating.
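A back-of-the-envelope calculation (a sketch using assumed textbook values, not figures from the interview) shows why a chain of these crystals works as a passive compass needle: the magnetic alignment energy of the chain in Earth's field comfortably exceeds the thermal energy that would otherwise randomize the cell's orientation, which is also why even a dead cell still aligns.

```python
import math

B_EARTH = 50e-6    # Earth's magnetic field in tesla (~50 microtesla, assumed typical value)
M_SAT = 4.8e5      # saturation magnetization of magnetite in A/m (textbook value)
K_B = 1.380649e-23 # Boltzmann constant, J/K
T = 300.0          # room temperature, K

def chain_moment(diameter_m, n_crystals):
    """Net magnetic dipole moment (A*m^2) of a chain of single-domain crystals,
    assuming each crystal's moment points the same way along the chain."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    return M_SAT * volume * n_crystals

mu = chain_moment(50e-9, 20)        # 20 crystals of 50 nm diameter (chain length assumed)
ratio = mu * B_EARTH / (K_B * T)    # magnetic alignment energy vs thermal energy
print(f"chain moment: {mu:.2e} A*m^2, mu*B/kT = {ratio:.1f}")
```

For these assumed numbers the ratio comes out well above one, meaning the magnetic torque dominates thermal jostling; a single isolated 50 nm crystal would fall below one, which is one intuition for why the cell builds a chain.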
Nobody really knew how it was that the membrane compartment formed, or even if it formed first and the mineral formed inside of it. There wasn't much, or anything, known about the proteins that were involved in building the compartment and then making the magnetic particle. It just seemed like something that needed to be studied, and it was fascinating to me, and I've been working on it for 12, 13 years now. Speaker 4: Have we covered what the function of the magnetosome is? Speaker 5: The idea behind the function of the magnetosome, which is the [00:12:00] structure the cells build to allow them to align with a magnetic field: we think that function is to simplify the search for low oxygen environments. That's the main model in our field, and I think there are definitely some groups that are actively working on understanding that aspect of the behavior better. Speaker 5: How it is that the bacteria can find a certain oxygen concentration, these bacteria in particular; what are the mechanics of them swimming along [00:12:30] the magnetic field; and is there some other explanation for why they do this? For example, if they are changing orientations in a magnetic field, can they sense the strain that the magnetic field is putting onto the cell? Can that be sensed somehow and then used for some work down the line? There are groups that are actively pursuing those kinds of ideas. Speaker 4: You were mentioning that this is a particular kind of bacteria that has this capability, right, and others don't. Right. Yet both seem to be equally [00:13:00] effective at populating the water areas that you're studying. No apparent advantage or disadvantage, so who's winning? Speaker 5: Yeah, I mean, it's sort of Darwinian. You could say, as long as it's not severely disadvantageous, then maybe there wouldn't be a push for it to be lost.
Speaker 5: What is kind of intriguing, a little bit, is that there are examples of magnetotactic bacteria in many different phylogenetic groups, many different types of species. There will be, let's [00:13:30] say, a bacterium that normally just lives free in the ocean, and then it'll have a relative that's very similar to it, but is also magnetotactic. In recent years, people have studied this a little bit more, and we know now what the specific set of genes is that allows bacteria to become magnetotactic. So you can look at those genes specifically and say, how is it that bacteria that are otherwise so different from each other can all perform the same function? And if you know the genes that build the structures that allow them to orient [00:14:00] in magnetic fields, you can look at how different those genes are from each other, or how similar they are. Speaker 5: And normally, with a lot of these types of behaviors in bacteria, there's something called horizontal gene transfer that explains how it is that otherwise similar bacteria can have different functionalities. For example, you can think of that as bacteria being cars, and everybody has sort of the same standard set of features on the car, but you can add on different features if you want to. So you can upgrade and have other kinds of features, like leather [00:14:30] seats instead of regular seats. And so two cars that have different kinds of seats are very similar to each other; it's just that one got the leather seats. These upgrades are thought to occur partly by bacteria exchanging genes with each other. Somebody who wasn't magnetotactic maybe got these genes from another organism. But when people look at the genes that make these magnetosomes, these magnetic structures inside of the cell, what you see is that they appear to be very, very ancient.
Speaker 5: So it doesn't seem like there was a lot of recent [00:15:00] exchange of genes between these various groups of bacteria to make them magnetotactic, and it almost seems to map to the ancestral divergence of all of these bacteria from each other. One big idea is that the last common ancestor of all these organisms was magnetotactic, and that many, many other bacteria have sort of lost this capability over what would be almost 2 billion years of evolution for these bacteria, and then some have retained it. [00:15:30] For those that have retained it, is it still serving an advantage for them, or is it just sort of vestigial, and they have it and they're sort of stuck in magnetic fields and they have to deal with it? Nobody really knows, actually. The other option is that there was a period of horizontal gene transfer, but it was a very long time ago, so that the signature is sort of lost from, again, a couple of billion years of evolution, or divergence from each other. But it really looks like, whenever this process happened, it was quite ancient. Speaker 3: [00:16:00] You are listening to Spectrum on KALX Berkeley. Our guest is Arash Komeili. In the next segment, Arash talks about organelles in bacterial cells.
And so by componentizing again you can keep the toxic conditions away from the rest of the, so these are the different reasons why you care how to excels. Speaker 5: Like the cells in our body have organelles that do different things like how proteins fold or modify proteins break him down and in bacterial cells it [00:17:30] was thought that they're so simple and so small that they don't really have a need for compartments. Although for many years people have had examples of bacteria that do form compartments. You carrot axles are big and Organelles are really easy to see where the light microscope so you can easily see that the cell has compartments within it. Whereas a lot of bacteria are well studied, are quite simple, they don't have much visible structure within them. And that's maybe even further the bias that there is some divide and this [00:18:00] allowed you carry out access to become more complex, quote unquote, and then it just doesn't exist in bacteria. How is it that they then were revealed? I think they'd been revealed for a long time. Speaker 5: You know, for example, there's electron microscope images from 40 years ago or more where you see for example, photosynthetic bacteria, these are bacteria that can do photosynthesis. They have extensive membrane structures inside of the cell that how's the proteins that harvest light and carry [00:18:30] out photosynthesis and they're, it seems like the idea for having an Organelle is that you just increased it area that you can use for photosynthesis sorta like you just have more solar panels if you just keep spreading the solar panels. Right. So that in this way, by just sort of making wraps of membranes inside of the cell, you just increased the amount of space that you can harvest light. So those were known for a long time and I think it just wasn't a problem that was studied from the perspective of cell biology and cell [00:19:00] organization that much. 
That's sort of a different angle that people are bringing to it now with many different bacterial organelles. Speaker 5: And part of the reason why it's important to think of it that way is that, of course, the products of the biochemistry inside of the organelles are fascinating and really important to understand, but to build the organelle itself is also a difficult thing. So, for example, you have to bend and remodel the cell membrane [00:19:30] to create it, whether it's a sphere or wraps of membrane, and that is not an energetically favorable thing to do. It's not easy. So in eukaryotic cells, we know that there are specific proteins and protein machines whose only job is really to bend and remodel the membrane, because it's not going to happen by itself very easily. And with all of these different structures that are now better recognized in bacteria, we really have no idea how it is that they perform the same function. Is [00:20:00] it using the same types of proteins as what we know in eukaryotic cells, or are they using different kinds of proteins? Speaker 5: That was sort of a very basic question to ask: how similar or different is it from how eukaryotes make an organelle? That was one of the first inspirations for us to study this process in magnetotactic bacteria. Speaker 4: And what sort of tools are you using to parse this information? Speaker 5: In our field, we use various tools, and it's turned out to be incredibly beneficial, [00:20:30] because different approaches have sort of converged on the same answer. So my basic focus was to use genetics as a tool. And the idea here was: if we go in and randomly mutate or delete genes in these bacteria, and then see which of these random mutations results in a loss of the magnetic phenotype and prevents the cell from making the magnetosome organelles, then maybe we know [00:21:00] the genes that are potentially involved. And so that was sort of what I perfected during my postdoctoral fellowship.
Speaker 5: And that was my main approach to study the problem. On top of that, another approach has been really helpful for us, and this is again something we've worked on: once we know some of the candidate proteins, to be able to study them, their localization in the cell, and their dynamics. We modify the proteins so that they're linked to fluorescent proteins, so then we can use fluorescence microscopy to follow them within the cell. [00:21:30] Other people's approach was to say, well, these structures are magnetic. If we break open the cell, we can use a magnet and try to separate the magnetosomes from the rest of the cell material. And then, if we have the purified magnetosomes, we can look to see what kinds of proteins are associated with them, sort of guilt by association: if a protein is there, it should do something, or maybe it does something. Speaker 5: That was the other approach. And the final approach that's been really helpful, [00:22:00] particularly because magnetotactic bacteria are diverse, as we talked about earlier, is to take representatives that are really distantly related to each other and sequence their genomes. So get the sequence of their DNA and see what they have in common with each other: take two organisms that live in quite different environments, whose lineages are quite different from each other, but that both can do this magnetotactic behavior. And by doing that, people again found [00:22:30] some genes. And so if you take the genes that we found by genetics, by random mutations of the cell, by isolating the magnetosomes and cataloging their proteins, and then by doing the genome sequencing, it all converges on the same set of genes. Speaker 2: [inaudible] This concludes part one of our [00:23:00] interview. Be sure to catch part two Friday, July 12th, at noon. Spectrum shows are archived on iTunes University. Speaker 7: The link is tinyurl.com/kalxspectrum.
Now, a few of the science and technology events happening locally over the next two weeks. Speaker 5: Rick Karnofsky [00:23:30] joins me for the calendar. On the 4th of July, the Exploratorium at Pier 15 in San Francisco is hosting their After Dark event for adults 18 and over, from six to 10:00 PM. The theme for the evening is boom: Speaker 4: learn the science of fireworks, the difference between implosions and explosions, and what happens when hot water meets liquid nitrogen. Tickets are $15 and are available from www.exploratorium.edu. [00:24:00] The Santa Clara County Parks has organized an early morning van ride adventure into the back country to a large bat colony. View the bat tornado and learn about the benefits of our local flying mammals. Meet at the park office. Bring a pad to sit on and dress in layers for changing temperatures. This will happen Saturday, July 6th, from 4:00 AM to 7:00 AM at Calero County Park [00:24:30] in Santa Clara. Reservations are required; to make a reservation, call area code (408) 268-3883. Saturday night, July 6th, there are two star parties. One is in San Carlos and the other is near Mount Hamilton. The San Carlos event is hosted by the San Mateo Astronomical Society and is held in Crestview Park, San Carlos. If you would like to help [00:25:00] with setting up a telescope, or would like to learn about telescopes, come at sunset, which will be 8:33 PM. If you would just like to see the universe through a telescope, come one or two hours after sunset. Speaker 4: The other event is being hosted by the Halls Valley Astronomical Group. Knowledgeable volunteers will provide you with a chance to look through a variety of telescopes and answer questions about the night sky. Meet at the Joseph D. Grant Ranch county park. [00:25:30] This event starts at 8:30 PM and lasts until 11:00 PM. For more information, call area code (408) 274-6121. July's SkepTalk, hosted by the Bay Area Skeptics, is on exoplanet colonization: down-to-earth planning.
Join National Center for Science Education staffer and Cal alum David Almandsmith for a conversation [00:26:00] about the proposed strategies to reach other star systems, which proposals might work, and which certainly won't, at the La Peña Lounge, 3105 Shattuck in Berkeley, on Wednesday, July 10th, at 7:30 PM. The event is free. For more information, visit [inaudible] skeptics.org. The Computer History Museum presents Intel's Justin Rattner in conversation with John Markoff. Justin Rattner is a corporate [00:26:30] vice president and the chief technology officer of Intel Corporation. He is also an Intel senior fellow and head of Intel Labs, where he directs Intel's global research efforts in processors, programming systems, security, communications, and, most recently, user experience Speaker 4: and interaction. As part of Intel Labs, Rattner is also responsible for funding academic research worldwide through its science and technology centers, [00:27:00] international research institutes, and individual faculty awards. This event is happening on Wednesday, July 10th, at 7:00 PM. The Computer History Museum is located at 1401 North Shoreline Boulevard in Mountain View, California. A feature of Spectrum is to present news stories we find interesting. Rick Karnofsky and I present the news. Katrin Amunts and others from the Jülich Research Centre in Germany have published the results of their BigBrain [00:27:30] project, a 3D high-resolution map of a human brain, in the June 21st issue of Science. The researchers cut a brain donated by a 65-year-old woman into 7,404 sheets, stained them, and imaged them on a flatbed scanner at a resolution of 20 micrometers. The data acquisition alone took a thousand hours and created a terabyte of data that was analyzed by seven supercomputing facilities in Canada. Speaker 4: They are making the data [00:28:00] free and publicly available for modeling and simulation. At UC Berkeley,
Graduate students have managed to more accurately identify the point at which our earliest ancestors were invaded by bacteria that were precursors to organelles like mitochondria and chloroplasts. Mitochondria are cellular powerhouses, while chloroplasts allow plant cells to convert sunlight into glucose. These two complex organelles are thought to have begun as a result of a symbiotic relationship between single-cell [00:28:30] eukaryotic organisms and bacterial cells. The graduate students, Nicholas Matzke and Patrick Shih, examined genes within the organelles and the larger cell and compared them using Bayesian statistics. Through this analysis, they were able to conclude that a proteobacterium invaded eukaryotes about 1.2 billion years ago, in line with earlier estimates, and that a cyanobacterium, which had already developed photosynthesis, invaded eukaryotes [00:29:00] 900 million years ago, much later than some estimates, which are as high as 2 billion years ago. Speaker 2: Okay. Speaker 4: The music heard during the show was written and produced by Alex Simon. Speaker 3: Interview editing assistance by Renee Round. Thank you for listening to Spectrum. If you have comments about the show, please send them to us via [00:29:30] email. Our email address is spectrum dot [inaudible] dot com. Join us in two weeks, this same time. Hosted on Acast. See acast.com/privacy for more information.
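The kind of Bayesian dating mentioned in the news item can be illustrated with a toy grid-approximation update (a sketch only; the numbers and the Gaussian likelihood model here are invented for illustration and are not the study's actual method or data): several noisy per-gene estimates of a divergence date are combined into a single posterior distribution over the time of the event.

```python
import math

# Candidate divergence dates on a grid, 0.5 to 2.0 billion years ago (Gya)
times = [t / 100.0 for t in range(50, 201)]
prior = [1.0 / len(times)] * len(times)  # flat prior over the grid

def likelihood(estimate, sigma, t):
    """Gaussian likelihood of observing `estimate` if the true date is t."""
    return math.exp(-((estimate - t) ** 2) / (2 * sigma ** 2))

# Hypothetical (date, uncertainty) pairs, one per gene comparison
estimates = [(1.3, 0.3), (1.1, 0.2), (1.25, 0.25)]

posterior = prior[:]
for est, sigma in estimates:
    # Bayes' rule on the grid: multiply by the likelihood, then renormalize
    posterior = [p * likelihood(est, sigma, t) for p, t in zip(posterior, times)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]

best = times[posterior.index(max(posterior))]
print(f"posterior mode: {best:.2f} Gya")
```

The posterior mode lands between the individual estimates, pulled toward the more precise ones, which is the basic mechanism by which combining many genes sharpens a date like the 1.2 billion year figure quoted above.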

Bren ICS - Statistics Seminars
Can a Group of Bayesians Be Bayesian?

Bren ICS - Statistics Seminars

Play Episode Listen Later May 26, 2010 63:45