Physics of the statistical behavior of large numbers of particles
In today's episode of Theories of Everything, Curt Jaimungal and Julian Barbour challenge conventional physics by exploring Barbour's revolutionary ideas on time as an emergent property of change, the universe's increasing order contrary to entropy, and the foundational nature of shape dynamics. SPONSOR (THE ECONOMIST): As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe TOE'S TOP LINKS: - Support TOE on Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Enjoy TOE on Spotify! https://tinyurl.com/SpotifyTOE - Become a YouTube Member Here: https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join - Join TOE's Newsletter 'TOEmail' at https://www.curtjaimungal.org LINKS MENTIONED: - The Janus Point (Julian Barbour's book): https://www.amazon.com/Janus-Point-New-Theory-Time/dp/0465095461 - ‘Relational Concepts of Space and Time' (Julian Barbour's 1982 paper): https://www.jstor.org/stable/687224 - ‘The Theory of Gravitation' (Paul Dirac's 1958 paper): https://www.jstor.org/stable/100497 - Carlo Rovelli on TOE: https://www.youtube.com/watch?v=hF4SAketEHY - ‘On the Nature of Things' (book): https://www.hup.harvard.edu/books/9780674992009 - Leibniz: Philosophical Writings (book): https://www.amazon.com/Leibniz-Philosophical-Writings-Everymans-University/dp/0460119052 - Elementary Principles of Statistical Mechanics (book): https://www.amazon.com/Elementary-Principles-Statistical-Mechanics-Physics/dp/0486789950 - The interpretations of quantum mechanics in 5 minutes (article): https://curtjaimungal.substack.com/p/the-interpretations-of-quantum-mechanics - Sean Carroll on TOE: https://www.youtube.com/watch?v=9AoRxtYZrZo Timestamps: 00:00 - Introduction 02:12 - Working Outside of Academia 03:53 - Space, Time, Dimension 10:40 - Mach's Principle 21:33 - Mach Confused Einstein 24:22 - Two Particle Universe 31:46 - Carlo Rovelli 35:02 - Julian's Ontology 43:37 - Julian's Theory ‘Shape Statistics' 51:11 - Leibniz's Philosophical Writings 56:14 - Expansion of the Universe (Scale Invariance) 01:05:02 - Cosmological Principle 01:15:34 - Thermodynamics 01:17:15 - Entropy and Complexity 01:30:40 - Wave Function / Double Slit Experiment 01:39:21 - God 01:44:48 - The Role of Instruments 01:47:44 - Etymology of Pattern and Matter 01:51:25 - Join My Substack! Other Links: - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything #science #sciencepodcast #physics #theoreticalphysics #time #space #dimensions Learn more about your ad choices. Visit megaphone.fm/adchoices
Episode: 1239 Ensemble theory offers us insight through parallel universes. Today, a practical use for an abstract theory.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forget Everything (Statistical Mechanics Part 1), published by J Bostock on April 23, 2024 on LessWrong. EDIT: I somehow missed that John Wentworth and David Lorell are also in the middle of a sequence on this same topic here. I will see where this goes from here! Introduction to a sequence on the statistical thermodynamics of some things and maybe eventually everything. This will make more sense if you have a basic grasp on quantum mechanics, but if you're willing to accept "energy comes in discrete units" as a premise then you should be mostly fine. The title of this post has a double meaning: Forget the thermodynamics you've learnt before, because statistical mechanics starts from information theory. The main principle of doing things with statistical mechanics can be summed up as follows: Forget as much as possible, then find a way to forget some more. Particle(s) in a Box All of practical thermodynamics (chemistry, engines, etc.) relies on the same procedure, although you will rarely see it written like this: take systems which we know something about; allow them to interact in a controlled way; forget as much as possible. If we have set up our systems correctly, the information that is lost will allow us to learn some information somewhere else. For example, consider a particle in a box. What does it mean to "forget everything"? One way is forgetting where the particle is, so our knowledge of the particle's position could be represented by a uniform distribution over the interior of the box. Now imagine we connect this box to another box: If we forget everything about the particle now, we should also forget which box it is in! If we instead have a lot of particles in our first box, we might describe it as a box full of gas. If we connect this to another box and forget where the particles are, we would expect to find half in the first box and half in the second box. This means we can explain why gases expand to fill space without reference to anything except information theory. A new question might be, how much have we forgotten? Our knowledge of a gas particle's location has gone from the distribution over boxes 1 and 2 given by P(Box 1) = 1, P(Box 2) = 0 to the distribution P(Box 1) = 0.5, P(Box 2) = 0.5, which is a loss of 1 bit of information per particle. Now let's put that information to work. The Piston Imagine a box with a movable partition. The partition restricts particles to one side of the box. If the partition moves to the right, then the particles can access a larger portion of the box: In this case, to forget as much as possible about the particles means to assume they are in the largest possible space, which involves the partition being all the way over to the right. Of course there is the matter of forgetting where the partition is, but we can safely ignore this as long as the number of particles is large enough. What if we have a small number of particles on the right side of the partition? We might expect the partition to move some, but not all, of the way over, when we forget as much as possible. Since the region in which the pink particles can live has decreased, we have gained knowledge about their position. By coupling forgetting and learning, anything is possible. The question is, how much knowledge have we gained?
Maths of the Piston Let the walls of the box be at coordinates 0 and 1, and let x be the horizontal coordinate of the piston. The position of each green particle can be expressed as a uniform distribution over (0, x), which has entropy log2(x), and likewise each pink particle's position is uniform over (x, 1), giving entropy log2(1 − x). If we have n_g green particles and n_p pink particles, the total entropy becomes n_g log2(x) + n_p log2(1 − x), which has a maximum at x = n_g/(n_g + n_p). This means that the total volume occupied by each population of particles is proportion...
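The arithmetic in the excerpt above is short enough to check directly. Below is a minimal Python sketch (my own illustration, not code from the episode or the original post) that reproduces the 1-bit-per-particle loss for the two connected boxes and confirms that the piston entropy n_g log2(x) + n_p log2(1 − x) peaks at x = n_g/(n_g + n_p); the particle counts are arbitrary example values.

```python
import numpy as np

# Two connected boxes: forgetting which box a particle is in takes P(Box 1) from 1 to 0.5,
# i.e. the entropy of our distribution rises by exactly 1 bit per particle.
print("bits forgotten per particle:", -np.log2(0.5))  # 1.0

# Piston: n_g green particles uniform on (0, x), n_p pink particles uniform on (x, 1).
def total_entropy(x, n_g, n_p):
    return n_g * np.log2(x) + n_p * np.log2(1 - x)

n_g, n_p = 30, 10                       # arbitrary example counts
xs = np.linspace(0.001, 0.999, 99_901)
x_star = xs[np.argmax(total_entropy(xs, n_g, n_p))]
print("entropy-maximising piston position:", round(x_star, 3))   # ~0.75
print("predicted n_g / (n_g + n_p):       ", n_g / (n_g + n_p))  # 0.75

# Knowledge gained about each pink particle when the piston moves from 0.5 to x_star:
print("bits gained per pink particle:", np.log2(0.5) - np.log2(1 - x_star))  # ~1 bit
```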
Complex systems are found across a diverse array of situations in nature: from human societies to groups of interacting quantum particles. Dr Alberto Robledo shows how his results may lead to new discoveries about many areas of nature where complex systems can be found. Read more in Research Outreach. Read the original research: doi.org/10.31349/SuplRevMexFis.1.4.32
My new book Reframe Your Brain, available now on Amazon https://tinyurl.com/3bwr9fm8 Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: Politics, President Javier Milei, Vivek Ramaswamy, Ilya Sutskever, Elon Musk, Grok, ChatGPT, Hillary Clinton 2024, President Trump, Alex Jones, WaPo Fake News, Hoax Quiz 7, Edelman PR, Anne Applebaum, Mike Benz, Harvard Claudine Gay, DEI Failure, Statistical Mechanics, President Zelensky, Ukraine War, Scott Adams ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support
Tsallis entropy aims to extend traditional statistical mechanics, but some physicists believe the theory is incompatible with the fundamental principles of thermodynamics. Dr Alberto Robledo, however, shows for the first time how Tsallis entropy can explain natural phenomena that turn out to be surprisingly linked to the transitions from regular to chaotic behaviours, a result that has eluded researchers so far. His discovery could lead to a deeper understanding of how thermodynamic systems behave. Read more in Research Outreach. Read the original research: doi.org/10.3390/e24121761
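For readers unfamiliar with the quantity being discussed: the Tsallis entropy of a distribution p is S_q = (1 − Σ_i p_i^q)/(q − 1), which recovers the usual Boltzmann-Gibbs-Shannon entropy in the limit q → 1. The short Python sketch below is purely illustrative (not taken from Robledo's paper) and uses a made-up example distribution.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1); tends to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # Boltzmann-Gibbs-Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]          # arbitrary example distribution
for q in (0.5, 0.999, 1.0, 1.5, 2.0):  # q = 0.999 should sit close to the q = 1 value
    print(q, round(tsallis_entropy(p, q), 4))
```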
Today's podcast is titled, “Physics at 2000: The Characteristics of Time.” Dr. Tomio Petrosky, Senior Research Scientist at the Prigogine Center for Statistical Mechanics, Dr. Dean Driebe, Research Associate at the Prigogine Center, and Dr. Ilya Prigogine, Director of the Prigogine Center and 1977 Nobel Prize winner in Chemistry, discuss physics and the characteristics of time. Recorded in 1999. Listen now, and don't forget to subscribe to get updates each week for the Free To Choose Media Podcast.
Barry Loewer is Distinguished Professor of Philosophy at Rutgers. Before that he did his PhD in philosophy at Stanford (!). Barry works largely in the philosophy of physics, the philosophy of science, and metaphysics, and is a good friend of and frequent collaborator with another denizen of the Robinson's Podcast universe, David Albert. It is their joint work on the "Mentaculus," something approximating a "probability map of the universe," that occupies much of the discussion in this episode. Robinson and Barry also talk about statistical mechanics and his upcoming book, What Breathes Fire into the Equations (Oxford University Press, to be released fall 2023 or early 2024), which is about laws, chances, and fundamental ontology. Check out Barry's book on David: Essays on David Albert's Time and Chance. Background on Counterfactuals: https://plato.stanford.edu/entries/counterfactuals/ Background on Statistical Mechanics: https://plato.stanford.edu/entries/statphys-statmech/ OUTLINE 00:00 In This Episode… 00:29 Introduction 06:21 Barry's Road to the Philosophy of Physics 28:37 Fire in the Equations 43:16 Conditional Probability 54:11 Non-Humean and Humean Accounts of Laws 01:06:44 Probability Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
Nalini Anantharaman, Spectral Geometry, Collège de France, academic year 2022-2023. Seminar - Ergodicity and thermalization of eigenfunctions: From Unitary Dynamics to Statistical Mechanics in Isolated Quantum Systems. Speaker: Marcos Rigol, Penn State University. Abstract: Experiments with ultracold gases have made it possible to study dynamics of (nearly-) isolated quantum many-body systems, which has revived theoretical interest in this topic. In generic isolated systems, one expects nonequilibrium dynamics to result in thermalization: a relaxation to states where the values of macroscopic quantities are stationary, universal with respect to widely differing initial conditions, and predictable through the time-tested recipe of statistical mechanics. However, it is not obvious what feature of a many-body system makes quantum thermalization possible, in a sense analogous to that in which dynamical chaos makes classical thermalization possible. Underscoring that new rules could apply in the quantum case, experimental studies in one-dimensional systems have shown that traditional statistical mechanics can provide wrong predictions for the outcomes of relaxation dynamics. We show that isolated "nonintegrable" systems do in fact relax to states in which observables are well-described by statistical mechanics. Moreover, we argue that the time evolution itself only plays an auxiliary role as thermalization occurs at the level of individual eigenstates.
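The eigenstate-level picture in the abstract can be illustrated numerically. The sketch below is my own toy example (not Rigol's calculation): it uses a random-matrix stand-in for a nonintegrable Hamiltonian and checks that eigenstate expectation values of a simple observable barely fluctuate from one eigenstate to the next, which is the sense in which "thermalization occurs at the level of individual eigenstates".

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
# Random GOE-like Hamiltonian as a crude stand-in for a nonintegrable many-body system.
M = rng.normal(size=(N, N))
H = (M + M.T) / np.sqrt(2 * N)
E, V = np.linalg.eigh(H)  # columns of V are the energy eigenstates

# A simple Hermitian observable: projector onto the first half of the basis states.
A = np.diag([1.0] * (N // 2) + [0.0] * (N - N // 2))
eev = np.einsum("in,ij,jn->n", V, A, V)  # eigenstate expectation values <n|A|n>

# ETH-style check: <n|A|n> is nearly the same for neighbouring eigenstates,
# so a single eigenstate already "knows" the thermal value (~0.5 here).
print("mean <n|A|n>:", round(eev.mean(), 3))
print("rms eigenstate-to-eigenstate fluctuation:", np.sqrt(np.mean(np.diff(eev) ** 2)))
```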
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scaling Laws Literature Review, published by Pablo Villalobos on January 27, 2023 on LessWrong. Common shape of a scaling law, taken from Hestness et al. (2017) Executive summary Scaling laws are predictable relations between the scale of a model and performance or other useful properties. I have collected a database of scaling laws for different tasks and architectures, and reviewed dozens of papers in the scaling law literature. My main takeaways are: Functional forms: a basic power law can effectively model the scaling behavior in the power-law region but not the transitions to the other two regions. For this, either the M4 estimator or the BNSL estimator introduced below seem to be the best options right now. Transfer learning: there is not a simple universal scaling law for transfer learning between arbitrary tasks. When the tasks are similar enough, upstream loss and downstream performance are closely related, but when tasks are very different, the details of the architecture and hyperparameters become very relevant. See the full table of scaling laws here. Introduction The term “scaling laws” in deep learning refers to relations between functional properties of interest (usually the test loss or some performance metric for fine-tuning tasks) and properties of the architecture or optimization process (like model size, width, or training compute). These laws can help inform the design and training of deep learning models, as well as provide insights into their underlying principles. In this document, I present a detailed table outlining the known scaling laws and a summary of each paper's contributions to the understanding of scaling in deep learning. My main goal for this document is to serve as a comprehensive and up-to-date resource for information on scaling laws in deep learning. By presenting a curated list of papers that have explored various scaling laws, and providing a detailed table outlining the known scaling laws and a summary of each paper's contributions, I aim to make it easier to access and understand the current state of knowledge on scaling in deep learning. I selected the papers using the following criteria: Publication date: I only focus on recent papers, and exclude work prior to 2015 Contribution: I include papers which contribute in one of these ways: Reporting empirical scaling laws Proposing better functional forms or fitting methods Proposing theoretical models to explain scaling behavior Connecting scaling behavior with lower-level properties of models Overview While the scaling behavior of machine learning models has been studied for a long time (e.g., Statistical Mechanics of Learning from Examples), it was only recently that empirical research into scaling deep learning models became widely known and incorporated into practice. Previous theoretical analyses often predicted that the test loss would decrease as a power law of training data, L = B·D^(−b) + E, with exponent b = 1 or 1/2. However, this clashes with empirical results, in which the scaling exponent is usually smaller than 1/2. The modern study of scaling laws arguably started with Hestness et al. (2017), who empirically identified power-law scaling of the test loss with respect to training data size in several different domains. In Hestness et al.
(2019) this previous result was used to predict the increases in model and dataset sizes that would be needed to reach important performance milestones. Shortly after, Rosenfeld et al. (2020) constructed a joint error function with respect to model and data sizes, given by L = A·N^(−a) + B·D^(−b) + E, and showed that this form could accurately reproduce the empirical error landscape. During 2020 and 2021, our understanding of scaling laws was greatly expanded: Henighan et al. (2020) found scaling laws for more tasks and architectures. Kaplan et al. (2020...
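To make the functional forms above concrete, here is a small, hedged Python sketch that fits the single-variable form L = B·D^(−b) + E to synthetic losses; the numbers are invented for illustration and are not data from any of the cited papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(D, B, b, E):
    """Saturating power law in dataset size D: L = B * D**(-b) + E."""
    return B * D ** (-b) + E

# Hypothetical synthetic losses following the law above, with a little multiplicative noise.
rng = np.random.default_rng(0)
D = np.logspace(3, 9, 20)  # dataset sizes from 1e3 to 1e9 tokens/examples
L = scaling_law(D, 120.0, 0.28, 1.7) * (1 + 0.01 * rng.normal(size=D.size))

params, _ = curve_fit(scaling_law, D, L, p0=(10.0, 0.3, 1.0), bounds=(0, np.inf))
print("fitted (B, b, E):", np.round(params, 3))  # should recover roughly (120, 0.28, 1.7)
```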
One way to frame the science of complexity is as a revelation of the hidden order under seemingly separate phenomena — a teasing-out of music from the noise of history and nature. This effort follows centuries of work to find the rules that structure language, music, and society. How strictly analogous are the patterns governing a symphony and those that describe a social transformation? Math and music are old friends, but new statistical and computational techniques afford the possibility of going even deeper. What fundamental insights — and what sounds — emerge by bringing physicists, composers, social scientists, data artists, and biologists together? Welcome to COMPLEXITY, the official podcast of the Santa Fe Institute. I'm your host, Michael Garfield, and every other week we'll bring you with us for far-ranging conversations with our worldwide network of rigorous researchers developing new frameworks to explain the deepest mysteries of the universe. This week on Complexity, we sit with two of SFI's External Professors — Miguel Fuentes at the Argentine Society for Philosophical Analysis and the Institute of Complex Systems of Valparaiso, and Marco Buongiorno Nardelli at the University of North Texas — for a discussion that roams from their working group on the complexity of music, to fundamental questions about the nature of emergence, to how we might bring all of these ideas together to think about social transformation as a kind of music in its own right. A show that spends so much time exploring sense and nonsense would hardly be complete without technical errors, so please accept our apologies for losing some of Miguel's backstory to a recording glitch. For this reason, be extra sure to check out our extensive show notes with links to all our references at complexity.simplecast.com. Note that applications are now open for our Complexity Postdoctoral Fellowships!
If you value our research and communication efforts, please subscribe, rate and review us at Apple Podcasts or Spotify, and consider making a donation — or finding other ways to engage with us — at santafe.edu/engage. Thank you for listening! Join our Facebook discussion group to meet like minds and talk about each episode. Podcast theme music by Mitch Mignano. Follow us on social media: Twitter • YouTube • Facebook • Instagram • LinkedIn. Referenced in this episode: An ‘integrated mess of music lovers in science' on the 2020 Music & Complexity SFI Working Group (with YouTube playlist of talks); Expanding our understanding of musical complexity on the 2022 Music & Complexity SFI Working Group; Topology of Networks in Generalized Musical Spaces by Marco Buongiorno Nardelli; Tonal harmony and the topology of dynamical score networks by Marco Buongiorno Nardelli; a computer-aided data-driven composition environment for the sonification and dramatization of scientific data streams by Marco Buongiorno Nardelli; Machines that listen: towards a machine listening model based on perceptual descriptors by Marco Buongiorno Nardelli, Mitsuko Aramaki, Sølvi Ystad, and Richard Kronland-Martinet; Does network complexity help organize Babel's library? by Juan Pablo Cárdenas, Iván González, Gerardo Vidal, and Miguel Fuentes; Complexity and the Emergence of Physical Properties by Miguel Fuentes; The Structure of Online Information Behind Social Crises by Juan Pablo Cárdenas, Gastón Olivares, Gerardo Vidal, Carolina Urbina and Miguel Fuentes; 88 - Aviv Bergman on The Evolution of Robustness and Integrating The Disciplines (Complexity Podcast); 86 - Dmitri Tymoczko on The Shape of Music: Mathematical Order in Western Tonality (Complexity Podcast); 81 - C. Brandon Ogbunu on Epistasis & The Primacy of Context in Complex Systems (Complexity Podcast); 67 - Tyler Marghetis on Breakdowns & Breakthroughs: Critical Transitions in Jazz & Mathematics (Complexity Podcast); 36 - Geoffrey West on Scaling, Open-Ended Growth, and Accelerating Crisis/Innovation Cycles: Transcendence or Collapse? (Part 2) (Complexity Podcast); 27 - COVID-19 & Complex Time in Biology & Economics with David Krakauer (Transmission Series Ep. 2) (Complexity Podcast); Ignorance, Failure, Uncertainty, and the Optimism of Science by Stuart Firestein (SFI Community Lecture); SFI's Operating Principles by Cormac McCarthy.
November 21, 1996 – Santa Barbara, California, USA Anik Hassan and his two classmates had been grinding through their latest Statistical Mechanics homework for five straight hours. When he first started as a graduate student at the University of California, Santa Barbara, Anik had mostly worked alone, but he quickly realized he could get … The post Tortilla Basketball first appeared on 500 Ironic Stories.
Episode: 2092 Is thermodynamics a science, or it is only derived from physics? Today, we wonder what makes a science.
This week we welcome Dr. Jay L. Nadeau to the Weekly Space Hangout. Jay is an Associate Professor in the Physics department at Portland State University and the founder of the Nadeau Lab (https://motility.research.pdx.edu/index.html) where they research nanoparticles, fluorescence imaging, and develop instrumentation for the detection of life elsewhere in the solar system. Prior to PSU, she was associate professor of biomedical engineering and physics at McGill University (2004–2015) and a Research Professor at Caltech (2015-2017). Before McGill, she was a member of the Jet Propulsion Laboratory's Center for Life Detection, and previous to that a Burroughs-Wellcome postdoctoral scholar in the laboratory of Henry A. Lester at Caltech. She received her PhD in physics from the University of Minnesota in 1996. The Nadeau Lab receives funding from NASA, the NSF, private foundations, and industry. The group features chemists, microbiologists, roboticists, physicists, and physician-scientists, all learning from each other and hoping to speak each other's language. A believer in bringing biology to physicists as well as physics to biologists, Jay teaches upper-division courses in Biophysics, Statistical Mechanics, and Quantum Mechanics. She also teaches Radiation Physics in the OHSU Medical Physics program. She is the author of two textbooks, Introduction to Experimental Biophysics and Truly Tricky Graduate Physics Problems, and a popular science photo book, Going To MARS: Science in Canada's High Arctic. Jay is committed to fostering diversity and inclusion in STEM, and seeks to broaden the pipeline into physics by creating new courses and programs that are friendly to people who are coming to physics after other careers or majors, to working professionals including teachers, and to underrepresented students. She is a member of the International Society for Optics and Photonics (SPIE), the American Association of Physics Teachers (AAPT), the American Physical Society, and the American Chemical Society. **************************************** The Weekly Space Hangout is a production of CosmoQuest. Want to support CosmoQuest? Here are some specific ways you can help: ► Subscribe FREE to our YouTube channel at https://www.youtube.com/c/cosmoquest ► Subscribe to our podcasts Astronomy Cast and Daily Space where ever you get your podcasts! ► Watch our streams over on Twitch at https://www.twitch.tv/cosmoquestx – follow and subscribe! ► Become a Patreon of CosmoQuest https://www.patreon.com/cosmoquestx ► Become a Patreon of Astronomy Cast https://www.patreon.com/astronomycast ► Buy stuff from our Redbubble https://www.redbubble.com/people/cosmoquestx ► Join our Discord server for CosmoQuest - https://discord.gg/X8rw4vv ► Join the Weekly Space Hangout Crew! - http://www.wshcrew.space/ Don't forget to like and subscribe! Plus we love being shared out to new people, so tweet, comment, review us... all the free things you can do to help bring science into people's lives.
The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
https://youtu.be/wADm5xRgLNc Host: Fraser Cain ( @fcain )Special Guest: This week we welcome Dr. Jay L. Nadeau to the Weekly Space Hangout. Jay is an Associate Professor in the Physics department at Portland State University and the founder of the Nadeau Lab (https://motility.research.pdx.edu/ind...) where they research nanoparticles, fluorescence imaging, and develop instrumentation for the detection of life elsewhere in the solar system. Prior to PSU, she was associate professor of biomedical engineering and physics at McGill University (2004–2015) and a Research Professor at Caltech (2015-2017). Before McGill, she was a member of the Jet Propulsion Laboratory’s Center for Life Detection, and previous to that a Burroughs-Wellcome postdoctoral scholar in the laboratory of Henry A. Lester at Caltech. She received her PhD in physics from the University of Minnesota in 1996. The Nadeau Lab receives funding from NASA, the NSF, private foundations, and industry. The group features chemists, microbiologists, roboticists, physicists, and physician-scientists, all learning from each other and hoping to speak each other’s language. A believer in bringing biology to physicists as well as physics to biologists, Jay teaches upper-division courses in Biophysics, Statistical Mechanics, and Quantum Mechanics. She also teaches Radiation Physics in the OHSU Medical Physics program. She is the author of two textbooks, Introduction to Experimental Biophysics and Truly Tricky Graduate Physics Problems, and a popular science photo book, Going To MARS: Science in Canada’s High Arctic. Jay is committed to fostering diversity and inclusion in STEM, and seeks to broaden the pipeline into physics by creating new courses and programs that are friendly to people who are coming to physics after other careers or majors, to working professionals including teachers, and to underrepresented students. She is a member of the International Society for Optics and Photonics (SPIE), the American Association of Physics Teachers (AAPT), the American Physical Society, and the American Chemical Society. Regular Guests: Dr. Moiya McTier ( https://www.moiyamctier.com/ & @GoAstroMo ) Dave Dickinson ( http://astroguyz.com/ & @Astroguyz ) Pam Hoffman ( http://spacer.pamhoffman.com/ & http://everydayspacer.com/ & @EverydaySpacer ) This week's stories: - A lunar satellite constellation. - Last night's lunar eclipse! - Another eclipse! Annular this time. - Europa may have deep sea volcanoes. We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you! ------------------------------------ Do go visit http://astrogear.spreadshirt.com/ for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness! http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!) The 365 Days of Astronomy Podcast is produced by Astrosphere New Media. http://www.astrosphere.org/ Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.
What is entropy? Why is this concept unintuitive to grasp yet still immensely important? Why don't flies stay in just one corner of a room, but instead fill it evenly? Why does an ice cube in a glass of warmer water melt, and why doesn't the foam on a coffee last long? Why do we age, and why does time flow in only one direction anyway? Once again with scientist Jaro and philosopher Jakub. In the conversation we mentioned or recommend: What is entropy? - Jeff Phillips (TED-ed), https://bit.ly/3eFj3gy Philosophy of Statistical Mechanics, 5. Irreversibility (SEP), https://stanford.io/3w7gEBp Theodore Sider, Writing the Book of the World (2011), https://bit.ly/3hrQTHJ The Dance of Life: Symmetry, Cells and How We Become Human (2021), https://bit.ly/3odZ6kn *** Are our discussions a stone in your shoe? Keep our podcast alive by supporting it via PATREON.com/kvantumidei or directly at SK15 8360 5207 0042 0502 4979
On today's ID the Future, host Eric Anderson sits down with Canceled Science author and physicist Eric Hedin to discuss Hedin's new book and, in particular, the book's take on the origin-of-life problem. Hedin says the second law of thermodynamics poses a serious problem for the idea of a mindless origin of the first single-celled organism from prebiotic materials. Such an event would have involved a breathtaking increase in new information, and Hedin says that physics tells us pretty clearly that mindless nature degrades information; it doesn't create it. Are there workarounds? Listen in as he explains why he's not optimistic. And grab a copy of his new book to get his extended take.
We're unbelievably honored and excited to have the legendary Prof. Katja Lindenberg on the show to take us through her life in physics and physical chemistry. We talk to Prof. Lindenberg about her early life and education, her transition from mathematics to physics, and of course her incredible 50-year career at UCSD, during which she used "the language of science" (statistical mechanics) to describe problems involving random processes in nearly every discipline you could possibly imagine. We also get to hear a few stories about some of her first friends and mentors at UCSD, Maria Goeppert Mayer and her husband Joe Mayer!
As a follow-up to our series on thermodynamics, the briefest of introductions to one of the most fascinating and beautiful areas of physics - Statistical Mechanics.
Episode: 1893 Satyendranath Bose, quantum physics pioneer and Indian hero. Today, Satyendranath Bose.
2019 Arnold Sommerfeld School: the Physics of Life
One in a series of talks from the 2019 Models of Consciousness conference. Ramón Guevara Erra, Integrative Neuroscience and Cognition Center (UMR 8002), CNRS and Université Paris Descartes, Paris, France. It has been argued that consciousness could be an emergent property of large neuronal networks, associated to the integration of information in the brain. However, it is not yet clear how consciousness is related to the complexity of functional brain networks. Based on a statistical mechanics approach, we sought to identify features of brain organization that are optimal for sensory processing, and that may guide the emergence of consciousness, by analyzing neurophysiological recordings in conscious and unconscious states. We find a surprisingly simple result: Normal wakeful states are characterized by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values. Therefore, the information content is larger in the network associated to conscious states, suggesting that consciousness could be the result of an optimization of information processing. These findings help to guide, in a more formal sense, inquiry into how consciousness arises from the organization of matter. Filmed at the Models of Consciousness conference, University of Oxford, September 2019.
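One simple way to read "the number of possible configurations of interactions between brain networks" is to count the ways k synchronized pairs can be chosen from the N(N − 1)/2 possible pairs of recording sites and take the logarithm of that count as the entropy. The Python sketch below is an illustrative reading of that idea, not necessarily the exact pipeline used in the talk, and the numbers of regions and connected pairs are made up.

```python
from math import comb, log

def configuration_entropy(n_regions, n_connected_pairs):
    """Entropy = log of the number of ways to place the connected pairs among all possible pairs."""
    possible_pairs = n_regions * (n_regions - 1) // 2
    return log(comb(possible_pairs, n_connected_pairs))

# With N = 20 regions there are 190 possible pairs; the entropy peaks when roughly
# half of them are connected, mirroring the "greatest number of configurations" claim.
N = 20
for k in (5, 47, 95, 142, 185):
    print(f"{k:3d} connected pairs -> entropy {configuration_entropy(N, k):6.2f} nats")
```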
Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders
Mike serves as Head of Data Science at Uber ATG and lecturer for the UC Berkeley iSchool Data Science master's program. Mike has led several teams of Data Scientists in the Bay Area as Chief Data Scientist for InterTrust and Takt, Director of Data Sciences for MetaScale, and Chief Science Officer for Galvanize, where he oversaw all data science product development and created the MS in Data Science program in partnership with UNH. Mike began his career in academia serving as a mathematics teaching fellow for Columbia University and graduate student at the University of Pittsburgh. His early research focused on developing the epsilon-anchor methodology for resolving both an inconsistency he highlighted in the dynamics of Einstein's general relativity theory and the convergence of "large N" Monte Carlo simulations in Statistical Mechanics' universality models of criticality phenomena. In this episode, Michael talks about how he accidentally got into data and his work with simulation. Then, Michael discusses his background in data science product development and data science education. He reveals all the mistakes he made with his transition from academics to industry. Later, Michael tells us what attracted him to data science education and how he balances industry projects with his teachings. Rapid growth is a challenge with technology management because your skillset will get rusty as the technology advances. Lastly, Michael talks fake news, bootstrapping, and Fake or Fact. In This Episode: [00:20] Michael accidentally got into data [02:15] About Michael Tamir [03:40] Transition to industry [06:40] Software engineering challenges [08:45] Data Science Education [15:15] Adaptive learning [17:15] Team management [19:05] Challenges with rapid growth [24:25] Fake news [27:25] Toughest challenge [28:50] Fake or Fact [31:20] Listener questions Mike's quotes from the episode: "You have to be really careful about what you do and what you do not teach in order to make sure students are successful in the long-term." "Decisions are going to be best made by those who are closest to the ground." "You're not going to be the expert in every group you are managing." "I take full responsibility for any failures with the algorithm." "Most of my time is spent on my day job." "Find out what you enjoy about data science skills; find the role that is looking for those skills." "I enjoy the science and making sure we are asking the questions in a scientifically sound way." Connect: Twitter - https://twitter.com/MikeTamir LinkedIn – https://www.linkedin.com/in/miketamir/ Website - http://www.fakeorfact.org Now you can support Data Futurology on Patreon! https://www.patreon.com/datafuturology Thank you to our sponsors: UNSW Master of Data Science Online: studyonline.unsw.edu.au Datasource Services: datasourceservices.com.au or email Will Howard on will@datasourceservices.com.au Fyrebox - Make Your Own Quiz! And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. Thank you so much for listening. Enjoy the show! --- Send in a voice message: https://anchor.fm/datafuturology/message
Mike Tamir is the Head of Data Science at Uber ATG. He is a leader in data science, specializing in deep learning and distributed scalable machine learning, and he's also a faculty member at UC Berkeley. Mike has led several teams of Data Scientists in the San Francisco Bay Area as Chief Data Scientist for InterTrust and Formation, Director of Data Sciences for MetaScale, and Chief Science Officer for Galvanize, where he oversaw all data science product development. He also created an MS degree program in Data Science in partnership with UNH. Mike began his career in academia serving as a mathematics teaching fellow for Columbia University and graduate student at the University of Pittsburgh. His early research focused on developing the epsilon-anchor methodology for resolving both an inconsistency he highlighted in the dynamics of Einstein's general relativity theory and the convergence of "large N" Monte Carlo simulations in Statistical Mechanics' universality models of criticality phenomena. The focus of today's conversation was on his fake news detection AI project called Faker Fact. Show notes: 0:00 First, a life update from AJ. Read about his new opportunity in Portland here on his blog. 5:28 What is the evolutionary explanation for why a human's capacity for careful, rational thought often takes a back seat to emotion? Explained in a comic on the project website. 6:17 Emotions often win over rational thought, but as a result, it can be difficult to think clearly on issues we're passionate about. 7:05 Why people should be aware of their emotional biases, even though it's not our fault that we have them. 7:50 Why Facebook deleted over a billion fake accounts recently, and why fake accounts, clickbait, blatantly false content, and other forms of fake news are everywhere on social media. 9:10 What mechanisms can we put in place to counterbalance the parts of our nature that compel us to create and engage with content on an emotional level? 9:51 Since a majority of our information is second-hand, how do we distinguish what's really true? 11:44 How did Mike become motivated to pursue this problem, on top of his full time job at Uber ATG? 12:45 How can we tackle "fake news" without censorship? 16:40 Post-Walter Cronkite era, how do we create a sense of credibility and neutrality in our information? 21:00 Why would it be a mistake if the algorithm learned to only classify right or left wing content as fake news? 22:19 The algorithm only looks at the title and words on a page, not the URL. 23:15 How Walt (the FakerFact AI) classifies different types of content. Satire, journalism, etc. 26:46 How do you strike the balance of entertainment and informativeness in content? 31:10 What features and characteristics define each different category of content that Walt identifies? 36:16 What is Walt's ideal use case? 36:55 You can use the FakerFact Chrome extension to view the "nutrition facts" of the page you're reading. 37:42 How does research on run-on sentences and other grammatical choices help Walt understand and score an article? 40:34 What techniques were used to train the Walt AI? 42:41 A discussion on the use of wisdom of the crowds in algorithms. 45:30 What makes it difficult to use the wisdom of the crowds when answers are too closely correlated (because of political affiliations or the news cycle?) 46:47 Visit Humanetech.com for tips on regulating your daily notifications and escaping the "24-hour news cycle" to prevent media from controlling your emotions. 50:15 Rapid fire questions!
52:27 Mike's advice to his 20-year-old self. 52:40 What was his best investment in himself? 53:18 The Deep Learning Book, a starting point for basic literacy in data science. 53:20 Mike, like lots of guests on this show, makes a distinction between things he believes but couldn't prove right now, and believing things for no good reason. Show Notes: https://ajgoldstein.com/podcast/ep22 AJ's Twitter: https://twitter.com/ajgoldstein393/ Mike's LinkedIn: https://www.linkedin.com/in/miketamir/ Mike's Twitter: https://twitter.com/MikeTamir
Finally, our sixth episode! After a forced hiatus we are so happy to be back and giving you this wonderful ep! We start off by giving you some updates about what we've been doing since the last episode and some exciting updates about Canadian women in science! Then we learn about Ada Lovelace and why she is still honoured today on Ada Lovelace Day! We interview a fun and inspiring mathematician Dr. Kay Kirkpatrick who shared wonderful stories about her path through science. Finally, we talk about an organization we have wanted to share for a while, Black Girls Code! facebook: https://www.facebook.com/SuperwomenSci/ twitter: https://twitter.com/SuperwomenSci website: www.superwomeninscience.wordpress.com Ada Lovelace podcast: https://findingada.com/blog/2017/08/29/ep-16-fire-engineering-liquid-paper-and-understanding-memory-and-cognition/ Perceptions of scientists blog post: https://girlsinstemblog.wixsite.com/blog/single-post/2017/08/22/Perceptions-of-Scientists Wings World Quest list: http://www.wingsworldquest.org/blog/2017/9/29/5-women-in-stem-podcasts-to-check-out?utm_content=bufferb50b8&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer Make Change Conference: https://l.facebook.com/l.php?u=http%3A%2F%2Fmakechangeconference.com%2F&h=ATM1MUHr_fmwCcyNMZZqD51CNTXJ-PkkrmU5wY25bu8fsX6JQlxN6YidjxFOlKUHyHA8WDANjEb75fIAAm5UNko1yqmDcp0OPuXaq45rNSn9STULD5FgRIP5MIHq-zrJEiuAevNeJNS2y7ZJNiHVNYTPuvw https://www.facebook.com/events/1759315511028580/?__mref=mb https://www.eventbrite.ca/e/make-change-conference-2017-tickets-37917079004 Dr. Mona Nemer: https://evidencefordemocracy.ca/en/worth-the-wait http://www.cbc.ca/news/technology/mona-nemer-new-chief-scientist-1.4308262 Rachel Ignotofsky Women in Science: https://www.amazon.ca/Women-Science-Fearless-Pioneers-Changed/dp/1607749769 Dr. Kay Kirkpatrick https://faculty.math.illinois.edu/~kkirkpat/personal.html https://twitter.com/kay314159 Reading suggestion! https://www.amazon.ca/Too-Much-Happiness-Alice-Munro/dp/0143170996 Black Girls Code: http://www.blackgirlscode.com/ https://twitter.com/BlackGirlsCode http://www.blackgirlscode.com/donations.html
This week welcome to the show Dr. David Hamilton. David has a first class honors degree in chemistry, specializing in biological and medicinal chemistry. He has the slightly geeky honour of having achieved 100% in his 3rd year university degree exam in ‘Statistical Mechanics’, which is a branch of quantum physics applied to chemistry. After completing his PhD, he worked for 4 years in the pharmaceutical industry, first developing drugs for cardiovascular disease and cancer, then a year managing leadership and change projects. During this time he also served as an athletics coach and manager of Sale Harriers Manchester, one of the UK’s largest athletics clubs, leading the Junior Men’s team to three successive UK finals. Upon leaving the pharmaceutical industry, he co-founded the international relief charity Spirit Aid Foundation and served as a director for 2 years. While writing his first book (2004-2005), he taught chemistry (main) plus ecology and mathematics (secondary) at West College Scotland (formerly, James Watt College of Further and Higher Education) and tutored chemistry at the University of Glasgow. He’s now a bestselling author of 8 books (No’s 9 and 10 are due out in 2017) published by Hay House UK, and offer talks and workshops that use science to inspire – fusing neuroscience, the mind-body connection, kindness, and philosophical and eastern spiritual teachings. He also writes a regular blog on his website as well as occasional blogs for the Huffington Post (US edition) and Psychologies Life Labs, and is a columnist for Soul and Spirit Magazine. In 2016, David won the Kindred Spirit, ‘Best MBS Writer Award’. He has been featured in numerous publications, including ELLE, RED Magazine, Psychologies, YOU Magazine, Good Housekeeping (both UK & US), and several newspapers. You might wonder how he got into writing on the subjects he does. In his own words, “Well, during my time in the pharmaceutical industry, I was fascinated by the placebo effect – how people improve through believing they are receiving a drug – so I began to study mind-body interactions in my spare time. I decided to leave the pharmaceutical industry after 4 years because I wanted to educate about the mind-body connection, help people to believe in themselves more, and spread a little more kindness in the world in my own way.” Questions we ask in this episode: You specialized in biological and medicinal chemistry. Why did you decide to look at the placebo effect? How powerful do you think the placebo really is? What inspired you to write your new book, The 5 Side Effects of Kindness? What are the 5 side effects of kindness? Can you tell this listeners what "molecules of kindness are?" Shop: http://shop.180nutrition.com.au/ We have on the show Dr David Hamilton, and I have to say I truly loved every minute of this show. Now David has a first class honors degree in chemistry, specializing in biological and medicinal chemistry. After completing his PhD he worked for four years in the pharmaceutical industry, first developing drugs for cardiovascular disease and cancer, then a year managing leadership and change projects. Now he is also a best selling author of 8 books published by Hay House in the UK, and offers talks and workshops that use science to inspire, fusing neuroscience, the mind body connection, kindness and a philosophical and eastern spiritual teaching, so yes you can see why I loved this episode so much. 
[00:01:30] [00:02:00] And what I loved about David's journey, A) he is just an upbeat positive awesome dude, but from that as well we get stuck into the placebo at first and what he was seeing from back in his pharmaceutical days, and comparing that to the actual drugs people were taking, which is just fascinating, and then we get into his new book, the Side Effects of Kindness, and he talks at it from a biological and physical perspective as well of what is going on, and it was just fascinating, and yeah it was awesome, so if you want to do an act of kindness after listening to this show, feel free to leave us a review on iTunes, or actually share this episode with someone that you think might really appreciate it. Because there is a lot in here and I have no doubt you will enjoy it as well. Now you might have heard me mention on a couple of podcasts in the past, I've personally myself set up a closed Facebook group and the group is designed, if you kind of enjoy this information and this content, me personally I have a lot of interest in meditation, neuroscience and the changes that are going on within myself and my life, and it has had a huge positive impact on me over the last 3 or 4 years as I have been exploring a lot of it. [00:02:30] Anyway I have created this Facebook group called Guy Lawrence Let It In, and if this kind of stuff inspires you, you enjoy it and you want to hang out with like minded people, come on over and, yeah I will welcome you into the group and come and introduce yourself and let me know you found it on the podcast. Anyway, so I just want to mention that. It is Guy Lawrence Let It In, just do a Facebook search. Anyway lets go over to Dr David Hamilton, this episode is awesome. Hi this is Guy Lawrence, I am joined with Stuart Cook as always, good afternoon Stu. Stu Hello mate. Guy And our awesome guest today all the way from the UK, is Dr David Hamilton. David, welcome to the show. David Oh well, thanks very much, nice to be here. [00:03:00] Guy It is a pleasure mate, and we ask one little question just to kick start the show to all our guests, and that is if a complete stranger stopped you on the street and asked you what you did for a living, what would you say? David I write books and I give talks. In a nutshell. I would probably tell them the kind of topic that I wrote on but it depends on how ... if we were moving fast in opposite directions. [00:03:30] Full Transcript & Video Version: http://180nutrition.com.au/180-tv/david-hamilton-interview/
In this classic GameTek Geoff continues his exploration of thermodynamics by talking about Statistical Mechanics and what it tells us about the world. Duration: 7:05
Among the very many puzzling aspects of the physical world is this: how do we explain the fact that the laws of thermodynamics are time-asymmetric while those of statistical mechanics are time-symmetric? If the fundamental physical laws do not require events to occur in any particular temporal direction, why do we observe a world in which, for example, we will always see milk dispersing in tea but never coming together in tea – at least not unless we film the dispersal and then run the film backwards? In The Road to Maxwell's Demon: Conceptual Foundations of Statistical Mechanics (Cambridge University Press, 2012), Meir Hemmo of the University of Haifa and Orly Shenker of the Hebrew University of Jerusalem provide a fascinating and accessible defense of the position that the laws of thermodynamics are observer-relative, that the evolutions of physical microstates in classical mechanics have a direction of time but no determinate direction, and that the relation between observers and the dynamics determines the direction of time that we observe and capture in our thermodynamical laws. In consequence, they argue, it's just a contingent fact that we remember the past rather than the future, and Maxwellian Demons – perpetual motion machines that can exploit more and more energy while putting in less and less work – are possible.