Podcasts about Markov chain Monte Carlo

Class of dependent sampling algorithms

  • 18 podcasts
  • 30 episodes
  • 35m avg. duration
  • Infrequent episodes
  • Latest episode: Nov 30, 2022
Markov chain Monte Carlo

POPULARITY (chart, 2017-2024)


Best podcasts about Markov chain Monte Carlo

Latest podcast episodes about Markov chain Monte Carlo

Astro arXiv | all categories
Constraints on dark matter annihilation and decay from the large-scale structure of the nearby universe

Nov 30, 2022 · 0:52


Constraints on dark matter annihilation and decay from the large-scale structure of the nearby universe by Deaglan J. Bartlett et al. on Wednesday 30 November Decaying or annihilating dark matter particles could be detected through gamma-ray emission from the species they decay or annihilate into. This is usually done by modelling the flux from specific dark matter-rich objects such as the Milky Way halo, Local Group dwarfs, and nearby groups. However, these objects are expected to have significant emission from baryonic processes as well, and the analyses discard gamma-ray data over most of the sky. Here we construct full-sky templates for gamma-ray flux from the large-scale structure within $\sim$200 Mpc by means of a suite of constrained $N$-body simulations (CSiBORG) produced using the Bayesian Origin Reconstruction from Galaxies algorithm. Marginalising over uncertainties in this reconstruction, small-scale structure, and parameters describing astrophysical contributions to the observed gamma-ray sky, we compare to observations from the Fermi Large Area Telescope to constrain dark matter annihilation cross sections and decay rates through a Markov Chain Monte Carlo analysis. We rule out the thermal relic cross section for $s$-wave annihilation for all $m_\chi \lesssim 7\,\mathrm{GeV}/c^2$ at 95% confidence if the annihilation produces gluons or quarks less massive than the bottom quark. We infer a contribution to the gamma-ray sky with the same spatial distribution as dark matter decay at $3.3\sigma$. Although this could be due to dark matter decay via these channels with a decay rate $\Gamma \approx 6 \times 10^{-28}\,\mathrm{s^{-1}}$, we find that a power-law spectrum of index $p=-2.75^{+0.71}_{-0.46}$, likely of baryonic origin, is preferred by the data. arXiv: http://arxiv.org/abs/2205.12916v2

Astro arXiv | all categories
Photometry and transit modelling of exoplanet WASP-140b

Sep 22, 2022 · 0:42


Photometry and transit modelling of exoplanet WASP-140b by Allen North et al. on Thursday 22 September Eleven transit light curves for the exoplanet WASP-140b were studied with the primary objective to investigate the possibility of transit timing variations (TTVs). Previously unstudied MicroObservatory and Las Cumbres Global Telescope Network photometry were analysed using Markov Chain Monte Carlo techniques, including new observations collected by this study of a transit in December 2021. No evidence was found for TTVs. We used two transit models coupled with Bayesian optimization to explore the physical parameters of the system. The radius for WASP-140b was estimated to be $1.38^{+0.18}_{-0.17}$ Jupiter radii, with the planet orbiting its host star in $2.235987 \pm 0.000008$ days at an inclination of $85.75 \pm 0.75$ degrees. The derived parameters are in formal agreement with those in the exoplanet discovery paper of 2016, and somewhat larger than a recent independent study based on photometry by the TESS space telescope. arXiv: http://arxiv.org/abs/2209.10582v1

PaperPlayer biorxiv neuroscience
Brain glucose metabolism and ageing: A 5-year longitudinal study in a large PET cohort

Sep 17, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.09.15.508088v1?rss=1 Authors: Pak, K., Malen, T., Santavirta, S., Shin, S., Nam, H. Y., De Maeyer, S., Nummenmaa, L. Abstract: Background: Ageing and clinical factors impact brain glucose metabolism. However, there is a substantial variation of the reported effects on brain glucose metabolism across studies due to the limited statistical power and cross-sectional study designs. Methods: We retrospectively analyzed data from 441 healthy males (mean 42.8, range 38-50 years) who underwent a health check-up program twice, at baseline and 5-year follow-up. The health check-up program included 1) brain 18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET), 2) anthropometric and body composition measurements, 3) blood samples, and 4) questionnaires for stress and depression. After spatial normalization of brain FDG PET scans, the standardized uptake value ratio (SUVR) was measured from 12 regions of interest. We used hierarchical clustering analysis to reduce their dimensionality before the Bayesian hierarchical modelling. Five clusters were established for predicting regional SUVR: 1) metabolic cluster (body mass index, waist-to-hip ratio, fat percentage, muscle percentage, homeostatic model assessment index-insulin resistance), 2) blood pressure (systolic, diastolic), 3) glucose (fasting plasma glucose level, HbA1c), 4) psychological cluster (stress, depression), and 5) heart rate. The effects of clinical variable clusters on regional SUVR were investigated using Bayesian hierarchical modelling with brms, which applies Markov Chain Monte Carlo sampling tools. Results: All the clinical variables except depression changed during the 5-year follow-up. SUVR decreased in the caudate, cingulate, frontal lobe and parietal lobe and increased in the cerebellum, hippocampus, occipital lobe, pallidum, putamen, temporal lobe and thalamus. SUVRs of the thalamus, pallidum, hippocampus, putamen and parietal lobe were negatively associated with the metabolic cluster, and the effects of glucose on SUVRs varied across regions. SUVRs of the thalamus, hippocampus, cingulate and cerebellum increased, and that of the occipital lobe decreased, with heart rate. The effects of blood pressure and the psychological cluster markedly overlapped with zero across regions. Conclusion: Regionally selective decline in brain glucose utilization begins already in middle age, while individual differences in brain glucose metabolism remain stable. In addition to ageing, brain glucose utilization is also associated with the metabolic cluster, blood glucose levels and heart rate. These effects are also consistent over the studied period of 5 years in middle adulthood. Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer

Advent of Computing
Episode 87 - The ILLIAC Suite

Jul 24, 2022 · 68:09


Can a computer be creative? Can we program a machine to make art? It turns out the answer is yes, and it doesn't even take artificial intelligence. This episode we are diving into the ILLIAC Suite, a piece for string quartet that was composed by a computer. Along the way we will examine the Markov Chain Monte Carlo method, and how methods used to create the hydrogen bomb were adapted to create music. Selected Sources: https://archive.org/details/experimentalmusi0000hill/page/n5/mode/1up - Experimental Music https://web.archive.org/web/20171107072033/http://www.computing-conference.ugent.be/file/12 - Algoryhythmic Listening (page 40) https://www.youtube.com/playlist?list=PLEb-H1Xb9XcIyrrN5qauFr2KAolSbPi0c - The ILLIAC Suite, in 4 parts

Seismic Soundoff
135: The new paradigms in seismic inversion

Dec 9, 2021 · 15:28


Miguel Bosch discusses his Honorary Lecture, "The new paradigms in seismic inversion." Miguel explains how elastic Full Waveform Inversion and the Markov Chain Monte Carlo approach improve seismic inversion, discusses if data analysis and machine learning are essential to practice inversion, and highlights new tools that will improve the accuracy of inversion. This conversation provides great value and insight into the essential work of inversion. RELATED LINKS * Watch Miguel's course: The New Paradigms in Seismic Inversion (https://www.knowledgette.com/p/the-new-paradigms-in-seismic-inversion) * Discover SEG on Demand (https://seg.org/Education/SEG-on-Demand) * The SEG podcast archive (https://seg.org/podcast) BIOGRAPHY Miguel Bosch's expertise is in the field of geophysical inversion with a focus on advanced seismic inversion methods and data integration in complex reservoir models. He has worked on inference problems at different earth scales. In the topic of oil and gas reservoir description, he develops services and technology for the upstream oil and gas industry. Miguel has supervised a large number of projects on seismic inversion, reservoir characterization, and integration, and developed advanced technology and software for these fields. His recent research involves focused Full Waveform Inversion and quantitative Knowledge Networks for data integration. He graduated with a Ph.D. in Geophysics from the Institut de Physique du Globe de Paris, working with Albert Tarantola, and was a full professor and Head of the Applied Physics Department at the Universidad Central de Venezuela. He is an active member of the SEG, AGU, EAGE, IAMG, AAPG, GSH and serves as associate editor in the area of reservoir geophysics for the journal GEOPHYSICS. He is presently the founder and CEO of Info Geosciences Technology and Services. SPONSOR This episode is brought to you by CGG. When you need accurate estimates of reservoir properties, it all comes down to the details. For more than 90 years, CGG has led the industry in advanced subsurface imaging, providing the best possible input for reservoir characterization. Our proprietary time-lag FWI technology provides detailed and robust velocity models and remarkable FWI imaging results in even the most complex geological settings. Better images, better knowledge, better outcomes: upgrade your reservoir imaging and see things differently with CGG. Visit https://www.cgg.com/ to learn more. CREDITS Original music by Zach Bridges. This episode was hosted, edited, and produced by Andrew Geary at 51 features, LLC. Thank you to the SEG podcast team: Ted Bakamjian, Kathy Gamble, and Ally McGinnis. You can follow the podcast to hear the latest episodes on Apple Podcasts, Google Podcasts, and Spotify

The History of Computing
The Von Neumann Architecture

Nov 12, 2021 · 12:24


John Von Neumann was born in Hungary at the tail end of the Austro-Hungarian Empire. The family was made a part of the nobility and, as a young prodigy in Budapest, he learned languages and by 8 years old was doing calculus. By 17 he was writing papers on polynomials. In his 1925 dissertation he added to set theory the axiom of foundation and the notion of class, or properties shared by members of a set. He worked on the minimax theorem in 1928, the proof of which established zero-sum games and started another discipline within math, game theory. By 1929 he published the axiom system that led to Von Neumann–Bernays–Gödel set theory. And by 1932 he'd developed foundational work on ergodic theory which would evolve into a branch of math that looks at the states of dynamical systems, where functions can describe a point's time dependence in space. And so he of course penned a book on quantum mechanics the same year. Did we mention he was smart? Given the way his brain worked it made sense that he would eventually gravitate into computing. He went to the best schools with other brilliant scholars who would go on to be called the Martians. They were all researching new areas that required more and more computing - then still done by hand or a combination of hand and mechanical calculators. The Martians included De Hevesy, who won a Nobel prize for Chemistry. Von Kármán got the National Medal of Science and a Franklin Award. Polanyi developed the theory of knowledge and the philosophy of science. Paul Erdős was a brilliant mathematician who published over 1,500 articles. Edward Teller is known as the father of the hydrogen bomb, working on nuclear energy throughout his life and lobbying for the Strategic Defense Initiative, or Star Wars. Dennis Gabor wrote Inventing the Future and won a Nobel Prize in Physics. Eugene Wigner also took home a Nobel Prize in Physics and a National Medal of Science. Leo Szilard took home an Albert Einstein award for his work on nuclear chain reactions and joined the Manhattan Project as a patent holder for a nuclear reactor. Physicists and brilliant scientists. And here's a key component to the explosion in science following World War II: many of them fled to the United States and other western powers because they were Jewish, to get away from the Nazis, or to avoid communists controlling science. And then there were Harsanyi, Halmos, Goldmark, Franz Alexander, Orowan, and John Kemeny, who gave us BASIC. They all contributed to the world we live in today - but von Neumann sometimes hid how smart he was, preferring not to show just how much arithmetic computed through his head. He was married twice and loved fast cars, fine food, bad jokes, and was an engaging and enigmatic figure. He studied measure theory and broke dimension theory into algebraic operators. He studied topological groups, operator algebra, spectral theory, functional analysis and abstract Hilbert space, geometry and lattice theory. As with other great thinkers, some of his work has stood the test of time and some has had gaps filled with other theories. And then came the Manhattan Project. Here, he helped develop explosive lenses - a key component to the nuclear bomb. Along the way he worked on economics and fluid mechanics. And of course, he theorized and worked out the engineering principles for really big explosions. 
He was a commissioner of the Atomic Energy Commission and, at the height of the Cold War after working out game theory, developed the concept of mutually assured destruction - giving the world hydrogen bombs and ICBMs and reducing the missile gap. Hard to imagine, but at the time the Soviets actually had a technical lead over the US, which was proven true when they launched Sputnik. As with the other Martians, he fought Communism and Fascism until his death - which won him a Medal of Freedom from then-president Eisenhower. His friend Stanislaw Ulam developed the modern Markov Chain Monte Carlo method and Von Neumann got involved in computing to work out those calculations. This, combined with where his research lay, landed him as an early power user of ENIAC. He actually heard about the machine at a station while waiting for a train. He'd just gotten home from England, and while we will never know if he knew of the work Turing was doing on Colossus at Bletchley Park, we do know that he offered Turing a job at the Institute for Advanced Study that he was running in Princeton before World War II, had read Turing's papers, including “On Computable Numbers”, and understood the basic concepts of stored programs - and breaking down the logic into zeros and ones. He discussed using ENIAC to compute over 333 calculations per second. He could do a lot in his head, but he wasn't that good of a computer. His input was taken, and when Eckert and Mauchly went from ENIAC to EDVAC, or the Electronic Discrete Variable Automatic Computer, the findings were published in a paper called “First Draft of a Report on the EDVAC” - a foundational paper in computing for a number of reasons. One is that Mauchly and Eckert had an entrepreneurial spirit and felt that not only should their names have been on the paper but that it was probably premature, and so they quickly filed a patent in 1945, even though some of what they told him that went into the paper helped to invalidate the patent later. They considered these trade secrets and didn't share in von Neumann's idea that information must be set free. In the paper lies an important contribution: Von Neumann broke down the parts of a modern computer. He set the information for how these would work free. He broke down the logical blocks of how a computer works into the modern era - how, once we strip away the electromechanical computers, a fully digital machine works. Inputs go into a Central Processing Unit, which has an instruction register, a clock to keep operations and data flow in sync, and a counter - it does the math. It then uses quick-access memory, which we'd call Random Access Memory, or RAM today, to make processing data instructions faster. And it would use long-term memory for operations that didn't need to be as highly available to the CPU. This should sound like a pretty familiar way to architect devices at this point. The result would be sent to an output device. Think of a modern Swift app for an iPhone - the whole of what the computer did could be moved into a single wafer once humanity worked out how first transistors and then multiple transistors on a single chip worked. Yet another outcome of the paper was to inspire Turing and others to work on computers after the war. Turing named his ACE, or Automatic Computing Engine, out of respect for Charles Babbage. That led to the addition of storage to computers. After all, punched tape was used for Colossus during the war, and punched cards and tape had been around for a while. 
It's ironic that we think of memory as ephemeral data storage and storage as more long-term storage. But that's likely more to do with the order these scientific papers came out than anything - and an homage to the impact each had. He'd write The Computer and the Brain, Mathematical Foundations of Quantum Mechanics, The Theory of Games and Economic Behavior, Continuous Geometry, and other books. He also studied DNA and cognition and weather systems, inferring we could predict the results of climate change and possibly even turn back global warming - which by 1950, when he was working on it, was already acknowledged by scientists. As with many of the early researchers in nuclear physics, he died of cancer - invoking Pascal's wager on his deathbed. He died in 1957 - just a few years too early to get a Nobel Prize in one of any number of fields. One of my favorite aspects of Von Neumann was that he was a lifelong lover of history. He was a hacker - bouncing around between subjects. And he believed in human freedom. So much so that this wealthy and charismatic pseudo-aristocrat would dedicate his life to the study of knowledge and public service. So thank you for the Von Neumann Architecture and breaking computing down into ways that it couldn't be wholesale patented too early to gain wide adoption. And thank you for helping keep the mutually assured destruction from happening and for inspiring generations of scientists in so many fields. I'm stoked to be alive and not some pile of nuclear dust. And to be gainfully employed in computing. He had a considerable impact in both.

Machine Learning Street Talk
#037 - Tour De Bayesian with Connor Tann

Jan 11, 2021 · 95:25


Connor Tann is a physicist and senior data scientist working for a multinational energy company where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University, with a master's in particle astrophysics. He specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational difficulties inherent in Bayesian methods along with modern methods for approximate solutions such as Markov Chain Monte Carlo. Finally, we discuss how Bayesian optimization in the context of automl may one day put Data Scientists like Connor out of work. Panel: Dr. Keith Duggar, Alex Stenlake, Dr. Tim Scarfe 00:00:00 Duggar's philosophical ramblings on Bayesianism 00:05:10 Introduction 00:07:30 small datasets and prior scientific knowledge 00:10:37 Bayesian methods are probability theory 00:14:00 Bayesian methods demand hard computations 00:15:46 uncertainty can matter more than estimators 00:19:29 updating or combining knowledge is a key feature 00:25:39 Frequency or Reasonable Expectation as the Primary Concept 00:30:02 Gambling and coin flips 00:37:32 Rev. Thomas Bayes's pool table 00:40:37 ignorance priors are beautiful yet hard 00:43:49 connections between common distributions 00:49:13 A curious Universe, Benford's Law 00:55:17 choosing priors, a tale of two factories 01:02:19 integration, the computational Achilles heel 01:35:25 Bayesian social context in the ML community 01:10:24 frequentist methods as a first approximation 01:13:13 driven to Bayesian methods by small sample size 01:18:46 Bayesian optimization with automl, a job killer? 01:25:28 different approaches to hyper-parameter optimization 01:30:18 advice for aspiring Bayesians 01:33:59 who would Connor interview next? Connor Tann: https://www.linkedin.com/in/connor-tann-a92906a1/ https://twitter.com/connossor

PaperPlayer biorxiv neuroscience
Bayesian Connective Field Modeling: a Markov Chain Monte Carlo approach.

Sep 3, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.03.281162v1?rss=1 Authors: Invernizzi, A., Haak, K. V., Carvalho, J., Renken, R., Cornelissen, F. Abstract: The majority of neurons in the human brain process signals from neurons elsewhere in the brain. Connective Field (CF) modeling is a biologically-grounded method to describe this essential aspect of the brain's circuitry. It allows characterizing the response of a population of neurons in terms of the activity in another part of the brain. CF modeling translates the concept of the receptive field (RF) into the domain of connectivity by assessing the spatial dependency between signals in distinct cortical visual field areas. Standard CF model estimation has some intrinsic limitations in that it cannot estimate the uncertainty associated with each of its parameters. Obtaining the uncertainty will allow identification of model biases, e.g. related to an over- or under-fitting or a co-dependence of parameters, thereby improving the CF prediction. To enable this, here we present a Bayesian framework for the CF model. Using a Markov Chain Monte Carlo (MCMC) approach, we estimate the underlying posterior distribution of the CF parameters and consequently, quantify the uncertainty associated with each estimate. We applied the method and its new Bayesian features to characterize the cortical circuitry of the early human visual cortex of 12 healthy participants who were assessed using 3T fMRI. In addition, we show how the MCMC approach enables the use of effect size (beta) as a data-driven parameter to retain relevant voxels for further analysis. Finally, we demonstrate how our new method can be used to compare different CF models. Our results show that single Gaussian models are favoured over differences of Gaussians (i.e. center-surround) models, suggesting that the cortico-cortical connections of the early visual system do not possess center-surround organisation. We conclude that our new Bayesian CF framework provides a comprehensive tool to improve our fundamental understanding of the human cortical circuitry in health and disease. Copyright belongs to the original authors. Visit the link for more info
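For readers who have not seen MCMC used this way, a minimal random-walk Metropolis sketch in Python shows how posterior samples turn into parameter uncertainties; the two-dimensional Gaussian toy posterior and the step size below are illustrative assumptions, not the CF model from the paper.

```python
import numpy as np

def log_post(theta):
    # Toy log-posterior: a standard normal in 2D. In CF modelling this would be
    # the log-likelihood of the fMRI data given the CF parameters plus the log-prior.
    return -0.5 * np.sum(theta ** 2)

def random_walk_metropolis(log_post, theta0, n_samples=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.size)
        lp = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp - current:
            theta, current = proposal, lp
        chain[i] = theta
    return chain

chain = random_walk_metropolis(log_post, theta0=np.zeros(2))
print(chain.mean(axis=0), chain.std(axis=0))  # posterior means and their uncertainties
```

The spread of the retained samples is exactly the per-parameter uncertainty that the standard (point-estimate) CF fit cannot provide.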

Learning Bayesian Statistics
#2 When should you use Bayesian tools, and Bayes in sports analytics, with Chris Fonnesbeck

Oct 22, 2019 · 43:37


When are Bayesian methods most useful? Conversely, when should you NOT use them? How do you teach them? What are the most important skills to pick up when learning Bayes? And what are the most difficult topics, the ones you should maybe save for later? In this episode, you’ll hear Chris Fonnesbeck answer these questions from the perspective of marine biology and sports analytics. Chris is indeed the New York Yankees’ senior quantitative analyst and an associate professor at Vanderbilt University School of Medicine. He specializes in computational statistics, Bayesian methods, meta-analysis, and applied decision analysis. He also created PyMC, a library to do probabilistic programming in Python, and is the author of several tutorials at PyCon and PyData conferences. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com! Links from the show: Chris on Twitter: https://twitter.com/fonnesbeck PyMC3, Probabilistic Programming in Python: https://docs.pymc.io/ Chris on GitHub: https://github.com/fonnesbeck An introduction to Markov Chain Monte Carlo using PyMC3 - PyData London 2019: https://www.youtube.com/watch?v=SS_pqgFziAg Introduction to Statistical Modeling with Python - PyCon 2017 - video: https://www.youtube.com/watch?v=TMmSESkhRtI Introduction to Statistical Modeling with Python - PyCon 2017 - code repo: https://github.com/fonnesbeck/intro_stat_modeling_2017 Bayesian Non-parametric Models for Data Science using PyMC3 - PyCon 2018: https://www.youtube.com/watch?v=-sIOMs4MSuA Statistical Data Analysis in Python: https://github.com/fonnesbeck/statistical-analysis-python-tutorial --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message
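For a flavour of the PyMC workflow discussed in the episode, here is a minimal sketch against the PyMC3 API linked above; the simulated data and the simple normal model are illustrative assumptions, not material from the show.

```python
import numpy as np
import pymc3 as pm

# Illustrative data: 50 noisy observations of an unknown mean.
rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=1.0, size=50)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)       # weakly informative prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)      # prior on the noise scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(1000, tune=1000, chains=2)   # MCMC (NUTS by default)

print(pm.summary(trace))                           # posterior means, intervals, diagnostics
```

The model block reads almost like the statistical notation, which is much of the appeal discussed in the interview.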

Misreading Chat
#68: Introduction to MCMC

Jul 25, 2019


In this episode, Morita pretends to know all about Markov Chain Monte Carlo.

Talking Machines
Interdisciplinary Data and Helping Humans Be Creative

May 7, 2015 · 34:17


In Episode 10 we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he’s helping to create at Columbia and why exploring data is inherently multidisciplinary. We learn about Markov Chain Monte Carlo and take a listener question about how machine learning can make humans more creative.

Data Skeptic
[MINI] Markov Chain Monte Carlo

Apr 3, 2015 · 15:50


This episode explores how going wine tasting could teach us about using Markov chain Monte Carlo (MCMC).

StatLearn 2010 - Workshop on "Challenging problems in Statistical Learning"
2.3 A Mixture of Experts Latent Position Cluster Model for Social Network Data (Claire Gormley)

Dec 4, 2014 · 49:47


Social network data represent the interactions between a group of social actors. Interactions between colleagues and friendship networks are typical examples of such data. The latent space model for social network data locates each actor in a network in a latent (social) space and models the probability of an interaction between two actors as a function of their locations. The latent position cluster model extends the latent space model to deal with network data in which clusters of actors exist - actor locations are drawn from a finite mixture model, each component of which represents a cluster of actors. A mixture of experts model builds on the structure of a mixture model by taking account of both observations and associated covariates when modeling a heterogeneous population. Herein, a mixture of experts extension of the latent position cluster model is developed. The mixture of experts framework allows covariates to enter the latent position cluster model in a number of ways, yielding different model interpretations. Estimates of the model parameters are derived in a Bayesian framework using a Markov Chain Monte Carlo algorithm. The algorithm is generally computationally expensive - surrogate proposal distributions which shadow the target distributions are derived, reducing the computational burden. The methodology is demonstrated through an illustrative example detailing relations between a group of lawyers in the USA.
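As a point of reference, the standard latent position cluster model that the talk builds on is usually written along the following lines (a sketch of the commonly cited formulation, not copied from the talk):

```latex
\[
\operatorname{logit} \Pr(y_{ij} = 1 \mid z_i, z_j, \alpha) = \alpha - \lVert z_i - z_j \rVert,
\qquad
z_i \sim \sum_{g=1}^{G} \lambda_g \, \mathrm{MVN}\!\left(\mu_g, \sigma_g^2 I\right),
\]
```

where $y_{ij}$ indicates an interaction between actors $i$ and $j$ and each mixture component is a cluster; the mixture of experts extension lets covariates enter the model, for example through the mixing weights $\lambda_g$.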

StatLearn 2013 - Workshop on "Challenging problems in Statistical Learning"
Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator (Arnaud Doucet)

May 16, 2013 · 48:50


When an unbiased estimator of the likelihood is used within a Markov chain Monte Carlo (MCMC) scheme, it is necessary to trade off the number of samples used against the computing time. Many samples for the estimator will result in an MCMC scheme which has similar properties to the case where the likelihood is exactly known, but will be expensive. Few samples for the construction of the estimator will result in faster estimation but at the expense of slower mixing of the Markov chain. We explore the relationship between the number of samples and the efficiency of the resulting MCMC estimates. Under specific assumptions about the likelihood estimator, we are able to provide guidelines on the number of samples to select for a general Metropolis-Hastings proposal. We provide theory which justifies the use of these assumptions for a large class of models. On a number of examples, we find that the assumptions on the likelihood estimator are accurate. This is joint work with Mike Pitt (University of Warwick) and Robert Kohn (UNSW).
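A toy pseudo-marginal Metropolis-Hastings sketch in Python makes the trade-off concrete; the latent-variable model, the simple Monte Carlo likelihood estimator and the flat prior below are illustrative assumptions, not the construction analysed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=1.0, scale=np.sqrt(2.0), size=30)     # toy data: y_i = theta + x_i + e_i

def lik_hat(theta, n_particles):
    """Unbiased Monte Carlo estimate of the likelihood, integrating out the
    latent x_i ~ N(0, 1) with n_particles independent draws per observation.
    More particles mean a lower-variance estimate but more computation."""
    x = rng.normal(size=(n_particles, y.size))
    dens = np.exp(-0.5 * (y - theta - x) ** 2) / np.sqrt(2 * np.pi)
    return np.prod(dens.mean(axis=0))

def pseudo_marginal_mh(n_iter=5000, n_particles=20, step=0.3):
    theta, L = 0.0, lik_hat(0.0, n_particles)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        L_prop = lik_hat(prop, n_particles)
        # With a flat prior, accept using the ratio of *estimated* likelihoods;
        # recycling the current estimate L is what keeps the target distribution exact.
        if rng.uniform() < L_prop / L:
            theta, L = prop, L_prop
        chain[i] = theta
    return chain

print(pseudo_marginal_mh().mean())   # posterior mean of theta; mixing degrades as n_particles shrinks
```

Raising n_particles makes each iteration slower but the chain mix more like the exact-likelihood sampler, which is precisely the trade-off the talk quantifies.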

Fakultät für Chemie und Pharmazie - Digitale Hochschulschriften der LMU - Teil 04/06
Markov chain Monte Carlo methods for parameter identification in systems biology models

Jun 4, 2012


Mon, 4 Jun 2012 - Niederberger, Theresa, Fakultät für Chemie und Pharmazie. https://edoc.ub.uni-muenchen.de/15779/ (PDF: https://edoc.ub.uni-muenchen.de/15779/1/Niederberger_Theresa.pdf)

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05

The cosmic origin and evolution is encoded in the large-scale matter distribution observed in astronomical surveys. Galaxy redshift surveys have become in recent years one of the best probes for cosmic large-scale structures. They are complementary to other information sources like the cosmic microwave background, since they trace a different epoch of the Universe, the time after reionization at which the Universe became transparent, covering about the last twelve billion years. Given that the Universe is about thirteen billion years old, galaxy surveys cover a huge range of time, even if the sensitivity limitations of the detectors do not permit to reach the furthermost sources in the transparent Universe. This makes galaxy surveys extremely interesting for cosmological evolution studies. The observables, galaxy position in the sky, galaxy magnitude and redshift, however, give an incomplete representation of the real structures in the Universe, not only due to the limitations and uncertainties in the measurements, but also due to their biased nature. They trace the underlying continuous dark matter field only partially, being a discrete sample of the luminous baryonic distribution. In addition, galaxy catalogues are plagued by many complications. Some have a physical foundation, as mentioned before, others are due to the observation process. The problem of reconstructing the underlying density field, which permits to make cosmological studies, thus requires a statistical approach. This thesis describes a cosmic cartography project. The necessary concepts, mathematical framework, and numerical algorithms are thoroughly analyzed. On that basis a Bayesian software tool is implemented. The resulting Argo-code allows to investigate the characteristics of the large-scale cosmological structure with unprecedented accuracy and flexibility. This is achieved by jointly estimating the large-scale density along with a variety of other parameters ---such as the cosmic flow, the small-scale peculiar velocity field, and the power-spectrum--- from the information provided by galaxy redshift surveys. Furthermore, Argo is capable of dealing with many observational issues like mask-effects, galaxy selection criteria, blurring and noise in a very efficient implementation of an operator based formalism which was carefully derived for this purpose. Thanks to the achieved high efficiency of Argo the application of iterative sampling algorithms based on Markov Chain Monte Carlo is now possible. This will ultimately lead to a full description of the matter distribution with all its relevant parameters like velocities, power spectra, galaxy bias, etc., including the associated uncertainties. Some applications are shown, in which such techniques are used. A rejection sampling scheme is successfully applied to correct for the observational redshift-distortions effect which is especially severe in regimes of non-linear structure formation, causing the so-called finger-of-god effect. Also a Gibbs-sampling algorithm for power-spectrum determination is presented and some preliminary results are shown in which the correct level and shape of the power-spectrum is recovered solely from the data. We present in an additional appendix the gravitational collapse and subsequent neutrino-driven explosion of the low-mass end of stars that undergo core-collapse Supernovae. We obtain results which are for the first time compatible with the Crab Nebula.

Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02

Due to the increasing availability of spatial or spatio-temporal regression data, models that allow to incorporate the special structure of such data sets in an appropriate way are highly desired in practice. A flexible modeling approach should not only be able to account for spatial and temporal correlations, but also to model further covariate effects in a semi- or nonparametric fashion. In addition, regression models for different types of responses are available and extensions require special attention in each of these cases. Within this thesis, numerous possibilities to model non-standard covariate effects such as nonlinear effects of continuous covariates, temporal effects, spatial effects, interaction effects or unobserved heterogeneity are reviewed and embedded in the general framework of structured additive regression. Beginning with exponential family regression, extensions to several types of multicategorical responses and the analysis of continuous survival times are described. A new inferential procedure based on mixed model methodology is introduced, allowing for a unified treatment of the different regression problems. Estimation of the regression coefficients is based on penalized likelihood, whereas smoothing parameters are estimated using restricted maximum likelihood or marginal likelihood. In several applications and simulation studies, the new approach turns out to be a promising alternative to competing methodology, especially estimation based on Markov Chain Monte Carlo simulation techniques.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
Modeling dependencies between rating categories and their effects on prediction in a credit risk portfolio

Jan 1, 2006


The internal-ratings based Basel II approach increases the need for the development of more realistic default probability models. In this paper we follow the approach taken in McNeil and Wendin (2006) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. The models considered, in contrast to McNeil and Wendin (2006), allow parsimonious parametric models to capture simultaneously dependencies of the default probabilities on time and credit ratings. Macro-economic variables can also be included. Estimation of all model parameters is facilitated with a Bayesian approach using Markov Chain Monte Carlo methods. Special emphasis is given to the investigation of predictive capabilities of the models considered. In particular, predictable model specifications are used. The empirical study using default data from Standard and Poor's gives evidence that the correlation between credit ratings further apart decreases and is higher than the one induced by the autoregressive time dynamics.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
State space mixed models for longitudinal observations with binary and binomial responses

Jan 1, 2006


We propose a new class of state space models for longitudinal discrete response data where the observation equation is specified in an additive form involving both deterministic and random linear predictors. These models allow us to explicitly address the effects of trend, seasonal or other time-varying covariates while preserving the power of state space models in modeling serial dependence in the data. We develop a Markov Chain Monte Carlo algorithm to carry out statistical inference for models with binary and binomial responses, in which we invoke de Jong and Shephard's (1995) simulation smoother to establish an efficient sampling procedure for the state variables. To quantify and control the sensitivity of posteriors on the priors of variance parameters, we add a signal-to-noise ratio type parameter in the specification of these priors. Finally, we illustrate the applicability of the proposed state space mixed models for longitudinal binomial response data in both simulation studies and data examples.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
Bayesian Poisson Log-Bilinear Mortality Projections

Jan 1, 2004


Mortality projections are major concerns for public policy, social security and private insurance. This paper implements a Bayesian log-bilinear Poisson regression model to forecast mortality. Computations are carried out using Markov Chain Monte Carlo methods in which the degree of smoothing is learnt from the data. Comparisons are made with the approach proposed by Brouhns, Denuit & Vermunt (2002a,b), as well as with the original model of Lee & Carter (1992).
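For orientation, the Poisson log-bilinear model of Brouhns, Denuit & Vermunt has the familiar Lee-Carter structure sketched below (standard notation, not quoted from the paper):

```latex
\[
D_{xt} \sim \mathrm{Poisson}\!\left(E_{xt}\,\mu_{xt}\right),
\qquad
\log \mu_{xt} = \alpha_x + \beta_x \kappa_t ,
\]
```

where $D_{xt}$ and $E_{xt}$ are death counts and exposures at age $x$ in year $t$; the Bayesian version places priors on $(\alpha_x, \beta_x, \kappa_t)$ and on the smoothing parameters and samples them jointly by MCMC, so the degree of smoothing is learnt from the data.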

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
Empirical Study of Intraday Option Price Changes using extended Count Regression Models

Jan 1, 2004


In this paper we model absolute price changes of an option on the XETRA DAX index based on quote-by-quote data from the EUREX exchange. In contrast to other authors, we focus on a parameter-driven model for this purpose and use a Poisson Generalized Linear Model (GLM) with a latent AR(1) process in the mean, which accounts for autocorrelation and overdispersion in the data. Parameter estimation is carried out by Markov Chain Monte Carlo methods using the WinBUGS software. In a Bayesian context, we prove the superiority of this modelling approach compared to an ordinary Poisson-GLM and to a complex Poisson-GLM with heterogeneous variance structure (but without taking into account any autocorrelations) by using the deviance information criterion (DIC) as proposed by Spiegelhalter et al. (2002). We include a broad range of explanatory variables into our regression modelling for which we also consider interaction effects: While, according to our modelling results, the price development of the underlying, the intrinsic value of the option at the time of the trade, the number of new quotations between two price changes, the time between two price changes and the Bid-Ask spread have significant effects on the size of the price changes, this is not the case for the remaining time to maturity of the option. By giving possible interpretations of our modelling results we also provide an empirical contribution to the understanding of the microstructure of option markets.
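In generic notation, the parameter-driven count model described here can be sketched as follows (an illustrative statement of the model class, not the paper's exact specification):

```latex
\[
y_t \mid \lambda_t \sim \mathrm{Poisson}(\lambda_t),
\qquad
\log \lambda_t = \mathbf{x}_t^{\top}\boldsymbol{\beta} + u_t,
\qquad
u_t = \phi\, u_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma^2),
\]
```

where $y_t$ is the absolute price change at the $t$-th quote and the latent AR(1) process $u_t$ induces the overdispersion and autocorrelation mentioned above; the MCMC run (here via WinBUGS) integrates over the latent path alongside the regression coefficients.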

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03

The Markov Chain Monte Carlo method (MCMC) is often used to generate independent (pseudo) random numbers from a distribution with a density that is known only up to a normalising constant. With the MCMC method it is not necessary to compute the normalising constant (see e.g. Tierney, 1994; Besag, 2000). In this paper we show that the well-known acceptance-rejection algorithm also works with unnormalised densities, and so this algorithm can be used to confirm the results of the MCMC method in simple cases. We present an example with real data.
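A minimal Python sketch of the point being made, with an illustrative unnormalised target and a uniform proposal (both assumptions for this example, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_unnorm(x):
    # Unnormalised target density on [0, 1]; its normalising constant is never computed.
    return np.exp(-0.5 * (x - 0.3) ** 2 / 0.01)

M = 1.0   # envelope constant with proposal g = Uniform(0, 1): f_unnorm(x) <= M * 1 everywhere

def accept_reject(n):
    samples = []
    while len(samples) < n:
        x = rng.uniform()                       # draw from the proposal g
        if rng.uniform() < f_unnorm(x) / M:     # accept with probability f(x) / (M g(x))
            samples.append(x)
    return np.array(samples)

print(accept_reject(1000).mean())               # roughly 0.3, the mode of the target
```

Because only the ratio f(x)/(M g(x)) is ever evaluated, the accepted draws follow the normalised target even though its constant is unknown, which is what allows cross-checking MCMC output in simple cases.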

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
Analysis of the time to sustained progression in Multiple Sclerosis using generalised linear and additive models

Jan 1, 2003


The course of multiple sclerosis (MS) is generally difficult to predict. This is due to the great inter-individual variability with respect to symptoms and disability status. An important prognostic endpoint for MS is the expected time to sustained disease progression. Using the Expanded Disability Status Scale (EDSS) this endpoint is here defined as a rise of 1.0 or 0.5 compared to baseline EDSS (5.5) which is confirmed for at least six months. The goal of this paper was threefold. It aimed at identifying covariates which significantly influence sustained progression, determining size and form of the effect of these covariates and estimating the survival curves for given predictors. To this end a piecewise exponential model utilizing piecewise constant hazard rates and a Poisson model were devised. In order to improve and simplify these models a method for piecewise linear parameterization of non-parametric generalized additive models (GAMs) was applied. The models included fixed and random effects, the posterior distribution was estimated using Markov Chain Monte Carlo methods (MCMC) as well as a penalized likelihood approach, and variables were selected using Akaike's information criterion (AIC). The models were applied to data of placebo patients from worldwide clinical trials that are pooled in the database of the Sylvia Lawry Centre for Multiple Sclerosis Research (SLCMSR). Only with a pure exponential model and fixed effects did baseline EDSS and the number of relapses in the last 12 months before study entry have an effect on the hazard rate. For the piecewise exponential model with random study effects there was no effect of covariates on the hazard rate other than a slightly decreasing effect of time. This reflects the fact that unstable patients reach the event early and are therefore eliminated from the analysis (selection effect).
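The piecewise exponential model mentioned here rests on a standard construction, sketched below in generic notation (not the paper's exact parameterization):

```latex
\[
\lambda(t \mid \mathbf{x}) = \lambda_j \exp\!\left(\mathbf{x}^{\top}\boldsymbol{\beta}\right)
\quad \text{for } t \in (a_{j-1}, a_j], \qquad j = 1, \dots, J,
\]
```

with the baseline hazard constant on each interval $(a_{j-1}, a_j]$; the resulting likelihood has the same form as a Poisson likelihood with the log time at risk in each interval as an offset, which is the link to the Poisson model devised alongside it.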

Mathematik, Informatik und Statistik - Open Access LMU - Teil 02/03
Dynamic Modelling of Child Mortality in Developing Countries: Application for Zambia

Jan 1, 2002


In this paper, we analyse the causes of under-five mortality in Zambia, with a particular emphasis on assessing possible time-variations in the effects of covariates, i.e. whether the effects of certain covariates vary with the age of the child. The analysis is based on micro data from the 1992 Demographic and Health Survey. Employing a Bayesian dynamic logit model for discrete time survival data and Markov Chain Monte Carlo methods, we find that there are several variables, including the age of the mother and the breastfeeding duration, whose effects exhibit distinct age-dependencies. In the case of breastfeeding, this age dependency is intimately linked with the reasons for stopping breastfeeding. Incorporating such age dependencies greatly improves the explanatory power of the model and yields new insights on the differential role of covariates on child survival.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
A Bayesian Model for Spatial Disease Prevalence Data

Jan 1, 2001


The analysis of the geographical distribution of disease on the scale of geographic areas such as administrative boundaries plays an important role in veterinary epidemiology. Prevalence estimates of wildlife population surveys are often based on regional count data generated by sampling animals shot by hunters. The observed disease rate per spatial unit is not a useful estimate of the underlying disease prevalence due to different sample sizes and spatial dependencies between neighbouring areas. Therefore, it is necessary to account for extra-sample variation and spatial correlation in the data to produce more accurate maps of disease incidence. For this purpose a hierarchical Bayesian model in which structured and unstructured overdispersion is modelled explicitly in terms of spatial and non-spatial components was implemented by Markov Chain Monte Carlo methods. The model was empirically compared with the results of the non-spatial beta-binomial model using surveillance data of Pseudorabies virus infections of wild boars in the Federal State of Brandenburg, Germany.

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
BayesX - Software for Bayesian Inference based on Markov Chain Monte Carlo simulation techniques

Jan 1, 2000


BayesX is a software tool for Bayesian inference based on Markov Chain Monte Carlo (MCMC) inference techniques. The main feature of BayesX so far is a very powerful regression tool for Bayesian semiparametric regression within the generalized linear models framework. BayesX is able to estimate nonlinear effects of metrical covariates, trends and flexible seasonal patterns of time scales, structured and/or unstructured random effects of spatial covariates (geographical data) and unstructured random effects of unordered group indicators. Moreover, BayesX is able to estimate varying coefficient models with metrical and even spatial covariates as effect modifiers. The distribution of the response can be either Gaussian, binomial or Poisson. In addition, BayesX has some useful functions for handling and manipulating datasets and geographical maps.
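The kinds of effects listed above are usually collected in a structured additive predictor of roughly the following form (generic notation, sketched here rather than taken from the BayesX documentation):

```latex
\[
\eta_i = f_1(x_{i1}) + \dots + f_p(x_{ip}) + f_{\mathrm{spat}}(s_i) + b_{g_i}
       + \mathbf{u}_i^{\top}\boldsymbol{\gamma},
\]
```

with smooth functions $f_j$ of metrical covariates, a structured spatial effect $f_{\mathrm{spat}}$, unstructured random effects $b_{g}$ for group indicators, and linear effects $\boldsymbol{\gamma}$, linked to a Gaussian, binomial or Poisson response.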

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
Markov Chain Monte Carlo Model Selection for DAG Models

Jan 1, 2000


We present two methodologies for Bayesian model choice and averaging in Gaussian directed acyclic graphs (dags). In both cases model determination is carried out by implementing a reversible jump Markov Chain Monte Carlo sampler. The dimension-changing move involves adding or dropping a (directed) edge from the graph. The first methodology extends the results in Giudici and Green (1999), by excluding all non-moralized dags and searching in the space of their essential graphs. The second methodology employs the results in Geiger and Heckerman (1999) and searches directly in the space of all dags. To achieve this aim we rely on the concept of adjacency matrices, which provides a relatively inexpensive check for acyclicity. The performance of our procedure is illustrated by means of two simulated datasets.
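An inexpensive adjacency-matrix check of acyclicity, of the kind alluded to, can be sketched in Python as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def is_acyclic(adj):
    """True if the directed graph with 0/1 adjacency matrix adj (adj[i, j] = 1
    meaning an edge i -> j) has no directed cycle: a graph on n nodes is acyclic
    exactly when there is no directed walk of length n, i.e. the boolean n-th
    power of the adjacency matrix is zero."""
    A = np.asarray(adj, dtype=np.int64)
    n = A.shape[0]
    P = A.copy()                               # walks of length 1
    for _ in range(n - 1):
        if not P.any():
            return True
        P = (P @ A > 0).astype(np.int64)       # walks one edge longer
    return not P.any()

# Proposing to add the edge 2 -> 0 to the chain 0 -> 1 -> 2 would create a cycle.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
print(is_acyclic(A))       # True
A[2, 0] = 1
print(is_acyclic(A))       # False, so such a dimension-changing move would be rejected outright
```

In a reversible jump sampler this kind of test screens edge-addition proposals before any likelihood evaluation is spent on them.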

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
Multivariate Probit Analysis of Binary Time Series Data with Missing Responses

Jan 1, 1996


The development of adequate models for binary time series data with covariate adjustment has been an active research area in recent years. In the case where interest is focused on marginal and association parameters, generalized estimating equations (GEE) (see for example Lipsitz, Laird and Harrington (1991) and Liang, Zeger and Qaqish (1992)) and likelihood (see for example Fitzmaurice and Laird (1993) and Molenberghs and Lesaffre (1994)) based methods have been proposed. The number of parameters required for the full specification of these models grows exponentially with the length of the binary time series. Therefore, the analysis is often focused on marginal and first order parameters. In this case, the multivariate probit model (Ashford and Sowden (1970)) becomes an attractive alternative to the above models. The application of the multivariate probit model has been hampered by the intractability of the maximum likelihood estimator when the length of the binary time series is large. This paper shows that this difficulty can be overcome by the use of Markov Chain Monte Carlo methods. This analysis also allows for valid point and interval estimates of the parameters in small samples. In addition, the analysis is adapted to handle the case of missing at random responses. The approach is illustrated on data involving binary responses measured at unequally spaced time points. Finally, this data analysis is compared to a GEE analysis given in Fitzmaurice and Lipsitz (1995).

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
Bayesian spline-type smoothing in generalized regression models

Jan 1, 1996


Spline smoothing in non- or semiparametric regression models is usually based on the roughness penalty approach. For regression with normal errors, the spline smoother also has a Bayesian justification: Placing a smoothness prior over the regression function, it is the mean of the posterior given the data. For non-normal regression this equivalence is lost, but the spline smoother can still be viewed as the posterior mode. In this paper, we provide a full Bayesian approach to spline-type smoothing. The focus is on generalized additive models, however the models can be extended to other non-normal regression models. Our approach uses Markov Chain Monte Carlo methods to simulate samples from the posterior. Thus it is possible to estimate characteristics like the mean, median, moments, and quantiles of the posterior, or interesting functionals of the regression function. Also, this provides an alternative for the choice of smoothing parameters. For comparison, our approach is applied to real-data examples analyzed previously by the roughness penalty approach.
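The equivalence referred to can be written compactly in standard notation (a sketch, not a quotation from the paper): for normal errors $y_i = f(x_i) + \varepsilon_i$, $\varepsilon_i \sim N(0, \sigma^2)$, the roughness penalty estimate

```latex
\[
\hat f \;=\; \arg\min_f \; \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2
\;+\; \lambda \int \bigl(f''(x)\bigr)^2 \, dx
\]
```

is also the posterior mean (and mode) of $f$ under the smoothness prior $p(f) \propto \exp\!\bigl\{-\tfrac{\lambda}{2\sigma^2} \int (f'')^2 \, dx\bigr\}$; for non-normal responses only the posterior-mode interpretation survives, which is the gap the MCMC approach of this paper fills.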

Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
Markov Chain Monte Carlo Simulation in Dynamic Generalized Linear Mixed Models

Jan 1, 1995


Dynamic generalized linear mixed models are proposed as a regression tool for nonnormal longitudinal data. This framework is an interesting combination of dynamic models, also known as state space models, and mixed models, also known as random effect models. The main feature is that both time- and unit-specific parameters are allowed, which is especially attractive if a considerable number of units is observed over a longer period. Statistical inference is done by means of Markov chain Monte Carlo techniques in a full Bayesian setting. The algorithm is based on iterative updating using full conditionals. Due to the hierarchical structure of the model and the extensive use of Metropolis-Hastings steps for updating, this algorithm mainly evaluates (log-)likelihoods in multivariate normally distributed proposals. It is derivative-free and covers a wide range of different models, including dynamic and mixed models, the latter with slight modifications. The methodology is illustrated through an analysis of artificial binary data and multicategorical business test data.
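In assumed generic notation, the model class can be sketched as:

```latex
\[
y_{it} \mid \eta_{it} \sim \text{exponential family},
\qquad
\eta_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\alpha}_t + \mathbf{z}_{it}^{\top}\mathbf{b}_i,
\qquad
\boldsymbol{\alpha}_t = \boldsymbol{\alpha}_{t-1} + \mathbf{u}_t,\;\; \mathbf{u}_t \sim N(\mathbf{0}, \mathbf{Q}),
\qquad
\mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}),
\]
```

with time-specific parameters $\boldsymbol{\alpha}_t$ following a random walk (the dynamic part) and unit-specific random effects $\mathbf{b}_i$ (the mixed part); the sampler updates each block from its full conditional, falling back on Metropolis-Hastings steps where the conditional has no closed form.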