POPULARITY
Nalini Anantharaman
Spectral Geometry (Géométrie spectrale), Collège de France, academic year 2023-2024
Colloquium - Random geometries and applications: Geometry of excursions of smooth random fields and statistical inference
Speaker: Anne Estrade, Université Paris Cité

Abstract
Some geometrical and topological features of the excursions of smooth random fields will be presented, such as their expected Lipschitz-Killing curvatures. The random fields concerned will be Gaussian or Gaussian-based, but shot-noise fields will also be considered. Based on these features, one can statistically infer information about the underlying random field, in particular through statistical tests (of Gaussianity or of isotropy) and parameter estimation. A special focus will be paid to the two-dimensional case, as it is the natural framework for image analysis.

----

The term "random geometry" refers to any process that constructs a geometric object, or families of geometric objects, at random. A simple procedure is to assemble basic building blocks at random: vertices and edges in the case of random graphs, triangles or squares in certain models of random surfaces, or triangles, "pairs of pants", or hyperbolic tetrahedra in the setting of hyperbolic geometry. The theory of random graphs permeates every branch of present-day mathematics, from the most theoretical (group theory, operator algebras, etc.) to the most applied (modelling of communication networks, for example).
In mathematics, the probabilistic approach consists of evaluating the probability that a given geometric property occurs: when one does not know whether a theorem is true, one can try to show that it holds in 99% of cases. Another classical method for generating random landscapes is to use random Fourier series, which have many applications in signal theory and imaging. In theoretical physics, random geometries are at the heart of quantum gravity and other quantum field theories. The various mathematical aspects turn out to be curiously intertwined there; for example, the combinatorics of quadrangulations or triangulations appears in the computation of certain partition functions. This colloquium will offer a non-exhaustive panorama of random geometries, covering aspects ranging from the most abstract to concrete applications in imaging and telecommunications.
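The "random Fourier series" construction mentioned above is easy to demonstrate: superpose harmonics with random Gaussian amplitudes whose variance decays with frequency. A minimal sketch, where the 1/k amplitude decay, mode count, and grid are illustrative choices, not taken from the colloquium:

```python
import math
import random

def random_fourier_landscape(n_modes=20, n_points=200, seed=0):
    """Sample a 1-D 'landscape' as a random Fourier series:
    f(x) = sum_k a_k cos(k x) + b_k sin(k x), with a_k, b_k ~ N(0, 1/k^2).
    The decay of the coefficients with k makes the field smooth."""
    rng = random.Random(seed)
    coeffs = [(rng.gauss(0, 1.0 / k), rng.gauss(0, 1.0 / k))
              for k in range(1, n_modes + 1)]
    xs = [2 * math.pi * i / n_points for i in range(n_points)]
    field = [sum(a * math.cos(k * x) + b * math.sin(k * x)
                 for k, (a, b) in enumerate(coeffs, start=1))
             for x in xs]
    return xs, field
```

Faster coefficient decay (e.g. 1/k^2) gives smoother landscapes; slower decay gives rougher, fractal-like ones.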
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Basic Facts about Language Model Internals, published by Beren Millidge on January 4, 2023 on The AI Alignment Forum. This post was written as part of the work done at Conjecture. As mentioned in our retrospective, while also producing long and deep pieces of research, we are also experimenting with a high iteration frequency. This is an example of this strand of our work. The goal here is to highlight interesting and unexplained language model facts. This is the first in a series of posts which will explore the basic 'facts on the ground' of large language models at increasing levels of complexity. Understanding the internals of large-scale deep learning models, and especially large language models (LLMs), is a daunting task which has been relatively understudied. Gaining such an understanding of how large models work internally could also be very important for alignment. If we can understand how the representations of these networks form and what they look like, we could potentially track goal misgeneralization, as well as detect mesaoptimizers or deceptive behaviour during training and, if our tools are good enough, edit or remove such malicious behaviour during training or at runtime. When faced with a large problem of unknown difficulty, it is often good to first look at lots of relevant data, to survey the landscape, and build up a general map of the terrain before diving into some specific niche. The goal of this series of works is to do precisely this – to gather and catalogue the large number of easily accessible bits of information we can get about the behaviour and internals of large models, without committing to a deep dive into any specific phenomenon. 
While lots of work in interpretability has focused on interpreting specific circuits, or understanding relatively small pieces of neural networks, there has been relatively little work in extensively cataloging the basic phenomenological states and distributions comprising language models at an intermediate level of analysis. This is despite the fact that, as experimenters with the models literally sitting in our hard-drives, we have easy and often trivial access to these facts. Examples include distributional properties of activations, gradients, and weights. While such basic statistics cannot be meaningful ‘explanations' for network behaviour in and of themselves, they are often highly useful for constraining one's world model of what can be going on in the network. They provide potentially interesting jumping off points for deeper exploratory work, especially if the facts are highly surprising, or else are useful datapoints for theoretical studies to explain why the network must have some such distributional property. In this post, we present a systematic view of basic distributional facts about large language models of the GPT2 family, as well as a number of surprising and unexplained findings. At Conjecture, we are undertaking follow-up studies on some of the effects discussed here. Activations Are Nearly Gaussian With Outliers If you just take the histogram of activity values in the residual stream across a sequence at a specific block (here after the first attention block), they appear nearly Gaussianly distributed. The first plot shows the histogram of the activities of the residual stream after the attention block in block 0 of GPT2-medium. This second plot shows the histogram of activities in the residual stream after the attention block of layer 10 of GPT2-medium, showing that the general Gaussian structure of the activations is preserved even deep inside the network. 
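A cheap way to quantify "nearly Gaussian with outliers" is excess kurtosis, which is zero for a Gaussian and grows with heavy tails. The sketch below uses synthetic data as a stand-in for real residual-stream activations; the 1% outlier rate and 10x scale are arbitrary illustrative choices:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3 (zero for a Gaussian)."""
    mu = statistics.fmean(xs)
    var = statistics.fmean((x - mu) ** 2 for x in xs)
    fourth = statistics.fmean((x - mu) ** 4 for x in xs)
    return fourth / var ** 2 - 3.0

rng = random.Random(0)
gaussian = [rng.gauss(0, 1) for _ in range(50_000)]
# Mimic 'Gaussian with outliers': occasionally inject large activations.
with_outliers = [x if rng.random() > 0.01 else x * 10 for x in gaussian]
```

Run on real activations, a large positive excess kurtosis would flag the outlier structure even when a histogram looks Gaussian at a glance.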
This is expected to some extent due to the central limit theorem (CLT), which enforces a high degree of Gaussianity on the distribution of neuron firing rates. This CLT mixing effect might be expected to destroy information in t...
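The CLT argument is easy to reproduce numerically: standardized sums of many independent non-Gaussian terms come out close to Gaussian. A toy sketch with uniform terms, nothing model-specific:

```python
import random

def clt_demo(n_terms=200, n_samples=20_000, seed=1):
    """Each sample is a standardized sum of n_terms i.i.d. Uniform(-1, 1)
    variables (mean 0, variance 1/3). By the CLT the standardized sums are
    close to standard Gaussian even though each term is far from Gaussian."""
    rng = random.Random(seed)
    scale = (n_terms / 3) ** 0.5  # standard deviation of the raw sum
    return [sum(rng.uniform(-1, 1) for _ in range(n_terms)) / scale
            for _ in range(n_samples)]

sums = clt_demo()
```

The same mixing intuition is why a residual stream that aggregates many roughly independent head and neuron contributions would tend toward Gaussianity.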
Sparse Bayesian mass-mapping using trans-dimensional MCMC by Augustin Marignier et al. on Monday 28 November Uncertainty quantification is a crucial step of cosmological mass-mapping that is often ignored. Suggested methods are typically only approximate or make strong assumptions of Gaussianity of the shear field. Probabilistic sampling methods, such as Markov chain Monte Carlo (MCMC), draw samples from a probability distribution, allowing for full and flexible uncertainty quantification; however, these methods are notoriously slow and struggle in the high-dimensional parameter spaces of imaging problems. In this work we use, for the first time, a trans-dimensional MCMC sampler for mass-mapping, promoting sparsity in a wavelet basis. This sampler gradually grows the parameter space as required by the data, exploiting the extremely sparse nature of mass maps in wavelet space. The wavelet coefficients are arranged in a tree-like structure, which adds finer scale detail as the parameter space grows. We demonstrate the trans-dimensional sampler on galaxy cluster-scale images where the planar modelling approximation is valid. In high-resolution experiments, this method produces naturally parsimonious solutions, requiring less than 1% of the potential maximum number of wavelet coefficients and still producing a good fit to the observed data. In the presence of noisy data, trans-dimensional MCMC produces a better reconstruction of mass maps than the standard smoothed Kaiser-Squires method, with the addition that uncertainties are fully quantified. This opens up the possibility for new mass maps and inferences about the nature of dark matter using the new high-resolution data from upcoming weak lensing surveys such as Euclid. arXiv: http://arxiv.org/abs/2211.13963v1
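The core MCMC idea the abstract relies on, drawing samples from a probability distribution, can be sketched with a tiny random-walk Metropolis sampler. This is a generic 1-D illustration, not the paper's trans-dimensional sampler, and the target and step size are arbitrary:

```python
import math
import random

def metropolis(log_prob, n_steps=50_000, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step); accept with
    probability min(1, p(x') / p(x)). Returns the chain of samples."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0, step)
        lp_new = log_prob(x_new)
        if math.log(rng.random()) < lp_new - lp:  # accept/reject
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Target: standard Gaussian, known only up to a normalizing constant.
chain = metropolis(lambda x: -0.5 * x * x)
```

A trans-dimensional (reversible-jump) sampler extends this by additionally proposing "birth" and "death" moves that add or remove parameters, which is what lets the wavelet tree grow only as the data demand.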
Non-perturbative non-Gaussianity and primordial black holes by Andrew D. Gow et al. on Wednesday 23 November We present a non-perturbative method for calculating the abundance of primordial black holes given an arbitrary one-point probability distribution function for the primordial curvature perturbation, $P(\zeta)$. A non-perturbative method is essential when considering non-Gaussianities that cannot be treated using a conventional perturbative expansion. To determine the full statistics of the density field, we relate $\zeta$ to a Gaussian field by equating the cumulative distribution functions. We consider two examples: a specific local-type non-Gaussian distribution arising from ultra slow roll models, and a general piecewise model for $P(\zeta)$ with an exponential tail. We demonstrate that the enhancement of primordial black hole formation is due to the intermediate regime, rather than the far tail. We also show that non-Gaussianity can have a significant impact on the shape of the primordial black hole mass distribution. arXiv: http://arxiv.org/abs/2211.08348v2
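The CDF-matching step described in the abstract has a simple generic form: if $F$ is the target CDF and $\Phi$ the standard normal CDF, then $\zeta = F^{-1}(\Phi(g))$ turns a standard Gaussian $g$ into a variable with one-point distribution $F$. A sketch with an exponential target chosen purely for illustration, not the paper's $P(\zeta)$:

```python
import math
import random

def gaussian_to_target(g, inv_target_cdf):
    """Map a standard-normal draw g to the target distribution by equating
    cumulative distribution functions: zeta = F^{-1}(Phi(g))."""
    phi = 0.5 * (1.0 + math.erf(g / math.sqrt(2.0)))  # standard normal CDF
    return inv_target_cdf(phi)

# Illustrative target: Exponential(rate=1), inverse CDF u -> -log(1 - u).
inv_exp_cdf = lambda u: -math.log(1.0 - u)

rng = random.Random(0)
zetas = [gaussian_to_target(rng.gauss(0, 1), inv_exp_cdf)
         for _ in range(100_000)]
```

Because the map is monotone, rare large-$g$ events are pushed into the target's tail, which is exactly how an exponential-tailed $P(\zeta)$ enhances extreme fluctuations relative to a Gaussian.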
Detecting deviations from Gaussianity in high-redshift CMB lensing maps by Zhuoqi Zhang et al. on Monday 21 November While the probability density function (PDF) of the cosmic microwave background (CMB) convergence field approximately follows a Gaussian distribution, small contributions from structures at low redshifts make the overall distribution slightly non-Gaussian. Some of this late-time component can be modelled using the distribution of galaxies and subtracted off from the original CMB lensing map to produce a map of matter distribution at high redshifts. Using this high-redshift mass map, we are able to directly study the early phases of structure formation and look for deviations from our standard model. In this work, we forecast the detectability of signatures of non-Gaussianity due to nonlinear structure formation at $z>1.2$. Although we find that detecting such signatures using ongoing surveys will be challenging, we forecast that future experiments such as the CMB-S4 will be able to make detections of $\sim 7\sigma$. arXiv: http://arxiv.org/abs/2211.09617v1
Cosmology from the kinetic polarized Sunyaev Zel'dovich effect by Selim C. Hotinli et al. on Wednesday 12 October The cosmic microwave background (CMB) photons that scatter off free electrons in the large-scale structure induce a linear polarization pattern proportional to the remote CMB temperature quadrupole observed in the electrons' rest frame. The associated blackbody polarization anisotropies are known as the polarized Sunyaev Zel'dovich (pSZ) effect. Relativistic corrections to the remote quadrupole field give rise to a non-blackbody polarization anisotropy proportional to the square of the transverse peculiar velocity field; this is the kinetic polarized Sunyaev Zel'dovich (kpSZ) effect. In this paper, we forecast the ability of future CMB and galaxy surveys to detect the kpSZ effect, finding that a statistically significant detection is within the reach of planned experiments. We further introduce a quadratic estimator for the square of the peculiar velocity field based on a galaxy survey and CMB polarization. Finally, we outline how the kpSZ effect is a probe of cosmic birefringence and primordial non-Gaussianity, forecasting the reach of future experiments. arXiv: http://arxiv.org/abs/2204.12503v2
The JWST High Redshift Observations and Primordial Non-Gaussianity by M. Biagetti et al. on Tuesday 11 October Several bright and massive galaxy candidates at high redshifts have been recently observed by the James Webb Space Telescope. Such early massive galaxies seem difficult to reconcile with standard $\Lambda$ Cold Dark Matter model predictions. We discuss under which circumstances such observed massive galaxy candidates can be explained by introducing primordial non-Gaussianity in the initial conditions of the cosmological perturbations. arXiv: http://arxiv.org/abs/2210.04812v1
Snowmass2021 Cosmic Frontier: Report of the CF04 Topical Group on Dark Energy and Cosmic Acceleration in the Modern Universe by James Annis et al. on Monday 19 September Cosmological observations in the new millennium have dramatically increased our understanding of the Universe, but several fundamental questions remain unanswered. This topical group report describes the best opportunities to address these questions over the coming decades by extending observations to the $z
Primordial non-Gaussianity with Angular correlation function: Integral constraint and validation for DES by Walter Riquelme et al. on Thursday 15 September Local primordial non-Gaussianity (PNG) is a promising observable of the underlying physics of inflation, characterized by a parameter denoted by $f_{\rm NL}$. We present the methodology to measure local $f_{\rm NL}$ from the Dark Energy Survey (DES) data using the 2-point angular correlation function (ACF) via the induced scale-dependent bias. One of the main focuses of the work is the treatment of the integral constraint. This condition appears when estimating the mean number density of galaxies from the data and is especially relevant for PNG analyses, where it is found to be key in obtaining unbiased $f_{\rm NL}$ constraints. The methods are analysed for two types of simulations: $\sim 246$ GOLIAT N-body simulations with non-Gaussian initial conditions $f_{\rm NL}$ equal to -100 and 100, and 1952 Gaussian ICE-COLA mocks with $f_{\rm NL}=0$ that follow the DES angular and redshift distribution. We use the GOLIAT mocks to assess the impact of the integral constraint when measuring $f_{\rm NL}$. We obtain biased PNG constraints when ignoring the integral constraint, $f_{\rm NL} = -2.8\pm1.0$ for $f_{\rm NL}=100$ simulations, and $f_{\rm NL}=-10.3\pm1.5$ for $f_{\rm NL}=-100$ simulations, whereas we recover the fiducial values within $1\sigma$ when correcting for the integral constraint with $f_{\rm NL}=97.4\pm3.5$ and $f_{\rm NL}=-95.2\pm5.4$, respectively. We use the ICE-COLA mocks to validate our analysis in a DES-like setup, finding it to be robust to different analysis choices: best-fit estimator, the effect of integral constraint, the effect of BAO damping, the choice of covariance, and scale configuration. We forecast a measurement of $f_{\rm NL}$ within $\sigma(f_{\rm NL})=31$ when using the DES-Y3 BAO sample, with the ACF in the $1\,{\rm deg}
Squeezing $\boldsymbol{f}_{\rm NL}$ out of the matter bispectrum with consistency relations by Samuel Goldstein et al. on Wednesday 14 September We show how consistency relations can be used to robustly extract the amplitude of local primordial non-Gaussianity ($f_{\rm NL}$) from the squeezed limit of the matter bispectrum, well into the non-linear regime. First, we derive a non-perturbative relation between primordial non-Gaussianity and the leading term in the squeezed bispectrum, revising some results present in the literature. This relation is then used to successfully measure $f_{\rm NL}$ from $N$-body simulations. We discuss the dependence of our results on different scale cuts and redshifts. Specifically, the analysis is strongly dependent on the choice of the smallest soft momentum, $q_{\rm min}$, which is the most sensitive to primordial bispectrum contributions, but is largely independent of the choice of the largest hard momentum, $k_{\rm max}$, due to the non-Gaussian nature of the covariance. We also show how the constraints on $f_{\rm NL}$ improve at higher redshift, due to a reduced off-diagonal covariance. In particular, for a simulation with $f_{\rm NL} = 100$ and a volume of $(2.4\,\text{Gpc}/h)^3$, we measure $f_{\rm NL} = 98 \pm 12$ at redshift $z=0$ and $f_{\rm NL} = 97 \pm 8$ at $z=0.97$. Finally, we compare our results with a Fisher forecast, showing that the current version of the analysis is satisfactorily close to the Fisher error. We regard this as a first step towards the realistic application of consistency relations to constrain primordial non-Gaussianity using observations. arXiv: http://arxiv.org/abs/2209.06228v1
Weak Lensing the non-Linear Ly-alpha Forest by Patrick Shaw et al. on Monday 12 September We evaluate the performance of the Lyman-$\alpha$ forest weak gravitational lensing estimator of Metcalf et al. on forest data from hydrodynamic simulations and ray-traced simulated lensing potentials. We compare the results to those obtained from the Gaussian random field simulated Ly$\alpha$ forest data and lensing potentials used in previous work. We find that the estimator is able to reconstruct the lensing potentials from the more realistic data, and investigate dependence on spectrum signal to noise. The non-linearity and non-Gaussianity in this forest data arising from gravitational instability and hydrodynamics causes a reduction in signal to noise by a factor of $\sim 2.7$ for noise free data and a factor of $\sim 1.5$ for spectra with signal to noise of order unity (comparable to current observational data). Compared to Gaussian field lensing potentials, using ray-traced potentials from N-body simulations incurs a further signal to noise reduction of a factor of $\sim 1.3$ at all noise levels. The non-linearity in the forest data is also observed to increase bias in the reconstructed potentials by 5-25%, and the ray-traced lensing potential further increases the bias by 20-30%. We demonstrate methods for mitigating these issues including Gaussianization and bias correction which could be used in real observations. arXiv: http://arxiv.org/abs/2209.04564v1
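The "Gaussianization" mitigation mentioned at the end of the abstract can be done by rank transformation: replace each value by the Gaussian quantile of its empirical rank, which preserves ordering while forcing a Gaussian one-point distribution. This is one common Gaussianization scheme, not necessarily the authors' exact procedure:

```python
import random
import statistics

def rank_gaussianize(values):
    """Map values to standard-Gaussian quantiles via their empirical ranks.
    Monotone, so the spatial pattern of high/low values is preserved."""
    nd = statistics.NormalDist()
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)  # midpoint rule avoids u = 0, 1
    return out

rng = random.Random(0)
skewed = [rng.expovariate(1.0) for _ in range(10_000)]  # heavy-tailed stand-in
gaussianized = rank_gaussianize(skewed)
```

Applied to forest flux fields, such a transform removes the one-point non-Gaussianity induced by gravitational collapse, though it cannot undo higher-order (multi-point) non-Gaussian correlations.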
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.21.349589v1?rss=1 Authors: Kelty-Stephen, D., Furmanek, M. P., Mangalam, M. Abstract: Intermittency is a flexible control process entailing context-sensitive engagement with task constraints. The present work aims to situate the intermittency of dexterous behavior explicitly in multifractal modeling for non-Gaussian cascade processes. Multiscale probability density function (PDF) analysis of the center of pressure (CoP) fluctuations during quiet upright standing yields non-Gaussianity parameters lambda exhibiting task-sensitive curvilinear relationships with timescale. The present reanalysis aims for a finer-grained accounting of how non-Gaussian cascade processes might align with known, separable postural processes. It uses parallel decomposition of non-Gaussianity lambda-vs.-timescale and CoP. Orthogonal polynomials decompose lambda curvilinearity, and rambling-trembling analysis decomposes CoP into relatively more intentional rambling (displacement to new equilibrium points) and less intentional trembling sway (deviations around new equilibrium points). Modeling orthogonal polynomials of non-Gaussianity's lambda-vs.-timescale allows us to differentiate linear from quadratic decay, which indicate scale-invariant and scale-dependent cascades, respectively. We tested whether scale-dependent and scale-invariant cascades serve different roles, that is, responding to destabilizing task demands and supporting the proactive movement to a new equilibrium point, respectively. We also tested whether these cascades appear more clearly in rambling rather than trembling sway. More generally, we test whether multifractal nonlinear correlations support this two-step differentiation of postural control: first into rambling vs. trembling sway, then into scale-dependent vs. scale-invariant cascades within rambling sway. The results supported these hypotheses. 
Thus, the present work aligns specific aspects of task setting with aspects of cascade dynamics and confirms multifractal foundations of the organism-task relationship. Copy rights belong to original authors. Visit the link for more info
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.03.234120v1?rss=1 Authors: Anh, V. V., Nguyen, H. T., Craig, A., Tran, E., Wang, Y. Abstract: This paper investigates the cause and detection of power-law scaling of brain wave activity due to the heterogeneity of the brain cortex, considered as a complex system, and the initial condition such as the alert or fatigue state of the brain. Our starting point is the construction of a mathematical model of global brain wave activity based on EEG measurements on the cortical surface. The model takes the form of a stochastic delay-differential equation (SDDE). Its fractional diffusion operator and delay operator capture the responses due to the heterogeneous medium and the initial condition. The analytical solution of the model is obtained in the form of a Karhunen-Loeve expansion. A method to estimate the key parameters of the model and the corresponding numerical schemes are given. Real EEG data on driver fatigue at 32 channels measured on 50 participants are used to estimate these parameters. Interpretation of the results is given by comparing and contrasting the alert and fatigue states of the brain. The EEG time series at each electrode on the scalp display power-law scaling, as indicated by their spectral slopes in the low-frequency range. The diffusion of the EEG random field is non-Gaussian, reflecting the heterogeneity of the brain cortex. This non-Gaussianity is more pronounced for the alert state than the fatigue state. The response of the system to the initial condition is also more significant for the alert state than the fatigue state. These results demonstrate the usefulness of global SDDE modelling complementing the time series approach for EEG analysis. Copy rights belong to original authors. Visit the link for more info
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.05.136895v1?rss=1 Authors: Furmanek, M. P., Mangalam, M., Kelty-Stephen, D. G., Juras, G. Abstract: Healthy human postural sway exhibits strong intermittency, reflecting a richly interactive foundation of postural control. From a linear perspective, intermittent fluctuations might be interpreted as engagement and disengagement of complementary control processes at distinct timescales or from a nonlinear perspective, as cascade-like interactions across many timescales at once. The diverse control processes entailed by cascade-like multiplicative dynamics suggest specific non-Gaussian distributional properties at different timescales. Multiscale probability density function (PDF) analysis showed that when standing quietly, stable sand-filled loading of the upper extremities would elicit non-Gaussianity profiles showing a negative-quadratic crossover between short and long timescales. Unstable water-filled loading of the upper extremities would elicit simpler monotonic decreases in non-Gaussianity, that is, a positive-quadratic cancellation of the negative-quadratic crossover. Multiple known indices of postural sway governed the appearance or disappearance of the crossover. Finally, both loading conditions elicited Levy-like distributions over progressively larger timescales. These results provide evidence that postural instability recruits shorter-timescale processes into the non-Gaussian cascade processes, that indices of postural sway moderate this recruitment, and that postural control under unstable loading shows stronger statistical hallmarks of cascade structure. Copy rights belong to original authors. Visit the link for more info
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.19.104760v1?rss=1 Authors: Mangalam, M., Kelty-Stephen, D. G. Abstract: Quiet standing exhibits strongly intermittent variability reflecting a richly interactive foundation. This intermittency can be understood in various ways. First, variability can be intermittent through the engagement and disengagement of complementary control processes at distinct scales. A second and perhaps deeper way to understand this intermittency is through the possibility that closed-loop control depends on cascade-like interactions across many timescales at once. These diverse control processes suggest specific non-Gaussian distributional properties at different timescales. Multiscale probability density function (PDF) analysis shows that quiet standing on a stable surface exhibits a crossover between low non-Gaussianity (consistent with exponential distributions) at shorter timescales reflecting open-loop control, increasing towards higher non-Gaussianity and subsequently decreasing (consistent with cascade-driven Levy-like distributions) at longer scales. Destabilizing quiet standing elicits non-Gaussianity that begins high and decreases across timescales (consistent with cascade-driven Levy-like distributions), reflecting closed-loop postural corrections at more of the shorter timescales. Finally, indices of postural sway govern the appearance or disappearance of crossovers, suggesting that the tempering of non-Gaussianity across log-timescales in the stable-surface condition is mostly a function of endogenous postural control. These results provide new evidence that cascading interactions across longer timescales supporting postural corrections can even recruit shorter-timescale processes under novel task constraints.
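The multiscale non-Gaussianity profiles described in these two abstracts can be mimicked with a toy index. In the sketch below, excess kurtosis of standardized increments stands in for the λ index used in multiscale PDF analysis (a deliberate simplification, not the authors' estimator), and a volatility-modulated random walk stands in for cascade-like multiplicative dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

def nongaussianity_profile(x, scales):
    """Excess kurtosis of standardized increments at each timescale:
    a simple stand-in for the lambda index of multiscale PDF analysis
    (about 0 for Gaussian increments, > 0 for heavy-tailed, Levy-like ones)."""
    profile = []
    for s in scales:
        dx = x[s:] - x[:-s]
        z = (dx - dx.mean()) / dx.std()
        profile.append(np.mean(z ** 4) - 3.0)
    return np.array(profile)

# Toy comparison: a plain Gaussian random walk versus a cascade-like walk
# whose step sizes are modulated by a slowly varying log-normal volatility.
n = 20000
gauss_walk = np.cumsum(rng.standard_normal(n))
volatility = np.exp(0.8 * np.cumsum(rng.standard_normal(n)) / np.sqrt(n))
cascade_walk = np.cumsum(volatility * rng.standard_normal(n))

scales = [1, 4, 16, 64, 256]
g = nongaussianity_profile(gauss_walk, scales)
c = nongaussianity_profile(cascade_walk, scales)
print("Gaussian walk:", np.round(g, 2))   # near zero at every scale
print("cascade walk :", np.round(c, 2))   # clearly positive at short scales
```

The multiplicative modulation makes short-timescale increments a mixture of Gaussians with varying variance, which is exactly what drives the non-Gaussianity indices in these analyses.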
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 05/05
This thesis focuses on the development and application of Bayesian inference techniques for early-Universe signals and on the advancement of mathematical tools for information retrieval. A crucial quantity required to gain information from the early Universe is the primordial scalar potential and its statistics. We reconstruct this scalar potential from cosmic microwave background data. Technically, the inference is done by splitting the large inverse problem of such a reconstruction into many smaller ones, each of them solved by an optimal linear filter. Once the primordial scalar potential and its correlation structure have been obtained, the underlying physics can be directly inferred from them. Small deviations of the scalar potential from Gaussianity, for instance, can be used to study parameters of inflationary models. A method to infer such parameters from non-Gaussianity is presented. To avoid expensive numerical techniques, the method is kept analytical as far as possible. This is achieved by introducing an approximation of the desired posterior probability, including a Taylor expansion of a matrix determinant. The calculation of a determinant is also essential in many other Bayesian approaches, both apart from and within cosmology. In cases where a Taylor approximation fails, evaluating the determinant is usually challenging. The evaluation is particularly difficult when dealing with big data, where matrices are too large to be accessed directly, but need to be represented indirectly by a computer routine implementing the action of the matrix. To solve this problem, we develop a method to calculate the determinant of a matrix by using well-known sampling techniques and an integral representation of the log-determinant. The prerequisite for the presented methods, as well as for every data analysis of scientific experiments, is a proper calibration of the measurement device.
Therefore we advance the theory of self-calibration at the beginning of the thesis to infer signal and calibration simultaneously from data. This is achieved by successively absorbing more and more portions of calibration uncertainty into the signal inference equations. The result, the Calibration-Uncertainty Renormalized Estimator, follows from the solution of a coupled differential equation.
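The determinant-sampling idea described above can be sketched as follows: combine the integral representation log det A = ∫₀¹ tr[(A − I)(I + t(A − I))⁻¹] dt with Hutchinson's stochastic trace estimator E[zᵀMz] = tr M (Rademacher z), so that only matrix-vector products and linear solves are needed. This is a minimal illustration of the general technique, not the thesis' implementation; the dense solve stands in for the matrix-free iterative solver one would use on big data:

```python
import numpy as np

rng = np.random.default_rng(2)

def logdet_sampled(A, n_probe=100, n_quad=20):
    """Estimate log det A (A symmetric positive definite) from
    log det A = integral_0^1 tr[(A - I)(I + t(A - I))^-1] dt,
    with the trace replaced by a Hutchinson average over Rademacher probes."""
    n = A.shape[0]
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    t = 0.5 * (nodes + 1.0)                       # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    Z = rng.choice([-1.0, 1.0], size=(n, n_probe))  # Rademacher probe vectors
    R = A @ Z - Z                                   # (A - I) Z
    total = 0.0
    for ti, wi in zip(t, w):
        # In a big-data setting this solve would be a matrix-free CG using
        # only the action of A; a dense solve keeps the demo short.
        Y = np.linalg.solve(np.eye(n) + ti * (A - np.eye(n)), R)
        total += wi * np.sum(Z * Y) / n_probe       # avg of z^T (...)^-1 (A-I) z
    return total

# Well-conditioned SPD test matrix.
n = 60
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)

est = logdet_sampled(A)
exact = np.linalg.slogdet(A)[1]
print(f"sampled: {est:.2f}   exact: {exact:.2f}")
```

The quadrature error decays exponentially for well-conditioned matrices; the sampling error shrinks like 1/√n_probe.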
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05
Multi-wavelength large-scale surveys are currently exploring the Universe and establishing the cosmological scenario with extraordinary accuracy. There has recently been significant theoretical and observational progress in efforts to use clusters of galaxies as probes of cosmology and to test the physics of structure formation. Galaxy clusters are the most massive gravitationally bound systems in the Universe, which trace the evolution of the large-scale structure. Their number density and distribution are highly sensitive to the underlying cosmological model. The constraints on cosmological parameters which result from observations of galaxy clusters are complementary to those from other probes. This dissertation examines the crucial role of clusters of galaxies in confirming the standard model of cosmology, with a Universe dominated by dark matter and dark energy. In particular, we examine the clustering of optically selected galaxy clusters as a useful addition to the common set of cosmological observables, because it extends galaxy clustering analysis to the high-peak, high-bias regime. The clustering of galaxy clusters complements the traditional cluster number counts and observable-mass relation analyses, significantly improving their constraining power by breaking existing calibration degeneracies. We begin by introducing the fundamental principles at the base of the concordance cosmological model and the main observational evidence that supports it. We then describe the main properties of galaxy clusters and their contribution as cosmological probes. We then present the theoretical framework of galaxy cluster number counts and power spectrum. We review the formulation and calibration of the halo mass function, whose high-mass tail is populated by galaxy clusters.
In addition to this, we give a prescription for modelling the cluster redshift-space power spectrum, including an effective modelling of the weakly non-linear contribution and allowing for an arbitrary photometric redshift smoothing. Some definitions concerning the study of non-Gaussian initial conditions are presented, because clusters can provide constraints on these models. We dedicate a Chapter to the data we use in our analysis, namely the Sloan Digital Sky Survey maxBCG optical catalogue. We describe the data sets we derived from this large sample of clusters and the corresponding error estimates. Specifically, we employ the cluster abundances in richness bins, the weak-lensing mass estimates and the redshift-space power spectrum, with their respective covariance matrices. We also relate the cluster masses to the observable quantity (richness) by means of an empirical scaling relation and quantify its scatter. In the next Chapter we present the results of our Markov Chain Monte Carlo analysis and the cosmological constraints obtained. With the maxBCG sample, we simultaneously constrain cosmological parameters and cross-calibrate the mass-observable relation. We find that the inclusion of the power spectrum typically brings a 50% improvement in the errors on the fluctuation amplitude and the matter density. Constraints on other parameters are also improved, though less significantly. In addition to the cluster data, we also use the CMB power spectra from WMAP7, which further tighten the confidence regions. We also apply this method to constrain models of the early universe through the amount of primordial non-Gaussianity of the initial density perturbations (local type), obtaining results consistent with the latest constraints. In the last Chapter, we introduce some preliminary calculations on the cross-correlation between clusters and galaxies, which can provide additional constraining power on cosmological models.
In conclusion, we summarise our main achievements and suggest possible future developments of research.
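The number-counts part of such an analysis typically enters the likelihood as independent Poisson bins. A minimal, hypothetical sketch (the bins, counts, and model predictions below are made up, not the maxBCG numbers):

```python
import numpy as np

def lnlike_counts(observed, predicted):
    """Poisson log-likelihood for cluster number counts in richness bins,
    up to a model-independent constant: sum_i [N_i ln(mu_i) - mu_i]."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sum(observed * np.log(predicted) - predicted)

# Hypothetical counts in four richness bins and two candidate models.
obs = [5120, 720, 95, 11]
model_a = [5000.0, 750.0, 100.0, 10.0]   # close to the data
model_b = [4000.0, 900.0, 150.0, 30.0]   # poorer fit
print(lnlike_counts(obs, model_a) > lnlike_counts(obs, model_b))  # True
```

In a full analysis the predicted means μ_i come from integrating the halo mass function against the mass-observable relation over each richness bin, and this term is combined with the power-spectrum and weak-lensing likelihoods.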
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05
One of the key challenges in Cosmology today is to probe both statistical isotropy and Gaussianity of the primordial density perturbations, which are imprinted in the cosmic microwave background (CMB) radiation. While single-field slow-roll inflation predicts the CMB to fulfil these two characteristics, more complex models may give rise to anisotropy and/or non-Gaussianity. A detection or non-detection therefore allows one to discriminate between different models of inflation and significantly improves our understanding of the basic conditions of the very early Universe. In this work, a detailed CMB non-Gaussianity and isotropy analysis of the five- and seven-year observations of the WMAP satellite is presented. On the one hand, these investigations are performed by comparing the data set with simulations, which is the usual approach for this kind of analysis. On the other hand, a new model-independent approach is developed and applied in this work. Starting from the random phase hypothesis, so-called surrogate maps are created by shuffling the Fourier phases of the original maps for a chosen scale interval. Any disagreement between the data and these surrogates points towards phase correlations in the original map, and therefore, if systematics and foregrounds can be ruled out, towards a violation of single-field slow-roll inflation. The construction of surrogate maps only works for an orthonormal set of Fourier functions on the sphere, which is provided by the spherical harmonics exclusively on a complete sky. For this reason, the surrogate approach is for the first time combined with a transformation of the full-sky spherical harmonics to a cut-sky version. Both the plain surrogate approach and its combination with the cut-sky transformation are tested thoroughly to assess and then rule out the effects of systematics.
Thus, this work not only represents a detailed CMB analysis, but also provides a completely new method to test for scale-dependent higher-order correlations in complete or partial spherical data sets, which can be applied in different fields of research. In detail, the applications of the above methods involve the following analyses: First, a detailed study of several frequency bands of the WMAP five-year data release is accomplished by means of a scaling index analysis, whereby the data are compared to simulations. Special attention is paid to anomalous local features, and to ways of overcoming the problem of boundary effects when excluding foreground-influenced parts of the sky. After this, the surrogate approach is for the first time applied to real CMB data sets. In doing so, several foreground-reduced full-sky maps from both the five- and seven-year WMAP observations are used. The analysis includes different scale intervals and a large number of checks for possible systematics. Then, another step forward is taken by applying the surrogate approach for the first time to incomplete data sets, again from the WMAP five- and seven-year releases. The Galactic Plane, which is responsible for the largest amount of foreground contribution, is removed by means of several cuts of different sizes. In addition, different techniques for the basis transformation are used. In all of these investigations, remarkable non-Gaussianities and deviations from statistical isotropy are identified. In fact, the surrogate approach shows by far the most significant detection of non-Gaussianity to date. The band-wise analysis shows consistent results for all frequency bands. Despite a thorough search, no candidate for foreground or systematic influences could be found.
The findings of these analyses must therefore, for now, be taken as cosmological in origin; on the one hand they point towards a strong violation of single-field slow-roll inflation, and on the other hand they call into question the concept of statistical isotropy in general. Future analyses of the more precise measurements of the forthcoming Planck satellite will yield more information about the origin of the detected anomalies.
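The surrogate construction can be illustrated in a flat-sky toy version: randomize the Fourier phases of a map while keeping the amplitudes, so the power spectrum is preserved but any phase correlations are destroyed. This is a 2D-FFT analogue for intuition only; the thesis works with spherical-harmonic phases on the (complete or cut) sky:

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_surrogate(field, rng):
    """Flat-sky toy surrogate: keep the Fourier amplitudes of a real 2D
    field but replace its phases with those of a white-noise field.
    Borrowing phases from a *real* noise field guarantees the Hermitian
    symmetry needed for the surrogate to come out real-valued."""
    F = np.fft.rfft2(field)
    W = np.fft.rfft2(rng.standard_normal(field.shape))
    return np.fft.irfft2(np.abs(F) * np.exp(1j * np.angle(W)), s=field.shape)

# A field with strong phase correlations: one localized hot spot.
n = 64
y, x = np.mgrid[0:n, 0:n]
field = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 20.0)
surr = phase_surrogate(field, rng)

# The power spectrum is preserved, but the localized spot is smeared out.
print(np.allclose(np.abs(np.fft.rfft2(surr)), np.abs(np.fft.rfft2(field))))  # True
print(surr.max() < field.max())                                             # True
```

Comparing a statistic between the data and an ensemble of such surrogates then tests for phase correlations in the chosen scale interval, exactly as described above.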
Guido D'Amico presents the consequences of imposing an approximate Galilean symmetry on the Effective Theory of Inflation, the theory of small perturbations around the inflationary background.
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 03/05
The tremendous impact of Cosmic Microwave Background (CMB) radiation experiments on our understanding of the history and evolution of the universe is based on a tight connection between the observed fluctuations and the physical processes taking place in the very early universe. According to the prevalent paradigm, the anisotropies were generated during the era of inflation. The simplest inflationary models predict almost perfectly Gaussian primordial perturbations, but competing theories can naturally be constructed, allowing for a wide range in primordial non-Gaussianity. For this reason, the test for non-Gaussianity becomes a fundamental means to probe the physical processes of inflation. The aim of the project is to develop a Bayesian formalism to infer the level of non-Gaussianity of local type. Bayesian statistics attaches great importance to a consistent formulation of the problem and properly calculates the error bounds of the measurements on the basis of the actual data. As a first step, we develop an exact algorithm to generate simulated temperature and polarization CMB maps containing arbitrary levels of local non-Gaussianity. We derive an optimization scheme that allows us to predict and actively control the simulation accuracy. Implementing this strategy, the code outperforms existing algorithms in computational efficiency by an order of magnitude. Then, we develop the formalism to extend the Bayesian approach to the calculation of the amplitude of non-Gaussianity. We implement an exact Hamiltonian Monte Carlo sampling algorithm to generate samples from the target probability distribution. These samples allow us to construct the full posterior distribution of the level of non-Gaussianity given the data. The applicability of the scheme is demonstrated by means of a simplified data model. Finally, we fully implement the necessary equations considering a realistic CMB experiment dealing with partial sky coverage and anisotropic noise.
A direct comparison between the traditional frequentist estimator and the exact Bayesian approach shows the advantage of the newly developed method. For a significant detection of non-Gaussianity, the former suffers from excess variance whereas the Bayesian scheme always provides optimal error bounds.
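Local-type non-Gaussianity has the standard quadratic form Φ = φ + f_NL(φ² − ⟨φ²⟩), with φ Gaussian. A toy flat-grid version (white Gaussian φ with unit variance and a deliberately exaggerated f_NL; the thesis' simulations instead produce full-sky CMB temperature and polarization maps with the correct correlation structure):

```python
import numpy as np

rng = np.random.default_rng(4)

def local_nongaussian_field(shape, fnl, rng):
    """Toy local-type non-Gaussian potential on a flat grid:
    Phi = phi + fnl * (phi^2 - <phi^2>), phi a white Gaussian field."""
    phi = rng.standard_normal(shape)
    return phi + fnl * (phi ** 2 - np.mean(phi ** 2))

def skewness(f):
    z = (f - f.mean()) / f.std()
    return np.mean(z ** 3)

gauss = local_nongaussian_field((256, 256), fnl=0.0, rng=rng)
nongauss = local_nongaussian_field((256, 256), fnl=0.3, rng=rng)

# The quadratic term induces a positive skewness that grows with fnl.
print(f"skewness: fnl=0 -> {skewness(gauss):+.3f}, fnl=0.3 -> {skewness(nongauss):+.3f}")
```

For CMB-realistic amplitudes (f_NL of order tens against a potential of order 10⁻⁵) the induced skewness is tiny, which is why optimal estimators and careful error propagation, as developed in the thesis, are needed.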
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 03/05
Measurements of the Cosmic Microwave Background (CMB) emission with increasingly high resolution and sensitivity are now becoming available, and even higher quality data are expected from the ongoing Planck mission and future experiments. Dealing with the Galactic foreground contamination, however, is still problematic, due to our poor knowledge of the physics of the Interstellar Medium at microwave frequencies. This contamination biases the CMB observations and needs to be removed before using the data for cosmological studies. In this thesis the problem of component separation for the CMB is considered and a highly focused study of a specific implementation of Independent Component Analysis (ICA), called FastICA, is presented. This algorithm has been used to perform a foreground analysis of the WMAP three- and five-year data and subsequently to investigate the properties of the main sources of diffuse Galactic emission (e.g. synchrotron, dust and free-free emission). The foreground contamination in the WMAP data is quantified in terms of coupling coefficients between the data and various templates, which are observations of the sky emission at frequencies where only one physical component is likely to dominate. The coefficients have been used to extract the frequency spectra of the Galactic components, with particular attention paid to the free-free frequency spectrum. Our results favour the existence of a spectral ‘bump’, interpreted as a signature of emission by spinning dust grains in the Warm Ionised Medium, which spatially correlates with the Hα radiation used to trace the free-free emission. The same coupling coefficients have been used to clean the WMAP observations, which have then been further analysed using FastICA. This iterative step in the analysis provides a powerful tool for cleaning the CMB data of any residuals not traced by the adopted templates. In practice, it is a unique way to potentially reveal new physical emission components.
In this way, we detected a residual spatially concentrated emission component around the Galactic center, consistent with the so-called WMAP Haze. In order to take into account the actual spatial properties of the Galactic foreground emission, we proposed an analysis of the WMAP data on patches of the sky, both using FastICA and the Internal Linear Combination (ILC). Since the temperature power spectrum is reasonably insensitive to the fine details of the foreground corrections except on the largest scales (low l), the two methods are compared by means of non-Gaussianity tests, used to trace the presence of possible residuals. While the performance of FastICA improves only for particular cases with a small number of regions, the ILC CMB estimate generally improves significantly as the number of patches is increased. Moreover, FastICA plays a key role in establishing a partitioning that realistically traces the features of the sky, a requirement we have shown to be paramount for a successful regional analysis.
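The coupling coefficients between data and foreground templates amount to a linear fit of the observed map against the template maps. A hypothetical sketch with entirely made-up templates and amplitudes (FastICA itself, and the real template maps, are considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(5)

# Model an observed map as a linear mixture of foreground templates plus a
# CMB-and-noise term, then recover the coupling coefficients by least squares.
npix = 10000
templates = rng.standard_normal((3, npix)) ** 2   # toy synchrotron / dust / Halpha
true_coeffs = np.array([1.5, 0.8, 0.3])           # made-up coupling amplitudes
cmb_plus_noise = rng.standard_normal(npix)
data = true_coeffs @ templates + cmb_plus_noise

coeffs, *_ = np.linalg.lstsq(templates.T, data, rcond=None)
print(np.round(coeffs, 2))   # close to [1.5, 0.8, 0.3]
```

Repeating such a fit at each observing frequency yields the frequency spectrum of each Galactic component, which is how the free-free spectral 'bump' described above is extracted.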
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 03/05
The development of the first precise cosmological model, the LCDM model, is a major achievement of modern observational cosmology. Nevertheless, a number of important questions about the composition and evolutionary history of the Universe remain unanswered: apart from the nature of dark matter, the physical origin of dark energy is one of the great open questions of theoretical physics. Likewise, the statistical properties of the initial density fluctuations in the early Universe require careful scrutiny. Even the smallest deviations from the Gaussian fluctuations of the standard model would, if detected, contain a wealth of information about the physics of the early Universe. In this work I use numerical methods to make new, high-precision predictions for cosmic structure formation in generalized dark energy cosmologies. In addition, I consider models with non-Gaussian initial conditions. In the first part I study non-linear structure formation in so-called 'Early Dark Energy' (EDE) models and compare it with the LCDM standard model. Interestingly, my results show that the Sheth and Tormen (1999) formalism, which is commonly used to estimate the number density of dark matter halos, remains applicable in EDE cosmologies, in contradiction with analytical calculations. In this context I also investigate the relation between the mass of halos and their dark matter velocity dispersion. Here I find good agreement with the normalization for LCDM cosmologies as described in Evrard et al. (2008). However, the earlier growth of density structures in EDE models leads to large differences in the halo mass function at high redshifts.
This could be measured directly in observations by determining the number of groups as a function of the line-of-sight velocity dispersion of their member galaxies. In particular, this would circumvent the problem of ambiguous halo mass determination. Finally, I determine the relation between the halo concentration parameter and halo mass in the EDE cosmologies. In the second part of my thesis I use a set of high-resolution hydrodynamical simulations to study the global properties of the thermal and kinetic Sunyaev-Zeldovich (SZ) effects. We find that in the SZ maps of the EDE models the Compton-y parameter is systematically larger than in the LCDM model. As expected, I therefore also find that the power spectrum of the thermal and kinetic SZ fluctuations is larger in EDE cosmologies than in the standard model. However, for realistic EDE models this enhancement is not sufficient to bring the theoretical predictions into agreement with current measurements of the microwave background anisotropy at large multipoles. A count of the halos detectable through the SZ effect in the simulated maps shows only a slight increase in the most massive clusters for EDE cosmologies. Likewise, predictions for future counts of SZ-detected clusters by the South Pole Telescope (SPT; Ruhl, 2004) are strongly affected by uncertainties in the cosmology. Finally, I find that the normalization and slope of the relation between the thermal SZ effect and halo mass remain unchanged in many EDE cosmologies, which simplifies the interpretation of SZ observations of galaxy clusters. In further investigations I compute a series of high-resolution N-body simulations for physically motivated non-Gaussian cosmologies.
In extensive studies I investigate the halo mass distribution function and its evolution in non-Gaussian models. I also compare my numerical experiments with the analytical predictions of Matarrese et al. (2000) and LoVerde et al. (2008). I find very good agreement between simulations and analytical predictions, provided certain corrections for the dynamics of non-spherical collapse are taken into account. To this end, the predictions are modified such that, in the limit of very rare events, they correspond to a suitably modified limiting value of the critical density. Furthermore, I confirm recent results according to which primordial non-Gaussian density fluctuations cause a strong scale-dependent bias on large scales, and I present a physically motivated mathematical expression that allows this bias to be measured and that provides a good approximation to the simulation results.
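The Sheth and Tormen (1999) formalism mentioned above predicts halo abundances through a multiplicity function of the peak height ν = δ_c/σ(M). A sketch with the commonly quoted parameter values (this is the standard textbook form, not the thesis' simulation pipeline), compared against the older Press-Schechter prediction:

```python
import numpy as np

def sheth_tormen_f(nu, a=0.707, p=0.3, A=0.3222):
    """Sheth & Tormen (1999) multiplicity function nu*f(nu), with
    nu = delta_c / sigma(M); a, p, A are the commonly quoted values."""
    anu2 = a * nu ** 2
    return A * np.sqrt(2.0 * anu2 / np.pi) * (1.0 + anu2 ** (-p)) * np.exp(-anu2 / 2.0)

def press_schechter_f(nu):
    """Press-Schechter multiplicity function nu*f(nu), for comparison."""
    return np.sqrt(2.0 / np.pi) * nu * np.exp(-nu ** 2 / 2.0)

# In the rare, high-mass tail (large nu) Sheth-Tormen predicts more halos
# than Press-Schechter, which matters for cluster counts at high redshift.
for nu in (0.5, 1.0, 2.0, 3.0):
    print(f"nu={nu}: ST={sheth_tormen_f(nu):.4f}  PS={press_schechter_f(nu):.4f}")
```

Non-Gaussian and early-dark-energy corrections of the kind studied in the thesis enter through modifications of this multiplicity function and of the effective critical density δ_c.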
We study the weak convergence (in the high-frequency limit) of the frequency components associated with Gaussian-subordinated, spherical and isotropic random fields. In particular, we provide conditions for asymptotic Gaussianity and we establish a new connection with random walks on the dual of SO(3), which mirrors analogous results previously established for fields defined on Abelian groups. Our work is motivated by applications to cosmological data analysis, and specifically by the probabilistic modelling and the statistical analysis of the Cosmic Microwave Background radiation, which is currently at the frontier of physical research. To obtain our main results, we prove several fine estimates involving convolutions of the so-called Clebsch-Gordan coefficients (which are elements of unitary matrices connecting reducible representations of SO(3)); this allows us to interpret most of our asymptotic conditions in terms of coupling of angular momenta in a quantum mechanical system. Some of the proofs are based on recently established criteria for the weak convergence of multiple Wiener-Itô integrals. This is a joint paper by Domenico Marinucci (Rome "Tor Vergata") and Giovanni Peccati (Paris VI). Speaker: Domenico Marinucci. Associated document: presentation slides: http://epi.univ-paris1.fr/servlet/com.univ.collaboratif.utils.LectureFichiergw?CODE_FICHIER=1207750104939 (pdf). Audio recording of the talk available in mp3 format. Duration: 57 min.
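The harmonic-analytic setting of this talk rests on the standard spectral representation of an isotropic random field on the sphere (standard result; the notation here may differ from the paper's):

```latex
% Spectral representation of a centered, mean-square continuous isotropic
% random field T on the sphere S^2:
T(x) = \sum_{\ell \ge 0} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(x),
\qquad
a_{\ell m} = \int_{S^2} T(x)\, \overline{Y_{\ell m}(x)}\, dx,
% with uncorrelated harmonic coefficients whose variances define the
% angular power spectrum C_\ell:
\mathbb{E}\!\left[ a_{\ell m}\, \overline{a_{\ell' m'}} \right]
  = C_\ell\, \delta_{\ell \ell'}\, \delta_{m m'} .
```

The "frequency components" whose high-frequency behaviour is studied are the fields at fixed ℓ, and the Clebsch-Gordan convolutions arise when expanding nonlinear (Gaussian-subordinated) transforms of T in this basis.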
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05
Recent measurements of the Cosmic Microwave Background (CMB) have allowed the most accurate determinations yet of the parameters of the standard CDM model, but the data also contain intriguing anomalies that are inconsistent with the assumptions of statistical isotropy and Gaussianity. This work investigates possible sources of such anomalies by studying the morphology of the CMB. An unexpected correlation is found between the CMB anisotropies and a temperature pattern generated in a Bianchi Type VIIh universe, i.e., an anisotropic universe allowing a universal rotation or vorticity. This model is found to be incompatible with other observations of the cosmological parameters, but correcting for such a component can serendipitously remove many of the anomalies from the WMAP sky. This result indicates that an alternative cosmological model producing such a morphology may be needed. A similar cross-correlation method applied to the microwave foregrounds studies the variation of the spectral behaviours of the Galactic emission processes across the sky. The results shed light on the unexpectedly low free-free emission amplitude as well as the nature of the anomalous dust-correlated emission that dominates at low frequencies. As a complementary method, phase statistics are suited to situations where no a priori knowledge of the spatial structure is available to inform the search for a non-Gaussian signal. Such statistics are applied to compact topological models as well as to foreground residuals, and a preliminary analysis shows that these may prove powerful tools in the study of non-Gaussianity and anisotropy.