Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.05.531223v1?rss=1 Authors: Morishima, T., Fakruddin, M., Masuda, T., Wang, Y., Schoonenberg, V. A. C., Butter, F., Arima, Y., Akaike, T., Tomizawa, K., Wei, F., Suda, T., Takizawa, H. Abstract: A lack of the mitochondrial tRNA taurine modifications mediated by mitochondrial tRNA translation optimization 1 (Mto1) was recently shown to induce proteostress in embryonic stem cells. Since erythroid precursors actively synthesize the hemoglobin protein, we hypothesized that Mto1 dysfunction may result in defective erythropoiesis. Hematopoietic-specific Mto1 conditional knockout (cKO) mice were embryonic lethal due to niche-independent defective terminal erythroid differentiation. Mechanistically, mitochondrial oxidative phosphorylation complex I was severely defective in the Mto1 cKO fetal liver, and this was followed by cytoplasmic iron accumulation. Overloaded cytoplasmic iron promoted heme biosynthesis and enhanced the expression of embryonic hemoglobin proteins, which induced an unfolded protein response via the IRE1-Xbp1 signaling pathway in Mto1 cKO erythroblasts. An iron chelator rescued erythroid terminal differentiation in the Mto1 cKO fetal liver in vitro. The new point of view provided by this novel non-energy-related molecular mechanism may lead to a breakthrough in mitochondrial research. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.28.121343v1?rss=1 Authors: Bussy, A., Plitman, E., Patel, R., Tullo, S., Salaciak, A., Bedford, S., Farzin, S., Beland, M.-L., Valiquette, V., Kazazian, C., Tardif, C., Devenyi, G., Chakravarty, M. Abstract: The hippocampus has been extensively studied in various neuropsychiatric disorders throughout the lifespan. However, inconsistent results have been reported with respect to which subfield volumes are most related to age. Here, we investigate whether these discrepancies may be explained by experimental design differences that exist between studies. Multiple datasets were used to collect 1690 magnetic resonance scans from healthy individuals aged 18-95 years. Standard T1-weighted (T1w; MPRAGE sequence, 1 mm³ voxels), high-resolution T2-weighted (T2w; SPACE sequence, 0.64 mm³ voxels) and slab T2-weighted (Slab; 2D turbo spin echo, 0.4 × 0.4 × 2 mm³ voxels) images were acquired. The MAGeT Brain algorithm was used for segmentation of the hippocampal grey matter (GM) subfields and peri-hippocampal white matter (WM) subregions. Linear mixed-effects models and the Akaike information criterion were used to examine linear, second-order, or third-order natural spline relationships between hippocampal volumes and age. We demonstrated that the stratum radiatum/lacunosum/moleculare and fornix subregions expressed the highest relative volumetric decrease, while the cornu ammonis 1 showed relative preservation of its volume with age. We also found that volumes extracted from slab images were often underestimated and demonstrated different age-related relationships compared to volumes extracted from T1w and T2w images. The current work suggests that although T1w-, T2w- and slab-derived subfield volumetric outputs are largely homologous, modality choice plays a meaningful role in the volumetric estimation of the hippocampal subfields. Copyright belongs to the original authors. Visit the link for more info.
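As a rough illustration of this kind of age-trend comparison, the Python sketch below fits a linear and a spline age trend to synthetic volume data and ranks them by AIC. It deliberately simplifies the study's approach: ordinary least squares stands in for the linear mixed-effects models, patsy B-splines stand in for natural splines, and all data and variable names are made up.

```python
# Simplified sketch: compare a linear vs. a spline age trend by AIC.
# Synthetic data only; the actual study used linear mixed-effects models
# and natural splines on real subfield volumes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
age = rng.uniform(18, 95, 800)
# Synthetic subfield volume: mild decline that accelerates in older age
volume = 600 - 0.5 * age - 0.02 * np.maximum(age - 60, 0) ** 2 + rng.normal(0, 20, 800)
df = pd.DataFrame({"age": age, "volume": volume})

linear = smf.ols("volume ~ age", df).fit()
spline = smf.ols("volume ~ bs(age, df=4)", df).fit()  # flexible age trend via patsy B-splines

print("AIC linear trend:", round(linear.aic, 1))
print("AIC spline trend:", round(spline.aic, 1))       # lower AIC -> better supported shape
```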
David and Grace talk about abstraction in statistics, computing, mathematics and a few other things besides. Birth-death process (https://en.wikipedia.org/wiki/Birth–death_process) Akaike information criterion (https://en.wikipedia.org/wiki/Akaike_information_criterion) Stratechery by Ben Thompson (https://stratechery.com) Subtraction.com (https://www.subtraction.com) The Law of Leaky Abstractions (https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/) Logic gate (https://en.wikipedia.org/wiki/Logic_gate) The 10,000 Domino Computer (https://youtu.be/OpLU__bhu2w) CS50's Introduction to Computer Science (https://www.edx.org/course/cs50s-introduction-to-computer-science) Floating-point arithmetic (https://en.wikipedia.org/wiki/Floating-point_arithmetic) Single Variable Calculus (https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/) Zone of proximal development (https://en.wikipedia.org/wiki/Zone_of_proximal_development) principle of specificity (https://www.oxfordreference.com/view/10.1093/oi/authority.20110810105645210) Progressive overload (https://en.wikipedia.org/wiki/Progressive_overload) Species distribution modelling (https://en.wikipedia.org/wiki/Species_distribution_modelling) Population genetics (https://en.wikipedia.org/wiki/Population_genetics) Coalescent theory (https://en.wikipedia.org/wiki/Coalescent_theory)
In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data. We give the precise semantic interpretation of these model selection criteria, state the explicit assumptions required for the AIC and GAIC to be valid, and provide explicit formulas for the AIC and GAIC so they can be used in practice. Briefly, the AIC and GAIC provide a way of estimating the average prediction error of your learning machine on test data without using test data or cross-validation methods. The GAIC is also called the Takeuchi Information Criterion (TIC).
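As a concrete illustration of the quantities involved, here is a minimal numerical sketch, not taken from the episode: a Gaussian model fit by maximum likelihood, with the AIC computed from the parameter count and the GAIC/TIC computed from the trace penalty tr(J⁻¹K), where J is the average Hessian of the negative log-likelihood and K the average outer product of per-observation score vectors. The Gaussian example and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)   # training data (illustrative)

# Maximum-likelihood estimates for a Gaussian "learning machine"
mu_hat = x.mean()
sigma2_hat = x.var()                           # MLE uses 1/n, not 1/(n-1)
n, k = x.size, 2                               # sample size, number of parameters

# Maximized log-likelihood of the fitted Gaussian
loglik = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

# Classical AIC: penalize by the parameter count
aic = -2 * loglik + 2 * k

# GAIC/TIC: penalize by trace(J^{-1} K), with J the average Hessian of the
# negative log-likelihood and K the average outer product of per-observation scores
z = x - mu_hat
scores = np.column_stack([
    z / sigma2_hat,                                     # d log f / d mu
    z**2 / (2 * sigma2_hat**2) - 1 / (2 * sigma2_hat),  # d log f / d sigma^2
])
K = scores.T @ scores / n
J = np.array([[1 / sigma2_hat, 0.0],
              [0.0, 1 / (2 * sigma2_hat**2)]])
tic = -2 * loglik + 2 * np.trace(np.linalg.solve(J, K))

print(f"AIC      = {aic:.2f}")
print(f"GAIC/TIC = {tic:.2f}  (close to the AIC when the model is well specified)")
```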
StatLearn 2012 - Workshop on "Challenging problems in Statistical Learning"
The idea of selecting a model by penalizing a log-likelihood type criterion goes back to the early seventies with the pioneering works of Mallows and Akaike. One can find many consistency results in the literature for such criteria. These results are asymptotic in the sense that one deals with a given number of models and the number of observations tends to infinity. A non-asymptotic theory for this type of criterion has been developed in recent years that allows the size as well as the number of models to depend on the sample size. For these methods to be practically relevant, it is desirable to obtain a precise expression for the penalty terms involved in the penalized criteria on which they are based. We will discuss some heuristics to design data-driven penalties, review some new results and discuss some open problems.
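Schematically, and in our own notation rather than necessarily that of the talk, such penalized criteria select a model from a collection by minimizing a penalized log-likelihood; AIC and BIC correspond to particular fixed penalty shapes, while the non-asymptotic theory asks how the penalty must behave when the model dimension and the number of models are allowed to grow with the sample size:

```latex
\hat{m} \;=\; \operatorname*{arg\,min}_{m \in \mathcal{M}}
\Bigl\{\, -\log \hat{L}_m + \operatorname{pen}(m) \,\Bigr\},
\qquad
\operatorname{pen}_{\mathrm{AIC}}(m) = D_m,
\qquad
\operatorname{pen}_{\mathrm{BIC}}(m) = \tfrac{1}{2}\, D_m \log n,
```

where \(\hat{L}_m\) is the maximized likelihood of model \(m\), \(D_m\) its number of parameters, and \(n\) the sample size.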
Background: Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods: Different regression approaches to predict childhood BMI were compared by goodness-of-fit measures and means of interpretation, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. Results: With respect to the generalized Akaike information criterion, GAMLSS showed a much better fit than common GLMs for estimating risk factor effects on transformed and untransformed BMI data. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in every model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in the GLM models, but they were in GAMLSS and partly in quantile regression models. Risk-factor-specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. Conclusion: GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.
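The following Python sketch, using synthetic data rather than the Bavarian school-entry cohort, illustrates the kind of comparison described above: two GLMs with different distributional assumptions ranked by AIC, plus a quantile regression targeting an upper quantile. GAMLSS itself has no standard Python implementation and is therefore not shown; all variable names are illustrative.

```python
# Hedged sketch: compare GLM distributional assumptions by AIC and add a
# quantile regression for an upper (overweight-like) quantile. Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
tv_hours = rng.integers(0, 5, n)          # illustrative risk factor
maternal_bmi = rng.normal(25, 4, n)       # illustrative risk factor
X = sm.add_constant(np.column_stack([tv_hours, maternal_bmi]))

# Right-skewed, roughly BMI-like outcome
bmi = 14 + 0.3 * tv_hours + 0.08 * maternal_bmi + rng.gamma(shape=2, scale=0.8, size=n)

# Two GLMs with different distributional assumptions
gauss = sm.GLM(bmi, X, family=sm.families.Gaussian()).fit()
gamma = sm.GLM(bmi, X, family=sm.families.Gamma()).fit()
print("AIC Gaussian GLM:", round(gauss.aic, 1))
print("AIC Gamma GLM:   ", round(gamma.aic, 1))   # lower AIC -> preferred fit

# Quantile regression targets a prespecified quantile (e.g. the 90th,
# close to an overweight cut-off) rather than the mean
q90 = sm.QuantReg(bmi, X).fit(q=0.9)
print(q90.params)                                  # effects on the 90th percentile
```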
Mathematik, Informatik und Statistik - Open Access LMU - Teil 01/03
The success of a newly founded company or small business depends on various initial risk factors or starting conditions, such as the market the business aims for, the experience and age of the founder, the preparation prior to the launch, the financial frame, the legal basis of the company and many others. These risk factors determine the chance of survival for the venture in the market. However, the effects of these risk factors often change with time. They may vanish or even increase with the time the company has been in the market. In this paper we analyse the survival of 1123 newly founded companies in the state of Bavaria, Germany. Our focus is primarily on the investigation of time-variation in the effects of the initial success factors. This time-variation is tackled within the framework of varying coefficient models, as introduced by Hastie and Tibshirani (1993, J. R. Statist. Soc. B), where time modifies the effects of risk factors. An important issue in our analysis is the separation of risk factors which have time-varying effects from those which have time-constant effects. We make use of the Akaike criterion to separate these two types of factors.
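As a crude, hedged sketch of this model-comparison idea, not the authors' actual varying coefficient estimator, the following Python code fits a discrete-time hazard (logistic) model to synthetic start-up data twice: once with a time-constant effect of a risk factor and once with a factor-by-period interaction, and lets the AIC decide which specification to keep. The data, variable names and effect sizes are invented for illustration.

```python
# Sketch: use AIC to decide whether a risk factor's effect on start-up failure
# should vary with time in the market (discrete-time hazard approximation).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_firms, n_periods = 1000, 8
rows = []
for i in range(n_firms):
    prepared = rng.integers(0, 2)              # illustrative binary risk factor
    for t in range(1, n_periods + 1):
        # true effect of preparation fades with time in the market
        logit = -2.0 - 0.8 * prepared * np.exp(-0.5 * (t - 1))
        fail = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append((prepared, t, int(fail)))
        if fail:
            break

data = np.array(rows, dtype=float)
prepared, period, failed = data[:, 0], data[:, 1], data[:, 2]

# Model A: time-constant effect of the risk factor
XA = sm.add_constant(np.column_stack([prepared, period]))
# Model B: time-varying effect via a factor-by-period interaction
XB = sm.add_constant(np.column_stack([prepared, period, prepared * period]))

fitA = sm.Logit(failed, XA).fit(disp=False)
fitB = sm.Logit(failed, XB).fit(disp=False)
print("AIC, constant effect:    ", round(fitA.aic, 1))
print("AIC, time-varying effect:", round(fitB.aic, 1))  # lower AIC keeps the varying effect
```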