Podcast appearances and mentions of Monte Carlo (MC)

  • 8 podcasts
  • 8 episodes
  • 8m average duration
  • Infrequent episodes
  • Latest: Oct 17, 2022

POPULARITY

[Popularity chart, 2017–2024]


Latest podcast episodes about Monte Carlo (MC)

PaperPlayer biorxiv neuroscience
Model-Constrained Self-supervised Deep Learning Approach to the Quantification of Magnetic Resonance Spectroscopy Data Based on Linear-combination Model Fitting

Oct 17, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.13.512064v1?rss=1

Authors: Shamaei, A., Starcukova, J., Starcuk, Z.

Abstract: Purpose: While the recommended analysis method for magnetic resonance spectroscopy data is linear combination model (LCM) fitting, the supervised deep learning (DL) approach to the quantification of MR spectroscopy (MRS) data has recently shown encouraging results; however, supervised learning requires ground-truth fitted spectra, which is not practical. This work investigates the feasibility and efficiency of an LCM-based self-supervised DL method for the analysis of MRS data. Method: We present a novel DL-based method for the quantification of relative metabolite concentrations, using quantum-mechanically simulated metabolite responses and neural networks. We trained, validated, and evaluated the proposed networks with simulated and publicly accessible in-vivo human brain MRS data and compared the performance with traditional methods. A novel adaptive macromolecule fitting algorithm is included. We investigated the performance of the proposed methods in a Monte Carlo (MC) study. Result: The validation using low-SNR simulated data demonstrated that the proposed methods could perform quantification comparably to other methods. The applicability of the proposed method to the quantification of in-vivo MRS data was demonstrated. Our proposed networks have the potential to reduce computation time significantly. Conclusion: The proposed model-constrained deep neural networks, trained in a self-supervised manner with complex data, can offer fast and efficient quantification of MRS data. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.
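For readers unfamiliar with this kind of Monte Carlo study, the minimal sketch below shows the basic recipe under toy assumptions: two made-up Gaussian "metabolite" basis functions stand in for the quantum-mechanically simulated responses, noisy spectra are generated as linear combinations with known amplitudes, and the bias and spread of the recovered amplitudes across trials quantify the estimator's error. All shapes, amplitudes, and noise levels here are illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 512)                       # arbitrary spectral axis

    # Two made-up Gaussian "metabolite" basis spectra (stand-ins for simulated responses).
    basis = np.stack([np.exp(-(x - 3.0)**2 / 0.1),
                      np.exp(-(x - 6.0)**2 / 0.2)], axis=1)
    true_amps = np.array([1.0, 0.5])                      # ground-truth concentrations
    clean = basis @ true_amps

    n_trials, sigma = 1000, 0.05                          # MC repetitions and noise level
    estimates = np.empty((n_trials, 2))
    for i in range(n_trials):
        noisy = clean + rng.normal(0.0, sigma, size=x.size)
        estimates[i], *_ = np.linalg.lstsq(basis, noisy, rcond=None)  # linear-combination fit

    print("bias:", estimates.mean(axis=0) - true_amps)
    print("std: ", estimates.std(axis=0))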

PaperPlayer biorxiv biophysics
Light transport modeling in highly complex tissues using implicit mesh-based Monte Carlo algorithm

Oct 12, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.11.335232v1?rss=1

Authors: Yan, S., Yuan, Y., Fang, Q.

Abstract: The mesh-based Monte Carlo (MMC) technique has grown tremendously since its initial publication nearly a decade ago. It is now recognized as one of the most accurate Monte Carlo (MC) methods, providing accurate reference solutions for the development of novel biophotonics techniques. In this work, we aim to further advance MMC to address a major challenge in biophotonics modeling, i.e. light transport within highly complex tissues, such as dense microvascular networks, porous media, and multi-scale tissue structures. Although the current MMC framework is capable of simulating light propagation in such media given its generality, the run time and memory usage grow rapidly with increasing media complexity and size. This greatly limits our capability to explore complex and multi-scale tissue structures. Here, we propose a highly efficient implicit mesh-based Monte Carlo (iMMC) method that incorporates both mesh- and shape-based tissue representations to create highly complex yet memory-efficient light transport simulations. We demonstrate that iMMC is capable of providing accurate solutions for dense vessel networks and porous tissues while reducing memory usage by more than a hundred- or even a thousand-fold. In a sample network of microvasculature, the reduced shape complexity results in a nearly 3x speedup. The proposed algorithm is now available in our open-source MMC software at http://mcx.space/#mmc. Copyright belongs to the original authors. Visit the link for more info.
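The sketch below is not iMMC, just the textbook photon-packet Monte Carlo that mesh-based codes build on: packets take exponentially distributed steps, deposit a fraction of their weight at each interaction, and scatter into a new direction until they escape a slab or fall below a weight cutoff. The coefficients and slab thickness are assumed values, and isotropic scattering replaces the Henyey-Greenstein phase function that real codes sample.

    import numpy as np

    rng = np.random.default_rng(1)
    mu_a, mu_s = 0.1, 10.0        # absorption/scattering coefficients in 1/mm (assumed)
    mu_t = mu_a + mu_s
    thickness = 1.0               # slab thickness in mm (assumed)
    n_photons = 2000

    reflected = transmitted = absorbed = 0.0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0          # depth and direction cosine; launched straight into the slab
        weight = 1.0
        while True:
            step = -np.log(rng.random()) / mu_t   # exponentially distributed free path
            z += step * uz
            if z < 0.0:                           # escaped back out the top
                reflected += weight
                break
            if z > thickness:                     # escaped out the bottom
                transmitted += weight
                break
            dep = weight * mu_a / mu_t            # partial absorption at each interaction
            absorbed += dep
            weight -= dep
            if weight < 1e-3:                     # simple cutoff (no Russian roulette)
                absorbed += weight
                break
            uz = rng.uniform(-1.0, 1.0)           # isotropic scattering: cosine uniform in [-1, 1]

    print("R, T, A:", reflected / n_photons, transmitted / n_photons, absorbed / n_photons)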

Medizinische Fakultät - Digitale Hochschulschriften der LMU - Teil 18/19
Development of an electronic monitor for the determination of individual radon and thoron exposure

Jun 11, 2015


The carcinogenic effect of the radioisotope Rn-222 of the noble gas radon and its progeny, as well as its residential distribution, are well studied. In contrast, knowledge about the effects and average dwelling concentration levels of its radioisotope Rn-220 (thoron) is still limited. Generally, this isotope has been assumed to be a negligible contributor to the effective annual dose. However, several international studies have recently pointed out that the dose due to thoron exceeds that from Rn-222 under certain conditions. Additionally, radon monitors may show considerable sensitivity to thoron, which has also generally not been accounted for. Therefore a reliable, inexpensive exposimeter that can distinguish between decays of radon and thoron is required to conduct further studies.

The scope of this thesis was to develop an electronic radon/thoron exposimeter featuring small size, low weight, and minimal power consumption. The design is based on the diffusion chamber principle and employs state-of-the-art alpha particle spectroscopy to measure activity concentrations. The device was optimized via inlet layout and filter selection for high thoron diffusion. Calibration measurements showed a similar sensitivity of the monitor to radon and thoron, with calibration factors of cfRn-222 = 16.2±0.9 Bq·m⁻³/cph and cfRn-220 = 14.4±0.8 Bq·m⁻³/cph, respectively. Thus, the radon sensitivity of the device was enhanced by a factor of two compared to a previous prototype. The evaluation method developed in this work, in accordance with ISO 11665 standards, was validated by intercomparison measurements. The detection limits for radon and thoron were determined to be C#Rn-222 = 44.0 Bq/m³ and C#Rn-220 = 40.0 Bq/m³, respectively, for a low-radon environment, a one-hour measurement interval, and a background count rate of zero. In contrast, in mixed radon/thoron concentrations, where the Po-212 peak must be used to determine the thoron concentration, a calibration factor of cfRn-220 = 100±10 Bq·m⁻³/cph was measured, yielding a detection limit of C#Rn-220 = 280 Bq/m³.

Further, Monte Carlo (MC) simulations were performed by means of various codes, including Geant4, to study the effect of varying the parameters that influence the calibration factors. The results showed reasonable agreement between simulated and acquired spectra, with differences below 8%, thus validating the employed simulation model. The simulations indicated a significant impact of environmental parameters, such as temperature and pressure, on the measured spectra and accordingly on the calibration factor. Therefore the calibration factor was quantified as a function of temperature, relative humidity, and pressure, as well as chamber volume. For devices with an increased detection volume, a considerable influence on the calibration factor of up to 32% was observed for air density changes corresponding to altitudes from 0-5,000 m and temperatures from -25 to 35 °C. In contrast, for devices with the standard housing the calibration factor changed by only up to 4%. When the detection volume was increased by at least a factor of four compared to the standard housing, a maximum increase in sensitivity of about 20% was found, at the expense of device portability. Conversely, when the height of the housing was reduced by 10 mm, which yields 40% less volume, a decrease in sensitivity of 30% for radon and 41% for thoron was observed.

Finally, the devices were used and tested under various realistic conditions, such as mines, radon spas, and dwellings with mixed Rn-222 and Rn-220 environments. Measurements in a salt mine with the device developed within the framework of this thesis revealed maximum radon concentrations of up to 1.0 kBq/m³. In the Bad Gastein Heilstollen, Rn-222 concentrations of up to 24.3 kBq/m³ were found, in agreement with an AlphaGuard reference device. First measurements in radon/thoron environments of about 200 Bq/m³ each, in a clay model house at the Helmholtz Center Munich, showed reasonable agreement with reference devices, thus validating the introduced evaluation method. First measurements in a private Bavarian clay house revealed a low thoron concentration of about CRn-220 = 13.0±3.0 Bq/m³, compared to a high radon concentration of CRn-222 = 200±70 Bq/m³.
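A minimal sketch of the arithmetic behind those numbers, assuming that concentration is simply the calibration factor times the count rate and that the zero-background detection limit follows Currie's classic formula (L_D = 2.71 + 4.65·√B counts). The Currie assumption is mine, not stated in the abstract, but it reproduces the quoted detection limits closely:

    # Toy bookkeeping for an alpha-spectroscopy radon/thoron monitor.
    # Assumption: concentration = calibration factor (Bq·m^-3 per count/h) x count rate.
    def concentration(count_rate_cph: float, cf: float) -> float:
        """Activity concentration in Bq/m^3 from a count rate in counts per hour."""
        return cf * count_rate_cph

    def detection_limit(cf: float, t_hours: float = 1.0, bkg_counts: float = 0.0) -> float:
        """Currie's detection limit L_D = 2.71 + 4.65*sqrt(B) counts, scaled to Bq/m^3."""
        l_d_counts = 2.71 + 4.65 * bkg_counts ** 0.5
        return cf * l_d_counts / t_hours

    print(detection_limit(16.2))   # ~43.9 Bq/m^3, close to the quoted 44.0 for Rn-222
    print(detection_limit(14.4))   # ~39.0 Bq/m^3, close to the quoted 40.0 for Rn-220
    print(detection_limit(100.0))  # ~271 Bq/m^3, close to the quoted 280 (Po-212 peak)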

Linear Digressions
Monte Carlo For Physicists

Mar 12, 2015 • 8:13


This is another physics-centered podcast episode, about an ML-backed particle identification tool that we use to figure out what kind of particle caused a particular blob in the detector. But in this case, as in many cases, it looks hard at the outset to use ML because we don't have labeled training data. Monte Carlo to the rescue! Monte Carlo (MC) is fake data that we generate for ourselves, usually following certain sets of rules (often a Markov chain; in physics we generate MC according to the laws of physics as we understand them). Since you generated the event, you "know" what the correct label is. Of course, it's a lot of work to validate your MC, but the payoff is that you can then use machine learning where you never could before.
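A minimal sketch of that workflow, with a made-up two-feature "detector" in place of real physics: the simulator knows each event's label because it generated the event, a classifier is trained on the simulated events, and the trained model is then applied to unlabeled "real" data. Everything here (features, distributions, particle types) is illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Hypothetical "simulation": two particle types leave blobs with different
    # (energy, width) signatures. Labels are free because we generated the events.
    def simulate(n, kind):
        mean = (1.0, 0.3) if kind == 0 else (1.6, 0.5)
        return rng.normal(mean, 0.2, size=(n, 2))

    X_mc = np.vstack([simulate(5000, 0), simulate(5000, 1)])
    y_mc = np.repeat([0, 1], 5000)

    clf = LogisticRegression().fit(X_mc, y_mc)   # train on labeled MC events

    # "Real" detector events are unlabeled; classify them with the MC-trained model.
    X_data = simulate(200, 1)                    # stand-in for real, unlabeled events
    print("predicted particle types:", clf.predict(X_data)[:10])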

Medizin - Open Access LMU - Teil 22/22
Stereotactic radiotherapy of intrapulmonary lesions: comparison of different dose calculation algorithms for Oncentra MasterPlan®.

Feb 22, 2015


Background: High-accuracy dose calculation algorithms, such as Monte Carlo (MC) and collapsed cone (CC), determine dose in inhomogeneous tissue more accurately than pencil beam (PB) algorithms. However, prescription protocols based on clinical experience with PB are often used for treatment plans calculated with CC. This may lead to treatment plans with changes in field size (FS) and changes in dose to organs at risk (OAR), especially for small tumor volumes in lung tissue treated with SABR.

Methods: We re-evaluated 17 3D-conformal treatment plans for small intrapulmonary lesions with a prescription of 60 Gy in fractions of 7.5 Gy to the 80% isodose. All treatment plans were initially calculated in Oncentra MasterPlan® using a PB algorithm and recalculated with CC (CCre-calc). Furthermore, a CC-based plan with coverage similar to the PB plan (CCcov) and a CC plan with relaxed coverage criteria (CCclin) were created. The plans were analyzed in terms of Dmean, Dmin, Dmax, and coverage for GTV, PTV, and ITV. Changes in mean lung dose (MLD), V10Gy, and V20Gy were evaluated for the lungs. The re-planned CC plans were compared to the original PB plans regarding changes in total monitor units (MU) and average FS.

Results: When PB plans were recalculated with CC, the average V60Gy of GTV, ITV, and PTV decreased by 13.2%, 19.9%, and 41.4%, respectively. Average Dmean decreased by 9% (GTV), 11.6% (ITV), and 14.2% (PTV). Dmin decreased by 18.5% (GTV), 21.3% (ITV), and 17.5% (PTV). Dmax declined by 7.5%. PTV coverage correlated with PTV volume (p < 0.001). MLD, V10Gy, and V20Gy were significantly reduced in the CC plans. Both CCcov and CCclin had significantly increased MU and FS compared to PB.

Conclusions: Recalculation of PB plans for small lung lesions with CC showed a strong decline in dose and coverage for GTV, ITV, and PTV, and reduced dose in the lung. Thus, switching from a PB algorithm to CC while aiming for similar target coverage can require more MU and larger radiotherapy fields, causing greater OAR exposure.
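For readers outside radiotherapy, the metrics reported above are simple reductions of per-voxel dose. The sketch below computes them for a toy dose distribution; the dose values, volume, and prescription level are illustrative only, not taken from the study.

    import numpy as np

    def dvh_metrics(dose_gy, prescription_gy=60.0):
        """Dose-volume metrics over a structure's per-voxel doses (1D array, Gy)."""
        return {
            "Dmean": dose_gy.mean(),
            "Dmin": dose_gy.min(),
            "Dmax": dose_gy.max(),
            # V60Gy: fraction of the structure receiving at least the prescription dose
            "V60Gy": float((dose_gy >= prescription_gy).mean()),
        }

    rng = np.random.default_rng(7)
    ptv_dose = rng.normal(62.0, 4.0, size=20000)   # toy voxel doses for a PTV
    print(dvh_metrics(ptv_dose))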

Medizinische Fakultät - Digitale Hochschulschriften der LMU - Teil 16/19
Feasibility of a study on the radiation-related risk of cardiovascular disease based on the Cooperative Health Research in the Region of Augsburg (KORA)

Nov 25, 2013


Radiation-induced cardiovascular disease is a risk demonstrated by many studies of patients exposed to high doses from thoracic radiotherapy. Since cardiovascular effects of low doses of ionising radiation were first suggested among the A-bomb survivors, much effort has been devoted to obtaining more information and significant results. Currently there is still no clear evidence of a threshold dose or a linear relation for doses below 0.5 Gy (absorbed dose). The aim of this work was to develop a design for a feasibility study, based on the German cohort study KORA ('Kooperative Gesundheitsforschung in der Region Augsburg'), of low-dose radiation from diagnostic imaging and possibly induced myocardial infarction. To this end, a completely new dosimetric system had to be set up to obtain the organ dose to the heart from different radiological examinations. This was achieved using conversion coefficients, calculated by Monte Carlo (MC) simulations with the mathematical or voxel phantoms of the Helmholtz Zentrum München (HMGU, formerly 'Gesellschaft für Strahlenforschung', GSF), and extensive benchmarking of the individual parameters depending on the examinations and patients. The next step was to create a sensible questionnaire for a retrospective radiation anamnesis, including data for confounding analysis. This questionnaire, combined with the retrospective dosimetry, was successfully tested in a pilot study. The setup was a case-control study based on patients suffering from myocardial infarction who were included in the Augsburg KORA-Herzinfarktregister, and an age- and gender-stratified control group with no diagnosis of myocardial infarction, recruited from the population-based KORA cohort.
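A minimal sketch of the retrospective dosimetry bookkeeping described above: the heart dose is accumulated over a patient's examination history using MC-derived conversion coefficients per examination type. The exam names and coefficient values below are made-up placeholders, not the thesis's results.

    # Hypothetical heart-dose conversion results per examination type (mGy per exam).
    HEART_DOSE_PER_EXAM_MGY = {
        "chest_xray_pa": 0.01,
        "chest_ct": 10.0,
        "coronary_angiography": 20.0,
    }

    def heart_dose(history):
        """Sum heart organ dose (mGy) over (exam_type, count) records."""
        return sum(HEART_DOSE_PER_EXAM_MGY[exam] * n for exam, n in history)

    print(heart_dose([("chest_xray_pa", 5), ("chest_ct", 2)]))  # -> 20.05 mGy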

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05

The interactions between biomolecules and their environment can be studied by experiments and simulations. Results from experiments and simulations are often interpretations based on the raw data. For an accurate comparison of the two approaches, the interpretations of the raw data from experiments and simulations have to be in compliance. The design of such simulations and the interpretation of raw data are demonstrated in this thesis for two examples: fluorescence resonance energy transfer (FRET) experiments, and surface adsorption of biomolecules on inorganic surfaces such as gold.

FRET experiments probe molecular distances via the distance-dependent energy transfer efficiency from an excited donor dye to its acceptor counterpart. In single-molecule settings, not only average distances but also distance distributions or even fluctuations can be probed, providing a powerful tool to study flexibilities and structural changes in biomolecules. However, the measured energy transfer efficiency depends not only on the distance between the two dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages have to be employed, which severely limit the accuracy of the distance distributions extracted from FRET experiments alone. In this work, I combined efficiency distributions from FRET experiments with dye orientation statistics from molecular dynamics (MD) simulations to calculate improved estimates of the distance distributions. From the time-dependent mutual dye orientations, the FRET efficiency was calculated, and the statistics of individual photo-absorption, FRET, and photo-emission events were determined from subsequent Monte Carlo (MC) simulations. All recorded emission events were then collected into bursts, from which efficiencies were calculated in close resemblance to the actual FRET experiment. The feasibility of this approach was tested by direct comparison to experimental data. As my test system, I chose a poly-proline chain with Alexa 488 and Alexa 594 dyes attached. Quantitative agreement of the calculated efficiency distributions from simulations with the experimental ones was obtained. In addition, the presence of cis-isomers and specific dye conformations was identified as the source of the experimentally observed heterogeneity. This agreement of in silico FRET with experiments allows the dye orientation dynamics from simulations to be employed in the distance reconstruction. The dye orientation dynamics was used in dye orientation models at multiple levels of approximation, with fewer assumptions applied to the model at each level. Each model was then used to reconstruct distance distributions from experimental efficiency distributions. Comparison of the reconstructed distance distributions with those from simulations revealed a systematically improved accuracy of the reconstruction in conjunction with the reduction of model assumptions. This result demonstrates that dye orientations from MD simulations, combined with MC photon generation, can indeed be used to improve the accuracy of distance distribution reconstruction from experimental FRET efficiencies.

A second example of simulations and interpretation in compliance with experiments is the study of protein adsorption on gold surfaces. Interactions between biomolecules and inorganic surfaces, e.g. during the biomineralization of bone, are fundamental for multicellular organisms. Moreover, understanding these interactions is the basis for biotechnological applications such as biochips or nano-sensing. In the framework of the PROSURF project, a multi-scale approach for the simulation of biomolecular adsorption was implemented. First, parameters for MD simulations were derived from ab initio calculations. These parameters were then applied to simulate the adsorption of single amino acids and to calculate their adsorption free energy profiles. For the screening of adsorbed protein conformations, rigid-body Brownian dynamics (BD) docking on surfaces was benchmarked against the free energy profiles from the MD simulations. Comparison of the protein adsorption rate from surface plasmon resonance experiments and BD docking yielded good agreement and therefore justifies the multi-scale approach. Additionally, MD simulations of protein adsorption on gold surfaces revealed an unexpected importance of positively charged residues on the surface for the initial adsorption steps. The multi-scale approach presented here allows the study of biomolecular interactions with inorganic surfaces consistently at multiple levels of theory: atomistic details of the adsorption process can be studied by MD simulations, whereas BD allows the extensive screening of protein libraries or adsorption geometries. In summary, compliance of the simulation and experimental setups allows benchmarking of the simulation accuracy by comparison to experiments. In contrast to employing experiments alone, the combination of experiments and simulations enhances the accuracy of results interpreted from experimental raw data.
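A minimal sketch of the MC photon-generation and burst-collection step described in the FRET part, under toy assumptions: the distance and orientation-factor samples below are random stand-ins for an MD trajectory, the Förster radius is a typical Alexa 488/594 value I assume rather than one quoted in the abstract, and the κ² distribution is a crude placeholder.

    import numpy as np

    rng = np.random.default_rng(3)
    R0 = 5.4                      # Foerster radius in nm; typical Alexa 488/594 value (assumption)
    n_events = 200_000            # number of simulated excitation events

    # Toy stand-ins for an MD trajectory: inter-dye distance (nm) and orientation
    # factor kappa^2 per excitation event (real values would come from the MD run).
    r = rng.normal(4.5, 0.4, n_events)
    kappa2 = np.clip(rng.exponential(2.0 / 3.0, n_events), 0.0, 4.0)

    # Per-event transfer probability; kappa^2 rescales R0^6 relative to its 2/3 average.
    E = 1.0 / (1.0 + r**6 / (R0**6 * kappa2 / (2.0 / 3.0)))

    # MC photon generation: each excitation emits via the acceptor with probability E.
    acceptor = rng.random(n_events) < E

    # Collect photons into fixed-size bursts and compute per-burst efficiencies,
    # mimicking the burst analysis of the single-molecule experiment.
    burst_size = 100
    n_bursts = n_events // burst_size
    burst_eff = acceptor[: n_bursts * burst_size].reshape(n_bursts, burst_size).mean(axis=1)
    print("mean burst efficiency:", burst_eff.mean())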

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05

Spatially controlled polymerization of actin is at the origin of cell motility and is responsible for the formation of cellular protrusions like lamellipodia. The pathogens Listeria monocytogenes and Shigella flexneri move inside infected cells by riding on an actin tail. The actin tail is formed from highly crosslinked polymerizing actin filaments, which undergo cycles of attachment to and detachment from the surface of the bacteria. In this thesis, we formulated a simple theoretical model of actin-based motility. The physical mechanism of our model is based on the load-dependent detachment rate, the load-dependent polymerization velocity, the restoring force of attached filaments, the pushing force of detached filaments, and finally on the cross-linkage and/or entanglement of the filament network. We showed that attachment and detachment of filaments to and from the obstacle, as well as polymerization and cross-linking of the filaments, lead to spontaneous oscillations in obstacle velocity. The velocity spike amplitudes and periods given by our model are in good agreement with those observed experimentally in Listeria. Elasticity and curvature of the obstacle are not included in this model; future modelling will yield insight into their role in actin-based motility. As an important prerequisite for this model, we used analytical calculations as well as extensive Monte Carlo (MC) simulations to investigate the pushing force of detached filaments. The analysis starts with calculations of the entropic force exerted by a grafted semiflexible polymer on a rigid wall. The pushing force, which is purely entropic in origin, depends on the polymer's contour length, persistence length, and orientation, and finally on the distance of the grafting point from the rigid wall. We checked the validity range of our analytical results by performing extensive Monte Carlo simulations for stiff, semiflexible, and flexible filaments. In this analysis, the obstacle is always assumed to be a rigid wall. In real experimental situations, the obstacle (such as a membrane) is not rigid and undergoes thermal fluctuations. Further analytical calculations and MC simulations are necessary to include the elasticity of the obstacle.
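A toy version of that entropic-force calculation, reduced to the rigid-rod limit (persistence length much larger than contour length) with a freely pivoting graft; the thesis treats the full semiflexible case. The force follows from f = kT d(ln Z)/dd, where Z(d) is the fraction of rod orientations not blocked by a hard wall at height d above the grafting point, estimated here by Monte Carlo. Using the same random samples for both wall positions (common random numbers) keeps the finite-difference estimate from being swamped by sampling noise.

    import numpy as np

    rng = np.random.default_rng(0)
    kT, L = 1.0, 1.0                 # thermal energy and rod length (reduced units)
    d, delta = 0.5, 0.01             # wall height above the graft, finite-difference step
    n = 2_000_000

    # Rod direction uniform on the sphere: cos(theta) is uniform in [-1, 1].
    cos_theta = rng.uniform(-1.0, 1.0, n)
    tip = L * cos_theta              # tip height above the grafting point

    # Z(d): fraction of orientations whose tip stays below the hard wall.
    z1 = np.mean(tip <= d)
    z2 = np.mean(tip <= d + delta)

    force = kT * (np.log(z2) - np.log(z1)) / delta   # f = kT d(ln Z)/dd
    print("MC entropic force:", force)
    print("exact rigid-rod result kT/(L+d):", kT / (L + d))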