Podcasts about MCMC

  • 86 podcasts
  • 221 episodes
  • 38m average duration
  • 1 new episode per month
  • Latest episode: May 2, 2025

Popularity (chart covering 2017–2024)



Latest podcast episodes about MCMC


Modellansatz
Bayesian Learning

Modellansatz

May 2, 2025 · 35:02


In this episode Gudrun speaks with Nadja Klein and Moussa Kassem Sbeyti, who work at the Scientific Computing Center (SCC) at KIT in Karlsruhe. Since August 2024, Nadja has been a professor at KIT, leading the research group Methods for Big Data (MBD) there. She is an Emmy Noether Research Group Leader and a member of AcademiaNet and Die Junge Akademie, among others. In 2025, Nadja was awarded the Committee of Presidents of Statistical Societies (COPSS) Emerging Leader Award (ELA). The COPSS ELA recognizes early career statistical scientists who show evidence of and potential for leadership and who will help shape and strengthen the field. She finished her doctoral studies in Mathematics at the Universität Göttingen before conducting a postdoc at the University of Melbourne as a Feodor Lynen Fellow of the Alexander von Humboldt Foundation. Afterwards she was a Professor of Statistics and Data Science at the Humboldt-Universität zu Berlin before joining KIT. Moussa joined Nadja's lab as an associated member in 2023 and later as a postdoctoral researcher in 2024. He pursued a PhD at the TU Berlin while working as an AI Research Scientist at the Continental AI Lab in Berlin. His research primarily focuses on deep learning, developing uncertainty-based automated labeling methods for 2D object detection in autonomous driving. Prior to this, Moussa earned his M.Sc. in Mechatronics Engineering from the TU Darmstadt in 2021.

The research of Nadja and Moussa is at the intersection of statistics and machine learning. In Nadja's MBD Lab the research spans theoretical analysis, method development and real-world applications. One of their key focuses is Bayesian methods, which allow one to incorporate prior knowledge, quantify uncertainties, and bring insights to the “black boxes” of machine learning. By fusing the precision and reliability of Bayesian statistics with the adaptability of machine and deep learning, these methods aim to leverage the best of both worlds. KIT offers a strong research environment, making it an ideal place to continue their work. They bring new expertise that can be leveraged in various applications, and Helmholtz in turn offers a great platform for exploring new application areas. For example, Moussa decided to join the group at KIT as part of the Helmholtz Pilot Program Core-Informatics at KIT (KiKIT), which is an initiative focused on advancing fundamental research in informatics within the Helmholtz Association. Vision models typically depend on large volumes of labeled data, but collecting and labeling this data is both expensive and prone to errors. During his PhD, his research centered on data-efficient learning using uncertainty-based automated labeling techniques: estimating and using the uncertainty of models to select the helpful data samples to train the models, so they can label the rest themselves. Now, within KiKIT, his work has evolved to include knowledge-based approaches in multi-task models, e.g. detection and depth estimation — with the broader goal of enabling the development and deployment of reliable, accurate vision systems in real-world applications. Statistics and data science are fascinating fields, offering a wide variety of methods and applications that constantly lead to new insights. Within this domain, Bayesian methods are especially compelling, as they enable the quantification of uncertainty and the incorporation of prior knowledge.
These capabilities contribute to making machine learning models more data-efficient, interpretable, and robust — essential qualities in safety-critical domains such as autonomous driving and personalized medicine. Nadja is also enthusiastic about the interdisciplinarity of the subject, having repeatedly shifted her focus between mathematics, economics, statistics and computer science. The combination of theoretical fundamentals and practical applications makes statistics an agile and important field of research in data science.

From a deep learning perspective, the focus is on making models both more efficient and more reliable when dealing with large-scale data and complex dependencies. One way to do this is by reducing the need for extensive labeled data. They also work on developing self-aware models that can recognize when they're unsure and even reject their own predictions when necessary. Additionally, they explore model pruning techniques to improve computational efficiency, and specialize in Bayesian deep learning, allowing machine learning models to better handle uncertainty and complex dependencies. Beyond the methods themselves, they also contribute by publishing datasets that help push the development of next-generation, state-of-the-art models. The learning methods are applied across different domains such as object detection, depth estimation, semantic segmentation, and trajectory prediction — especially in the context of autonomous driving and agricultural applications. As deep learning technologies continue to evolve, they're also expanding into new application areas such as medical imaging.

Unlike traditional deep learning, Bayesian deep learning provides uncertainty estimates alongside predictions, allowing for more principled decision-making and reducing catastrophic failures in safety-critical applications. It has had a growing impact in several real-world domains where uncertainty really matters. Bayesian learning incorporates prior knowledge and updates beliefs as new data comes in, rather than relying purely on data-driven optimization. In healthcare, for example, Bayesian models help quantify uncertainty in medical diagnoses, which supports more risk-aware treatment decisions and can ultimately lead to better patient outcomes. In autonomous vehicles, Bayesian models play a key role in improving safety. By recognizing when the system is uncertain, they help capture edge cases more effectively, reduce false positives and negatives in object detection, and navigate complex, dynamic environments — like bad weather or unexpected road conditions — more reliably. In finance, Bayesian deep learning enhances both risk assessment and fraud detection by allowing the system to assess how confident it is in its predictions. That added layer of information supports more informed decision-making and helps reduce costly errors. Across all these areas, the key advantage is the ability to move beyond just accuracy and incorporate trust and reliability into AI systems. Bayesian methods are traditionally more expensive, but modern approximations (e.g., variational inference or last-layer inference) make them feasible. Computational costs depend on the problem — sometimes Bayesian models require fewer data points to achieve better performance. The trade-off is between interpretability and computational efficiency, but hardware improvements are helping bridge this gap.
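As a concrete illustration of the last-layer inference idea mentioned above, the following minimal sketch places a conjugate Gaussian posterior on the final linear layer of a fixed feature model. It is illustrative only — not code from the episode or from the MBD group — and the feature map, prior precision, and noise level are invented for the example.

```python
# Minimal sketch of last-layer Bayesian inference: only the final linear layer
# gets a (closed-form, conjugate Gaussian) posterior, while the "backbone" is a
# fixed feature map, keeping the cost close to ordinary linear regression.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) + noise
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

# "Frozen backbone": random Fourier features standing in for learned activations.
W = rng.normal(size=50)
b = rng.uniform(0, 2 * np.pi, size=50)
def phi(x):
    return np.cos(np.outer(x, W) + b)          # (n, 50) feature matrix

Phi = phi(x)
alpha, beta = 1.0, 100.0                        # assumed prior precision, noise precision

# Conjugate Gaussian posterior over the last-layer weights:
#   Sigma = (alpha*I + beta*Phi^T Phi)^{-1},  mu = beta * Sigma Phi^T y
Sigma = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
mu = beta * Sigma @ Phi.T @ y

# Predictive mean and variance at new inputs (epistemic + observation noise).
x_new = np.linspace(-4, 4, 5)
Phi_new = phi(x_new)
pred_mean = Phi_new @ mu
pred_var = 1.0 / beta + np.sum((Phi_new @ Sigma) * Phi_new, axis=1)
print(np.c_[x_new, pred_mean, np.sqrt(pred_var)])
```

Because only the last layer is treated probabilistically, the cost is a single small matrix inversion rather than sampling over all network weights, which is why such approximations make Bayesian deep learning feasible at scale.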
Their research on uncertainty-based automated labeling is designed to make models not just safer and more reliable, but also more efficient. By reducing the need for extensive manual labeling, one improves the overall quality of the dataset while cutting down on human effort and potential labeling errors. Importantly, by selecting informative samples, the model learns from better data — which means it can reach higher performance with fewer training examples. This leads to faster training and better generalization without sacrificing accuracy. They also focus on developing lightweight uncertainty estimation techniques that are computationally efficient, so these benefits don't come with heavy resource demands. In short, this approach helps build models that are more robust, more adaptive to new data, and significantly more efficient to train and deploy — which is critical for real-world systems where both accuracy and speed matter.

Statisticians and deep learning researchers often use distinct methodologies, vocabulary and frameworks, making communication and collaboration challenging. Unfortunately, there is also a lack of interdisciplinary education: traditional academic programs rarely integrate both fields. Joint programs, workshops, and cross-disciplinary training can help bridge this gap. From Moussa's experience coming through an industrial PhD, he has seen how many industry settings tend to prioritize short-term gains — favoring quick wins in deep learning over deeper, more fundamental improvements. To overcome this, we need to build long-term research partnerships between academia and industry — ones that allow foundational work to evolve alongside practical applications. That kind of collaboration can drive more sustainable, impactful innovation in the long run, something the Methods for Big Data group pursues.

Looking ahead, one of the major directions for deep learning in the next five to ten years is the shift toward trustworthy AI. We're already seeing growing attention on making models more explainable, fair, and robust — especially as AI systems are being deployed in critical areas like healthcare, mobility, and finance. The group also expects to see more hybrid models — combining deep learning with Bayesian methods, physics-based models, or symbolic reasoning. These approaches can help bridge the gap between raw performance and interpretability, and often lead to more data-efficient solutions. Another big trend is the rise of uncertainty-aware AI. As AI moves into more high-risk, real-world applications, it becomes essential that systems understand and communicate their own confidence. This is where uncertainty modeling will play a key role — helping to make AI not just more powerful, but also safer and more reliable.

The lecture "Advanced Bayesian Data Analysis" covers fundamental concepts in Bayesian statistics, including parametric and non-parametric regression, computational techniques such as MCMC and variational inference, and Bayesian priors for handling high-dimensional data. Additionally, the lecturers offer a Research Seminar on Selected Topics in Statistical Learning and Data Science. The workgroup offers a variety of Master's thesis topics at the intersection of statistics and deep learning, focusing on Bayesian modeling, uncertainty quantification, and high-dimensional methods. Current topics include predictive information criteria for Bayesian models and uncertainty quantification in deep learning.
Topics span theoretical, methodological, computational and applied projects. Students interested in rigorous theoretical and applied research are encouraged to explore our available projects and contact us for further details. The general advice of Nadja and Moussa for everyone interested in entering the field is: "Develop a strong foundation in statistical and mathematical principles, rather than focusing solely on the latest trends. Gain expertise in both theory and practical applications, as real-world impact requires a balance of both. Be open to interdisciplinary collaboration. Some of the most exciting and meaningful innovations happen at the intersection of fields — whether that's statistics and deep learning, or AI and domain-specific areas like medicine or mobility. So don't be afraid to step outside your comfort zone, ask questions across disciplines, and look for ways to connect different perspectives. That's often where real breakthroughs happen. With every new challenge comes an opportunity to innovate, and that's what keeps this work exciting. We're always pushing for more robust, efficient, and trustworthy AI. And we're also growing — so if you're a motivated researcher interested in this space, we'd love to hear from you."

Literature and further information:
Webpage of the group
Wikipedia: Expected value of sample information
C. Howson and P. Urbach: Scientific Reasoning: The Bayesian Approach (3rd ed.), Open Court Publishing Company, ISBN 978-0-8126-9578-6, 2005.
A. Gelman et al.: Bayesian Data Analysis, Third Edition, Chapman and Hall/CRC, ISBN 978-1-4398-4095-5, 2013.
A. Yu: Introduction to Bayesian Decision Theory, cogsci.ucsd.edu, 2013.
D. Soni: Introduction to Bayesian Networks, 2015.
G. Nuti, L. A. J. Rugama and A.-I. Cross: Efficient Bayesian Decision Tree Algorithm, arXiv:1901.03214 stat.ML, 2019.
M. Carlan, T. Kneib and N. Klein: Bayesian conditional transformation models, Journal of the American Statistical Association, 119(546):1360-1373, 2024.
N. Klein: Distributional regression for data analysis, Annual Review of Statistics and Its Application, 11:321-346, 2024.
C. Hoffmann and N. Klein: Marginally calibrated response distributions for end-to-end learning in autonomous driving, Annals of Applied Statistics, 17(2):1740-1763, 2023.
M. Kassem Sbeyti, M. Karg, C. Wirth, N. Klein and S. Albayrak: Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection, Uncertainty in Artificial Intelligence, pp. 1890-1900, PMLR, 2024.
M. K. Sbeyti, N. Klein, A. Nowzad, F. Sivrikaya and S. Albayrak: Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection, to appear in Transactions on Machine Learning Research, 2025.

Podcasts:
Learning, Teaching, and Building in the Age of AI, Ep 42 of Vanishing Gradient, Jan 2025.
O. Beige, G. Thäter: Risikoentscheidungsprozesse, Gespräch im Modellansatz Podcast, Folge 193, Fakultät für Mathematik, Karlsruher Institut für Technologie (KIT), 2019.

~バスケにまつわる話~ #stayballin' radio
Looking back on SOMECITY THE FINAL, still not fully processed | EP165

~バスケにまつわる話~ #stayballin' radio

Mar 6, 2025 · 59:40


SOMECITY THE FINAL 2024-2025. Once again, three days that kept our hearts racing. A look back from the three of us who got to speak as the livestream MCs and the losers'-bracket MC. Questions for JIZO, BigMo, and VIBES, requests for advice, and suggestions for topics you'd like covered on the podcast can be sent in here!

朝日新聞 ニュースの現場から
Initiative "entangled" by war: the choice Ichikawa Fusae was forced to make. A journey through 100 years, part 6 #1803

朝日新聞 ニュースの現場から

Mar 1, 2025 · 113:24


In World War II, Ichikawa Fusae, who led the women's suffrage movement, had no choice but to take the path of "war cooperation." In Germany, many gay men fought as volunteer soldiers in World War I, yet the Nazis persecuted them. Drawing on the wartime experiences of women and gay people, we consider the lessons to take away today. *Recorded on February 7, 2025. [Related articles] The "three choices" Ichikawa Fusae faced: how her dedicated activism turned into war cooperation https://www.asahi.com/articles/ASSDV0FJNSDVUPQJ00CM.html?iref=omny Used "for someone's sake" and then cast aside: the face of the state that war exposes https://www.asahi.com/articles/ASSDV1075SDVUSPT00DM.html?iref=omny Relive the series "A journey through 100 years" in audio: the reporting team on the behind-the-scenes https://www.asahi.com/articles/AST1P4FWQT1PDIFI006M.html?iref=omny [Cast and staff] 高重治香 (editorial board), 花房吾早子 (Osaka city news desk), MC: 岸上渉, MC and audio editing: 南日慶子 [A request] The Asahi Shimbun Podcast is funded by subscription fees from our listeners. To keep the program going, please register as a member! http://t.asahi.com/womz [Asa-Poki info] Talk with reporters in the app http://t.asahi.com/won1 Community on Discord https://bit.ly/asapoki_discord Feedback form https://bit.ly/asapoki_otayori Asa-Poki TV https://bit.ly/asapoki_youtube_ Newsletter https://bit.ly/asapoki_newsletter Companies interested in advertising http://t.asahi.com/asapokiguide Episode search tool https://bit.ly/asapoki_cast Latest updates on X https://bit.ly/asapoki_twitter Program calendar https://bit.ly/asapki_calendar All episodes on the official site https://bit.ly/asapoki_lp See omnystudio.com/listener for privacy information.

Jacksonville Jaguars Recent
Press Pass | Shatley, Engram and Oluokun on MCMC, Texans Matchup & Finishing Strong

Jacksonville Jaguars Recent

Nov 27, 2024 · 14:30 · Transcription available


Jaguars players OL Tyler Shatley, TE Evan Engram and LB Foye Oluokun speak with the media on Wednesday ahead of Week 13 vs. Texans. OL Tyler Shatley: 00:00 - 02:55 TE Evan Engram: 02:56 - 09:10 LB Foye Oluokun: 09:11 - 14:29. See omnystudio.com/listener for privacy information.

Jacksonville Jaguars Recent
Press Pass | Trevor Lawrence Speaks on Wolfson's MCMC Support, His Health

Jacksonville Jaguars Recent

Nov 27, 2024 · 11:16 · Transcription available


Jaguars QB Trevor Lawrence meets with the media on Wednesday of Week 13 ahead of the matchup against the Houston Texans. See omnystudio.com/listener for privacy information.

Jaguars Reporters
Press Pass | Shatley, Engram and Oluokun on MCMC, Texans Matchup & Finishing Strong

Jaguars Reporters

Nov 27, 2024 · 14:30 · Transcription available


Jaguars players OL Tyler Shatley, TE Evan Engram and LB Foye Oluokun speak with the media on Wednesday ahead of Week 13 vs. Texans. OL Tyler Shatley: 00:00 - 02:55 TE Evan Engram: 02:56 - 09:10 LB Foye Oluokun: 09:11 - 14:29. See omnystudio.com/listener for privacy information.

Jaguars Reporters
Press Pass | Trevor Lawrence Speaks on Wolfson's MCMC Support, His Health

Jaguars Reporters

Nov 27, 2024 · 11:16 · Transcription available


Jaguars QB Trevor Lawrence meets with the media on Wednesday of Week 13 ahead of the matchup against the Houston Texans. See omnystudio.com/listener for privacy information.

BFM :: Morning Brief
5G, More Questions Than Answers

BFM :: Morning Brief

Nov 11, 2024 · 11:49


The award of the second 5G spectrum took many by surprise when U Mobile, the smallest of the telco players, won it. However, the award has raised further questions about the reasons behind the decision, the technical specifications, and the benefit to the country — questions the regulator, MCMC, has yet to answer. Professor Dr Ong Kian Ming, Pro Vice-Chancellor for External Engagement at Taylor's University, gives us his perspective and discusses the implications for Malaysia Inc. Image credit: shutterstock.com

578广播
#189 [Quick bites: the Yi Jianlian and Yang Li incidents — we honestly couldn't come up with a subtitle]

578广播

Oct 29, 2024 · 57:07


This episode was really hard to write a description for. It reflects only the personal views of 578's two hosts; if anything in it offends you, we apologize in advance. Hosts this episode: MC鲍, MC黄

Quantitude
S6E06 Pop Quiz: Acronyms

Quantitude

Oct 22, 2024 · 40:25


In this week's episode Greg tries to ambush Patrick by bringing back the popular feature Pop Quiz, this time with a statistical acronym theme, only to pretty much get crushed by Patrick in the end. Along the way they also discuss: Wow That's Fantastic, QR codes and octogenarians, Questionable Rectum, catharsis, grassy knolls, petards, Sean ringtones, pity minutes, apologies to Roy Levy, bad clock management, asteroid Roombas, pitching beach balls, statistical sock puppets, and the DIC talk. Stay in contact with Quantitude! Web page: quantitudepod.org TwitterX: @quantitudepod YouTube: @quantitudepod Merch: redbubble.com

IBUKI STATION
LAKE BIWA 100: MC duties pass from Okada-san to MC Ake-san! Looking back with Okada-san, then the second half with Ake-san

IBUKI STATION

Oct 12, 2024 · 26:00 · Transcription available


LAKE BIWA 100 has reached the night of day two. Okada-san, who has served as race MC and IBUKI STATION host up to this point, hands over here, and MC Ake-san takes the mic. We look back on the race so far with Okada-san, and MC Ake-san introduces herself. MC Ake-san then welcomes the runners coming in.

578广播
#187 [Night walk in Amsterdam, continued: a paradise for men?]

578广播

Oct 9, 2024 · 70:59


This episode contains a lot of adult ("迅"-related) topics. If there are children nearby, please put on your headphones. Join our correspondent on the ground in taking in Amsterdam by night. P.S. This episode has nothing to do with art — rest assured, dear listeners. Hosts this episode: MC鲍, MC黄

578广播
#187 [Night walk in Amsterdam: a city worth flying to for a Friday]

578广播

Sep 19, 2024 · 41:00


Amsterdam really is a city worth flying to on a Friday just to look at the art. It's that close and that beautiful — spend a weekend in Amsterdam and go see these two paintings and the stories behind them. Hosts this episode: MC鲍, MC黄

Keluar Sekejap
PMX's Geopolitical Strategy, Domain Name System (DNS) Redirection, and Mandatory Halal Certification

Keluar Sekejap

Sep 10, 2024 · 90:08


Episode 122 of the Keluar Sekejap podcast discusses, among other things, the domain name system (DNS) redirection that MCMC intends to enforce and the mandatory halal certification currently being studied by JAKIM. Keluar Sekejap also touches on the geopolitical strategy of PMX, Anwar Ibrahim, following his recent visit to Russia. References: https://www.economist.com/asia/2024/08/29/why-does-the-west-back-the-wrong-asian-leaders https://www.scmp.com/week-asia/economics/article/3276605/malaysias-china-plus-one-gold-rush-stumbles-over-us-tariff-threat?module=top_story&pgtype=section Keluar Sekejap would like to take this opportunity to thank Viu for sponsoring this episode. Viu, an over-the-top (OTT) video streaming service, brings you the best Asian content and entertainment, including dramas, films, and lifestyle programs. Enjoy the latest Viu Original series The Secret every Thursday, only on Viu. Redeem the code KSSECRET before 15 October on the web or mobile web browser at www.viu.com to get 30 days of free access. Follow Viu on Facebook: https://www.facebook.com/ViuMalaysia?mibextid=LQQJ4d Instagram: https://www.instagram.com/viumalaysia?igsh=NzZhdXExeG5qeGhh Youtube: https://youtube.com/@viumalaysia?si=qGVMM5SvNsy9aDPR X: https://x.com/viu_my?s=21&t=SZk0EUfxxKqTZWrHYJdi6w Tiktok: https://www.tiktok.com/@viu.malaysia?_t=8pb7W52LRM8&_r=1

BFM :: Morning Brief
U-Turn On DNS Redirection Policy After Public Outcry

BFM :: Morning Brief

Sep 9, 2024 · 10:33


Last week, news emerged that the Malaysian Communications and Multimedia Commission (MCMC) directed all Internet service providers to implement public DNS (Domain Name System) redirection for businesses, enterprises, and government agencies by September 30. Although Communications Minister Fahmi Fadzil defended the decision, he has asked MCMC to put the plan on hold pending stakeholder engagement. We discuss the rationale and implications of the policy with Alex Wong of SoyaCincau. Image credit: shutterstock.com

578广播
#185 [Tong Runzhong is you and me: a little bloodbath triggered by overtime]

578广播

Sep 6, 2024 · 52:35


Tong Runzhong is you and me — and Tong Runzhong is also not you and me. Does the phrase "class struggle" carry a bit too much weight? Isn't living a good life a bit more important? Then again, the truth does need defending. Leave a comment and tell us what you think. Hosts this episode: MC鲍, MC黄

578广播
#184 [We Are Back — we are bicycles]

578广播

Sep 4, 2024 · 30:17


How to put it — to borrow a joke from Wang Jianguo: We Are Back, we are bicycles. Search for 578广播 on Weibo or leave a comment to join the 578 Broadcast "Perfect Harmony Club" listener group. Hosts this episode: MC鲍, MC黄

SDGs シンプルに話そう
Do viewers get a different impression when they can see the hosts' expressions? 大野由衣's search for answers (part 2) #701

SDGs シンプルに話そう

Sep 2, 2024 · 36:45


Hosting Asa-Poki and the Kisha Salon in the same way can leave viewers with an unintended impression... 大野由衣, MC of the Kisha Salon, says this worried her when she first took on the role. What does she keep in mind now? Her habit of turning reflection into improvement also made an impression on 安仁周 and 江向彩也夏. *Recorded on August 21, 2024. This is the second of two parts; part 1 is "The day we recorded on a single smartphone: founding captain 大野由衣 on Asa-Poki's early days (part 1) #700." [Related content] Series "Manabiba Tensei Jingo" https://www.asahi.com/rensai/list.html?id=801 Kishida himself on his "strategic mistake": having the ladder pulled away in the party leadership race https://www.asahi.com/articles/ASNBF34QCNBCUEHF007.html?iref=omny "A state built on sand: espionage in Manchuria" - Premium A https://www.asahi.com/special/manchukuo-spying/?iref=omny [List of Kisha Salon events] Reporter event calendar https://www.asahi.com/eventcalendar/?iref=omny [Cast] 大野由衣, 安仁周, 江向彩也夏 (MC and editing) [Asa-Poki info] Feedback form → https://bit.ly/asapoki_otayori Program calendar → https://bit.ly/asapki_calendar Cast search tool → https://bit.ly/asapoki_cast Latest updates on X (formerly Twitter) → https://bit.ly/asapoki_twitter Community → https://bit.ly/asapoki_community Subtitled videos on YouTube → https://bit.ly/asapoki_youtube_ Extra stories in the newsletter → https://bit.ly/asapoki_newsletter All episodes on the official site → https://bit.ly/asapoki_lp Companies interested in advertising → http://t.asahi.com/asapokiguide Email → podcast@asahi.com See omnystudio.com/listener for privacy information.

BFM :: Morning Brief
MCMC Challenges AIC's Authority

BFM :: Morning Brief

Aug 30, 2024 · 10:09


Earlier this month, the Asia Internet Coalition (AIC), an industry group, sent a letter to the Prime Minister calling for a halt to new social media licensing. The Malaysian Communications and Multimedia Commission (MCMC) has since questioned the AIC's representation of tech companies, noting discrepancies and Grab Malaysia's claim of not being consulted. We turn to Derek John Fernandez, a Commission Member of the MCMC, for insights on these allegations. Image credit: Malaysia Communications and Multimedia Commission (MCMC)

Learning Bayesian Statistics
#110 Unpacking Bayesian Methods in AI with Sam Duffield

Learning Bayesian Statistics

Jul 10, 2024 · 72:27 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Use mini-batch methods to efficiently process large datasets within Bayesian frameworks in enterprise AI applications.
Apply approximate inference techniques, like stochastic gradient MCMC and Laplace approximation, to optimize Bayesian analysis in practical settings.
Explore thermodynamic computing to significantly speed up Bayesian computations, enhancing model efficiency and scalability.
Leverage the Posteriors python package for flexible and integrated Bayesian analysis in modern machine learning workflows.
Overcome challenges in Bayesian inference by simplifying complex concepts for non-expert audiences, ensuring the practical application of statistical models.
Address the intricacies of model assumptions and communicate effectively to non-technical stakeholders to enhance decision-making processes.
Chapters:
00:00 Introduction to Large-Scale Machine Learning
11:26 Scalable and Flexible Bayesian Inference with Posteriors
25:56 The Role of Temperature in Bayesian Models
32:30 Stochastic Gradient MCMC for Large Datasets
36:12 Introducing Posteriors: Bayesian Inference in Machine Learning
41:22 Uncertainty Quantification and Improved Predictions
52:05 Supporting New Algorithms and Arbitrary Likelihoods
59:16 Thermodynamic Computing
01:06:22 Decoupling Model Specification, Data Generation, and Inference
Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal
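The stochastic gradient MCMC mentioned in the takeaways replaces full-data gradients with rescaled mini-batch gradients plus injected Gaussian noise. Below is a generic stochastic gradient Langevin dynamics (SGLD) loop on a toy logistic regression — a hedged sketch, not code from the Posteriors package; the model, batch size, and step size are arbitrary illustrative choices.

```python
# Illustrative SGLD sketch: each step uses a mini-batch gradient of the log
# posterior (rescaled to be unbiased for the full dataset) plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)

# Toy Bayesian logistic regression: w ~ N(0, I), y ~ Bernoulli(sigmoid(X w))
n, d = 10_000, 3
w_true = np.array([1.5, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ w_true)))

def grad_log_post(w, Xb, yb):
    """Unbiased mini-batch estimate of the gradient of the log posterior."""
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    grad_lik = (n / len(yb)) * Xb.T @ (yb - p)   # rescaled likelihood gradient
    grad_prior = -w                              # gradient of log N(0, I) prior
    return grad_lik + grad_prior

w = np.zeros(d)
step, batch = 1e-4, 256
samples = []
for t in range(5_000):
    idx = rng.choice(n, size=batch, replace=False)
    noise = rng.normal(size=d) * np.sqrt(step)
    w = w + 0.5 * step * grad_log_post(w, X[idx], y[idx]) + noise
    if t > 1_000:                                # discard burn-in
        samples.append(w.copy())

print("posterior mean estimate:", np.mean(samples, axis=0), "true w:", w_true)
```

Because only a mini-batch is touched per iteration, the per-step cost is independent of the full dataset size, which is the point made in the episode about scaling Bayesian inference to enterprise-sized data.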

The Nonlinear Library
AF - Calculating Natural Latents via Resampling by johnswentworth

The Nonlinear Library

Jun 6, 2024 · 17:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating Natural Latents via Resampling, published by johnswentworth on June 6, 2024 on The AI Alignment Forum.

So you've read some of our previous natural latents posts, and you're sold on the value proposition. But there are some big foundational questions still unanswered. For example: how do we find these natural latents in some model, if we don't know in advance what they are? Examples in previous posts conceptually involved picking some latents out of the ether (like e.g. the bias of a die), and then verifying the naturality of that latent. This post is about one way to calculate natural latents, in principle, when we don't already know what they are. The basic idea is to resample all the variables once simultaneously, conditional on the others, like a step in an MCMC algorithm. The resampled variables turn out to be a competitively optimal approximate natural latent over the original variables (as we'll prove in the post). Toward the end, we'll use this technique to calculate an approximate natural latent for a normal distribution, and quantify the approximations. The proofs will use the graphical notation introduced in Some Rules For An Algebra Of Bayes Nets.

Some Conceptual Foundations

What Are We Even Computing?

First things first: what even is "a latent", and what does it even mean to "calculate a natural latent"? If we had a function to "calculate natural latents", what would its inputs be, and what would its outputs be? The way we use the term, any conditional distribution (λ, x) ↦ P[Λ=λ | X=x] defines a "latent" variable Λ over the "observables" X, given the distribution P[X]. Together P[X] and P[Λ|X] specify the full joint distribution P[Λ,X]. We typically think of the latent variable as some unobservable-to-the-agent "generator" of the observables, but a latent can be defined by any extension of the distribution over X to a distribution over Λ and X. Natural latents are latents which (approximately) satisfy some specific conditions, namely that the distribution P[X,Λ] (approximately) factors over these Bayes nets: Intuitively, the first says that Λ mediates between the X_i's, and the second says that any one X_i gives approximately the same information about Λ as all of X. (This is a stronger redundancy condition than we used in previous posts; we'll talk about that change below.) So, a function which "calculates natural latents" takes in some representation of a distribution x ↦ P[X] over "observables", and spits out some representation of a conditional distribution (λ, x) ↦ P[Λ=λ | X=x], such that the joint distribution (approximately) factors over the Bayes nets above. For example, in the last section of this post, we'll compute a natural latent for a normal distribution. The function to compute that latent:
Takes in a covariance matrix Σ_XX for X, representing a zero-mean normal distribution P[X].
Spits out a covariance matrix Σ_ΛΛ for Λ and a cross-covariance matrix Σ_ΛX, together representing the conditional distribution of a latent Λ which is jointly zero-mean normal with X.
… and the joint normal distribution over Λ, X represented by those covariance matrices approximately factors according to the Bayes nets above.

Why Do We Want That, Again?

Our previous posts talk more about the motivation, but briefly: two different agents could use two different models with totally different internal (i.e.
latent) variables to represent the same predictive distribution P[X]. Insofar as they both use natural latents, there's a correspondence between their internal variables - two latents over the same P[X] which both approximately satisfy the naturality conditions must contain approximately the same information about X. So, insofar as the two agents both use natural latents internally, we have reason to expect that the internal latents of one can be faithfully translated int...
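To make the resampling construction described above concrete, here is a hedged Monte Carlo sketch for the jointly Gaussian case: draw X from N(0, Σ_XX), independently resample each coordinate conditional on the others to get a candidate latent Λ, and estimate Σ_ΛΛ and Σ_ΛX empirically. The covariance matrix and sample size are invented for illustration; the post itself derives the closed-form version rather than this sampling approximation.

```python
# Monte Carlo sketch of "resample each variable conditional on the others"
# for a zero-mean multivariate normal, then estimate the latent's covariance blocks.
import numpy as np

rng = np.random.default_rng(2)

Sigma_XX = np.array([[1.0, 0.8, 0.8],
                     [0.8, 1.0, 0.8],
                     [0.8, 0.8, 1.0]])      # toy: strong shared component across X_i
d, n = Sigma_XX.shape[0], 100_000

X = rng.multivariate_normal(np.zeros(d), Sigma_XX, size=n)
Lam = np.empty_like(X)
for i in range(d):
    rest = [j for j in range(d) if j != i]
    # Gaussian conditional X_i | X_rest: regression coefficients and residual variance
    beta = np.linalg.solve(Sigma_XX[np.ix_(rest, rest)], Sigma_XX[rest, i])
    cond_var = Sigma_XX[i, i] - Sigma_XX[i, rest] @ beta
    cond_mean = X[:, rest] @ beta
    Lam[:, i] = cond_mean + rng.normal(size=n) * np.sqrt(cond_var)

Sigma_LL = np.cov(Lam, rowvar=False)                               # covariance of the candidate latent
Sigma_LX = (Lam - Lam.mean(0)).T @ (X - X.mean(0)) / (n - 1)       # cross-covariance with X
print(np.round(Sigma_LL, 3))
print(np.round(Sigma_LX, 3))
```

The resampled vector Λ keeps the information shared across the X_i (the common component) while washing out each coordinate's independent noise, which is the intuition behind treating it as an approximate natural latent.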

さやしのポッドキャスト -アイドルニュース解説
#324 Do underground idols need to be good talkers?

さやしのポッドキャスト -アイドルニュース解説

Mar 25, 2024 · 15:16


Apparently MC segments aren't really expected of underground idols, but personally I can sometimes read the members' relationships and personalities from their MC talk, so while I wouldn't ask for it every show, I'm in the camp that would like it once in a while.

Learning Bayesian Statistics
#98 Fusing Statistical Physics, Machine Learning & Adaptive MCMC, with Marylou Gabrié

Learning Bayesian Statistics

Jan 24, 2024 · 65:07 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
How does the world of statistical physics intertwine with machine learning, and what groundbreaking insights can this fusion bring to the field of artificial intelligence? In this episode, we delve into these intriguing questions with Marylou Gabrié, an assistant professor at CMAP, École Polytechnique in Paris. Having completed her PhD in physics at École Normale Supérieure, Marylou ventured to New York City for a joint postdoctoral appointment at New York University's Center for Data Science and the Flatiron Institute's Center for Computational Mathematics. As you'll hear, her research is not just about theoretical exploration; it also extends to the practical adaptation of machine learning techniques in scientific contexts, particularly where data is scarce. In this conversation, we'll traverse the landscape of Marylou's research, discussing her recent publications and her innovative approaches to machine learning challenges, the latest MCMC advances, and ML-assisted scientific computing. Beyond that, get ready to discover the person behind the science — her inspirations, aspirations, and maybe even what she does when not decoding the complexities of machine learning algorithms!
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie and Cory Kiser.
Visit https://www.patreon.com/learnbayesstats to unlock exclusive...
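As a small, generic illustration of the adaptive MCMC theme in the episode title, the sketch below runs a random-walk Metropolis sampler whose proposal scale is tuned toward a target acceptance rate during warm-up. The target density and tuning constants are invented, and this is a textbook-style baseline rather than any method from Marylou Gabrié's research.

```python
# Illustrative adaptive random-walk Metropolis: tune the proposal scale during
# warm-up toward a target acceptance rate, then freeze it and collect samples.
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    # Toy target: a banana-shaped 2-D density
    return -0.5 * (x[0] ** 2 / 4.0 + (x[1] + 0.5 * x[0] ** 2) ** 2)

x = np.zeros(2)
lp = log_target(x)
scale, target_accept = 1.0, 0.3
samples, accepted = [], 0

for it in range(1, 20_001):
    prop = x + scale * rng.normal(size=2)
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        x, lp = prop, lp_prop
        accepted += 1
    if it <= 10_000:
        # Adapt the proposal scale only during warm-up, so the chain run
        # afterwards is a plain (non-adaptive) Markov chain.
        rate = accepted / it
        scale *= np.exp(0.01 * (rate - target_accept))
    else:
        samples.append(x.copy())

print("final proposal scale:", round(scale, 3))
print("posterior mean estimate:", np.mean(samples, axis=0))
```

Restricting adaptation to the warm-up phase is the standard way to keep the sampler's stationary distribution valid while still getting a reasonably tuned proposal.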

DataTalks.Club
Bayesian Modeling and Probabilistic Programming - Rob Zinkov

DataTalks.Club

Jan 22, 2024 · 54:15


We talked about:
Rob's background
Going from software engineering to Bayesian modeling
Frequentist vs Bayesian modeling approach
About integrals
Probabilistic programming and samplers
MCMC and Hakaru
Language vs library
Encoding dependencies and relationships into a model
Stan, HMC (Hamiltonian Monte Carlo), and NUTS
Sources for learning about Bayesian modeling
Reaching out to Rob
Links:
Book 1: https://bayesiancomputationbook.com/welcome.html
Book/Course: https://xcelab.net/rm/statistical-rethinking/
Free ML Engineering course: http://mlzoomcamp.com
Join DataTalks.Club: https://datatalks.club/slack.html
Our events: https://datatalks.club/events.html
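The episode's discussion of probabilistic programming and samplers (Stan, HMC, NUTS) can be illustrated with a few lines in any such library. Here is a minimal sketch using PyMC, chosen purely for brevity and not one of the tools discussed in the episode; the data and priors are made up for the example.

```python
# Minimal probabilistic-programming sketch: declare priors and a likelihood,
# then let the library's NUTS (HMC) sampler draw from the posterior.
import numpy as np
import pymc as pm

rng = np.random.default_rng(3)
data = rng.normal(1.0, 2.0, size=100)          # synthetic observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)   # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)  # prior on the noise scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    idata = pm.sample(1000, tune=1000)         # NUTS by default

print(float(idata.posterior["mu"].mean()), float(idata.posterior["sigma"].mean()))
```

The appeal discussed in the episode is that the model is written declaratively, and the sampler (here NUTS) is swapped in by the library rather than hand-coded.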

Earthquake Science Center Seminars
Extremely Efficient Bayesian Inversions (or how to fit a model to data without the model or the data)

Earthquake Science Center Seminars

Jan 10, 2024 · 60:00


Sarah Minson, U.S. Geological Survey There are many underdetermined geophysical inverse problems. For example, when we try to infer earthquake fault slip, we find that there are many potential slip models that are consistent with our observations and our understanding of earthquake physics. One way to approach these problems is to use Bayesian analysis to infer the ensemble of all potential models that satisfy the observations and our prior knowledge. In Bayesian analysis, our prior knowledge is known as the prior probability density function or prior PDF, the fit to the data is the data likelihood function, and the target PDF that satisfies both the prior PDF and data likelihood function is the posterior PDF. Simulating a posterior PDF can be computationally expensive. Typical earthquake rupture models with 10 km spatial resolution can require using Markov Chain Monte Carlo (MCMC) to draw tens of billions of random realizations of fault slip. And now new technological advancements like LiDAR provide enormous numbers of laser point returns that image surface deformation at submeter scale, exponentially increasing computational cost. How can we make MCMC sampling efficient enough to simulate fault slip distributions at sub-meter scale using “Big Data”? We present a new MCMC approach called cross-fading in which we transition from an analytical posterior PDF (obtained from a conjugate prior to the data likelihood function) to the desired target posterior PDF by bringing in our physical constraints and removing the conjugate prior. This approach has two key efficiencies. First, the starting PDF is by construction “close” to the target posterior PDF, requiring very little MCMC to update the samples to match the target. Second, all PDFs are defined in model space, not data space. The forward model and data misfit are never evaluated during sampling, allowing models to be fit to Big Data with zero computational cost. It is even possible, without additional computational cost, to incorporate model prediction errors for Big Data, that is, to quantify the effects on data prediction of uncertainties in the model design. While we present earthquake models, this approach is flexible and can be applied to many geophysical problems.
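For readers unfamiliar with why posterior sampling of slip models is expensive, the sketch below runs a standard random-walk Metropolis sampler on a toy underdetermined linear inverse problem, evaluating the forward model at every step — exactly the per-sample cost that the cross-fading approach described above is designed to avoid. The forward operator, prior, and sizes are invented for the example; this is not the method from the talk.

```python
# Baseline Metropolis sampler for a toy underdetermined inverse problem d = G m + noise.
# Note that log_post (and hence the forward model G @ m) is evaluated every iteration.
import numpy as np

rng = np.random.default_rng(4)

n_data, n_model = 5, 20                        # fewer data than model parameters
G = rng.normal(size=(n_data, n_model))         # toy forward operator
m_true = rng.normal(size=n_model)
d_obs = G @ m_true + 0.1 * rng.normal(size=n_data)

def log_post(m):
    log_lik = -0.5 * np.sum((d_obs - G @ m) ** 2) / 0.1 ** 2   # Gaussian data misfit
    log_prior = -0.5 * np.sum(m ** 2)                          # N(0, I) prior regularizes
    return log_lik + log_prior

m = np.zeros(n_model)
lp = log_post(m)
samples, step = [], 0.05
for it in range(50_000):
    prop = m + step * rng.normal(size=n_model)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        m, lp = prop, lp_prop
    samples.append(m.copy())

post = np.array(samples[10_000:])              # drop burn-in
print("posterior-mean data misfit:", np.linalg.norm(d_obs - G @ post.mean(0)))
```

With billions of samples and LiDAR-scale datasets, that per-iteration forward-model and misfit evaluation is what dominates the cost, which motivates formulations whose sampling step never touches the data.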

The Nonlinear Library
AF - A case for AI alignment being difficult by Jessica Taylor

The Nonlinear Library

Dec 31, 2023 · 25:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A case for AI alignment being difficult, published by Jessica Taylor on December 31, 2023 on The AI Alignment Forum. This is an attempt to distill a model of AGI alignment that I have gained primarily from thinkers such as Eliezer Yudkowsky (and to a lesser extent Paul Christiano), but explained in my own terms rather than attempting to hew close to these thinkers. I think I would be pretty good at passing an ideological Turing test for Eliezer Yudowsky on AGI alignment difficulty (but not AGI timelines), though what I'm doing in this post is not that, it's more like finding a branch in the possibility space as I see it that is close enough to Yudowsky's model that it's possible to talk in the same language. Even if the problem turns out to not be very difficult, it's helpful to have a model of why one might think it is difficult, so as to identify weaknesses in the case so as to find AI designs that avoid the main difficulties. Progress on problems can be made by a combination of finding possible paths and finding impossibility results or difficulty arguments. Most of what I say should not be taken as a statement on AGI timelines. Some problems that make alignment difficult, such as ontology identification, also make creating capable AGI difficult to some extent. Defining human values If we don't have a preliminary definition of human values, it's incoherent to talk about alignment. If humans "don't really have values" then we don't really value alignment, so we can't be seriously trying to align AI with human values. There would have to be some conceptual refactor of what problem even makes sense to formulate and try to solve. To the extent that human values don't care about the long term, it's just not important (according to the values of current humans) how the long-term future goes, so the most relevant human values are the longer-term ones. There are idealized forms of expected utility maximization by brute-force search. There are approximations of utility maximization such as reinforcement learning through Bellman equations, MCMC search, and so on. I'm just going to make the assumption that the human brain can be well-modeled as containing one or more approximate expected utility maximizers. It's useful to focus on specific branches of possibility space to flesh out the model, even if the assumption is in some ways problematic. Psychology and neuroscience will, of course, eventually provide more details about what maximizer-like structures in the human brain are actually doing. Given this assumption, the human utility function(s) either do or don't significantly depend on human evolutionary history. I'm just going to assume they do for now. I realize there is some disagreement about how important evopsych is for describing human values versus the attractors of universal learning machines, but I'm going to go with the evopsych branch for now. Given that human brains are well-modeled as containing one or more utility functions, either they're well-modeled as containing one (perhaps which is some sort of monotonic function of multiple other score functions), or it's better to model them as multiple. See shard theory. The difference doesn't matter for now, I'll keep both possibilities open. Eliezer proposes "boredom" as an example of a human value (which could either be its own shard or a term in the utility function). 
I don't think this is a good example. It's fairly high level and is instrumental to other values. I think "pain avoidance" is a better example due to the possibility of pain asymbolia. Probably, there is some redundancy in the different values (as there is redundancy in trained neural networks, so they still perform well when some neurons are lesioned), which is part of why I don't agree with the fragility of value thesis as stated by Yudkowsky. Re...

Diabetes Connections with Stacey Simms Type 1 Diabetes
In the News... Does food-as-medicine work for T2D? Fake Ozempic warning, new Tzield research, My Cause My Cleats and more!

Diabetes Connections with Stacey Simms Type 1 Diabetes

Dec 29, 2023 · 7:29


It's In the News, a look at the top stories and headlines from the diabetes community happening now. Top stories this week: a new study looks at food-as-medicine for type 2, another FDA warning about fake Ozempic, new research says gut markers may help predict who Tzield will work best for, JDRF partners with NFL and more... Happy New Year - we'll see you in 2024! Find out more about Moms' Night Out  Please visit our Sponsors & Partners - they help make the show possible! Take Control with Afrezza  Omnipod - Simplify Life Learn about Dexcom  Edgepark Medical Supplies Check out VIVI Cap to protect your insulin from extreme temperatures Learn more about AG1 from Athletic Greens  Drive research that matters through the T1D Exchange The best way to keep up with Stacey and the show is by signing up for our weekly newsletter: Sign up for our newsletter here Here's where to find us: Facebook (Group) Facebook (Page) Instagram Twitter Check out Stacey's books! Learn more about everything at our home page www.diabetes-connections.com  Reach out with questions or comments: info@diabetes-connections.com Episode transcription: Hello and welcome to Diabetes Connections In the News! I'm Stacey Simms and every other Friday I bring you a short episode with the top diabetes stories and headlines happening now. XX In the news is brought to you by Edgepark simplify your diabetes journey with Edgepark XX Our top story this week… XX You often hear people say food is medicine.. but an intensive program trying to show that's the case did NOT improve glycemic control in adults with type 2 diabetes any better than usual care. This was a randomized clinical trial. After 6 months, both groups had a similar drop in HbA1c -- 1.5 percentage points among program enrollees and 1.3 percentage points with usual care, with no significant differences in other metabolic lab values between the groups either, the researchers wrote in JAMA the food-as-medicine participants even gained some weight compared with the usual care group over 6 months (adjusted mean difference 1.95 kg, P=0.04). "I was surprised by the findings because the program is so intensive," Doyle told MedPage Today. "The health system built brick-and-mortar clinics, staffed them with a dietitian, nurse, and community health worker, had weekly food pick-up for 10 meals per week for the entire family, and participants spend a year in the program."   Costing an estimated $2,000 annually per participant, the food-as-medicine program allowed participants to choose from a variety of vegetables, fruits, and entrees each week -- enough food for two meals a day, 5 days a week. They were also provided recipes and cooking instructions and met with dietitians to track goals. On the other hand, the control group was only provided usual care, a list of local food bank locations, and the option to join the program after 6 months.     The trial was conducted at two sites, one rural and one urban, in the mid-Atlantic region. It recruited 465 adults with type 2 diabetes who completed the study, all of whom started with an HbA1c of 8% or higher. All participants were also self-reported as food insecure. The average age was 54.6 years, 54.8% of participants were female, 81.3% were white, and most resided in the urban location. Of note, all participants also resided in the program's service area and were affiliated with the health system that ran it.   "One study should not be over-interpreted," said Doyle. 
"It is possible that such a program could work in other contexts, among patients less connected to a health system, or in other formats. The main alternative to providing healthy groceries and education is to provide pre-made 'medically tailored meals.'"   "I hope the study raises awareness of the potential for food-as-medicine programs to increase healthcare engagement and to push researchers and policymakers to generate more evidence on ways such programs can improve health." It's worth noting that there is very little study – much less clinical trial level study on this type of thing. The researchers say they hope it spurs more research to find methods that will have a large impact. https://news.mit.edu/2023/food-medicine-diabetes-study-1227 https://www.medpagetoday.com/primarycare/dietnutrition/107998   XX New information about moderate low carb diets for people with type 1. The study published in The Lancet Regional Health - Europe is the largest of its kind to date. Participants were for different periods randomly assigned in a crossover manner to eat a traditional diet with 50% of the energy from carbohydrates, or a moderate low-carbohydrate diet with 30% of the energy from carbohydrates.   The 50 participants all had type 1 diabetes with elevated mean glucose, long-term blood sugar, and injection therapy with insulin or an insulin pump. Half were women, half men. The average age was 48 years. Participants on a moderate low-carbohydrate diet were found to spend more time in what is known as the target range, the range within which people with type 1 diabetes should be in terms of glucose levels. The increase in time within the target range was an average of 68 minutes per day compared to the traditional diet, while the time with elevated values ​​was reduced by 85 minutes per day. The researchers saw no evidence of adverse effects. https://www.news-medical.net/news/20231220/Moderate-low-carb-diet-safe-and-effective-for-adults-with-type-1-diabetes.aspx   XX Researchers at Case Western Reserve University and University Hospitals have identified an enzyme that blocks insulin produced in the body—a discovery that could provide a new target to treat diabetes.   Their study, published Dec. 5 in the journal Cell, focuses on nitric oxide, a compound that dilates blood vessels, improves memory, fights infection and stimulates the release of hormones, among other functions. How nitric oxide performs these activities had long been a mystery.   The researchers discovered a novel “carrier” enzyme (called SNO-CoA-assisted nitrosylase, or SCAN) that attaches nitric oxide to proteins, including the receptor for insulin action. Given the discovery, next steps could be to develop medications against the enzyme, he said. https://thedaily.case.edu/new-cause-of-diabetes-discovered-offering-potential-target-for-new-classes-of-drugs-to-treat-the-disease/ XX The Food and Drug Administration on Thursday warned consumers not to use counterfeit versions of Novo Nordisk's diabetes drug Ozempic that have been found in the country's drug supply chain.   The FDA said it will continue to investigate counterfeit Ozempic 1 milligram injections and has seized thousands of units, but flagged that some may still be available for purchase. The agency said the needles from the seized injections are counterfeit and their sterility cannot be confirmed, which presents an additional risk of infection for patients.   
Other confirmed counterfeit components from the seized products include the pen label and accompanying information about the healthcare professional and patient, as well as the carton. The FDA urged drug distributors, retail pharmacies, healthcare practitioners and patients to check the drug they have received and to not distribute, use or sell the units labeled with lot number NAR0074 and serial number 430834149057.   People who have Ozempic injections with the above lot number and serial number can report it directly to the FDA Office of Criminal Investigations. https://www.nbcnews.com/health/health-news/fda-warns-ozempic-counterfeit-diabetes-weight-loss-rcna130871 XX New research indicates that information in the gut may predict how well a person responds to Tzield. That's the medication approved earlier this year to delay the onset of type 1. These findings, reported in the journal Science Translational Medicine, cast a new spotlight on the immune system's relationship with the microbiome, revealing how gut microbes can shape the progression of type 1 diabetes. With this new knowledge in hand, clinicians may better pinpoint patients who are most likely to respond to teplizumab. https://medicalxpress.com/news/2023-12-gut-microbes-patients-response-drug.html   XX Experts are advocating for universal screening for type 1 diabetes. With the availability of Tzield and other medications on the horizon, there's a stronger push for screening earlier in life. At least 85% of people who are newly diagnosed do not have a family history of diabetes. Testing for autoantibodies can be completed at home through the TrialNet clinical trial program, or at a doctor's office or lab. For instance, JDRF's T1Detect program provides at-home testing for $55, with lower-cost options for people in financial need. The 2024 American Diabetes Association (ADA) Standards of Care recommend more intensive monitoring for the progression of preclinical type 1 diabetes. The Standards of Care also recommend using Tzield to delay the onset of diabetes in people at least 8 years old with stage 2 type 1 diabetes. https://diatribe.org/type-1-diabetes-it%E2%80%99s-time-population-wide-screening XX Commercial XX   https://www.healthline.com/health-news/the-years-biggest-medical-advancements-in-diabetes-treatment XX JDRF, the leading global funder of type 1 diabetes (T1D) research, is recognizing the NFL stars who showcased their creativity and remarkable support as part of the highly anticipated annual "My Cause My Cleats" (MCMC) campaign.   The My Cause My Cleats initiative allows NFL players to wear custom-painted cleats during selected games to raise awareness and funds for the charitable causes closest to their hearts. The campaign unofficially begins on Giving Tuesday with unboxing day events showcasing the players' cleats and the stories behind them. It continues through weeks 13 and 14 of the season, culminating with the players donning their cleats on game day. After the games, some players donate their cleats to their chosen charities or the NFL auction, with all proceeds going toward their selected causes.   Type 1 Diabetes is a life-threatening autoimmune condition that affects people of all ages, regardless of family history or lifestyle choices. To live, people with T1D must carefully balance injecting or infusing insulin with their carbohydrate intake throughout the day and night. T1D impacts approximately 1.6 million people in the U.S. It is unpreventable, and there is currently no cure.   
This year, JDRF is thankful for the support of several players who have T1D or are advocating for their loved ones with T1D, including Mark Andrews of the Baltimore Ravens, Orlando Brown, Jr. of the Cincinnati Bengals, Blake Ferguson of the Miami Dolphins, Collin Johnson of the Chicago Bears, Chad Muma of the Jacksonville Jaguars, Nate Peterman of the Chicago Bears, and Kevin Radar of the Tennessee Titans.   "The NFL players who support JDRF through the My Cause My Cleats exemplify the passion and determination at the heart of the type 1 diabetes community," said Kenya Felton, JDRF Director of PR and Celebrity Engagement. "They serve as inspirations for many adults and children affected by T1D, demonstrating that with an understanding of T1D, effective management, and a good support system, you can overcome the challenges of the disease. Their support helps to increase awareness and is significant in helping JDRF advance life-changing breakthroughs in T1D research and advocacy initiatives."   Since its inception in 2016, the MCMC campaign has provided a platform for many NFL players and affiliates to support JDRF's mission, including Beau Benzschawel, David Carr, Will Clarke, Keion Crossen, DeAndre Carter, Reid Ferguson, Jaedan Graham, Jarvis Jenkins, Collin Johnson, Henry Mondeaux, Jaelan Phillips, Adam Schefter, Brandon Wilds, and Jonah Williams. https://www.prnewswire.com/news-releases/nfl-stars-support-jdrf-and-champion-type-1-diabetes-awareness-through-the-my-cause-my-cleats-campaign-302022060.html   XX Join us again soon!    

BFM :: Morning Brief
Social Media Platform Regulation Timely

BFM :: Morning Brief

Play Episode Listen Later Dec 4, 2023 10:38


The Malaysian Communications and Multimedia Commission (MCMC) is expected to come up with a framework soon to facilitate the registration and regulation of social media platform providers. We discuss how this framework could affect the industry with Alexander Wong, managing editor and co-founder of SoyaCincau. Image credit: MCMC

朝日新聞 ニュース深掘り
Year-end party event lineup announced! The Poki-1 presenters are also decided #51-52

朝日新聞 ニュース深掘り

Play Episode Listen Later Nov 30, 2023 50:05


[December 16 is a special day] ★Both events are free and require no registration★ From 11:00 we will have a booth at Podcast Weekend in Shimokitazawa, Tokyo https://podcastweekend.jp/ From 19:00, an online year-end party! https://www.youtube.com/watch?v=pAHguuT21m0  [Episode contents] Today's agenda: ◆Ioka and Hashimoto, the same MCs as for the Poki-1 Grand Prix, host (0:00) ◆Who will be the listeners' advice-column counselor (08:50) ◆Ioka and Hashimoto's own crucible of worries (12:50) ◆Who will present the Poki-1 Grand Prix (21:40) ◆Plenty of Podcast Weekend merchandise (28:54) Sign up for the newsletter here → https://bit.ly/asapoki_newsletter ◆Listener letters (35:15) ◆Call and response with the Tokyo members (44:25) *Recorded on November 24, 2023. When ads are inserted, the timestamps in this description may shift. Past production-meeting episodes are here ( https://buff.ly/3XER8Co ).   [Related article] Plastics that return to nature: demand expected to surge, with prototypes for car parts and WiFi routers https://www.asahi.com/articles/ASRCK4HRWRB0DIFI001.html?iref=omny  [Cast and staff] 岸上渉, 橋本佳奈 (MC), 井岡諒 (MC, audio editing) [Asa-Poki info] Send feedback via the form → https://bit.ly/asapoki_otayori Program calendar → https://bit.ly/asapki_calendar Cast search tool → https://bit.ly/asapoki_cast Latest updates on X (formerly Twitter) → https://bit.ly/asapoki_twitter Community → https://bit.ly/asapoki_community Subtitled versions on YouTube → https://bit.ly/asapoki_youtube_ Bonus stories in the newsletter → https://bit.ly/asapoki_newsletter All episodes on the official site → https://bit.ly/asapoki_lp Email us → podcast@asahi.com See omnystudio.com/listener for privacy information.

The Nonlinear Library
LW - Learning coefficient estimation: the details by Zach Furman

The Nonlinear Library

Play Episode Listen Later Nov 16, 2023 3:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning coefficient estimation: the details, published by Zach Furman on November 16, 2023 on LessWrong. What this is for The learning coefficient (LC), or RLCT, is a quantity from singular learning theory that can help to quantify the "complexity" of deep learning models, among other things. This guide is primarily intended to help people interested in improving learning coefficient estimation get up to speed with how it works, behind the scenes. If you're just trying to use the LC for your own project, you can just use the library without knowing all the details, though this guide might still be helpful. It's highly recommended you read this post before reading this one, if you haven't already. We're primarily covering the WBIC paper (Watanabe 2010), the foundation for current LC estimation techniques, but the presentation here is original, aiming for better intuition, and differs substantially from the paper. We'll also briefly cover Lau et al. 2023. Despite all the lengthy talk, what you end up doing in practice is really simple, and the code is designed to highlight that. After some relatively quick setup, the actual LC calculation can be comfortably done in one or two lines of code. What this isn't for A good overview of SLT, or motivation behind studying the LC or loss landscape volume in the first place. We're narrowly focused on LC estimation here. Sampling details. These are very important! But they're not really unique to singular learning theory, and there are plenty of good resources and tutorials on MCMC elsewhere. Derivations of formulas, beyond the high-level reasoning. TLDR What is the learning coefficient? (Review from last time) The learning coefficient (LC), also called the RLCT, measures basin broadness. This isn't new, but typically "basin broadness" is operationalized as "basin flatness" - that is, via the determinant of the Hessian. When the model is singular (eigenvalues of the Hessian are zero), this is a bad idea. The LC operationalizes "basin broadness" as the (low-loss asymptotic) volume scaling exponent. This ends up being the right thing to measure, as justified by singular learning theory. How do we measure it? It turns out that measuring high-dimensional volume directly is hard. We don't do this. Instead we use MCMC to do what's known in statistics as "method of moments" estimation. We contrive a distribution with the LC as a population parameter, sample from that distribution and calculate one of its moments, and solve for the LC. We simplify some details in this section, but this is the conceptual heart of LC estimation. How do we measure it (for real)? The above is a bit simplified. The LC does measure loss volume scaling, but the "loss" it uses is the average or "infinite-data" limit of the empirical loss function. In practice, you don't know this infinite-data loss function. Luckily, you already have a good estimate of it - your empirical loss function. Unluckily, this estimate isn't perfect - it can have some noise. And it turns out this noise is actually worst in the place you least want it. But it all works out in the end! You actually just need to make one small modification to the "idealized" algorithm, and things work fine. This gets you an algorithm that really works in practice! Finally, the state-of-the-art method (Lau et al. 
2023) makes a couple simple modifications, for scalability among other reasons: it measures the learning coefficient only *locally*, and uses mini-batch loss instead of full-batch. In chart form: as we move from idealized (top) to realistic (bottom), we get new problems, solutions, and directions for improvement. The guide itself covers the first two rows in the most detail, which are likely the most conceptually difficult to think about, and skips directly from the second row to the fourth row at ...
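The recipe above can be made concrete in a few dozen lines. The sketch below is not the post's own code or the Lau et al. estimator; it is a minimal illustration, assuming a toy singular model y ~ Normal(w1*w2*x, 1) (whose learning coefficient is 1/2), a hand-rolled random-walk Metropolis sampler for the tempered posterior at the WBIC inverse temperature beta = 1/log n, and the true parameter standing in for the loss minimizer.

```python
# Minimal sketch of WBIC-style learning coefficient estimation (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

# Toy singular model: y ~ Normal(w1 * w2 * x, 1), true parameter (0, 0).
# The zero set {w1 * w2 = 0} is two crossed lines, so the model is singular;
# its learning coefficient is 1/2, versus d/2 = 1 for a regular 2-parameter model.
n = 2000
x = rng.normal(size=n)
y = rng.normal(size=n)  # the true signal is zero

def nll(w):
    """Empirical loss n * L_n(w): negative log-likelihood up to an additive constant."""
    resid = y - w[0] * w[1] * x
    return 0.5 * np.sum(resid ** 2)

def tempered_posterior_losses(beta, n_steps=50_000, step=0.1):
    """Random-walk Metropolis targeting exp(-beta * nll(w)) times a flat prior
    on the box |w_i| < 2 (both the sampler and the prior are assumptions of this sketch)."""
    w = np.zeros(2)
    current = nll(w)
    losses = []
    for _ in range(n_steps):
        prop = w + step * rng.normal(size=2)
        if np.all(np.abs(prop) < 2):
            prop_loss = nll(prop)
            # Accept with probability min(1, exp(beta * (current - proposed))).
            if np.log(rng.uniform()) < beta * (current - prop_loss):
                w, current = prop, prop_loss
        losses.append(current)
    return np.array(losses[n_steps // 2:])  # drop the first half as burn-in

beta = 1.0 / np.log(n)                      # the WBIC inverse temperature
losses = tempered_posterior_losses(beta)
loss_star = nll(np.zeros(2))                # loss at the (known) true parameter

# Method of moments: E_beta[n L_n(w)] is roughly n L_n(w*) + lambda * log n.
lambda_hat = (losses.mean() - loss_star) / np.log(n)
print(f"estimated learning coefficient: {lambda_hat:.2f}  (theoretical value: 0.5)")
```

The method-of-moments step is the last two lines: at inverse temperature 1/log n, the posterior-average loss exceeds the minimum loss by roughly lambda * log n, so dividing the gap by log n yields the estimate.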

Learning Bayesian Statistics
#90, Demystifying MCMC & Variational Inference, with Charles Margossian

Learning Bayesian Statistics

Play Episode Listen Later Sep 6, 2023 97:36 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!My Intuitive Bayes Online Courses1:1 Mentorship with meWhat's the difference between MCMC and Variational Inference (VI)? Why is MCMC called an approximate method? When should we use VI instead of MCMC?These are some of the captivating (and practical) questions we'll tackle in this episode. I had the chance to interview Charles Margossian, a research fellow in computational mathematics at the Flatiron Institute, and a core developer of the Stan software.Charles was born and raised in Paris, and then moved to the US to pursue a bachelor's degree in physics at Yale university. After graduating, he worked for two years in biotech, and went on to do a PhD in statistics at Columbia University with someone named… Andrew Gelman — you may have heard of him.Charles is also specialized in pharmacometrics and epidemiology, so we also talked about some practical applications of Bayesian methods and algorithms in these fascinating fields.Oh, and Charles' life doesn't only revolve around computers: he practices ballroom dancing and pickup soccer, and used to do improvised musical comedy!Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !Thank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar and Matt Rosinski.Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)Links from the show:Charles' website: https://charlesm93.github.io/Charles on Twitter: https://twitter.com/charlesm993Charles on GitHub:
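As a rough illustration of the trade-off discussed in the episode, the sketch below fits the same toy model twice in PyMC (assuming a recent PyMC version): once with MCMC via pm.sample (NUTS by default), which is asymptotically exact but slower, and once with ADVI via pm.fit, which is fast but only as good as its factorized Gaussian approximation. The model and data are made up for the example.

```python
# Minimal sketch: MCMC vs. variational inference on the same toy model in PyMC.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=200)      # made-up observations

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu, sigma, observed=data)

    # MCMC: asymptotically exact posterior samples, but more expensive per draw.
    idata_mcmc = pm.sample(1000, tune=1000, chains=2, random_seed=42)

    # VI: fit a factorized Gaussian approximation (ADVI), then draw from it.
    approx = pm.fit(n=30_000, method="advi")
    idata_vi = approx.sample(1000)

print("posterior mean of mu, MCMC:", float(idata_mcmc.posterior["mu"].mean()))
print("posterior mean of mu, ADVI:", float(idata_vi.posterior["mu"].mean()))
```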

Any Given Saturday 全米カレッジフットボールポッドキャスト
#129 [Guest episode] MC79 joins us! We talk to a master of attending games in person [E2S34]

Any Given Saturday 全米カレッジフットボールポッドキャスト

Play Episode Listen Later Jul 31, 2023 71:39


July is Any Given Saturday's podcast booster month! The final episode of the month is a guest episode!! This time we welcome MC79 (@_MC79), a master of attending NFL and college football games in person! ✅ MC's football history ✅ MC's five favorite college stadiums ✅ Tips for attending games in person (how to buy tickets, etc.) Please give it a listen!!  ========================= ◆Episode #000 [Trailer] It's a trailer yet runs a full 30 minutes (sorry!), but if you're new, please listen to it before the main episodes! ◆Website "Any Given Saturday": a site covering college football, an amateur sport with enormous popularity in America - from the basics needed to enjoy college football to in-season game analysis, scores and rankings, plus information on the NFL Draft, the unavoidable path for college players turning pro. ◆Instagram ◆Facebook ◆Stand.fm: mostly live, freestyle talk about college football; the archived audio is available to listen to. ◆To ask a question anonymously, use this Google Form ◆Music credits: MaxKoMusic | https://maxkomusic.com/ Chosic |  https://www.chosic.com/free-music/all/ MusMus | https://musmus.main/jp/ Pixabay | https://pixabay.com/music/ #CollegeFootball #AmericanFootball #Football

Astro Awani
AWANI Pagi: Featured and interesting news on astroawani.com [21 July 2023]

Astro Awani

Play Episode Listen Later Jul 21, 2023 26:32


Get today's must-know news with Geegee Ahmad & Afiezy Azaman. Among the highlights on AWANI Pagi: - Tesla launches its first EV in Malaysia, the Model Y, priced from RM199,000 - The Health Ministry (KKM) is concerned about drinking water being sold from aquariums - JAINJ and MCMC cooperate to curb the spread of the deviant SiHulk teachings #AWANIpagi #AWANInews

TOXIC SICKNESS RADIO SHOWS & LABEL RELEASES
R3VERZE PSYCHOLOGY INVITES LONE RAVER ON TOXIC SICKNESS / JULY / 2023

TOXIC SICKNESS RADIO SHOWS & LABEL RELEASES

Play Episode Listen Later Jul 16, 2023 58:51


1. AG Systems - Active Tekno 2. Helix & DJ Fury - Insane Asylum 3. AG Systems - Unda Ground 4. Square Wave - Like A Criminal 5. Helix & Tekno Dred - Mindless Pleasure 6. Helix - Now Control 7. Tekno Dred & Ad Man - A Voice Spoke To Me (Helix Remix) 8. MC MC & Rush Hour - Music Maker (Fury's Vengeance Mix) 9. Square Wave - God's Broth 10. DJ Fury - Lemonade Raygun (Back Room Mix) 11. Sharkey & Marc Smith - Utopia 12. Marc Smith - Nothing More 13. Marc Smith - Procrastinator

Astro Awani
AWANI 7:45 [24/06/2023] - The importance of political stability | PN seat allocation

Astro Awani

Play Episode Listen Later Jun 24, 2023 32:11


A compact and concise news report on #AWANI745 with Luqman Hariz. Tonight's #AWANI745 highlights: The perception that the Unity Government will fall in August is what scares investors, the PM reveals; PAS is not the big brother but deserves many seats, says Muhyiddin; Anthony Loke rebukes Tony Pua for attacking BN; Fake news everywhere - MCMC monitors and takes firmer action #AWANInews

Astro Awani
AWANI Pagi: Featured and interesting news on astroawani.com [24 June 2023]

Astro Awani

Play Episode Listen Later Jun 24, 2023 27:22


Get today's must-know news with Afiezy Azaman & Fahmi Izuddin:

Astro Awani
AWANI 7:45 [23/06/2023] - RM5.2 billion electricity subsidy | Facebook stubborn and sluggish | All lives lost

Astro Awani

Play Episode Listen Later Jun 23, 2023 35:00


Broadcasting from Bandar Baru Bangi, a compact and concise news report on #AWANI745 with Dzulfitri Yusof. Tonight's #AWANI745 highlights: - Domestic users with electricity bills below RM708 a month will not be affected - Positive for the economy as the national inflation rate continues to ease - MCMC reprimands the operator of the Facebook platform over its sluggish, unsatisfactory response - The search mission ends: the Titan submersible tragedy, imploded on the seabed

BFM :: Morning Brief
How Far Should The Law Extend Over Social Media Platforms?

BFM :: Morning Brief

Play Episode Listen Later Jun 8, 2023 11:30


Communications and Digital Minister Fahmi Fadzil recently called out social media platform Telegram for not taking action on illicit activity on the platform. Does the MCMC have the authority and legal means to regulate digital platforms hosted overseas? We discuss the legal framework that applies and gaps in regulation with Intellectual Property and Information Technology Lawyer, Foong Cheng Leong. Image by: Shutterstock

The Nonlinear Library
AF - The Lightcone Theorem: A Better Foundation For Natural Abstraction? by johnswentworth

The Nonlinear Library

Play Episode Listen Later May 15, 2023 10:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Lightcone Theorem: A Better Foundation For Natural Abstraction?, published by johnswentworth on May 15, 2023 on The AI Alignment Forum. Credit to David Lorell for serving as an active sounding board as the ideas in this post were developed. For about a year and a half now, my main foundation for natural abstraction math has been The Telephone Theorem: long-range interactions in a probabilistic graphical model (in the long-range limit) are mediated by quantities which are conserved (in the long-range limit). From there, the next big conceptual step is to argue that the quantities conserved in the long-range limit are also conserved by resampling, and therefore the conserved quantities of an MCMC sampling process on the model mediate all long-range interactions in the model. The most immediate shortcoming of the Telephone Theorem and the resampling argument is that they talk about behavior in infinite limits. To use them, either we need to have an infinitely large graphical model, or we need to take an approximation. For practical purposes, approximation is clearly the way to go, but just directly adding epsilons and deltas to the arguments gives relatively weak results. This post presents a different path. The core result is the Lightcone Theorem: Start with a probabilistic graphical model on the variables X_1, …, X_n. The graph defines adjacency, distance, etc. between variables. For directed graphical models (i.e. Bayes nets), spouses (as well as parents and children) count as adjacent. We can model those variables as the output of a Gibbs sampler (that's the MCMC process) on the graphical model. Call the initial condition of the sampler X^0 = (X^0_1, …, X^0_n). The distribution of X^0 must be the same as the distribution of X (i.e. the sampler is initialized “in equilibrium”). We can model the sampler as having run for any number of steps to generate the variables; call the number of steps T. At each step, the process resamples some set of nonadjacent variables conditional on their neighbors. The Lightcone Theorem says: conditional on X^0, any sets of variables in X which are a distance of at least 2T apart in the graphical model are independent. Yes, exactly independent, no approximation. In short: the initial condition of the resampling process provides a latent, conditional on which we have exact independence at a distance. This was... rather surprising to me. If you'd floated the Lightcone Theorem as a conjecture a year ago, I'd have said it would probably work as an approximation for large T, but no way it would work exactly for finite T. Yet here we are. The Proof, In Pictures The proof is best presented visually. High-level outline: Perform a do() operation on the Gibbs sampler, so that it never resamples the variables a distance of T from X_R. In the do()-operated process, X^0 mediates between X^T_R and X^T_{D(R,≥2T)}, where D(R,≥2T) indicates indices of variables a distance of at least 2T from X_R. Since X^0, X^T_R and X^T_{D(R,≥2T)} are all outside the lightcone of the do()-operation, they have the same joint distribution under the non-do()-operated sampler as under the do()-operated sampler. Therefore X^0 mediates between X^T_R and X^T_{D(R,≥2T)} under the original sampler. We start with the graphical model: Within that graphical model, we'll pick some tuple of variables X_R (“R” for “region”). 
I'll use the notation X_{D(R,t)} for the variables a distance t away from R, X_{D(R,>t)} for variables a distance greater than t away from R, X_{D(R,
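To make the setup concrete, here is a minimal sketch (my own illustration, not the post's code) of the kind of Gibbs sampler the theorem is about: an Ising-style chain where each step resamples one parity class of sites, which are pairwise non-adjacent, conditional on their neighbors, and where X^0 is drawn exactly from the stationary distribution, as the theorem requires.

```python
# Minimal sketch of the resampling process in the Lightcone Theorem (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n_sites, coupling = 100, 0.7     # chain length and Ising coupling (illustrative values)

def sample_equilibrium():
    """Exact draw from the stationary distribution: an Ising chain with no
    external field factorizes as a Markov chain, so we can forward-sample it."""
    x = np.empty(n_sites, dtype=int)
    x[0] = 1 if rng.uniform() < 0.5 else -1
    p_same = 1.0 / (1.0 + np.exp(-2.0 * coupling))   # P(x_{i+1} = x_i)
    for i in range(1, n_sites):
        x[i] = x[i - 1] if rng.uniform() < p_same else -x[i - 1]
    return x

def prob_plus(x, i):
    """P(x_i = +1 | neighbors) for the Ising chain."""
    field = 0.0
    if i > 0:
        field += coupling * x[i - 1]
    if i < n_sites - 1:
        field += coupling * x[i + 1]
    return 1.0 / (1.0 + np.exp(-2.0 * field))

def gibbs_step(x, parity):
    """One step of the sampler: resample every site of one parity class.
    Sites of equal parity on a path graph are pairwise non-adjacent."""
    x = x.copy()
    for i in range(parity, n_sites, 2):
        x[i] = 1 if rng.uniform() < prob_plus(x, i) else -1
    return x

x0 = sample_equilibrium()        # X^0 has exactly the distribution of X
T = 4
xT = x0.copy()
for t in range(T):               # run the sampler for T steps
    xT = gibbs_step(xT, t % 2)

# Per the theorem: conditional on X^0, sets of variables in X^T that are at
# least 2T apart on the chain are exactly independent.
```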

Learning Bayesian Statistics
#82 Sequential Monte Carlo & Bayesian Computation Algorithms, with Nicolas Chopin

Learning Bayesian Statistics

Play Episode Listen Later May 5, 2023 66:35


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!My Intuitive Bayes Online Courses1:1 Mentorship with me------------------------------------------------------------------------------Max Kochurov's State of Bayes Lecture Series: https://www.youtube.com/playlist?list=PL1iMFW7frOOsh5KOcfvKWM12bjh8zs9BQSign up here for upcoming lessons: https://www.meetup.com/pymc-labs-online-meetup/events/293101751/------------------------------------------------------------------------------We talk a lot about different MCMC methods on this podcast, because they are the workhorses of the Bayesian models. But other methods exist to infer the posterior distributions of your models — like Sequential Monte Carlo (SMC) for instance. You've never heard of SMC? Well perfect, because Nicolas Chopin is gonna tell you all about it in this episode!A lecturer at the French university of ENSAE since 2006, Nicolas is one of the world experts on SMC. Before that, he graduated from Ecole Polytechnique and… ENSAE, where he did his PhD from 1999 to 2003.Outside of work, Nicolas enjoys spending time with his family, practicing aikido, and reading a lot of books.Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !Thank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady and Kurt TeKolste.Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)Links from the show:Old episodes...
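For readers who have never seen SMC, the sketch below shows the basic reweight/resample/move loop of a likelihood-tempering SMC sampler on a made-up one-parameter model. It is an illustrative assumption of this summary, not Nicolas Chopin's code; his `particles` library is the place to look for a real implementation.

```python
# Minimal sketch of a likelihood-tempering SMC sampler (illustrative model and tuning).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=50)         # synthetic observations

def log_prior(mu):
    return -0.5 * (mu / 10.0) ** 2                      # N(0, 10^2), up to a constant

def log_lik(mu):
    # Gaussian likelihood with sigma = 1, evaluated for a whole particle vector.
    return -0.5 * np.sum((data[None, :] - mu[:, None]) ** 2, axis=1)

n_particles = 2000
betas = np.linspace(0.0, 1.0, 21)                       # fixed tempering schedule
mu = rng.normal(0.0, 10.0, size=n_particles)            # particles drawn from the prior
loglik = log_lik(mu)

for b_prev, b_next in zip(betas[:-1], betas[1:]):
    # Reweight by the incremental likelihood factor likelihood^(b_next - b_prev).
    logw = (b_next - b_prev) * loglik
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Resample (multinomial, for simplicity).
    idx = rng.choice(n_particles, size=n_particles, p=w)
    mu, loglik = mu[idx], loglik[idx]

    # Rejuvenate with a few random-walk Metropolis moves targeting
    # prior(mu) * likelihood(mu)^b_next.
    for _ in range(3):
        prop = mu + 0.3 * rng.normal(size=n_particles)
        prop_loglik = log_lik(prop)
        log_accept = (log_prior(prop) + b_next * prop_loglik) - (log_prior(mu) + b_next * loglik)
        accept = np.log(rng.uniform(size=n_particles)) < log_accept
        mu = np.where(accept, prop, mu)
        loglik = np.where(accept, prop_loglik, loglik)

print("posterior mean of mu:", mu.mean())
```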

Learning Bayesian Statistics
#78 Exploring MCMC Sampler Algorithms, with Matt D. Hoffman

Learning Bayesian Statistics

Play Episode Listen Later Mar 1, 2023 62:41 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!Matt Hoffman has already worked on many topics in his life – music information retrieval, speech enhancement, user behavior modeling, social network analysis, astronomy, you name it.Obviously, picking questions for him was hard, so we ended up talking more or less freely — which is one of my favorite types of episodes, to be honest.You'll hear about the circumstances Matt would advise picking up Bayesian stats, generalized HMC, blocked samplers, why do the samplers he works on have food-based names, etc.In case you don't know him, Matt is a research scientist at Google. Before that, he did a postdoc in the Columbia Stats department, working with Andrew Gelman, and a Ph.D at Princeton, working with David Blei and Perry Cook.Matt is probably best known for his work in approximate Bayesian inference algorithms, such as stochastic variational inference and the no-U-turn sampler, but he's also worked on a wide range of applications, and contributed to software such as Stan and TensorFlow Probability.Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !Thank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode and Gabriel Stechschulte.Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)Links from the show:Matt's website: http://matthewdhoffman.com/Matt on Google Scholar: https://scholar.google.com/citations?hl=en&user=IeHKeGYAAAAJ&view_op=list_worksThe No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo: https://www.jmlr.org/papers/volume15/hoffman14a/hoffman14a.pdfTuning-Free Generalized Hamiltonian Monte Carlo:
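For context on what NUTS automates, here is a minimal sketch of plain Hamiltonian Monte Carlo with a leapfrog integrator and a fixed path length; the target, step size, and trajectory length are illustrative assumptions, and NUTS's contribution is precisely to choose that path length adaptively.

```python
# Minimal sketch of plain HMC with a fixed trajectory length (illustrative values).
import numpy as np

rng = np.random.default_rng(0)

def neg_log_prob(q):
    """Negative log density of a standard 2-D Gaussian target."""
    return 0.5 * np.dot(q, q)

def grad_neg_log_prob(q):
    return q

def leapfrog(q, p, step, n_steps):
    """Leapfrog integration of the Hamiltonian dynamics."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * step * grad_neg_log_prob(q)
    for _ in range(n_steps - 1):
        q += step * p
        p -= step * grad_neg_log_prob(q)
    q += step * p
    p -= 0.5 * step * grad_neg_log_prob(q)
    return q, p

def hmc(n_samples=5000, step=0.2, n_leapfrog=20):
    q = np.zeros(2)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=2)                    # resample the momentum
        q_new, p_new = leapfrog(q, p, step, n_leapfrog)
        # Metropolis accept/reject using the change in the Hamiltonian.
        h_old = neg_log_prob(q) + 0.5 * np.dot(p, p)
        h_new = neg_log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
        if np.log(rng.uniform()) < h_old - h_new:
            q = q_new
        samples.append(q)
    return np.array(samples)

samples = hmc()
print("sample mean:", samples.mean(axis=0), "sample variances:", samples.var(axis=0))
```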

Entre Amigos, El Podcast - Official Denver Broncos Podcast
Entre Amigos, El Podcast - Breakdown of the Week 14 Chiefs vs Broncos game

Entre Amigos, El Podcast - Official Denver Broncos Podcast

Play Episode Listen Later Dec 13, 2022 46:31 Transcription Available


In this episode of Entre Amigos, Rebeca, Victor, and Carlos help us break down the Week 14 game between the Chiefs and the Broncos. All proceeds from Damani's MCMC will go to the American football league in Monterrey. Place your bid here https://rb.gy/nc0902 We also remind you to vote for our Fan of The Year. Vote for Juan here https://bit.ly/3g3trCi See omnystudio.com/listener for privacy information.

3GIQ
MCMC NCR - SSgt Andrew Dow

3GIQ

Play Episode Listen Later Nov 25, 2022 34:16


Today we sat down with SSgt Andrew Dow to talk about his experience at MCMC NCR and how his first MCMC catapulted him into his passion for practical shooting.

578广播
Books I Can't Get Through 013 [草样年华 2 - Everyone's Zhou Zhou]

578广播

Play Episode Listen Later May 23, 2022 85:11


Your Zhou Zhou - where is she now? Timeline: 14:25 The emotional shock MC Huang's college-era sister-in-law gave him; 55:43 The station chief uploaded his own photo to "北京月老赐婚" (a Beijing matchmaking account). Music: 1. 狼狈 - 康姆士COM‘Z. Hosts this episode: Xiao Bao and Xiao Huang. If you like us, search for radio578 and join our fan club [578广播花好月圆俱乐部] for more fun.

3GIQ
Frank and Jared's Tactical Games and MCMC

3GIQ

Play Episode Listen Later Apr 12, 2022 66:59


- Introductions
- How long have you been a competitive shooter?
- What disciplines do you shoot?
- Can you talk about how you got into the Tactical Games?
- How did you and Frank prepare for the WV Games in terms of shooting and fitness?
- Recap of the events: Axel bar two-man carry; farmer's carry while partner is shooting, and sandbag over yoke; 5-mile run; yoke, sandbag, farmer carry; sled, sandbag, husafell stone relay; El Cartel
- How do you feel you did, based on your level of preparation and experience?
- How do you recommend Marines who are interested in the Tactical Games prepare (timeline, shooting, body maintenance, game-day maintenance)?
- You shot your first MCMC at Stone Bay - any thoughts on those two weeks?
- What are some lessons you learned from TTG WV and MCMC East that you are going to incorporate into your personal fitness and shooting training?
- You are PCS'ing to 3rdMarDiv this summer - what are some aspects from TTG and the MCMC that you plan on bringing to your unit?
- Anything else you'd like to leave listeners with?

3GIQ
Episode 15 MCMC NCR Shooter Interviews

3GIQ

Play Episode Listen Later Nov 11, 2021 39:04


In this episode, I sat down with various Marines who competed in the Marine Corps Marksmanship Competition National Capital Region and got their take on the experience from this year's match. Instagram: @usmcshootingteam @mfgundlach