Video version on YouTube. Almost always on Tuesdays, usually at 18:00: Happy Shooting Live. Join in daily on Slack – audio and video comments are also welcome. From the preshow: out of focus, ¡Hola!, drawer. Klostergeister revisited; Klostergeister 2025: official video; praise for the participants; redesigned workshop; talks until late in the evening; soundbites from some participants; introduction of the project groups and parallel courses (wood carving … Continue reading "#896 – Pinkes Einhorn auf der Kuhweide". The post #896 – Pinkes Einhorn auf der Kuhweide originally appeared here: Happy Shooting - Der Foto-Podcast.
Review and discussion of the book "Mindmasters": the science of predicting and changing human behavior with big data. Written by Sandra Matz, professor at Columbia Business School, trained in psychology and computational social science. Year of publication: 2025. #mindmasters #mind_masters #sandra_matz #podcast #book_review #audiobook #analysis #interdisciplinary_studies #iman_fani #persian_school_of_life To purchase videos of the School of Life webinars or to register for upcoming webinars: courses outside Iran: https://imanfani.thinkific.com; courses inside Iran: https://b2n.ir/a19688 https://imanfani.com https://instagram.com/dr_iman_fani https://telegram.me/dr_iman_fani Hosted on Acast. See acast.com/privacy for more information.
Almost always on Tuesdays, usually at 18:00: Happy Shooting Live. Join in daily on Slack – audio and video comments are also welcome. From the preshow: the moon is a repeater for the sun, dropped mobile phones, is there still time to grab a beer? New listeners. Is this already the preshow? How long does the postshow run? Please say what it's about, … Continue reading "#895 – Ausschussmaschine". The post #895 – Ausschussmaschine originally appeared here: Happy Shooting - Der Foto-Podcast.
Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Topics: Conscious experience and perception - Brain structure and sensory extension - Brain manipulation and individuality - Consciousness and artificial systems - Computational theory and the brain
Kayleigh Houde is an Associate Principal and Global Computational Projects Lead at Buro Happold, where she is responsible for the harmonized development of new technologies within the open-source coding platform BHoM. Her leadership extends to chairing the MEP 2040 Commitment and participating in the ECHO Project and the ASHRAE Center of Excellence for Building Decarbonization. She is also a lecturer at the University of Pennsylvania, where she teaches Parametric Life Cycle Assessment. We spoke with Kayleigh soon after MEP 2040 and the Carbon Leadership Forum had released The Beginner's Guide to MEP Embodied Carbon, a critical resource that was eagerly awaited in the community. Naturally, we spoke with her about that effort and about the broader question of why embodied carbon is important for MEP practitioners. “We have coalesced a lot of data to bridge gaps for the MEP disciplines and provide clarity about the MEP impact,” she says. Kayleigh's technical leadership is paralleled by her deep commitment to collaboration across disciplines, evidenced in many ways, including her work on the ECHO effort to harmonize data across disciplines and certification programs. “Computers aren't the thing,” Kayleigh says of the potential of computation in climate work and the built environment. “They are the thing that gets you to the thing. Really, what computation helps you to solve are some of the issues that we have in human collaboration. Sometimes we think we're connecting but we are not really speaking the same language. Getting people to talk and collaborate is a big part of the solution in the computational work.”
Video version. Almost always on Tuesdays, usually at 18:00: Happy Shooting Live. Join in daily on Slack – audio and video comments are also welcome. From the preshow: shortly before Klostergeister, quite a pile of stuff, laptop battery, you have to be able to afford performance. #hsfeedback: woo-woo ghost photography in Methodisch Inkorrekt episode 344, timestamp 01:35h. Klostergeister 2025: outlook. News: Canon R6 Mark … Continue reading "#894 – Geile Wegwerfkamera". The post #894 – Geile Wegwerfkamera originally appeared here: Happy Shooting - Der Foto-Podcast.
Chris, Ade and Jeremiah explore the ways new technology can help you make fantastic photos.
In episode 35, Thibault Schrepel talks to Kamil Nejezchleb, Vice-chairman of the Czech Competition Authority. Thibault and Kamil explore how the Czech Competition Authority is integrating computational antitrust into its daily operations. They discuss major cases in which such tools have played a role, the internal structure and expertise needed to support the shift to computational antitrust, the legal constraints imposed by Czech administrative and judicial review, and how the agency envisions the future of enforcement, from algorithmic remedies to cross-border data collaboration. Follow the Stanford Computational Antitrust project by subscribing to our newsletter at https://law.stanford.edu/computationalantitrust
- Health Freedom Movement and Nomination of Casey Means (0:00)
- Hijacking of the Health Freedom Movement (2:29)
- Trust in Government and Establishment Control (6:01)
- Investigative Journalism and Epstein Files (6:40)
- Delayed Promises and False Hope (11:43)
- Introduction to Scott Gordon and Prompting AI Engines (17:24)
- AI Reasoning and Natural Intelligence (21:21)
- Practical Tips for Using AI Engines (47:36)
- Ethical Considerations and Future Plans (1:13:12)
- Enoch AI and Its Limitations (1:14:50)
- Enoch AI's Capabilities and Future Improvements (1:28:28)
- Introduction to "The Cancer Industry" Book (1:31:02)
- Critique of Cancer Treatments and Industry Practices (1:37:45)
- Mother's Day Special and Health Ranger Store Promotions (1:38:06)
- Announcement of "Breaking the Chains" Docu Series (1:42:03)
- Detailed Overview of "Breaking the Chains" Content (1:53:04)
- Introduction to Unincorporated Nonprofit Associations (UNAs) (1:55:46)
- Final Thoughts and Call to Action (2:03:18)
For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Video version. Almost always on Tuesdays, usually at 18:00: Happy Shooting Live. Join in daily on Slack – audio and video comments are also welcome. From the preshow: the Count and other Muppets, Schlemihl. Please say what it's about when you write a "hi". #hshi from Jürgen: a list of the most expensive telephoto lenses. #hsnachtrag on the picture with the … Continue reading "#893 – Existentielle Krise, aber fluffig". The post #893 – Existentielle Krise, aber fluffig originally appeared here: Happy Shooting - Der Foto-Podcast.
In this installment, Chris speaks with Tim Ribaric from Brock University in Ontario. Tim talks about digital librarianship, whether he sleeps, and what a “Computational Notebook” is, along with working with Google Colab. We also talked about “spark”, bringing the abstract of ideas to life, the “Software Carpentry Organization“, Python for libraries, and a wonderful organization […]
Today's episode is different from all the previous ones, as for the first time on Scaling Theory, we focus on research methodology, exploring how AI is reshaping the very process of doing research and what that shift means for science and society at large. I sat down with James Evans, Professor of Sociology, Computational and Data Science at the University of Chicago, External Professor at the Santa Fe Institute, and Faculty Member at the Complexity Science Hub in Vienna, to explore how AI is transforming the way we simulate, scale, and understand human behavior. We dive into his pioneering work on using large language models to simulate individuals, societies, and entire social systems. James and I explore the strengths and limits of AI agents for both the social and hard sciences before reflecting on the future of social science itself. We talk about research centers entirely run by AI and conferences conducted by AI agents, without any human involvement. 
We also discuss the role of small research teams in disruptive innovation, and how to cultivate proximity and serendipity in a research world where we increasingly cooperate with machines. You can follow me on X (@ProfSchrepel) and BlueSky (@ProfSchrepel) to receive regular updates.
References:
- Simulating Subjects: The Promise and Peril of AI Stand-ins for Social Agents and Interactions (2025) https://osf.io/preprints/socarxiv/vp3j2_v3
- LLM Social Simulations Are a Promising Research Method (2025) https://arxiv.org/pdf/2504.02234
- Large teams develop and small teams disrupt science and technology (2019) https://www.nature.com/articles/s41586-019-0941-9
- AI Expands Scientists' Impact but Contracts Science's Focus (2024) https://arxiv.org/abs/2412.07727
- The Paradox of Collective Certainty in Science (2024) https://arxiv.org/html/2406.05809v1
- Being Together in Place as a Catalyst for Scientific Advance (Research Policy, 2023) https://www.sciencedirect.com/science/article/pii/S0048733323001956
Interview with Sofia Lugli, who holds a degree in Computational and Theoretical Modelling of Language and Cognition and who, from October 2024 to March 2025, is doing a traineeship at the European Commission's AI-based Services Unit in DG Translation. What are the Blue Book traineeships? The Blue Book Traineeship programme is a paid five-month traineeship offered by the European Commission, which allows graduates from all over the world to gain hands-on experience in EU policy-making and administration. Trainees work in a multicultural environment, develop professional skills and learn how the EU institutions function. Podcast EUZONE 2025: the new podcast series produced by Radio FSC-Unimore in collaboration with the EUROPE DIRECT centre of Modena, to explore the big topics of European current affairs and to get a close-up view of the European Union's action on the ground, discussing how European policies affect citizens' everyday lives. Five episodes involving both the experts of the initiatives organised by the Europe Direct centre and the centre's own experts. EUZONE - to explore the big topics of European current affairs - to understand how European policies affect citizens' lives - to get a close-up view of the EU's action on the ground - to debunk false myths about the EU. Follow us on Instagram! https://www.instagram.com/radiofsc_unimore/ www.radiofsc.it
In this episode Gudrun speaks with Nadja Klein and Moussa Kassem Sbeyti who work at the Scientific Computing Center (SCC) at KIT in Karlsruhe. Since August 2024, Nadja has been a professor at KIT, leading the research group Methods for Big Data (MBD) there. She is an Emmy Noether Research Group Leader and a member of AcademiaNet and Die Junge Akademie, among others. In 2025, Nadja was awarded the Committee of Presidents of Statistical Societies (COPSS) Emerging Leader Award (ELA). The COPSS ELA recognizes early career statistical scientists who show evidence of and potential for leadership and who will help shape and strengthen the field. She finished her doctoral studies in Mathematics at the Universität Göttingen before conducting a postdoc at the University of Melbourne as a Feodor Lynen Fellow of the Alexander von Humboldt Foundation. Afterwards she was a Professor for Statistics and Data Science at the Humboldt-Universität zu Berlin before joining KIT. Moussa joined Nadja's lab as an associated member in 2023 and later as a postdoctoral researcher in 2024. He pursued a PhD at the TU Berlin while working as an AI Research Scientist at the Continental AI Lab in Berlin. His research primarily focuses on deep learning, developing uncertainty-based automated labeling methods for 2D object detection in autonomous driving. Prior to this, Moussa earned his M.Sc. in Mechatronics Engineering from the TU Darmstadt in 2021. The research of Nadja and Moussa is at the intersection of statistics and machine learning. In Nadja's MBD Lab the research spans theoretical analysis, method development and real-world applications. One of their key focuses is Bayesian methods, which allow them to incorporate prior knowledge, quantify uncertainties, and bring insight into the “black boxes” of machine learning. By fusing the precision and reliability of Bayesian statistics with the adaptability of machine and deep learning, these methods aim to leverage the best of both worlds. 
The KIT offers a strong research environment, making it an ideal place to continue their work. They bring new expertise that can be leveraged in various applications, and on the other hand Helmholtz offers a great platform to explore new application areas. For example, Moussa decided to join the group at KIT as part of the Helmholtz Pilot Program Core-Informatics at KIT (KiKIT), an initiative focused on advancing fundamental research in informatics within the Helmholtz Association. Vision models typically depend on large volumes of labeled data, but collecting and labeling this data is both expensive and prone to errors. During his PhD, his research centered on data-efficient learning using uncertainty-based automated labeling techniques. That means estimating and using model uncertainty to select the most informative samples for training, so that the models can then label the rest themselves. Now, within KiKIT, his work has evolved to include knowledge-based approaches in multi-task models, e.g. detection and depth estimation, with the broader goal of enabling the development and deployment of reliable, accurate vision systems in real-world applications. Statistics and data science are fascinating fields, offering a wide variety of methods and applications that constantly lead to new insights. Within this domain, Bayesian methods are especially compelling, as they enable the quantification of uncertainty and the incorporation of prior knowledge. These capabilities contribute to making machine learning models more data-efficient, interpretable, and robust, which are essential qualities in safety-critical domains such as autonomous driving and personalized medicine. Nadja is also enthusiastic about the interdisciplinarity of the subject, repeatedly shifting focus from mathematics to economics to statistics to computer science. 
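The labeling loop Moussa describes can be sketched in a few lines. The following is an illustrative toy, not the group's actual method: it scores a batch of hypothetical softmax outputs by predictive entropy, routes the most uncertain samples to human annotators, and lets the model pseudo-label the confident remainder.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of softmax outputs; higher means the model is less certain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def split_by_uncertainty(probs, budget):
    """Route the `budget` most uncertain samples to human annotators and
    pseudo-label the confident rest with the model's own argmax."""
    order = np.argsort(predictive_entropy(probs))[::-1]  # most uncertain first
    to_human, auto = order[:budget], order[budget:]
    auto_labels = probs[auto].argmax(axis=1)
    return to_human, auto, auto_labels

# toy batch: four hypothetical softmax outputs over three classes
probs = np.array([[0.34, 0.33, 0.33],   # near-uniform: very uncertain
                  [0.98, 0.01, 0.01],   # confident
                  [0.50, 0.45, 0.05],   # two classes compete
                  [0.01, 0.01, 0.98]])  # confident
to_human, auto, labels = split_by_uncertainty(probs, budget=1)
```

With a labeling budget of one, only the near-uniform sample goes to a human; the other three are auto-labeled. In a real pipeline the selection criterion would be a proper uncertainty estimate rather than raw softmax entropy, which is known to be overconfident.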
The combination of theoretical fundamentals and practical applications makes statistics an agile and important field of research in data science. From a deep learning perspective, the focus is on making models both more efficient and more reliable when dealing with large-scale data and complex dependencies. One way to do this is by reducing the need for extensive labeled data. They also work on developing self-aware models that can recognize when they're unsure and even reject their own predictions when necessary. Additionally, they explore model pruning techniques to improve computational efficiency, and specialize in Bayesian deep learning, allowing machine learning models to better handle uncertainty and complex dependencies. Beyond the methods themselves, they also contribute by publishing datasets that help push the development of next-generation, state-of-the-art models. The learning methods are applied across different domains such as object detection, depth estimation, semantic segmentation, and trajectory prediction — especially in the context of autonomous driving and agricultural applications. As deep learning technologies continue to evolve, they're also expanding into new application areas such as medical imaging. Unlike traditional deep learning, Bayesian deep learning provides uncertainty estimates alongside predictions, allowing for more principled decision-making and reducing catastrophic failures in safety-critical applications. It has had a growing impact in several real-world domains where uncertainty really matters. Bayesian learning incorporates prior knowledge and updates beliefs as new data comes in, rather than relying purely on data-driven optimization. In healthcare, for example, Bayesian models help quantify uncertainty in medical diagnoses, which supports more risk-aware treatment decisions and can ultimately lead to better patient outcomes. In autonomous vehicles, Bayesian models play a key role in improving safety. 
By recognizing when the system is uncertain, they help capture edge cases more effectively, reduce false positives and negatives in object detection, and navigate complex, dynamic environments — like bad weather or unexpected road conditions — more reliably. In finance, Bayesian deep learning enhances both risk assessment and fraud detection by allowing the system to assess how confident it is in its predictions. That added layer of information supports more informed decision-making and helps reduce costly errors. Across all these areas, the key advantage is the ability to move beyond just accuracy and incorporate trust and reliability into AI systems. Bayesian methods are traditionally more expensive, but modern approximations (e.g., variational inference or last layer inference) make them feasible. Computational costs depend on the problem — sometimes Bayesian models require fewer data points to achieve better performance. The trade-off is between interpretability and computational efficiency, but hardware improvements are helping bridge this gap. Their research on uncertainty-based automated labeling is designed to make models not just safer and more reliable, but also more efficient. By reducing the need for extensive manual labeling, one improves the overall quality of the dataset while cutting down on human effort and potential labeling errors. Importantly, by selecting informative samples, the model learns from better data — which means it can reach higher performance with fewer training examples. This leads to faster training and better generalization without sacrificing accuracy. They also focus on developing lightweight uncertainty estimation techniques that are computationally efficient, so these benefits don't come with heavy resource demands. 
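As a concrete example of a lightweight approximation in this spirit: the episode mentions variational and last-layer inference; Monte Carlo dropout, sketched below, is a related cheap alternative and not necessarily what the group uses. Keeping dropout active at test time and averaging several stochastic forward passes of a tiny toy network yields a mean prediction plus a spread that serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# a tiny fixed "trained" regression network: one hidden layer with dropout
W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def forward(x, drop_rate=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate  # dropout stays ON at test time
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return h @ W2 + b2

def mc_predict(x, n_samples=200):
    """Mean prediction and spread across stochastic forward passes;
    the spread acts as a crude epistemic-uncertainty estimate."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_predict(x)
```

The per-pass randomness comes only from the dropout masks, so the standard deviation across passes reflects how sensitive the prediction is to ablating parts of the network, which is the quantity a downstream system can threshold on before trusting a prediction.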
In short, this approach helps build models that are more robust, more adaptive to new data, and significantly more efficient to train and deploy — which is critical for real-world systems where both accuracy and speed matter. Statisticians and deep learning researchers often use distinct methodologies, vocabulary and frameworks, making communication and collaboration challenging. Unfortunately, there is a lack of interdisciplinary education: traditional academic programs rarely integrate both fields. Joint programs, workshops, and cross-disciplinary training can help bridge this gap. From Moussa's experience coming through an industrial PhD, he has seen how many industry settings tend to prioritize short-term gains — favoring quick wins in deep learning over deeper, more fundamental improvements. To overcome this, we need to build long-term research partnerships between academia and industry — ones that allow foundational work to evolve alongside practical applications. That kind of collaboration can drive more sustainable, impactful innovation in the long run, something we do at Methods for Big Data. Looking ahead, one of the major directions for deep learning in the next five to ten years is the shift toward trustworthy AI. We're already seeing growing attention on making models more explainable, fair, and robust — especially as AI systems are being deployed in critical areas like healthcare, mobility, and finance. The group also expects to see more hybrid models — combining deep learning with Bayesian methods, physics-based models, or symbolic reasoning. These approaches can help bridge the gap between raw performance and interpretability, and often lead to more data-efficient solutions. Another big trend is the rise of uncertainty-aware AI. As AI moves into more high-risk, real-world applications, it becomes essential that systems understand and communicate their own confidence. 
This is where uncertainty modeling will play a key role — helping to make AI not just more powerful, but also safer and more reliable. The lecture "Advanced Bayesian Data Analysis" covers fundamental concepts in Bayesian statistics, including parametric and non-parametric regression, computational techniques such as MCMC and variational inference, and Bayesian priors for handling high-dimensional data. Additionally, the lecturers offer a Research Seminar on Selected Topics in Statistical Learning and Data Science. The workgroup offers a variety of Master's thesis topics at the intersection of statistics and deep learning, focusing on Bayesian modeling, uncertainty quantification, and high-dimensional methods. Current topics include predictive information criteria for Bayesian models and uncertainty quantification in deep learning. Topics span theoretical, methodological, computational and applied projects. Students interested in rigorous theoretical and applied research are encouraged to explore our available projects and contact us for further details. The general advice of Nadja and Moussa for everybody interested in entering the field is: "Develop a strong foundation in statistical and mathematical principles, rather than focusing solely on the latest trends. Gain expertise in both theory and practical applications, as real-world impact requires a balance of both. Be open to interdisciplinary collaboration. Some of the most exciting and meaningful innovations happen at the intersection of fields — whether that's statistics and deep learning, or AI and domain-specific areas like medicine or mobility. So don't be afraid to step outside your comfort zone, ask questions across disciplines, and look for ways to connect different perspectives. That's often where real breakthroughs happen. With every new challenge comes an opportunity to innovate, and that's what keeps this work exciting. We're always pushing for more robust, efficient, and trustworthy AI. 
And we're also growing — so if you're a motivated researcher interested in this space, we'd love to hear from you."
Literature and further information:
- Webpage of the group
- G. Nuti, L. A. J. Rugama, A.-I. Cross: Efficient Bayesian Decision Tree Algorithm, arXiv:1901.03214 [stat.ML], 2019.
- Wikipedia: Expected value of sample information
- C. Howson & P. Urbach: Scientific Reasoning: The Bayesian Approach (3rd ed.), Open Court Publishing Company, ISBN 978-0-8126-9578-6, 2005.
- A. Gelman et al.: Bayesian Data Analysis, Third Edition, Chapman and Hall/CRC, ISBN 978-1-4398-4095-5, 2013.
- A. Yu: Introduction to Bayesian Decision Theory, cogsci.ucsd.edu, 2013.
- D. Soni: Introduction to Bayesian Networks, 2015.
- M. Carlan, T. Kneib and N. Klein: Bayesian conditional transformation models, Journal of the American Statistical Association, 119(546):1360-1373, 2024.
- N. Klein: Distributional regression for data analysis, Annual Review of Statistics and Its Application, 11:321-346, 2024.
- C. Hoffmann and N. Klein: Marginally calibrated response distributions for end-to-end learning in autonomous driving, Annals of Applied Statistics, 17(2):1740-1763, 2023.
- M. Kassem Sbeyti, M. Karg, C. Wirth, N. Klein and S. Albayrak: Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection, Uncertainty in Artificial Intelligence, pp. 1890-1900, PMLR, 2024.
- M. K. Sbeyti, N. Klein, A. Nowzad, F. Sivrikaya and S. Albayrak: Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection (pdf), to appear in Transactions on Machine Learning Research, 2025.
Podcasts:
- Learning, Teaching, and Building in the Age of AI, Ep. 42 of Vanishing Gradients, Jan 2025.
- O. Beige, G. Thäter: Risikoentscheidungsprozesse, Gespräch im Modellansatz Podcast, Folge 193, Fakultät für Mathematik, Karlsruher Institut für Technologie (KIT), 2019.
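Of the computational techniques named in the lecture description, MCMC is perhaps the easiest to illustrate. Below is a minimal random-walk Metropolis sampler, purely illustrative and not code from the group, targeting the posterior of a normal mean under a weak normal prior; in this near-conjugate setup the posterior mean should land close to the data mean, which gives a simple sanity check.

```python
import numpy as np

rng = np.random.default_rng(42)

# data: 50 draws from N(2, 1); weak prior on the mean: N(0, 10^2)
data = rng.normal(2.0, 1.0, size=50)

def log_post(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2          # N(0, 100) prior, up to a constant
    log_lik = -0.5 * np.sum((data - mu) ** 2)    # N(mu, 1) likelihood, up to a constant
    return log_prior + log_lik

def metropolis(n_steps=20000, step=0.5):
    mu, samples = 0.0, []
    lp = log_post(mu)
    for _ in range(n_steps):
        prop = mu + rng.normal(0.0, step)            # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept with prob min(1, ratio)
            mu, lp = prop, lp_prop
        samples.append(mu)
    return np.array(samples[5000:])                  # discard burn-in

samples = metropolis()
```

Because the prior is weak relative to 50 observations, the posterior mean of `samples` essentially reproduces `data.mean()`; the spread of `samples` approximates the posterior standard deviation, which is the uncertainty quantity the episode keeps returning to.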
In this episode Gudrun speaks with Nadja Klein and Moussa Kassem Sbeyti who work at the Scientific Computing Center (SCC) at KIT in Karlsruhe. Since August 2024, Nadja has been professor at KIT leading the research group Methods for Big Data (MBD) there. She is an Emmy Noether Research Group Leader, and a member of AcademiaNet, and Die Junge Akademie, among others. In 2025, Nadja was awarded the Committee of Presidents of Statistical Societies (COPSS) Emerging Leader Award (ELA). The COPSS ELA recognizes early career statistical scientists who show evidence of and potential for leadership and who will help shape and strengthen the field. She finished her doctoral studies in Mathematics at the Universität Göttingen before conducting a postdoc at the University of Melbourne as a Feodor-Lynen fellow by the Alexander von Humboldt Foundation. Afterwards she was a Professor for Statistics and Data Science at the Humboldt-Universität zu Berlin before joining KIT. Moussa joined Nadja's lab as an associated member in 2023 and later as a postdoctoral researcher in 2024. He pursued a PhD at the TU Berlin while working as an AI Research Scientist at the Continental AI Lab in Berlin. His research primarily focuses on deep learning, developing uncertainty-based automated labeling methods for 2D object detection in autonomous driving. Prior to this, Moussa earned his M.Sc. in Mechatronics Engineering from the TU Darmstadt in 2021. The research of Nadja and Moussa is at the intersection of statistics and machine learning. In Nadja's MBD Lab the research spans theoretical analysis, method development and real-world applications. One of their key focuses is Bayesian methods, which allow to incorporate prior knowledge, quantify uncertainties, and bring insights to the “black boxes” of machine learning. By fusing the precision and reliability of Bayesian statistics with the adaptability of machine and deep learning, these methods aim to leverage the best of both worlds. 
The KIT offers a strong research environment, making it an ideal place to continue their work. They bring new expertise that can be leveraged in various applications and on the other hand Helmholtz offers a great platform in that respect to explore new application areas. For example Moussa decided to join the group at KIT as part of the Helmholtz Pilot Program Core-Informatics at KIT (KiKIT), which is an initiative focused on advancing fundamental research in informatics within the Helmholtz Association. Vision models typically depend on large volumes of labeled data, but collecting and labeling this data is both expensive and prone to errors. During his PhD, his research centered on data-efficient learning using uncertainty-based automated labeling techniques. That means estimating and using the uncertainty of models to select the helpful data samples to train the models to label the rest themselves. Now, within KiKIT, his work has evolved to include knowledge-based approaches in multi-task models, eg. detection and depth estimation — with the broader goal of enabling the development and deployment of reliable, accurate vision systems in real-world applications. Statistics and data science are fascinating fields, offering a wide variety of methods and applications that constantly lead to new insights. Within this domain, Bayesian methods are especially compelling, as they enable the quantification of uncertainty and the incorporation of prior knowledge. These capabilities contribute to making machine learning models more data-efficient, interpretable, and robust, which are essential qualities in safety-critical domains such as autonomous driving and personalized medicine. Nadja is also enthusiastic about the interdisciplinarity of the subject — repeatedly changing the focus from mathematics to economics to statistics to computer science. 
The combination of theoretical fundamentals and practical applications makes statistics an agile and important field of research in data science. From a deep learning perspective, the focus is on making models both more efficient and more reliable when dealing with large-scale data and complex dependencies. One way to do this is by reducing the need for extensive labeled data. They also work on developing self-aware models that can recognize when they're unsure and even reject their own predictions when necessary. Additionally, they explore model pruning techniques to improve computational efficiency, and specialize in Bayesian deep learning, allowing machine learning models to better handle uncertainty and complex dependencies. Beyond the methods themselves, they also contribute by publishing datasets that help push the development of next-generation, state-of-the-art models. The learning methods are applied across different domains such as object detection, depth estimation, semantic segmentation, and trajectory prediction — especially in the context of autonomous driving and agricultural applications. As deep learning technologies continue to evolve, they're also expanding into new application areas such as medical imaging. Unlike traditional deep learning, Bayesian deep learning provides uncertainty estimates alongside predictions, allowing for more principled decision-making and reducing catastrophic failures in safety-critical application. It has had a growing impact in several real-world domains where uncertainty really matters. Bayesian learning incorporates prior knowledge and updates beliefs as new data comes in, rather than relying purely on data-driven optimization. In healthcare, for example, Bayesian models help quantify uncertainty in medical diagnoses, which supports more risk-aware treatment decisions and can ultimately lead to better patient outcomes. In autonomous vehicles, Bayesian models play a key role in improving safety. 
By recognizing when the system is uncertain, they help capture edge cases more effectively, reduce false positives and negatives in object detection, and navigate complex, dynamic environments — like bad weather or unexpected road conditions — more reliably. In finance, Bayesian deep learning enhances both risk assessment and fraud detection by allowing the system to assess how confident it is in its predictions. That added layer of information supports more informed decision-making and helps reduce costly errors. Across all these areas, the key advantage is the ability to move beyond just accuracy and incorporate trust and reliability into AI systems. Bayesian methods are traditionally more expensive, but modern approximations (e.g., variational inference or last layer inference) make them feasible. Computational costs depend on the problem — sometimes Bayesian models require fewer data points to achieve better performance. The trade-off is between interpretability and computational efficiency, but hardware improvements are helping bridge this gap. Their research on uncertainty-based automated labeling is designed to make models not just safer and more reliable, but also more efficient. By reducing the need for extensive manual labeling, one improves the overall quality of the dataset while cutting down on human effort and potential labeling errors. Importantly, by selecting informative samples, the model learns from better data — which means it can reach higher performance with fewer training examples. This leads to faster training and better generalization without sacrificing accuracy. They also focus on developing lightweight uncertainty estimation techniques that are computationally efficient, so these benefits don't come with heavy resource demands. 
In short, this approach helps build models that are more robust, more adaptive to new data, and significantly more efficient to train and deploy — which is critical for real-world systems where both accuracy and speed matter. Statisticians and deep learning researchers often use distinct methodologies, vocabulary and frameworks, making communication and collaboration challenging. Unfortunately, there is a lack of interdisciplinary education: traditional academic programs rarely integrate both fields. Joint programs, workshops, and cross-disciplinary training can help bridge this gap. From Moussa's experience coming through an industrial PhD, he has seen how many industry settings tend to prioritize short-term gains — favoring quick wins in deep learning over deeper, more fundamental improvements. To overcome this, we need to build long-term research partnerships between academia and industry — ones that allow foundational work to evolve alongside practical applications. That kind of collaboration can drive more sustainable, impactful innovation in the long run, something the Methods for Big Data group is doing. Looking ahead, one of the major directions for deep learning in the next five to ten years is the shift toward trustworthy AI. We're already seeing growing attention on making models more explainable, fair, and robust — especially as AI systems are being deployed in critical areas like healthcare, mobility, and finance. The group also expects to see more hybrid models — combining deep learning with Bayesian methods, physics-based models, or symbolic reasoning. These approaches can help bridge the gap between raw performance and interpretability, and often lead to more data-efficient solutions. Another big trend is the rise of uncertainty-aware AI. As AI moves into more high-risk, real-world applications, it becomes essential that systems understand and communicate their own confidence.
This is where uncertainty modeling will play a key role — helping to make AI not just more powerful, but also safer and more reliable. The lecture "Advanced Bayesian Data Analysis" covers fundamental concepts in Bayesian statistics, including parametric and non-parametric regression, computational techniques such as MCMC and variational inference, and Bayesian priors for handling high-dimensional data. Additionally, the lecturers offer a Research Seminar on Selected Topics in Statistical Learning and Data Science. The workgroup offers a variety of Master's thesis topics at the intersection of statistics and deep learning, focusing on Bayesian modeling, uncertainty quantification, and high-dimensional methods. Current topics include predictive information criteria for Bayesian models and uncertainty quantification in deep learning. Topics span theoretical, methodological, computational and applied projects. Students interested in rigorous theoretical and applied research are encouraged to explore the available projects and contact the group for further details. The general advice of Nadja and Moussa for everyone interested in entering the field is: "Develop a strong foundation in statistical and mathematical principles, rather than focusing solely on the latest trends. Gain expertise in both theory and practical applications, as real-world impact requires a balance of both. Be open to interdisciplinary collaboration. Some of the most exciting and meaningful innovations happen at the intersection of fields — whether that's statistics and deep learning, or AI and domain-specific areas like medicine or mobility. So don't be afraid to step outside your comfort zone, ask questions across disciplines, and look for ways to connect different perspectives. That's often where real breakthroughs happen. With every new challenge comes an opportunity to innovate, and that's what keeps this work exciting. We're always pushing for more robust, efficient, and trustworthy AI.
And we're also growing — so if you're a motivated researcher interested in this space, we'd love to hear from you."

Literature and further information
Webpage of the group
G. Nuti, L. A. J. Rugama, A.-I. Cross: Efficient Bayesian Decision Tree Algorithm, arXiv:1901.03214 [stat.ML], 2019.
Wikipedia: Expected value of sample information
C. Howson & P. Urbach: Scientific Reasoning: The Bayesian Approach (3rd ed.), Open Court Publishing Company, ISBN 978-0-8126-9578-6, 2005.
A. Gelman et al.: Bayesian Data Analysis, Third Edition, Chapman and Hall/CRC, ISBN 978-1-4398-4095-5, 2013.
A. Yu: Introduction to Bayesian Decision Theory, cogsci.ucsd.edu, 2013.
D. Soni: Introduction to Bayesian Networks, 2015.
M. Carlan, T. Kneib and N. Klein: Bayesian conditional transformation models, Journal of the American Statistical Association, 119(546):1360-1373, 2024.
N. Klein: Distributional regression for data analysis, Annual Review of Statistics and Its Application, 11:321-346, 2024.
C. Hoffmann and N. Klein: Marginally calibrated response distributions for end-to-end learning in autonomous driving, Annals of Applied Statistics, 17(2):1740-1763, 2023.
M. Kassem Sbeyti, M. Karg, C. Wirth, N. Klein and S. Albayrak: Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection, Uncertainty in Artificial Intelligence, pp. 1890-1900, PMLR, 2024.
M. K. Sbeyti, N. Klein, A. Nowzad, F. Sivrikaya and S. Albayrak: Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection, to appear in Transactions on Machine Learning Research, 2025.

Podcasts
Learning, Teaching, and Building in the Age of AI, Ep 42 of Vanishing Gradients, Jan 2025.
O. Beige, G. Thäter: Risikoentscheidungsprozesse, Gespräch im Modellansatz Podcast, Folge 193, Fakultät für Mathematik, Karlsruher Institut für Technologie (KIT), 2019.
Héctor Cancela holds a PhD degree in Computer Science from the University of Rennes 1, INRIA Rennes, France (1996), and a Computer Systems Engineer degree from the Universidad de la República, Uruguay (1990). He is a Full Professor at the Computing Institute at the Engineering School of the Universidad de la República (Uruguay), which he led in two periods: 2006-2010 and 2017-2023. He was Dean of the Engineering School of the Universidad de la República (2010-2015). He is a Researcher at the National Program for the Development of Basic Sciences (PEDECIBA), Uruguay. His research interests are centered on network models and stochastic models, applied jointly with optimization methods for solving problems in different areas (reliability, communications, transport, production, biological applications, agricultural applications, etc.). He has published more than 100 full papers in international journals, indexed conference proceedings and book chapters. He has supervised more than 20 Ph.D. and M.Sc. theses. He has been General Chair and Program Chair of several international events, and a member of the Program Committee of more than 50 international conferences. He participated in the development of accreditation standards for MERCOSUR engineering programs. He was a member of the task force which prepared the ACM/IEEE Computing Curricula 2020 report (ACM/IEEE CC 2020). He is associate editor of the journals International Transactions in Operations Research (ITOR), RAIRO Operations Research (RAIRO-OR), Mathematical Methods of Operations Research (MMOR), Computational and Applied Mathematics (COAM), and a member of the editorial board of the journals Pesquisa Operacional (Brazil) and Ingenieria de Sistemas (Chile). Between 2010 and 2019 he was editor-in-chief of CLEIej, the electronic journal of the CLEI association. He is an IEEE Senior Member and a member of ACM.
He is a former President of CLEI (Centro Latinoamericano de Estudios en Informática – 2016-2020), and a former president of ALIO (Asociación Latino Ibero Americana de Investigación Operativa – 2006-2010). He is currently president of IFORS (International Federation of Operational Research Societies, 2025-2027).
For today's episode we learn about the cerebellum with Dr. Reza Shadmehr. Dr. Shadmehr is a trailblazing neuroscientist whose groundbreaking work has reshaped our understanding of how the brain controls movement. With a rich academic journey—from a bachelor's in electrical engineering to a PhD in robotics and computer science, followed by a postdoctoral fellowship at MIT—Dr. Shadmehr now leads the Shadmehr Lab at Johns Hopkins University. We dive into his pioneering theories, including motor memory consolidation, state space theory, and the neural encoding of action by the cerebellum's Purkinje cells. The conversation explores the physics of motor movement, prediction, error correction, and the often-overlooked power of the cerebellum—a brain region Dr. Shadmehr calls an "underrated yet powerful" player in our daily lives. Dr. Shadmehr shares his personal path into neuroscience, sparked by a childhood fascination with the brain. This curiosity led him to blend engineering principles with biology, culminating in a lifelong mission to decode how the brain builds internal models for movement. We unpack the cerebellum's critical role in fine-tuning actions—from everyday feats like stopping the tongue precisely or ensuring eye movements hit their mark, to cutting-edge research with marmosets. The episode also touches on the interplay between reward, effort, and cerebellar function, revealing surprising discoveries about how this brain region cancels noise to keep our movements smooth and purposeful.
Shadmehr Lab: http://shadmehrlab.org
Publications: http://shadmehrlab.org/publications
YT Videos (Very Good!): https://www.youtube.com/@shadmehrlab1352
Daylight Computer Company: use "autism" for $25 off at https://buy.daylightcomputer.com/RYAN03139
Chroma Light Devices: use "autism" for 10% discount at https://getchroma.co/?ref=autism

00:00 Reza Shadmehr
02:26 Daylight Computer Company, use "autism" for $25 discount
06:45 Chroma Light Devices - Lights Designed for Humans, use "autism" for 10% discount
9:54 Reza's journey into Biomedical Engineering & Neuroscience
16:26 Understanding the Cerebellum; 3 primary functions
22:07 Neuronal Communication & Purkinje Cells; Sensory and Interneurons
25:41 Cerebellum, eyes, and Autistic phenotype; Mesencephalon & other connections
28:13 Excitation/Inhibition (E/I) balance; Video examples & Movement deficiencies
29:29 Layers of the Cerebellum
34:08 E/I & Brain function
36:30 Learning & Memory in the Cerebellum
39:54 Dysfunction & Brain Compensation
41:57 Basal Ganglia and Cerebellum
43:48 Internal Calculators & Prediction
49:42 Reward & Movement & Accuracy
52:52 Movement control; Eye movements
55:56 Purkinje Cells & Tongue movements; deceleration
59:47 Understanding Language of the Cerebellum
1:01:58 Future of Cerebellar Technology

X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
In episode 34, Thibault Schrepel talks to Andy Chen, Vice Chair (and Acting Chair) of the Taiwan Fair Trade Commission. Thibault and Andy talk about how the TFTC uses computational tools in merger review, price monitoring, and cartel detection. Andy shares insights on the agency's internal structure, the challenges of explainability, and how computational enforcement might reshape antitrust in Taiwan by 2040. Follow the Stanford Computational Antitrust project by subscribing to our newsletter at https://law.stanford.edu/computationalantitrust
Guest: Dr. Bruce Y. Lee
Senior Contributor @Forbes | Professor | CEO | Writer/Journalist | Entrepreneur | Digital & Computational Health | #AI | bruceylee.substack.com | bruceylee.com
Bruce Y. Lee, MD, MBA is a writer, journalist, systems modeler, AI, computational and digital health expert, professor, physician, entrepreneur, and avocado-eater, not always in that order.
Executive Director of PHICOR (Public Health Informatics, Computational, and Operations Research) [@PHICORteam]
On LinkedIn | https://www.linkedin.com/in/bruce-y-lee-68a6834/
Website | https://www.bruceylee.com/
_____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
Visit Marco's website
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: AI, escalation, dubs. Eastern Europe Electric Roadtrip begins. Klostergeister in about a month, preparation in Slack. Bring books and pictures! Keep collecting stories and pictures with Alex in Slack, … Continue reading "#892 – 77777 Minuten". The post #892 – 77777 Minuten originally appeared here: Happy Shooting - Der Foto-Podcast.
Chris, Ade and Jeremiah explore the ways new technology can help you make fantastic photos.
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: placing Cool, Krass und Konkret in time, new Darktable UI, larger fonts? Vacation report: Pellworm. #hsfeedback from Arne and Jürgen: analog photography is intangible UNESCO cultural heritage. #Addendum from Dieter on #HS888 and … Continue reading "#891 – Puschelpflicht". The post #891 – Puschelpflicht originally appeared here: Happy Shooting - Der Foto-Podcast.
In episode 33, Thibault Schrepel talks to Natalie Harsdorf, Director General of the Austrian Federal Competition Authority. Thibault and Natalie discuss her priorities at the Austrian Competition Authority, with a focus on the role computational antitrust plays in the agency's strategy and daily operations. They also cover how international cooperation in antitrust is evolving and explore what the future might hold for competition policy. Follow the Stanford Computational Antitrust project at https://law.stanford.edu/computationalantitrust.
Corin Wagen is the Founder and CEO of Rowan. During our conversation, we talk about Corin's journey from the Jacobsen Lab at Harvard to starting Rowan with his brother. Rowan builds design and simulation software for chemistry. The company uses machine learning on quantum mechanics data to predict molecular interactions with high accuracy. Rowan trains its models on internal data and information from publicly available datasets. A key part of the product is reducing the time to make these predictions. Legacy, often on-prem, software can take weeks of computing to calculate, say, the pKa or redox potential of a compound. Rowan uses machine learning to speed up these predictions with comparable accuracy. Rowan's vision is to make their tools accessible to all types of chemists and scientists, not just experts in particular areas of computational chemistry: making computational chemistry as easy to use as Uber, Venmo, or ChatGPT, and simplifying complex calculations like reaction prediction, property estimation, and data analysis. Hundreds of chemists are already using Rowan.
Costas Maranas is a Professor of Chemical Engineering at Penn State.
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: sealed off from air, sometimes something sticks, alarm-clock rooster. #hsfeedbacks on Boris's loss of photography mojo. #hsfrage from Stephan: a fellow student is looking for a photo printer, and how do I search the archive? Boris has an Epson ET … Continue reading "#890 – Spaßstörung". The post #890 – Spaßstörung originally appeared here: Happy Shooting - Der Foto-Podcast.
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: I've always wanted a remote-controlled car… Captcha Photographer (en, YouTube). #hsfeedback from Rüdiger: interesting links on Petapixel. #hshi from Jürgen: archive of the German South Pole expeditions opened – Leibniz IFL … Continue reading "#889 – Schleifchen drum". The post #889 – Schleifchen drum originally appeared here: Happy Shooting - Der Foto-Podcast.
In this remarkable conversation, Michael Levin (Tufts University) and Blaise Aguera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.

Michael's "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.

Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.

The conversation unfolds around several interwoven questions:
- How does genuine agency emerge from simple rule-following components?
- Why might intelligence be more fundamental than life itself?
- How do we recognize cognition in systems that operate unlike human intelligence?
- What constitutes the difference between patterns and the physical substrates expressing them?
- How might symbiosis between humans and synthetic intelligence reshape both?

Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own.
As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."

The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve, not as opposing forces but as variations on a universal principle of information processing across different substrates.

For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.

------

Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.

Thanks again to Jonathan Coulton for our music.
We have another amazing guest for this episode: Anders Sandberg is a visionary philosopher, futurist, and transhumanist thinker whose work pushes the boundaries of human potential and the future of intelligence. As a senior research fellow at Oxford University's Future of Humanity Institute until its closing in 2024, Sandberg explored everything from cognitive enhancement and artificial intelligence to existential risks and space colonization. With a background in computational neuroscience, he bridges science and philosophy to tackle some of the most profound questions of our time: How can we expand our cognitive capacities? What are the ethical implications of radical life extension? Could we one day transcend biological limitations entirely? Known for his sharp intellect, playful curiosity, and fearless speculation, Sandberg challenges conventional wisdom, inviting us to imagine—and shape—a future where humanity thrives beyond its current constraints.

00:00 Introduction
04:18 Exercise & David Sinclair
06:10 Will we survive the century?
18:18 Who can we trust? Knowledge and humility
23:17 Nuclear armageddon
39:51 Technology as a double-edged sword
44:30 Sandberg origin story
56:54 Computational neuroscience
01:00:30 Personal identity and neural simulation
01:05:24 Personal identity and reasons to want to continue living
01:09:39 The psychology behind different philosophical intuitions and judgments
01:17:48 Is death bad for Anders Sandberg?
01:25:00 Altruism and individual rights
01:31:29 Elon Musk says we must die for progress
01:35:10 Artificial Intelligence
01:55:08 AI civilization
02:02:07 Cryonics
02:04:00 Book recommendations

Hosted on Acast. See acast.com/privacy for more information.
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: live clocks and boards, Edding. #hserleuchtung for Martin: it's not called PeterPIXEL. Workshop update. Thanks to Alexandra for Auphonic credits and Thorsten for Lego Creator cameras. News: TTArtisan introduces the 203T retro folding camera for … Continue reading "#888 – PeterPixel". The post #888 – PeterPixel originally appeared here: Happy Shooting - Der Foto-Podcast.
Visualizing History's Fragments: A Computational Approach to Humanistic Research (Palgrave Macmillan, 2024) combines a methodological guide with an extended case study to show how digital research methods can be used to explore how ethnicity, gender, and kinship shaped early modern Algerian society and politics. However, the approaches presented have applications far beyond this specific study. More broadly, these methods are relevant for those interested in identifying and studying relational data, demographics, politics, discourse, authorial bias, and social networks of both known and unnamed actors. Ashley R. Sanders explores how digital research methods can be used to study archival specters - people who lived, breathed, and made their mark on history, but whose presence in the archives and extant documents remains limited, at best, if not altogether lost. Although digital tools cannot metaphorically resurrect the dead nor fill archival gaps, they can help us excavate the people-shaped outlines of those who might have filled these spaces. The six methodological chapters explain why and how each research method is used, present the visual and quantitative results, and analyze them within the context of the historical case study. In addition, every dataset is available on SpringerLink as Electronic Supplementary Material (ESM), and each chapter is accompanied by one or more video tutorials that demonstrate how to apply each of the techniques described (accessed via the SN More Media App). Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Video version. Almost always on Tuesdays, usually around 18:00: Happy Shooting Live. Join in daily on Slack – audio/video comments are also gladly accepted. From the preshow: programming with and for AI. #hsfeedback from Uwe on AI-based damage detection: cancer detection exists too. #hsfeedback from Rolf: What a Fantastic Machine! Workshop update – one spot still free at the … Continue reading "#887 – Wir reden nicht mit unsichtbaren Menschen". The post #887 – Wir reden nicht mit unsichtbaren Menschen originally appeared here: Happy Shooting - Der Foto-Podcast.
In recent years, Bitcoin has undergone a major culture shift which promotes stagnation, complacency & simping to politicians over maximizing the utility of the money. Eric Voskuil & John Carvalho join the show to remind everyone what the mission really is. State of Bitcoin - [00:01:17] Bitcoin Maximalism - [00:01:32] Bitcoin as a Ponzi Scheme - [00:02:27] Transaction Fees - [00:04:57] History of Bitcoin Tokens (Omni, Counterparty, Mastercoin) Definition of Tokens - [00:08:01] Custodial Problems with Tokens - [00:09:12] Bitcoin and Fiat Money - [00:11:09] Why Bitcoiners Talk About Money - [00:15:49] Stateless Money - [00:17:44] Austrian Economics and Bitcoin - [00:21:01] Monetary Inflation vs. Price Inflation - [00:26:01] Cantillon Effect - [00:29:00] Dollar Inflation and Gold - [00:33:59] Misunderstandings in the Bitcoin Community - [00:41:42] Bitcoin Semantics - [00:43:21] Bitcoin Divisibility - [01:00:13] Bitcoin Deflation - [01:03:41] Maxi Price and One Coin Assumption - [01:07:43] Competition Between Monies - [01:13:42] Scaling Bitcoin - [01:22:41] Bitcoin for the Unbanked - [01:26:14] Maximizing Throughput - [01:36:11] Right to Fork - [01:45:45] Running Old Bitcoin Versions - [01:51:35] Bitcoin as Money vs. Credit - [01:56:26] Settlement in Bitcoin - [02:07:45] Peer-to-Peer Credit Systems - [02:14:47] Fractional Reserve Banking - [02:26:32] Bitkit Wallet and Spending vs. 
Saving - [02:36:13] Block size increases and Bitcoin adoption - [03:00:00] Scaling Bitcoin and transaction validation - [03:01:00] Bitcoin overflowing into Litecoin and quantum resistance - [03:02:00] Pruning historical data and exchange price - [03:03:00] Lightning system complexity and Bitcoin's value proposition - [03:05:00] Bitcoin as an investment and speculation - [03:07:00] Optimizing Bitcoin throughput and developer motivations - [03:09:00] Scaling Bitcoin and speculation - [03:11:00] Shitcoins, scams, and Bitcoin's security model - [03:13:00] Litecoin's extension blocks and Mimblewimble - [03:15:00] Bitcoin's security and the legitimacy of altcoins - [03:17:00] Shitcoins and Bitcoin's essential aspects - [03:19:00] Majority hash power censorship and attacks - [03:21:00] Bitcoin speculation and market dynamics - [03:23:00] Michael Saylor's Bitcoin strategy and MicroStrategy's history - [03:26:00] Saylor's Bitcoin investment and market manipulation - [03:29:00] Saylor's stock sales and Bitcoin's future - [03:31:00] Blockstream's accomplishments and the Chia project - [03:33:00] Blockstream's influence and SegWit - [03:35:00] Adam Back's influence and Blockstream's hype - [03:37:00] Bitcoin Core's power and the need for competition - [03:39:00] Initial block download performance and Bitcoin Core's architecture - [03:41:00] UTXO store and Bitcoin Core's performance - [03:43:00] Parallelism in Bitcoin Core and assumed UTXO - [03:45:00] Initial block download time and Bitcoin Core's scalability - [03:47:00] Monoculture in Bitcoin development and IBD performance - [03:49:00] UTXO cache and shutdown time - [03:51:00] Trust assumptions in Bitcoin Core and UTXO commitments - [03:53:00] Bitcoin Core's halting problem and theoretical download limits - [03:55:00] Sponsorships: Sideshift, LayerTwo Labs, Ciurea - [03:57:00] Drivechains and ZK rollups - [04:02:00] ZK rollups and liquidity on Ethereum - [04:04:00] Drivechains and altcoins - [04:06:00] Scaling Bitcoin and 
cultural taboos - [04:08:00] Engineer-driven change and Monero's approach - [04:10:00] Confidential transactions: Zano & DarkFi - [04:12:00] Fungibility and Bitcoin's metadata - [04:14:00] Privacy, metadata, and state surveillance - [04:16:00] Privacy, taint, and Bitcoin mixing - [04:18:00] Bitcoin mixing and plausible deniability - [04:20:00] Mining and company registration - [04:22:00] Block reward and hash power - [04:24:00] Privacy and mixing - [04:26:00] Privacy in the Bitcoin whitepaper and zero-knowledge proofs - [04:28:00] Dark Wallet and John Dillon - [04:30:00] Dark Wallet and Libbitcoin - [04:32:00] Amir Taaki's projects and software development - [04:34:00] Dark Wallet funding and developer costs - [04:36:00] Libbitcoin's code size and developer salaries - [04:38:00] John Dillon and Greg Maxwell - [04:40:00] Opportunistic encryption and BIPs 151/152 - [04:42:00] Dandelion and privacy - [04:44:00] BIP 37 and Bloom filters - [04:46:00] Consensus cleanup and the Time Warp bug - [04:48:00] Merkle tree malleability and 64-byte transactions - [04:50:00] 64-byte transactions and SPV wallets - [04:52:00] Coinbase transactions and malleability - [04:54:00] Invalid block hashes and DoS vectors - [04:56:00] Core bug and ban list overflow - [04:58:00] Storing hashes of invalid blocks - [05:00:00] DoS vectors and invalid blocks - [05:02:00] Malleated Merkle trees and 64-byte transactions - [05:04:00] 64-byte transactions and Merkle tree malleability - [05:06:00] Null points and malleated blocks - [05:08:00] Redundant checks and the inflation soft fork - [05:10:00] Op code separator and code complexity - [05:12:00] Transaction order in a block - [05:14:00] Forward references in blocks - [05:16:00] Coinbase transaction rules - [05:18:00] Time Warp bug and Litecoin support - [05:20:00] Quadratic op roll bug - [05:22:00] Stack implementation and op roll - [05:24:00] Templatized stack and op roll optimization - [05:26:00] Non-standard transactions and direct submission 
to miners - [05:28:00] Mempool policy and DoS - [05:30:00] Monoculture and competing implementations - [05:32:00] Consensus cleanup and Berkeley DB - [05:34:00] Code vs. consensus - [05:36:00] Bitcoin Knots and Luke-jr - [05:38:00] 300 kilobyte node and Luke-jr's views - [05:40:00] Bitcoin Knots and performance - [05:42:00] Bitcoin Knots and censorship - [05:44:00] Censorship and miner incentives - [05:46:00] Censorship and hash power - [05:48:00] Soft forks and censorship - [05:50:00] Ordinals and covenants - [05:52:00] RBF and zero-confirmation transactions - [05:54:00] Double spending and merchant risk - [05:56:00] First-seen mempool policy and RBF - [05:58:00] Low-value transactions and RBF - [06:00:00] Computational cost of actions - [06:00:15] Building infrastructure and system disruption - [06:00:20] Threat actors and economic disruption - [06:00:26] Double spending detection and system control - [06:00:29] Safety and manageability of zero-conf transactions - [06:00:41] Security of zero-conf transactions - [06:00:51] RBF (Replace-by-fee) and its relevance - [06:01:06] Bitcoin's mempool and transaction handling - [06:01:25] Mempool overflow and resource management - [06:02:08] Transaction storage and mining - [06:02:45] Miners' incentives and fee maximization - [06:03:07] Mempool policy and DOS protection - [06:03:41] Transaction validation and block context - [06:04:11] Fee limits and DOS protection - [06:05:13] Transaction sets, graph processing, and fee maximization - [06:06:24] Mining empty blocks and hash rate - [06:07:34] Replace-by-fee (RBF) and its purpose - [06:08:07] Infrastructure and RBF - [06:09:14] Transaction pool and conflict resolution - [06:09:44] Disk space, fees, and DOS protection - [06:11:06] Fee rates and DOS protection - [06:12:22] Opt-in RBF and mempool full RBF - [06:13:45] Intent flagging in transactions - [06:14:45] Miners obeying user intent and system value - [06:17:06] Socialized gain and individual expense - [06:18:17] Service 
reliability and profitability - [06:19:06] First-seen mempool policy - [06:19:37] Mempool policy and implementation - [06:20:06] User perspective on transaction priority - [06:21:14] Mempool conflicts and double spending - [06:22:10] CPFP (Child Pays for Parent) - [06:22:24] Mempool management and fee rates - [06:24:30] Mempool complexity and Pieter Wuille's work - [06:25:54] Memory and disk resource management - [06:27:37] First-seen policy and miner profitability - [06:29:25] Miners' preference for first-seen - [06:30:04] Computational cost and fee optimization - [06:31:10] Security, Cypherpunk mentality, and the state - [06:35:25] Bitcoin's security model and censorship resistance - [06:41:02] State censorship and fee increases - [06:43:00] State's incentive to censor - [06:46:15] Lightning Network and regulation - [06:48:41] NGU (Number Go Up) and deference to the state - [06:51:10] Reasons for discussing Bitcoin's security model - [06:53:25] Bitcoin's potential subversion and resilience - [06:55:50] Lightning Network subsidies and scaling - [06:57:36] Mining protocols and security - [07:02:02] Braidpool and centralized mining - [07:04:44] Compact blocks and latency reduction - [07:07:23] Orphan rates and mining centralization - [07:08:16] Privacy and threat environments - [07:08:40] Social graphs, reputation, and identity - [07:10:23] Social scalability and Bitcoin - [07:12:36] Individual empowerment and anonymity - [07:16:48] Trust in society and the role of the state - [07:18:01] Payment methods and trust - [07:20:15] Credit reporting agencies and regulation - [07:22:17] Hardware wallets and self-custody - [07:23:46] Security vulnerabilities in Ledger - [07:27:14] Disclosure of secrets on Ledger devices - [07:36:27] Compromised machines and hardware wallets - [07:42:00] Methods for transferring signed transactions - [07:48:25] Threat scenarios and hardware wallet security - [07:50:47] Hardware wallet usage and personal comfort - [07:56:40] Coldcard wallets 
and user experience - [08:02:23] Security issues in the VX project - [08:03:25] Seed generation and hardware randomness - [08:12:05] Mastering Bitcoin and random number generation - [08:17:41]
Chris, Ade and Jeremiah explore the ways new technology can help you make fantastic photos.
This week we step back three years to review an important cardiac MRI report on Fontan geometry and hemodynamics as measured by computational fluid dynamic analysis. How do factors like Fontan geometry or 'power loss' relate to quality of life for the young adult Fontan patient? How do these data inform imaging in the operating room during these palliations? We speak with the first author of this work, Associate Professor of Pediatrics at U. Penn, Dr. Laura Mercer-Rosa, about this important and intriguing work. https://doi.org/10.1016/j.athoracsur.2022.01.017
Outline
00:00 - Intro
03:26 - Development: ETH Zürich
07:15 - Growth: Minnesota and Wisconsin
36:16 - Productivity: Caltech
53:28 - Change: ETH Zürich
01:37:18 - Continuity: University of Pennsylvania
01:45:36 - Outro
Links
Manfred's website: https://tinyurl.com/mryp38w3
Farewell lecture: https://tinyurl.com/2j5uxsp4
Laying the foundations: an advisor's perspective: https://tinyurl.com/ym5e4437
Internal model control. A unifying review and some new results: https://tinyurl.com/vmm7a4tk
Internal model control - PID controller design: https://tinyurl.com/rup3azjm
Robust process control: https://tinyurl.com/8uhurkub
Robust control of ill-conditioned plants: high-purity distillation: https://tinyurl.com/5xfty34c
Computational complexity of mu calculation: https://tinyurl.com/24yvvcc7
Robust constrained model predictive control using linear matrix inequalities: https://tinyurl.com/3pdk55kk
Control of systems integrating logic, dynamics, and constraints: https://tinyurl.com/489ekvsn
The explicit linear quadratic regulator for constrained systems: https://tinyurl.com/mr2e3z83
Model predictive control: https://tinyurl.com/2s4dafkd
Parametric programming: https://tinyurl.com/2u9vj79y
Embotech: https://tinyurl.com/3f99ks65
Embedded online optimization for model predictive control at megahertz rates: https://tinyurl.com/59z3vnb9
Donoho's “Bridge from mathematical to digital and back”: https://tinyurl.com/3f9fk73w
Support the show
Podcast info
Podcast website: https://www.incontrolpodcast.com/
Apple Podcasts: https://tinyurl.com/5n84j85j
Spotify: https://tinyurl.com/4rwztj3c
RSS: https://tinyurl.com/yc2fcv4y
Youtube: https://tinyurl.com/bdbvhsj6
Facebook: https://tinyurl.com/3z24yr43
Twitter: https://twitter.com/IncontrolP
Instagram: https://tinyurl.com/35cu4kr4
Acknowledgments and sponsors
This episode was supported by the National Centre of Competence in Research on «Dependable, ubiquitous automation» and the IFAC Activity fund. The podcast benefits from the help of an incredibly talented and passionate team. Special thanks to L. Seward, E. Cahard, F. Banis, F. Dörfler, J. Lygeros, ETH studio and mirrorlake. Music was composed by A New Element.
In this episode, we sit down with Valeria Becattini, a cognitive scientist and philosopher, to explore the paradoxical effects of body-scan meditation on our sense of self. Drawing from her research, Valeria explains how this Theravada Buddhist practice challenges our typical understanding of attention and sensory awareness. Using the predictive processing framework, she reveals how focused attention can lead to the dissolution of bodily boundaries, a phenomenon known as bhaṅga. Together, we delve into the implications of her findings for well-being and discuss how this meditative technique could inform therapeutic approaches for addiction, emotional dysregulation, and self-regulation. Join us for a thought-provoking journey into the intersection of philosophy, neuroscience, and contemplative practices.
Register for the next Dr. GPCR University course before February 14th: https://www.ecosystem.drgpcr.com/event-details-registration/the-practical-assessment-of-signaling-bias Spots are limited! _________ Watch the video version of this podcast episode: https://www.ecosystem.drgpcr.com/dr-gpcr-podcast/ep-159-with-dr.-riccardo-capelli --------------------------------- Become a #DrGPCR Ecosystem Member --------------------------------- Imagine a world in which the vast majority of us are healthy. The #DrGPCR Ecosystem is all about dynamic interactions between us, working towards exploiting the druggability of #GPCRs. We aspire to provide opportunities to connect, share, form trusting partnerships, grow, and thrive together. --------------------------------- To build our #GPCR Ecosystem, we created various enabling outlets for individuals and organizations. Are you a #GPCR professional? Subscribe to the #DrGPCR Monthly Newsletter, listen and subscribe to #DrGPCR Podcasts, and listen and watch GPCR-focused scientific talks at the #VirtualCafe.
Two-time Emmy and three-time NAACP Image Award-winning television executive producer Rushion McDonald interviewed Dr. Tonya M. Evans, author of Digital Money Demystified. She is a leading authority in copyright, trademark, fintech, and technology law, and is currently a tenured Professor of Law at Penn State Dickinson Law, with a co-hire appointment at the Penn State Institute for Computational and Data Sciences. Her extensive experience includes serving on the Digital Currency Group board and chairing the MakerDAO Ecosystem Growth Foundation board. She looks at how cryptocurrency will change and what to expect for 2025. Building on Digital Money Demystified, she is developing a series of books to guide lawyers through the evolving digital landscape. Dr. Evans blends her extensive legal experience with innovative educational initiatives to provide valuable insights into the future of law and emergent technologies. Connect with Dr. Tonya Evans: Website: www.advantageevans.com Instagram: @IPProfEvans Facebook: @AdvantageEvans X: @IPProfEvans Interview questions: • Tell us about your book Digital Money Demystified. • Why did you write the book? • How to set up a cryptocurrency wallet • The best ways to use cryptocurrency for last-minute gifts • Why Bitcoin makes a great investment gift for 2025 • Avoiding scams: how to stay safe when using crypto to shop • Are you crypto-curious? The first steps to consider taking if you're feeling the FOMO around investing in digital money • Top 10 crypto myths busted and backed by well-supported facts • Bitcoin for retirement: how digital assets can help safeguard your future – a guide to understanding how Bitcoin and other digital assets can be integrated into retirement planning to hedge against inflation and market volatility #AMI #STRAW #BEST #SHMS See omnystudio.com/listener for privacy information.