Branch of mathematics concerning chance and uncertainty
LLMs and the Stochastic Nature of Generative AI
Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences. After spending significant time at the forefront of AI's breakthroughs, Stuart believes many of the products we're seeing today are the result of FOMO above all else. He shares a belief that I've emphasized time and time again on the podcast: product is about the problem, not the solution. This design philosophy has informed Stuart's 20-plus-year career, and it is pivotal to understanding how best to use AI to build products that meet users' needs.
Highlights / Skip to:
Why Stuart was asked to speak to the House of Lords about AI (2:04)
The LLM-powered products Stuart has been building recently (4:20)
Finding product-market fit with AI products (7:44)
Lessons Stuart has learned over the past two years working with LLM-powered products (10:54)
Figuring out how to build user trust in your AI products (14:40)
The differences between being a digital product manager vs. an AI product manager (18:13)
Who is best suited for an AI product management role (25:42)
Why Stuart thinks user experience matters greatly with AI products (32:18)
The formula needed to create a business-viable AI product (38:22)
Stuart describes the skills and roles he thinks are essential on an AI product team, and who he brings on first (50:53)
Conversations that need to be had with academics and data scientists when building AI-powered products (54:04)
Final thoughts from Stuart and where you can find more from him (58:07)
Quotes from Today's Episode
“I think that the core dream with GenAI is getting data out of IT hands and back to the business. Finding a way to overlay all this disparate, unstructured data and [translate it] into human language is revolutionary. We're finding industries that you would think were more conservative (e.g., medical, legal) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn't expect large language models to be used for fact-checking… they're actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security: although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There's a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58)
“[LLM-powered products] gave me the wow factor, and I think that's part of what's caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we're probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We're in this rush, and [these products] are [based on] FOMO. We're leaving behind what we understood about [building] products, as if [an LLM-powered product] is a special piece of technology. It's not. It's another piece of technology. [Designers] should look at this technology through the prism of the business and the prism of the problem. We love to solutionize, but is the problem the problem? What's the context of the problem? What's the problem under the problem? Is this problem worth solving, and is GenAI a desirable way to solve it? We're putting the cart before the horse.” - Stuart Winter-Tear (11:11)
“[LLM-powered products] feel most amazing when you're not a domain expert in whatever you're using them for. I'll give you an example: I'm terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. Although [LLM products] look most amazing in the hands of non-experts, they're actually most powerful in the hands of experts who understand the domain in which they're using this technology. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping-off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01)
“We're so used to the digital paradigm, the deterministic nature of: you put in X, you get out Y; it's the same every time. [A] probabilistic [system] changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you're dealing with cutting-edge products, you've got these two timelines that you're trying to put together, and the data research one is very unpredictable. It's the nature of research. We don't necessarily know when we're going to get to where we want to be.” - Stuart Winter-Tear (22:25)
“I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I'm against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get the right pushback from the large language model to understand even the level of the person that they're dealing with? These are fundamental UX problems that are going to push UX to the forefront… It's going to be on UX to educate the user and to inject the user at the right time to make this stuff work. The UX folks who figure this out are going to create the breakthrough and drive mass adoption.” - Stuart Winter-Tear (33:42)
Mobile attribution is getting better than ever before. And that's in spite of it becoming more and more complex. There are so many measurement methodologies: Advanced SAN. AEM. Advanced AEM. Unified Measurement. SKAN. AAK. Privacy Sandbox. GAID. IDFA. Probabilistic. Modeled. You name it, there's MORE of everything. But in spite of all that, mobile attribution is getting better. Way better. And maybe it's actually BECAUSE of all that. And the great news: it's also getting EASIER. Makes sense? Insane? Impossible? Check out this convo on Growth Masterminds between John Koetsier and Singular CTO Eran Friedman.
Mark “Murch” Erhardt and Mike Schmidt are joined by Sindura Saraswathi, Christian Kümmerle, and Stéphan Vuylsteke to discuss Newsletter #345.
News
● P2P traffic analysis (1:35)
● Research into single-path LN pathfinding (6:45)
● Probabilistic payments using different hash functions as an xor function (21:17)
Bitcoin Core PR Review Club
● Stricter internal handling of invalid blocks (26:12)
Releases and release candidates
● Eclair v0.12.0 (37:49)
Notable code and documentation changes
● Bitcoin Core #31407 (38:52)
● Eclair #3027 (43:22)
● Eclair #3007 (44:17)
● Eclair #2976 (44:57)
● LDK #3608 (47:17)
● LDK #3624 (48:12)
● LDK #3016 (50:28)
● LDK #3629 (52:15)
● BDK #1838 (53:06)
Female VC Lab Show Notes
Episode Title and Number: CES2025: What's Next for Smart Cities - AI for Data, Planning, and Beyond
Name of Session: What's Next for Smart Cities: AI for Data, Planning and Beyond
Date and Time: Tuesday, January 7, 2025 at 3:00 PM
Location: Las Vegas Convention Center North - N257
Session Description: The rise of AI-powered Smart Cities is significant. Learn from early adopters who have seen AI capabilities become notable force multipliers in our smartest cities.
Full Panel:
David Shier, Managing Director, Visionary Future (Moderator)
Sheri Bachstein, CEO, The Weather Company
Barbara Bickham, Founder & GP, Trailyn VC
Nadia Hansen, Global Digital Transformation Executive, Salesforce
Prachi Vakharia, ARPA-I: Strategic Advisor for Innovations & Infrastructure, USDOT
Episode Summary: In this episode, we dive into the transformative potential of AI in developing smart cities, as discussed in a CES 2025 panel. The conversation covers aspects ranging from extreme weather prediction to citizen-centric services and the ethical implications of AI in urban governance.
Key Points Discussed:
The role of AI in predicting extreme weather and urban design
Probabilistic forecasting and its benefits for emergency preparedness
The concept of citizen-centric smart cities for enhanced public participation
Challenges in data standardization and the importance of public-private partnerships
Ethical considerations, including bias and privacy concerns in AI technologies
Timecode Guide:
00:00:04 - Introduction to smart cities and AI
00:00:38 - Discussion about extreme weather prediction with AI
00:01:53 - Explanation of probabilistic forecasting
00:03:09 - Concept of citizen-centric smart cities
00:04:47 - Data challenges and public-private partnerships
00:08:09 - Ethics, transparency, and public education on AI
00:11:01 - Challenges with AI model sizes and limited data
00:17:40 - Privacy concerns in smart cities
00:18:45 - Ensuring AI doesn't perpetuate bias
00:20:02 - Discussion on potential job losses and new opportunities
00:21:09 - Misuse of AI by authoritarian governments
00:22:25 - Weather strategy in the era of climate change
00:25:27 - Collaboration and human-centered AI
00:27:45 - Future vision for smart cities
Full Topic Guide: The Future of Smart Cities: Harnessing AI for a Better Urban Life
Introduction: As we stand on the brink of an AI-driven transformation in urban living, the CES 2025 panel offers an enlightening glimpse into how next-gen technologies are being leveraged to create smarter, more resilient cities. Hosts Daniel and Barbara Bickham review these discussions, stressing the interplay of data, AI, and human insight in shaping the future.
AI in Predicting Extreme Weather: One of the most compelling highlights was the role of AI in weather prediction and urban planning. As climate change leads to unpredictable weather patterns, AI provides a way to prepare for unprecedented events. AI's capacity to analyze vast amounts of climate data allows for effective "what if" scenario modeling, enabling cities to design infrastructure that can withstand extreme conditions. For example, you might reinforce a seawall today based on AI predictions of future storms.
Probabilistic Forecasting: Advancements in probabilistic forecasting were another exciting point discussed. Unlike traditional methods, probabilistic forecasting uses AI to run thousands of scenarios, providing a range of potential outcomes rather than a single prediction. This not only improves forecast accuracy but also aids in emergency preparedness, allowing cities to allocate resources more efficiently and preempt disasters, thereby saving lives and reducing costs.
Citizen-Centric Smart Cities: The concept of making cities more user-friendly for residents drew considerable attention. Imagine all city services being available through a single app: reporting potholes, paying taxes, or even participating in local governance through blockchain-enabled voting systems. This not only simplifies interactions with municipal services but also empowers citizens by giving them a direct say in urban management.
Data Challenges and Public-Private Partnerships: However, the transition to smart cities is fraught with challenges, particularly in data management. Cities gather massive amounts of data, but these are often trapped in silos, hindering a comprehensive overview. Collaboration through public-private partnerships is vital here. Government bodies possess critical data, while private enterprises offer the technological expertise to process and utilize this information effectively. Examples like the Weather Company's collaborations with NOAA and NVIDIA highlight the potential of these partnerships.
Ethical and Transparency Considerations: The panel stressed the ethical implications of integrating AI into city management. Trust and transparency are crucial for citizen buy-in. There needs to be a concerted effort to educate both government officials and the public on AI. Moreover, it's important to ensure that AI systems are fair and do not perpetuate existing biases. Implementing strong ethical guidelines and having diverse teams develop these technologies can mitigate potential risks.
Privacy Concerns: With vast amounts of data being collected, privacy is a significant concern. Clear rules on data collection, use, and access are necessary to prevent misuse. Enabling citizens to control their data and ensuring transparency about how it is used are steps toward this goal.
Conclusion: The future of smart cities depends not just on technological advancements but on collaborative, human-centered approaches. As we navigate this transformation, the focus must remain on using technology to enhance, not hinder, urban life.
Notable Quotes from the Hosts:
"It's about giving people the tools and the knowledge to be active participants in shaping the future of their cities."
"We need to be mindful of the potential consequences of AI on jobs, privacy, and equality."
Fun Facts or Interesting Tidbits:
People were more interested in searching for jail inmates than for marriage licenses on the redesigned Clark County website.
A project in Massachusetts involved transit maps from the 1970s, highlighting the need to update and digitize old data for modern use.
Stay engaged with the conversation on how AI and technology are transforming our cities. Feel inspired? Share your ideas on how we can build a smarter, more inclusive future for our urban environments. Follow us on social media and subscribe to our newsletter for more updates and deep dives into the latest technological trends.
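The probabilistic forecasting idea described above is easy to make concrete in code. Here is a minimal sketch in plain NumPy (all parameters and the surge formula are invented for illustration, not taken from any real weather model): run thousands of perturbed scenarios and report a range of outcomes instead of a single prediction.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_storm_surge(n_scenarios=10_000):
    """Toy ensemble: each scenario perturbs uncertain inputs.
    All parameter values are illustrative, not real climatology."""
    wind_speed = rng.normal(loc=90, scale=15, size=n_scenarios)   # km/h
    tide_level = rng.uniform(0.2, 1.4, size=n_scenarios)          # m
    # Hypothetical response function mapping drivers to surge height (m).
    surge = 0.02 * wind_speed + 0.8 * tide_level + rng.normal(0, 0.3, n_scenarios)
    return surge

surge = simulate_storm_surge()
p10, p50, p90 = np.percentile(surge, [10, 50, 90])
print(f"Median surge {p50:.2f} m; 80% of scenarios fall in [{p10:.2f}, {p90:.2f}] m")
# A planner can now ask "what if" questions, e.g. the chance of exceeding a seawall:
print(f"P(surge > 2.5 m) ~ {(surge > 2.5).mean():.3f}")
```

The useful output is the whole distribution: an emergency planner can budget against the 90th percentile rather than the median, which is exactly the preparedness benefit the panel describes.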
In this episode of the podcast, I'm joined by two academics -- Julian Runge from Northwestern University's Medill School and Koen Pauwels from Northeastern University -- for a conversation about the methodological (and stylistic) distinction between marketing experimentation and probabilistic measurement. Julian has previously appeared on the podcast and recently contributed a guest article for Mobile Dev Memo, and he and I have co-authored a number of articles (including this Harvard Business Review piece). Koen runs the Marketing and Metrics blog as well as the Pauwels on Marketing newsletter on LinkedIn. Some of the topics addressed in our discussion include: Experimentation in marketing measurement; The most popular techniques for probabilistic measurement, and how they are implemented; How a firm can integrate experimentation into its marketing measurement efforts; How firms tend to improperly implement Media Mix Modeling; Whether it is possible to measure incrementality for a specific channel, using that channel's tools; How marketers should think about demonstrating the value of their efforts; How the value of brand equity can be measured and integrated into marketing measurement; How a firm should think about experimentation and opportunity cost. Thanks to the sponsors of this week's episode of the Mobile Dev Memo podcast: Vibe. Vibe is the leading Streaming TV ad platform for small and medium-sized businesses looking for actionable advertising campaign performance. INCRMNTAL. True attribution measures incrementality, always on. Interested in sponsoring the Mobile Dev Memo podcast? Contact Marketecture. The Mobile Dev Memo podcast is available on: Apple Podcasts Spotify Google Podcasts
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
My Intuitive Bayes Online Courses
1:1 Mentorship with me
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Effective data science education requires feedback and rapid iteration.
Building LLM applications presents unique challenges and opportunities.
The software development lifecycle for AI differs from traditional methods.
Collaboration between data scientists and software engineers is crucial.
Hugo's new course focuses on practical applications of LLMs.
Continuous learning is essential in the fast-evolving tech landscape.
Engaging learners through practical exercises enhances education.
POC purgatory refers to the challenges faced in deploying LLM-powered software.
Focusing on first principles can help overcome integration issues in AI.
Aspiring data scientists should prioritize problem-solving over specific tools.
Engagement with different parts of an organization is crucial for data scientists.
Quick paths to value generation can help gain buy-in for data projects.
Multimodal models are an exciting trend in AI development.
Probabilistic programming has potential for future growth in data science.
Continuous learning and curiosity are vital in the evolving field of data science.
Chapters:
09:13 Hugo's Journey in Data Science and Education
14:57 The Appeal of Bayesian Statistics
19:36 Learning and Teaching in Data Science
24:53 Key Ingredients for Effective Data Science Education
28:44 Podcasting Journey and Insights
36:10 Building LLM Applications: Course Overview
42:08 Navigating the Software Development Lifecycle
48:06 Overcoming Proof of Concept Purgatory
55:35 Guidance for Aspiring Data Scientists
01:03:25 Exciting Trends in Data Science and AI
01:10:51 Balancing Multiple Roles in Data Science
01:15:23 Envisioning Accessible Data Science for All
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim
The neuroscience of intuition & how to hone it as your superpower. Professor Joel Pearson is a neuroscientist who's been studying how the brain processes unconscious information for 25 years. He's on a mission to distill the science of intuition into simple, practical rules that are easy to follow and can improve decision-making. His book is called The Intuition Toolkit. He shares his simple rules with us: how to home in on our intuition, how to make it easier to listen to, & when NOT to listen to it.
CONNECT WITH US
Connect with That's Helpful on Instagram.
Find Joel on Twitter & via his website.
BOOK
The Intuition Toolkit
PODCASTS
Intuitive Eating 101
Want to become a podcast sponsor, got some feedback for me or just fancy a chat? Email me - thatshelpful@edstott.com
TIMESTAMPS
00:00:00 Intro
00:02:24 Why intuition is Joel's passion
00:03:53 How hard is intuition to study in the lab?
00:04:54 How do you study intuition in the lab?
00:09:18 Why do we feel intuition physically?
00:11:17 Why are certain people more intuitive than others?
00:13:04 Self Awareness
00:16:45 Getting better at using your intuition over time
00:19:30 The difference between addiction & intuition
00:24:17 Intuitive Eating & processed foods
00:32:59 Probabilistic thinking
00:36:09 The importance of environment
00:39:12 When not to use intuition
00:45:00 The one thing to remember when it comes to intuition
My guest today is Michael Garfield, a paleontologist, futurist, writer, podcast host and strategic advisor whose “mind-jazz” performances — essays, music and fine art — bridge the worlds of art, science and philosophy. This year, Michael received a $10k O'Shaughnessy Grant for his “Humans On the Loop” discussion series, which explores the nature of agency, power, responsibility and wisdom in the age of automation. This whirlwind discussion is impossible to sum up in a couple of sentences (just look at the number of books & articles mentioned!). Ultimately, it is a conversation about a subject I think about every day: how we can live curious, collaborative and fulfilling lives in our deeply weird, complex, probabilistic world. I hope you enjoy this conversation as much as I did. For the full transcript, episode takeaways, and bucketloads of other goodies designed to make you go, “Hmm, that's interesting!”, check out our Substack.
Important Links:
Michael's Website
Humans On The Loop
Twitter
Future Fossils
Substack
Show Notes:
What is “mind jazz”?
Humans “ON” the loop?
The Red Queen hypothesis and the power of weirdness
Probabilistic thinking & the perils of optimization
Context collapse, pernicious convenience & coordination at scale
How organisations learn
Michael as World Emperor
MORE!
Books, Articles & Podcasts Mentioned:
The Nature of Technology: What It Is and How It Evolves; by W. Brian Arthur
Pharmako-AI; by K Allado-McDowell
The Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century; by Howard Bloom
The Genius of the Beast: A Radical Re-Vision of Capitalism; by Howard Bloom
One Summer: America, 1927; by Bill Bryson
Through the Looking-Glass, and What Alice Found There; by Lewis Carroll
The Beginning of Infinity: Explanations That Transform the World; by David Deutsch
Scale Theory: A Nondisciplinary Inquiry; by Joshua DiCaglio
Revenge of the Tipping Point: Overstories, Superspreaders and the Rise of Social Engineering; by Malcolm Gladwell
The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous; by Joseph Henrich
Do Conversation: There's No Such Thing as Small Talk; by Robert Poynton
Reality Hunger: A Manifesto; by David Shields
The Time Falling Bodies Take to Light: Mythology, Sexuality and the Origins of Culture; by William Irwin Thompson
The New Inquisition: Irrational Rationalism and the Citadel of Science; by Robert Anton Wilson
Designing Neural Media; by K Allado-McDowell
Pace Layering: How Complex Systems Learn and Keep Learning; by Stewart Brand
Losing Humanity: The Case against Killer Robots; by Bonnie Docherty
What happens with digital rights management in the real world?; by Cory Doctorow
The Evolution of Surveillance Part 1: Burgess Shale to Google Glass; by Michael Garfield
An Introduction to Extitutional Theory; by Jessy Kate Schingler
175 - C. Thi Nguyen on The Seductions of Clarity, Weaponized Games, and Agency as Art; Future Fossils with Michael Garfield
In this lecture, we review basic probability fundamentals (measure spaces, probability measures, random variables, probability density functions, probability mass functions, cumulative distribution functions, moments, mean/expected value/center of mass, standard deviation, variance), and then we start to build a vocabulary of different probabilistic models that are used in different modeling contexts. These include uniform, triangular, normal, exponential, Erlang-k, Weibull, and Poisson variables. If we do not have time to do so during this lecture, we will finish the discussion in the next lecture with the Bernoulli-based discrete variables and Poisson processes.
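As a companion to the lecture's vocabulary, here is a minimal sketch (NumPy/SciPy, with illustrative parameter choices that are not from the lecture) that samples from several of the named models and compares sample moments against their theoretical mean and standard deviation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Each entry: (name, frozen distribution). Parameter choices are illustrative.
models = [
    ("uniform(0, 10)",        stats.uniform(loc=0, scale=10)),
    ("triangular(0, 3, 10)",  stats.triang(c=0.3, loc=0, scale=10)),  # mode at 3
    ("normal(5, 2)",          stats.norm(loc=5, scale=2)),
    ("exponential(mean=4)",   stats.expon(scale=4)),
    ("Erlang(k=3, mean=12)",  stats.erlang(3, scale=4)),  # sum of 3 exponentials
    ("Weibull(shape=1.5)",    stats.weibull_min(1.5)),
    ("Poisson(mu=6)",         stats.poisson(mu=6)),
]

for name, dist in models:
    x = dist.rvs(size=n, random_state=rng)
    print(f"{name:22s} sample mean {x.mean():6.3f} (theory {dist.mean():6.3f}), "
          f"sample sd {x.std(ddof=1):6.3f} (theory {dist.std():6.3f})")
```

Running a check like this is a quick way to internalize which model has which moments, e.g. that the exponential's standard deviation equals its mean, while the Erlang-k (a sum of k exponentials) is comparatively tighter.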
In the realm of nutrition science and health, understanding the intricate relationship between various factors and health outcomes is crucial yet challenging. How do we determine whether a specific nutrient genuinely impacts our health, or if the observed effects are merely coincidental? This intriguing question brings us to the core concepts of correlation and causation. You've likely heard the adage “correlation is not causation,” but what does this truly mean in the context of scientific research and public health recommendations? Can a strong association between two variables ever imply a causal relationship, or is it always just a statistical coincidence? These questions are not merely academic; they are pivotal in shaping the guidelines that influence our daily lives. For instance, when studies reveal a link between high sodium intake and hypertension, how do scientists distinguish between a mere correlation and a true causal relationship? Similarly, the debate around LDL cholesterol and cardiovascular disease hinges on understanding whether high cholesterol levels directly cause heart disease, or if other confounding factors are at play. Unraveling these complexities requires a deep dive into the standards of proof and the different models used to assess causality in scientific research. As we delve into these topics, we'll explore how public health recommendations are formed despite the inherent challenges in proving causality. What methods do scientists use to ensure that their findings are robust and reliable? How do they account for the myriad of confounding variables that can skew results? By understanding the nuances of these processes, we can better appreciate the rigorous scientific effort that underpins dietary guidelines and health advisories. Join us on this exploration of correlation, causation, and the standards of proof in nutrition science. Through real-world examples and critical discussions, we will illuminate the pathways from observational studies to actionable health recommendations. Are you ready to uncover the mechanisms that bridge the gap between scientific evidence and practical health advice? Let's dive in and discover the fascinating dynamics at play.
Timestamps:
01:32 Understanding Correlation and Causation
03:54 Historical Perspectives on Causality
06:33 Causal Models in Health Sciences
14:53 Probabilistic vs. Deterministic Causation
30:52 Standards of Proof in Public Health
36:44 Applying Causal Models in Nutrition Science
58:54 Key Ideas Segment (Premium-only)
Links:
Enroll in the next cohort of our Applied Nutrition Literacy course
Go to episode page
Subscribe to Sigma Nutrition Premium
Receive our free weekly email: the Sigma Synopsis
Related episode: 343 – Understanding Causality in Nutrition Science
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Analyzing DeepMind's Probabilistic Methods for Evaluating Agent Capabilities, published by Axel Højmark on July 22, 2024 on LessWrong. Produced as part of the MATS Program Summer 2024 Cohort. The project is supervised by Marius Hobbhahn and Jérémy Scheurer.
Introduction
To mitigate risks from future AI systems, we need to assess their capabilities accurately. Ideally, we would have rigorous methods to upper bound the probability of a model having dangerous capabilities, even if these capabilities are not yet present or easily elicited. The paper "Evaluating Frontier Models for Dangerous Capabilities" by Phuong et al. 2024 is a recent contribution to this field from DeepMind. It proposes new methods that aim to estimate, as well as upper-bound, the probability of large language models being able to successfully engage in persuasion, deception, cybersecurity, self-proliferation, or self-reasoning. This post presents our initial empirical and theoretical findings on the applicability of these methods. Their proposed methods have several desirable properties. Instead of repeatedly running the entire task end-to-end, the authors introduce milestones. Milestones break down a task and provide estimates of partial progress, which can reduce variance in overall capability assessments. The expert best-of-N method uses expert guidance to elicit rare behaviors and quantifies the expert assistance as a proxy for the model's independent performance on the task. However, we find that relying on milestones tends to underestimate the overall task success probability for most realistic tasks. Additionally, the expert best-of-N method fails to provide values directly correlated with the probability of task success, making its outputs less applicable to real-world scenarios. We therefore propose an alternative approach to the expert best-of-N method, which retains its advantages while providing more calibrated results. Except for the end-to-end method, we currently feel that no method presented in this post would allow us to reliably estimate or upper bound the success probability for realistic tasks, and thus these methods should not be used for critical decisions. The overarching aim of our MATS project is to uncover agent scaling trends, allowing the AI safety community to better predict the performance of future LLM agents from characteristics such as training compute, scaffolding used for agents, or benchmark results (Ruan et al., 2024). To avoid the issue of seemingly emergent abilities resulting from bad choices of metrics (Schaeffer et al., 2023), this work serves as our initial effort to extract more meaningful information from agentic evaluations. We are interested in receiving feedback and are particularly keen on alternative methods that enable us to reliably assign low-probability estimates (e.g. 1e-7) to a model's success rate on a task.
Evaluation Methodology of Phuong et al.
The goal of the evaluations we discuss is to estimate the probability of an agent succeeding on a specific task T. Generally, when we refer to an agent, we mean an LLM wrapped in scaffolding that lets it execute shell commands, run code, or browse the web to complete some predetermined task. Formally, the goal is to estimate P(T_s), the probability that the agent solves task T and ends up in the solved state T_s.
The naive approach is to estimate this with Monte Carlo sampling; the authors call this the end-to-end method. However, the end-to-end method struggles with low-probability events. The expected number of trials needed to observe one success for a task is 1/P(T_s), making naive Monte Carlo sampling impractical for many low-probability, long-horizon tasks. In practice, this could require running multi-hour tasks hundreds of thousands of times. To address this challenge, Phuong et al. devise three additional method...
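The 1/P(T_s) cost of the end-to-end method is easy to see in simulation. Below is a minimal sketch in plain Python, with a made-up per-rollout success probability standing in for a real agent run:

```python
import random

random.seed(0)
P_SUCCESS = 1e-3  # hypothetical per-rollout success probability of the agent

def run_task() -> bool:
    """Stand-in for one end-to-end agent rollout on task T."""
    return random.random() < P_SUCCESS

# End-to-end (naive Monte Carlo) estimate of P(T_s):
n_trials = 200_000
p_hat = sum(run_task() for _ in range(n_trials)) / n_trials
print(f"estimate {p_hat:.2e} from {n_trials} trials (true {P_SUCCESS:.0e})")

# Expected number of trials until the first success is 1 / P(T_s):
def trials_until_success() -> int:
    t = 1
    while not run_task():
        t += 1
    return t

waits = [trials_until_success() for _ in range(100)]
print(f"mean wait {sum(waits)/len(waits):.0f} trials (theory {1/P_SUCCESS:.0f})")
```

At success probabilities like the 1e-7 the authors mention, the expected ten million rollouts per observed success is exactly why the post looks beyond naive end-to-end sampling.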
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Important open problems in voting, published by Closed Limelike Curves on July 2, 2024 on LessWrong.
Strategy-resistance
Identify, or prove the impossibility of, a voting system that incentivizes:
1. A strictly sincere ranking of all candidates in the zero-information setting, where it implements a "good" social choice rule such as the relative (normalized) utilitarian rule, a Condorcet social choice rule, or the Borda rule.
2. In a Poisson game or similar setting: a unique semi-sincere Nash equilibrium that elects the Condorcet winner (if one exists), similar to those shown for approval voting by Myerson and Weber (1993) and Durand et al. (2019).
Properties of Multiwinner voting systems
There's strikingly little research on multiwinner voting systems. You can find a table of criteria for single-winner systems on Wikipedia, but if you try to find the same for multi-winner systems, there's nothing. Here are 9 important criteria by which we can judge multiwinner voting systems:
1. Independence of Irrelevant Alternatives
2. Independence of Universally-Approved Candidates
3. Monotonicity
4. Participation
5. Precinct-summability
6. Polynomial-time approximation scheme
7. Proportionality for solid coalitions
8. Perfect representation in the limit
9. Core-stability (may need to be approximated within a constant factor)
I'm curious which combinations of these properties exist. Probabilistic/weighted voting systems are allowed. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
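To make the Condorcet-winner condition in the second open problem concrete, here is a minimal, generic checker (not code from the post; the candidate names and ballots are invented):

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other head-to-head, or None.

    ballots: list of rankings, each a list of candidates ordered from most
    to least preferred. A Condorcet winner need not exist (Condorcet cycles).
    """
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b.
        a_wins = sum(r.index(a) < r.index(b) for r in ballots)
        return a_wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates - {c}):
            return c
    return None

# Three voter blocs with cyclic preferences -> no Condorcet winner:
cycle = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 3 + [["C", "A", "B"]] * 3
print(condorcet_winner(cycle))  # None
# Add voters who put B first and B becomes the Condorcet winner:
print(condorcet_winner(cycle + [["B", "A", "C"]] * 4))  # B
```

The cyclic example is the classic reason the problem statement hedges with "if one exists": majority preference can be intransitive even when every individual ballot is transitive.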
Generative AI Summit Austin - Registration
--------------------------------------------------------------
The future of defense and warfare has always been linked to technological change, from the discovery of fire to the Bronze Age to the Nuclear Age to now the AI Age. My guest today is Charlie Burgoyne, Founder and CEO of both Valkyrie and their newest spinout Andromeda, who joins me to discuss the future of warfare going forward. But we don't stop there as we explore every rabbit hole in search of Truth.
Andromeda synthesizes large volumes of information quickly and accurately, helping users understand the interconnected context to make informed decisions.
Pop culture and media narratives of innovation, from Terminator to Data to Her to Oppenheimer, shape how the public views, embraces, or fears technological waves.
History has shown that industry and defense have always been intertwined, with modern tech entrepreneurs playing roles similar to industrialists of the past in advancing defense capabilities.
Probabilistic thinking is essential in defense decision-making, especially where high levels of computation can easily mask high levels of uncertainty.
What's Next Austin
"I think that we're going to see a big push to support the intelligence community the way we've seen a big push to support the defense community. I think that an equivalent to AFC is coming on the intelligence side."
Accountability in the Age of LLMs at SXSW by Charlie Burgoyne
Valkyrie: Website, Facebook, Instagram
Andromeda: Website
-------------------
Austin Next Links: Website, X/Twitter, YouTube, LinkedIn
In this episode of the Agile Uprising podcast, host Troy Lightfoot chats with Prateek Singh, author of Scaling Simplified and the new product management course "Accelerating Product Value," about taking your product management career to the next level with probabilistic thinking and flow. Troy's next Product Value class: use code "AgileUprising" for 20% off! The episode is rich with insights, making it a valuable listen for anyone involved in organizational change.
Probabilistic vs. Deterministic Thinking in Product Management: The discussion highlighted the importance of probabilistic thinking, where multiple future outcomes are considered, rather than deterministic thinking, which often leads to rigid and potentially inaccurate expectations.
Challenges with Traditional Product Prioritization: Traditional methods of product prioritization, such as backlog management, are seen as potentially obsolete the moment they are established due to the dynamic nature of market and development realities.
Advantages of Rapid Experimentation: Getting ideas to production swiftly and with minimal initial investment allows for direct testing with customers, providing real feedback and reducing the risk of significant investment in unproven ideas.
Financial Impact of Flow and Learning: Faster realization of product value through improved flow can significantly enhance ROI, by reducing the costs associated with delays and increasing the effectiveness of learning from the market.
The Role of Flow Metrics in Learning Systems: Flow metrics like cycle time and throughput are vital for transforming product development and operations into learning systems, where the speed of learning and adaptation is critical (see the sketch after this list).
Concept of Thinking in Bets: The podcast also touched on using the concept of "Thinking in Bets" (from Annie Duke's work) to manage investment in product development. This approach advocates for small, incremental bets to minimize losses while exploring the potential success of new ideas.
Links
Read More
----
About the Agile Uprising
If you enjoyed this episode, please give us a review, a rating, or leave comments on iTunes, Stitcher, or your podcasting platform of choice. It really helps others find us. Much thanks to the artist who provided our outro music free of charge! If you like what you heard, go find more music you might enjoy! If you'd like to join the discussion and share your stories, please jump into the fray at our community. We at the Agile Uprising are committed to being totally free. However, if you'd like to contribute and help us defray hosting and production costs, we do have ways to do that. Who knows, you might even get some surprises in the mail!
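The flow-metrics point has a compact arithmetic core in Little's Law (average WIP = throughput × average cycle time). A toy calculation, with numbers invented for illustration:

```python
# Little's Law: average WIP = throughput * average cycle time.
# Illustrative numbers, not figures from the episode:
throughput = 5   # work items finished per week
cycle_time = 3   # average weeks from start to finish

wip = throughput * cycle_time
print(f"Expected work in progress: {wip} items")

# Rearranged: if WIP grows to 30 items at the same throughput,
# average cycle time stretches to 30 / 5 = 6 weeks, i.e. slower feedback.
print(f"Cycle time at WIP=30: {30 / throughput:.0f} weeks")
```

The rearranged form is why limiting WIP speeds up learning: at fixed throughput, every extra item in progress directly lengthens the time before the market can react to any of them.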
Cate seemed to be fully entrenched on the default path: she had graduated from Yale and become a Supreme Court advocate on her way to making partner at her law firm. But she didn't want to live the lives of the people around her. She pivoted hard (you could almost hear the car brakes squeal as she made the turn), and within a year she became the number one female poker player in the world. She later started art and perfume companies and led operations at Alvea, a pandemic medicine company. One might think she's simply superhuman and that what she did is beyond the grasp of mere mortals, but Cate claims that agency is learnable, and she joins the podcast to tell us how.
Speakers for AI Engineer World's Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we've been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.

ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18-hour flight and 5-day grind to go on behalf of all of you. We now present the results of that work! This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), within the subset relevant to our shortlist of AI Engineering topics we found many, many LLM reasoning and agent-related papers, which we will cover in the next episode. We will spend this episode with 14 papers covering other relevant ICLR topics, as below. As we did last year, we'll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré's spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models. We had a blast at ICLR 2024 and you can bet that we'll be back in 2025!
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Aaron Lowry, an experienced consultant and returning guest. They discuss a wide range of topics, including Lowry's work in rebuilding custom vehicles, the value of blending aesthetics with engineering, and the challenges of balancing principles and propositions in problem-solving. They also explore the evolving world of artificial intelligence, contrasting its limitations with human intelligence, and consider its impact on creative expression. Connect with Aaron on Twitter at @Aaron_Lowry for more insights into his projects and ideas. Check out this GPT we trained on this conversation.
Timestamps
00:00 - Stewart Alsop introduces Aaron Lowry, discussing their previous conversations and current interests. They mention the makerspace and complexities in physical and software creation, while Lowry shares insights on sheet metal work and its principles.
00:05 - Stewart talks about challenges in crafting and how quick access to information on computers may impact patience. He appreciates Lowry's language of attunement and asks for Lowry's views on AI, given that he hasn't been directly involved in building it.
00:10 - Lowry discusses intelligence, consciousness, and the reciprocal relationship between agent and environment. He explores challenges in defining intelligence, noting the mirror-like effect of AI reflecting our own limitations.
00:15 - Stewart discusses how filtering AI models reduces their utility. Lowry describes prompt injection as a way to navigate AI limitations while emphasizing the importance of understanding the parameters that bound the data set.
00:20 - Lowry acknowledges the energy required to maintain AI models, comparing it to the efficiency of the human brain. He stresses the probabilistic nature of human intelligence versus the deterministic nature of machine learning.
00:25 - Lowry distinguishes between the infinite potential of probabilistic intelligence and deterministic frameworks. He compares real-world interaction to a video game, noting how deterministic thinking can make people behave like NPCs.
00:30 - They discuss navigating principles versus propositions, likening it to piloting a sailboat. Maintaining direction requires continuous feedback and adaptation.
00:35 - Stewart differentiates between propositional and participatory knowing, noting AI's strong grasp of the former. Lowry argues that perspective is assigned in AI models but participation remains absent.
00:40 - Lowry describes the truck he is restoring, noting the blend of modern engineering and aesthetic choices. He shares his process of acquiring knowledge from books and the internet.
00:45 - They discuss Brian Rommel's approach to training language models with high-quality data from the past, emphasizing the importance of data quality.
00:50 - They discuss how AI models can synthesize a broader spectrum of perspectives than any individual. Lowry advocates for plurality in models, warning against a single authoritative perspective.
00:55 - They delve into AI's impact on art. Despite the democratization of creative tools, Lowry asserts that authentic artistic inspiration is still necessary. He highlights the empty appeal of AI-generated perfection lacking the soul of human art.
Key insights
Principles vs. Propositions in Problem-Solving: Aaron Lowry emphasizes the importance of working with first principles rather than rigid propositions. He compares this to piloting a sailboat, where adaptability and constant course correction are crucial, and stresses that a principle-based approach allows for dynamic navigation of complex challenges.
Sheet Metal Work as a Metaphor: Lowry draws parallels between his experience working with sheet metal and broader life lessons. He finds that patience, precision, and an understanding of thermodynamics are essential when shaping materials and that these skills have broader applications, like aligning with fundamental principles in all aspects of life.
AI and Human Intelligence Contrasts: Despite not being directly involved in building AI, Lowry offers a thoughtful analysis of its relationship to human intelligence. He argues that AI can mirror our limitations and reflects how intelligence is both probabilistic and deterministic, giving us powerful tools but also raising ethical and practical challenges.
Guardrails and Filtering in AI Models: The conversation explores how filtering in AI reduces its utility. While Lowry acknowledges that filters are essential for contextualizing data sets, he also notes that prompt injection helps circumvent these limitations, revealing the inherent challenges in fully controlling AI output.
Plurality of Perspectives in AI: Both Alsop and Lowry agree that multiple AI models are necessary to capture a range of perspectives, and relying on a single authoritative model could be dangerous. They highlight that AI models should maintain diversity to better reflect the broad spectrum of human experience.
AI's Role in Creative Expression: They touch upon the potential of AI to create art, noting how it can democratize creative tools. However, Lowry points out that even with high technical proficiency, AI-generated art often lacks the emotional resonance that comes from genuine human inspiration and participation.
Blending Aesthetics with Engineering: Lowry shares his approach to rebuilding classic vehicles, which blends modern engineering with aesthetic considerations. His goal is to maintain the beauty of the original designs while ensuring functionality, illustrating the delicate balance between creativity and technical precision.
Today we're sharing a recent interview Erik gave to Alex LaBossiere that covers lots of themes we discuss on Moment of Zen - including Erik's views on tech vs. the media, hypocrisy of the elites, risk orientation, and the thesis behind Turpentine. Grow your newsletter with Beehiiv, head to https://Beehiiv.com and use code "MOZ" for 20% off your first three months -- SPONSORS: NETSUITE | BEEHIIV | SQUAD
Erik Torenberg is a technology entrepreneur and investor. Presently, he's the founder of Turpentine. Previously, he was the chairman of On Deck, co-founder and general partner at Village Global, and first employee at Product Hunt.
0:00 - Intro
5:08 - On Investing vs Operating and Being a Player
7:34 - Trust and Direct Distribution
10:45 - Legacy Media and the Techlash
17:24 - What is Turpentine
18:41 - Lessons From 1000 Interviews
20:59 - The Printing Press, Barbells, and the Future of Media
25:22 - Pre-Internet Journalism and Survival of the Fittest Ideas
28:44 - Creator Monetization and Price Discrimination
34:55 - Does Media Have a Direction?
41:44 - Tailwinds and Turpentine at Scale
49:37 - Creating More Startups and Understanding Risk
54:24 - Is Entrepreneurship Founder or Market Limited?
59:04 - What Do Most People Get Wrong About Community Building?
1:01:34 - Probabilistic vs Deterministic Mindsets
1:04:29 - On Faith over Logic
1:08:06 - Modernity and God Shaped Holes
1:10:44 - The Hypocrisy of Elites
1:13:30 - Asking Erik Questions He Asks People
1:15:40 - What Should More People be Thinking About?
Episode 121

I spoke with Professor Ryan Tibshirani about:
* Differences between the ML and statistics communities in scholarship, terminology, and other areas
* Trend filtering
* Why you can't just use garbage prediction functions when doing conformal prediction

Ryan is a Professor in the Department of Statistics at UC Berkeley. He is also a Principal Investigator in the Delphi group. From 2011-2022, he was a faculty member in Statistics and Machine Learning at Carnegie Mellon University. From 2007-2011, he did his Ph.D. in Statistics at Stanford University.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

The Gradient Podcast on: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:10) Ryan's background and path into statistics
* (07:00) Cultivating taste as a researcher
* (11:00) Conversations within the statistics community
* (18:30) Use of terms, disagreements over stability and definitions
* (23:05) Nonparametric Regression
* (23:55) Background on trend filtering
* (33:48) Analysis and synthesis frameworks in problem formulation
* (39:45) Neural networks as a specific take on synthesis
* (40:55) Divided differences, falling factorials, and discrete splines
* (41:55) Motivations and background
* (48:07) Divided differences vs. derivatives, approximation and efficiency
* (51:40) Conformal prediction
* (52:40) Motivations
* (1:10:20) Probabilistic guarantees in conformal prediction, choice of predictors
* (1:14:25) Assumptions: i.i.d. and exchangeability — conformal prediction beyond exchangeability
* (1:25:00) Next directions
* (1:28:12) Epidemic forecasting — COVID-19 impact and trends survey
* (1:29:10) Survey methodology
* (1:38:20) Data defect correlation and its limitations for characterizing datasets
* (1:46:14) Outro

Links:
* Ryan's homepage
* Works read/mentioned
* Nonparametric Regression
* Adaptive Piecewise Polynomial Estimation via Trend Filtering (2014)
* Divided Differences, Falling Factorials, and Discrete Splines: Another Look at Trend Filtering and Related Problems (2020)
* Distribution-free Inference
* Distribution-Free Predictive Inference for Regression (2017)
* Conformal Prediction Under Covariate Shift (2019)
* Conformal Prediction Beyond Exchangeability (2023)
* Delphi and COVID-19 research
* Flexible Modeling of Epidemics
* Real-Time Estimation of COVID-19 Infections
* The US COVID-19 Trends and Impact Survey and Big data, big problems: Responding to “Are we there yet?”

Get full access to The Gradient at thegradientpub.substack.com/subscribe
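For listeners new to conformal prediction, the "garbage prediction functions" bullet can be illustrated with a small split-conformal sketch (a generic textbook recipe on synthetic data, not code from Ryan's papers): the coverage guarantee holds for any predictor, but a useless predictor buys that coverage with uninformatively wide intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = sin(x) + noise.
n = 2000
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(0, 0.2, n)
x_cal, y_cal = x[:500], y[:500]      # calibration set
x_te, y_te = x[500:], y[500:]        # test set

def split_conformal(predict, alpha=0.1):
    # Conformity scores on held-out calibration data.
    scores = np.abs(y_cal - predict(x_cal))
    # Finite-sample-corrected quantile (assumes alpha isn't too small
    # for the calibration size, so the level stays <= 1).
    level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
    q = np.quantile(scores, level)
    lo, hi = predict(x_te) - q, predict(x_te) + q
    coverage = np.mean((y_te >= lo) & (y_te <= hi))
    return coverage, 2 * q  # empirical coverage, interval width

good = lambda xs: np.sin(xs)            # near-oracle predictor
garbage = lambda xs: np.zeros_like(xs)  # ignores the input entirely

for name, f in [("good", good), ("garbage", garbage)]:
    cov, width = split_conformal(f)
    print(f"{name:8s} coverage {cov:.3f}, interval width {width:.2f}")
# Both hit ~90% coverage; only the good predictor gives narrow intervals.
```

This is the sense in which the guarantee is distribution-free but not predictor-free in practice: validity comes for free under exchangeability, while usefulness still depends on the quality of the underlying prediction function.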
In episode 119 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Sipser.

Professor Sipser is the Donner Professor of Mathematics and member of the Computer Science and Artificial Intelligence Laboratory at MIT. He received his PhD from UC Berkeley in 1980 and joined the MIT faculty that same year. He was Chairman of Applied Mathematics from 1998 to 2000 and served as Head of the Mathematics Department 2004-2014. He served as interim Dean of Science 2013-2014 and then as Dean of Science 2014-2020. He was a research staff member at IBM Research in 1980, spent the 1985-86 academic year on the faculty of the EECS department at Berkeley and at MSRI, and was a Lady Davis Fellow at Hebrew University in 1988. His research areas are in algorithms and complexity theory, specifically efficient error correcting codes, interactive proof systems, randomness, quantum computation, and establishing the inherent computational difficulty of problems. He is the author of the widely used textbook, Introduction to the Theory of Computation (Third Edition, Cengage, 2012).

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:40) Professor Sipser's background
* (04:35) On interesting questions
* (09:00) Different kinds of research problems
* (13:00) What makes certain problems difficult
* (18:48) Nature of the P vs NP problem
* (24:42) Identifying interesting problems
* (28:50) Lower bounds on the size of sweeping automata
* (29:50) Why sweeping automata + headway to P vs. NP
* (36:40) Insights from sweeping automata, infinite analogues to finite automata problems
* (40:45) Parity circuits
* (43:20) Probabilistic restriction method
* (47:20) Relativization and the polynomial time hierarchy
* (55:10) P vs. NP
* (57:23) The non-connection between GO's polynomial space hardness and AlphaGo
* (1:00:40) On handicapping Turing Machines vs. oracle strategies
* (1:04:25) The Natural Proofs Barrier and approaches to P vs. NP
* (1:11:05) Debates on methods for P vs. NP
* (1:15:04) On the possibility of solving P vs. NP
* (1:18:20) On academia and its role
* (1:27:51) Outro

Links:
* Professor Sipser's homepage
* Papers discussed/read
* Halting space-bounded computations (1978)
* Lower bounds on the size of sweeping automata (1979)
* GO is Polynomial-Space Hard (1980)
* A complexity theoretic approach to randomness (1983)
* Parity, circuits, and the polynomial-time hierarchy (1984)
* A follow-up to Furst-Saxe-Sipser
* The Complexity of Finite Functions (1991)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Today - the science behind intuition & how it can help us hone this superpower. Professor Joel Pearson is a neuroscientist who's been studying how the brain processes unconscious information for 25 years. He's on a mission to distill the science of intuition into simple, practical rules that are easy to follow and can improve decision-making. His book is called The Intuition Toolkit. He shares his simple rules with us: how to home in on our intuition, make it easier to listen to, & when NOT to listen to it.
PSA: The podcast is now also available on YouTube & as a video podcast on Spotify if you'd like to see our smiling faces while you listen!
This episode is brought to you by YouFoodz: for $200 off your first 5 boxes, use the code HELPFUL or order via this link.
CONNECT WITH US
Connect with That's Helpful on Instagram.
Find Joel on Twitter & via his website.
BOOK
The Intuition Toolkit
PODCASTS
Intuitive Eating 101
Want to become a podcast sponsor, got some feedback for me or just fancy a chat? Email me - thatshelpful@edstott.com
TIMESTAMPS
00:00:00 Intro
00:02:24 Why intuition is Joel's passion
00:03:53 How hard is intuition to study in the lab?
00:04:54 How do you study intuition in the lab?
00:09:18 Why do we feel intuition physically?
00:11:17 Why are certain people more intuitive than others?
00:13:04 Self Awareness
00:16:45 Getting better at using your intuition over time
00:19:30 The difference between addiction & intuition
00:24:17 Intuitive Eating & processed foods
00:32:59 Probabilistic thinking
00:36:09 The importance of environment
00:39:12 When not to use intuition
00:45:00 The one thing to remember when it comes to intuition
Dr. Steven 'SEVEN' Waterhouse is an active investor and builder in the crypto / web3 space with 11 years of experience. Dr. Waterhouse is currently CEO and co-founder of Orchid Labs, advisor to various projects including RiscZero and Squid Router, and is CEO and co-founder of Nazare Ventures, a new fund focused on crypto and AI, highlighting his ambition to contribute towards the digital landscape of the future. He was recently CTO & Venture Partner at Fabric Ventures and was previously a partner at Pantera Capital from 2013 to 2016. He served on the board of Bitstamp during this period.

Dr. Waterhouse holds a Ph.D. in Engineering from Cambridge with a focus on speech recognition and machine learning. At Cambridge, Dr. Waterhouse received the Isaac Newton Prize for outstanding Ph.D. research and represented Cambridge in the rowing and water polo teams.

In this conversation, we discuss:
- Introduction to Orchid Protocol
- Ensuring decentralisation as AI and Web3's convergence grows
- DePINs
- Technology and innovation
- Privacy in the digital age
- Building blocks for a freer internet
- Integrating on-chain privacy
- The fight for internet freedom
- Probabilistic nano-payments
- The role of Digital Resource Networks (DRNs)
- Sensorship

Orchid
Website: www.orchid.com
X: @OrchidProtocol
Discord: discord.gg/GDbxmjxX9F

Dr. Steven Waterhouse
X: @deseventral
LinkedIn: Steven W.

---------------------------------------------------------------------------------

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50 This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50
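The probabilistic nano-payments in the topic list are straightforward to simulate. The toy model below illustrates the general lottery-ticket idea, not Orchid's actual implementation: each ticket pays a face value with small probability, so its expected value matches a micropayment while only winning tickets ever need to settle on-chain.

```python
import random

random.seed(7)

FACE_VALUE = 1.0   # payout if a ticket "wins" (in some token unit)
WIN_PROB = 0.001   # probability any single ticket wins
# Expected value per ticket = WIN_PROB * FACE_VALUE = 0.001 tokens,
# so one ticket can stand in for a 0.001-token micropayment.

n_tickets = 1_000_000  # e.g. one ticket per unit of bandwidth served
wins = sum(random.random() < WIN_PROB for _ in range(n_tickets))

paid_on_chain = wins * FACE_VALUE
expected = n_tickets * WIN_PROB * FACE_VALUE
print(f"tickets issued:       {n_tickets}")
print(f"on-chain settlements: {wins} (vs. {n_tickets} naive transactions)")
print(f"amount settled:       {paid_on_chain:.0f}, expected {expected:.0f}")
```

The design works because variance shrinks relative to the total as ticket volume grows, which suits continuous streaming payments (like paying for bandwidth) far better than one-off purchases.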
In this new episode in the "Making Better Supply Chain Bets with the Power of Probabilities" series with Noodle.ai, hosts Scott Luton and Greg White are joined by the CRO of Noodle.ai, Tim Krug, and the End-to-End Planning and Support Director with Imperial Brands, Remi Guillon. Join us as they delve into the transformative story of a fast-moving consumer goods company, highlighting the adoption of innovative planning approaches through the lens of artificial intelligence. Listen in and learn more about: probabilistic supply chain planning and the future of AI-driven decision-making; the human element in supply chain transformation, and the cultural aspects and leadership role in embracing technology; the significance of trust between partners and what it takes to create unbiased forecasts; and the importance of data cleanliness, change management, and acting with integrity. Additional Links & Resources: Learn more about Noodle.ai: www.Noodle.ai Learn more about Supply Chain Now: https://supplychainnow.com WEBINAR - AI-based forecast automation dreams do come true at Danone: https://bit.ly/3RRcMRj WEBINAR - Inflation & SMBs: Unlocking Cash Tied up in Inventory: https://bit.ly/48BIUzN WEBINAR - Achieving the Next-Gen Control Tower in 2024: Lessons Learned from the High-Tech Industry: https://bit.ly/3Syzrn1 WEBINAR - Data-Driven Decision Making in Logistics: https://bit.ly/49lt102 This episode is hosted by Scott Luton and Greg White. For additional information, please visit our dedicated show page at: https://supplychainnow.com/navigating-ai-frontier-demand-planning-imperial-brands-1231
Emily Mongold, Stanford University. The impact of liquefaction on a regional scale is not well understood or modeled with traditional approaches. This paper presents a method to quantitatively assess liquefaction hazard and risk on a regional scale, accounting for uncertainties in soil properties, groundwater conditions, ground shaking parameters, and empirical liquefaction potential index (LPI) equations. The regional analysis is applied to a case study to calculate regional occurrence rates for the extent and severity of liquefaction and to quantify losses resulting from ground shaking and liquefaction damage to residential buildings. We present a regional-scale metric to quantify the extent and severity of liquefaction. A sensitivity analysis on epistemic uncertainty indicates that the two most influential factors for the output liquefaction maps are the choice of empirical liquefaction equation, emphasizing the necessity of incorporating multiple equations in future regional studies, and the water table level, highlighting concerns around data availability and sea level rise. Furthermore, the disaggregation of seismic sources reveals that triggering earthquakes for various extents of liquefaction originate from multiple sources, though primarily nearby faults and large-magnitude ruptures. This finding indicates the value of adopting regional probabilistic analysis in future studies to capture the diverse sources and spatial distribution of liquefaction.
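To make the LPI mentioned above concrete, here is a minimal sketch of the classic Iwasaki-style liquefaction potential index for one hypothetical factor-of-safety profile; the numbers are illustrative, and a regional probabilistic study like the one described would derive FS from site data and repeat this over many uncertain realizations:

```typescript
// Minimal sketch of the Iwasaki liquefaction potential index (LPI):
// LPI = integral over 0-20 m of F(z) * w(z) dz, with depth weighting
// w(z) = 10 - 0.5*z and F(z) = 1 - FS(z) where FS(z) < 1, else 0.
// Hypothetical profile; real studies derive FS(z) from CPT/SPT data.
function lpi(depths: number[], factorOfSafety: number[], dz: number): number {
  let total = 0;
  for (let i = 0; i < depths.length; i++) {
    const z = depths[i];
    if (z > 20) break; // the index only integrates the top 20 m
    const F = factorOfSafety[i] < 1 ? 1 - factorOfSafety[i] : 0;
    total += F * (10 - 0.5 * z) * dz;
  }
  return total;
}

// Usage: a 1 m grid with a liquefiable layer between 2 m and 6 m depth.
const depths = Array.from({ length: 20 }, (_, i) => i + 0.5);
const fs = depths.map((z) => (z >= 2 && z <= 6 ? 0.7 : 1.5));
console.log(lpi(depths, fs, 1)); // one deterministic LPI realization
```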
We talked about: Rob's background Going from software engineering to Bayesian modeling Frequentist vs Bayesian modeling approach About integrals Probabilistic programming and samplers MCMC and Hakaru Language vs library Encoding dependencies and relationships into a model Stan, HMC (Hamiltonian Monte Carlo), and NUTS Sources for learning about Bayesian modeling Reaching out to Rob Links: Book 1: https://bayesiancomputationbook.com/welcome.html Book/Course: https://xcelab.net/rm/statistical-rethinking/ Free ML Engineering course: http://mlzoomcamp.com Join DataTalks.Club: https://datatalks.club/slack.html Our events: https://datatalks.club/events.html
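For listeners who want the core MCMC idea in code before diving into Stan, here is a toy random-walk Metropolis sampler; HMC and NUTS, mentioned above, are much more efficient variants of the same propose-and-accept loop. The target density and step size are illustrative choices:

```typescript
// Toy random-walk Metropolis sampler. Target: an unnormalized
// standard normal density (we only need it up to a constant).
const logTarget = (x: number): number => -0.5 * x * x;

function metropolis(steps: number, stepSize: number): number[] {
  const chain: number[] = [];
  let x = 0;
  for (let i = 0; i < steps; i++) {
    const proposal = x + stepSize * (Math.random() * 2 - 1);
    // Accept with probability min(1, target(proposal) / target(x)).
    if (Math.log(Math.random()) < logTarget(proposal) - logTarget(x)) {
      x = proposal;
    }
    chain.push(x);
  }
  return chain;
}

const draws = metropolis(10_000, 1.0);
const mean = draws.reduce((a, b) => a + b, 0) / draws.length;
console.log(mean); // ≈ 0 for the standard normal target
```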
We learned a new word!
In a conversation with Kevin Murphy, distinguished researcher at Google DeepMind, we explored his multifaceted career spanning academia and industry. Born in Ireland and educated at prestigious institutions in the UK and the US, Kevin's academic journey led him to become an associate professor in Canada before transitioning to manage a research team at Google DeepMind in California. We dived into the inspiration behind his three acclaimed books on "Probabilistic Machine Learning" and discussed some of the practical applications of his research. Kevin also offered valuable advice to early-career researchers and Ph.D. students in Machine Learning. For more information and to access the transcript: www.ucl.ac.uk/statistics/sample-space Date of episode recording: 2023-11-13 Duration: 00:29:51 Language of episode: English Presenter: Nicolás Hernández Guests: Kevin Murphy Producer: Nicolás Hernández
Tune in to hear Dwayne Kerrigan and Keiron McCammon dive into various topics such as technology, AI, spirituality, and connected devices. This candid and unscripted episode showcases the unpredictable nature of business and entrepreneurship. [00:00:00] Welcome [00:04:07] Probabilistic model trained on human knowledge. [00:08:06] Models and their accuracy. [00:12:26] Technological advancements and consciousness. [00:15:39] Influencers and world leaders. [00:20:42] Slowing things down online. [00:23:23] Facebook's impact on disinformation. [00:29:44] Teenage relationships and tragic consequences. [00:33:28] Shift Happens incubator. [00:35:12] Ripple effect of knowledge. Connect with Dwayne Kerrigan LinkedIn: https://www.linkedin.com/in/dwayne-kerrigan-998113281/ Facebook: https://www.facebook.com/businessofdoingbusinessdk Instagram: https://www.instagram.com/thebusinessofdoingbusinessdk/ Disclaimer: The views, information, or opinions expressed by guests during The Business of Doing Business are solely those of the individuals involved and do not necessarily represent those of Dwayne Kerrigan and his affiliates. Dwayne Kerrigan or The Business of Doing Business is not responsible for and does not verify the accuracy of any of the information contained in the podcast series. The primary purpose of this podcast is to educate and inform. Listeners are advised to consult with a qualified professional or specialist before making any decisions based on the content of this podcast.
In Episode 2 of Season 2 of the Mobile Dev Memo podcast, I speak with Michael Kaminsky, the co-founder and CEO of Recast, on the topic of media mix modeling and probabilistic marketing measurement more broadly. Among other things, our conversation touches upon: How a marketing team can get started with adopting probabilistic measurement; The unforeseen difficulties that teams face when adopting probabilistic measurement methodologies; How much longer pseudo-deterministic solutions can be relied upon; How the measurement methodology utilized by a marketing team changes its media buying behavior. About Michael: Michael Kaminsky is a co-founder and co-CEO of Recast, a startup that is re-inventing Marketing Mix Modeling for modern marketers. Michael was trained as a statistician and econometrician focused on helping people make better decisions. He's passionate about taking cutting-edge statistical techniques and using them to build tools that help marketers drive better performance for their businesses.
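For context on what a media mix model actually estimates, here is a tiny sketch of one standard MMM building block, geometric adstock; this is a generic illustration with made-up numbers, not Recast's model:

```typescript
// Generic illustration of one MMM building block (not Recast's model):
// geometric adstock, which carries over a fraction of past ad spend so
// that this week's sales respond to earlier weeks' advertising.
function adstock(spend: number[], carryover: number): number[] {
  const out: number[] = [];
  let carried = 0;
  for (const x of spend) {
    carried = x + carryover * carried;
    out.push(carried);
  }
  return out;
}

// Usage: a one-week burst of spend decays over the following weeks.
console.log(adstock([100, 0, 0, 0], 0.5)); // [100, 50, 25, 12.5]
```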
MLOps podcast #189 with Rohit Agarwal, CEO of Portkey.ai, Designing for Forward Compatibility in Gen AI. // Abstract For two whole years of working with a large LLM deployment, I always felt uncomfortable. How is my system performing? Are my users liking the outputs? Who needs help? Probabilistic systems can make this really hard to understand. In this talk, we'll discuss practical & implementable items to secure your LLM system and gain confidence while deploying to production. // Bio Rohit is the Co-founder and CEO of portkey.ai, which is an FMOps stack for monitoring, model management, compliance, and more. Previously, he headed Product & AI at Pepper Content, which has served ~900M generations on LLMs in production. Having seen large LLM deployments in production, he's always happy to help companies build their infra stacks on FM APIs or open-source models. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://portkey.ai --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Rohit on LinkedIn: https://www.linkedin.com/in/1rohitagarwal/
Hilary Mason was on the ground floor of data science research, and now she's bringing that same pioneering spirit to generative AI. For this episode, Host and Partner at Lightspeed, Michael Mignano, talks with Hilary about how to safeguard probabilistic systems and how researchers and founders can form the most effective teams. Episode Chapters (00:00) - Intro (05:18) - A founder's thoughts - NYC vs. Silicon Valley (09:50) - Why Hilary thinks non-linear storytelling was wrong (13:44) - Understanding online traffic through bit.ly (15:57) - The taxonomy of data science (19:06) - Founding Fast Forward Labs - “Hire your nerd best friend” (23:05) - Can academia and startups coexist? (26:50) - Machine learning (ML) vs. Artificial Intelligence (AI) (34:00) - Selling Fast Forward Labs to Cloudera (38:51) - Hidden Door's inception (44:29) - The challenge - and opportunity - of AI hallucinations (48:07) - What is Hidden Door? (52:38) - Building an architecture for unstructured input (57:38) - How can you try Hidden Door? (01:00:45) - Shifting the software engineer mindset (01:04:17) - How will product-building shift with generative AI? (01:07:34) - Is AI hype dangerous? (01:12:36) - Where to learn more about Hidden Door Stay in touch: www.lsvp.com X: https://twitter.com/lightspeedvp LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/ Instagram: https://www.instagram.com/lightspeedventurepartners/ Subscribe on your favorite podcast app: generativenow.co Email: generativenow@lsvp.com The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
Subscribe to the show. Click here to get your free copy of The Inner Voice of Trading audiobook.
Dr. Rajendra Gupta is a cosmologist and professor of physics at the University of Ottawa. He has dedicated his work to making sense of images from the new James Webb telescope that appear to contradict the established Big Bang timeline for the universe. Dr. Gupta explains the recently observed mature galaxies at just 300 million years post-Bang by revising the age of the universe and invoking forsaken ideas like variable constants and tired light. The task remains to explain the cause of this variation. Tell us your ideas in the comments! (00:00:00) Go! (00:00:19) Raj Gupta in the press (00:01:27) What drives you? (00:08:28) Questioning the age of the universe (00:12:23) Patreon Ask (00:24:03) Keating & Rogan & doubting the Bang (00:34:34) The Hubble Tension (00:39:03) What does it mean to rewrite physics? (00:45:03) No difference between distance & time in astronomy (00:48:59) Can physics avoid material foundations indefinitely? (00:56:51) Stiffness of space (01:02:42) A reason why physical constants might change over time (01:05:07) Resurrecting old unpopular ideas from the past (01:12:41) Laboratory measurement of redshift (01:24:18) Multiplicity of theories of nature (01:30:54) Probabilistic approaches to understanding nature (01:37:35) Reimagining mediators in fundamental physics (01:49:47) Closing thoughts on the future of physics and cosmology Support the scientific revolution by joining our Patreon: https://bit.ly/3lcAasB Tell us what you think in the comments or on our Discord: https://discord.gg/MJzKT8CQub #cosmology #jameswebbspacetelescope #astronomy #astrophysics #philosophyofscience #philosophy Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities. - Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671
In this episode, we dive into a common pitfall we've seen many developers fall into lately. While many ad networks use probabilistic models for optimization, they turn to SKAN data when it comes to billing. This discrepancy might mean you're paying 30-40% more than anticipated or that your campaign's performance isn't as stellar as you believed. In this episode, I unpack the underlying mechanics of what is happening – and what we recommend doing instead. Check out the show notes here: https://mobileuseracquisitionshow.com/episode/ad-network-skan-billing/ Get more mobile user acquisition goodies here: http://RocketShipHQ.com http://RocketShipHQ.com/blog
In this enlightening episode, we dive into the identity landscape and its significance in the digital era. Joining us are two distinguished guests, Lindsey Colferai, the esteemed Director of Business Development at Adapex, and Debra Fleenor, the visionary Founder & President of the company. Together, they unravel the complexities of identity solutions, exploring their transformative impact on businesses and individuals alike. Through their expertise and firsthand experiences, listeners will gain invaluable insights into the cutting-edge advancements, challenges, and future prospects of this rapidly evolving landscape. Get ready for an illuminating journey into the realm of identity with Adapex by tuning in! About Us: Our mission is to teach historically excluded people how to get started in programmatic media buying and find a dream job. We do so by providing on-demand lessons via the Reach and Frequency™️ program, a dope community with like-minded programmatic experts, and live free and paid group coaching. Hélène Parker has over 10 years of experience in programmatic media buying, servicing agencies and brands in activation, strategy and planning, and leadership. She now dedicates her time to recruiting and training programmatic traders while consulting companies on how to grow and scale a programmatic department. Interested in training or hiring programmatic juniors? Book a Free Call Timestamp: 00:03:23 - Who is Lindsey Colferai? 00:05:32 - Who is Debra Fleenor? 00:08:57 - Defining Adapex's Mission to a 5 year old 00:13:19 - Defining Adapex's Products to a 5 year old 00:15:54 - What is ‘identity' from a publisher's perspective? 00:30:22 - Deterministic vs Probabilistic 00:32:52 - How do you see SSPs' relationship with the industry growing? 00:40:41 - Words of wisdom from our guests Interested in finding out if you are a fit for a career in digital advertising and programmatic? Take our free Quiz: www.heleneparker.com/programmaticquiz
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Technical Challenges around Probabilistic Programs and Javascript, published by Ozzie Gooen on August 26, 2023 on The Effective Altruism Forum. While working on Squiggle, we've encountered many technical challenges in writing probabilistic functionality with Javascript. Some of these challenges are solved in Python and must be ported over, and some apply to all languages. We think the following tasks could be good fits for others to tackle. These are fairly isolated and could be done in contained NPM packages or similar. The solutions would be useful for Squiggle and might be handy for others in the Javascript ecosystem as well. Advice and opinions are also appreciated. This post was quickly written, as it's for a narrow audience and might get outdated. We're happy to provide more rigor and context if requested. Let us know if you are interested in taking any of them on and could use some guidance! For those not themselves interested in contributing, this might be useful for giving people a better idea of the sorts of challenges we at QURI work on.

1. Density Estimation

Users often want to convert samples into continuous probability density functions (PDFs). This is difficult to do automatically. The standard approach of basic Kernel Density Estimation can produce poor fits on multimodal or heavily skewed data.

a. Variable kernel density estimation. Simple KDE algorithms use a constant bandwidth, and there are multiple methods for estimating it. One common method is Silverman's rule of thumb. In practice, using Silverman's rule of thumb with one single bandwidth performs poorly for multimodal or heavily skewed distributions. Squiggle performs log KDE for heavily skewed distributions, but this only helps so much, and this strategy comes with various inconsistencies. The family of algorithms for variable kernel density estimation, or adaptive bandwidth choice, seems more promising. Another option is the Sheather-Jones method, which existing Python KDE libraries use. We don't know of good Javascript implementations of these algorithms.

b. Performant KDE with non-triangle shapes. Squiggle currently uses a triangle kernel for speed. Fast algorithms (FFT-based) should be possible with better kernel shapes. See this thread for more discussion.

c. Cutoff heuristics. One frequent edge case is that many distributions have specific limits, often at 0. There might be useful heuristics like: "If there are no samples below zero, then it's very likely there should be zero probability mass below zero, even if many samples are close and the used bandwidth would imply otherwise." See this issue for more information.

d. Discrete vs. continuous estimation. Sometimes users pass in samples from discrete distributions, or from mixtures of discrete and continuous distributions. In these cases, it's helpful to have heuristics to detect which data is meant to be discrete and which is meant to be continuous. Right now, Squiggle does this using simple heuristics of repetition: if multiple samples are precisely the same, we assume they represent discrete information. It's unclear if there are better ways of doing this heuristically.

e. Multidimensional KDE. Eventually, it will be useful to do multidimensional KDE. It might be more effective to do this in WebAssembly, though this would of course introduce complications.
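For readers who want to see the baseline these improvements compete against, here is a minimal sketch of one-dimensional Gaussian KDE with Silverman's rule-of-thumb bandwidth. This is illustrative only, not Squiggle's implementation (which, as noted above, uses a triangle kernel and log-KDE for skewed samples):

```typescript
// Minimal Gaussian KDE with Silverman's rule-of-thumb bandwidth.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
}

// Silverman's rule: h = 0.9 * min(sd, IQR / 1.34) * n^(-1/5)
function silvermanBandwidth(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  const q = (p: number) => sorted[Math.floor(p * (sorted.length - 1))];
  const iqr = q(0.75) - q(0.25);
  const spread = Math.min(stdDev(xs), iqr / 1.34);
  return 0.9 * spread * Math.pow(xs.length, -1 / 5);
}

// Evaluate the estimated density at point x.
function kdeAt(samples: number[], x: number, h: number): number {
  const gaussian = (u: number) => Math.exp(-0.5 * u * u) / Math.sqrt(2 * Math.PI);
  const total = samples.reduce((acc, s) => acc + gaussian((x - s) / h), 0);
  return total / (samples.length * h);
}

// Usage: a bimodal sample, where one global bandwidth fits poorly --
// exactly the failure mode the post describes.
const samples = [1.0, 1.1, 0.9, 1.05, 10.0, 10.2, 9.8, 10.1];
const h = silvermanBandwidth(samples);
console.log(kdeAt(samples, 1.0, h), kdeAt(samples, 5.0, h));
```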
2. Quantiles to Distributions, Maybe with Metalog

A frequent use case is: "I have a few quantile/CDF points in mind and want to fit this to a distribution. How should I do this?" One option is to use the Metalog distribution. There's no great existing Javascript implementation of Metalog yet. Sam Nolan made one attempt, but it's not as flexible as we'd like (it fails to convert many points into metalog distributions). Jonas Moss thinks we can do better than...
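As a starting point for anyone taking this on, here is a sketch of an ordinary-least-squares fit of the standard four-term metalog basis to quantile points. This is not Sam Nolan's implementation; feasibility checking (verifying the fitted quantile function is monotone) is deliberately omitted, and that is exactly the hard part a robust library would need:

```typescript
// Sketch: fit a 4-term metalog to (probability, value) quantile points.
type QuantilePoint = { p: number; x: number }; // CDF probability p, value x

// Standard metalog basis terms evaluated at cumulative probability y.
function basis(y: number): number[] {
  const logit = Math.log(y / (1 - y));
  return [1, logit, (y - 0.5) * logit, y - 0.5];
}

// Solve the k x k system A a = b by Gauss-Jordan elimination with pivoting.
function solve(A: number[][], b: number[]): number[] {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return M.map((row, i) => row[n] / row[i][i]);
}

// Least squares via the normal equations: (X^T X) a = X^T q.
function fitMetalog(points: QuantilePoint[]): number[] {
  const X = points.map(({ p }) => basis(p));
  const q = points.map(({ x }) => x);
  const k = X[0].length;
  const XtX = Array.from({ length: k }, (_, i) =>
    Array.from({ length: k }, (_, j) =>
      X.reduce((acc, row) => acc + row[i] * row[j], 0)));
  const Xtq = Array.from({ length: k }, (_, i) =>
    X.reduce((acc, row, r) => acc + row[i] * q[r], 0));
  return solve(XtX, Xtq);
}

// The fitted quantile function: M(y) = sum_j a_j * basis_j(y).
function metalogQuantile(a: number[], y: number): number {
  return basis(y).reduce((acc, t, j) => acc + a[j] * t, 0);
}

// Usage: fit to four elicited quantiles.
const a = fitMetalog([
  { p: 0.1, x: 2 }, { p: 0.5, x: 5 }, { p: 0.9, x: 12 }, { p: 0.99, x: 30 },
]);
console.log(metalogQuantile(a, 0.5)); // ≈ 5, since four points fit exactly
```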
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New probabilistic simulation tool, published by ProbabilityEnjoyer on August 19, 2023 on The Effective Altruism Forum. Dagger (usedagger.com) is a new tool for calculations with uncertainty. It uses Monte Carlo simulation. There are two ways to specify your simulation model: import an existing spreadsheet, or use Probly, a Python dialect designed for probabilistic simulation. ℹ️ Each of the two links above has 4 interactive examples. You might want to start there.

Spreadsheet Example. In this 15-second video, we take a complex existing spreadsheet (100+ rows) from GiveWell and turn it into a Monte Carlo simulation. The sheet already gives us "optimistic/pessimistic" values, so it's as simple as adding one column to specify the distribution as (e.g.) uniform. (Longer version of this video.)

Features. Dependency graph. Intuitive and mathematically rigorous sensitivity analysis: our sensitivity analysis uses Sobol' global sensitivity indices; the approach and the intuition behind it are explained in more detail here. ℹ️ You need to enable the sensitivity analysis under "Advanced options". Summary table: this table exposes the structure of your model by showing the dependency graph as a tree. Similar to Workflowy, you can zoom to any variable, or expand/collapse.

Probly. Probly feels very much like Python, except that any number can also be a probability distribution. Example: here's a fuller example of the syntax and resulting output; it's part of a GiveWell CEA of iron and folic acid supplementation. Distribution support: Probly supports 9 probability distributions, and each can be constructed in multiple ways. For example, you can construct a normal distribution in 5 ways. This clickable table shows you everything that's supported, and includes example code. ℹ️ Shortcut: probly.dev redirects to usedagger.com/probly

Limitations. There are at the moment numerous limitations. A small selection of them: Probly doesn't support the Sobol' sensitivity analysis and doesn't show the dependency graph. Spreadsheet: there is no UI in Dagger to edit the model; all changes must go via the spreadsheet, and the spreadsheet must specify probability distributions in a specific format. All models are public. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
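The core mechanic being described, replacing point estimates with samplers and pushing samples through a model, fits in a few lines. Here is a generic sketch with made-up inputs; it is not Dagger's or Probly's actual API:

```typescript
// Generic Monte Carlo sketch of the idea behind tools like Dagger:
// treat each uncertain input as a sampler, run the model many times,
// and report the output as a distribution rather than one number.
const N = 10_000;

// Treat an optimistic/pessimistic range as a uniform distribution.
const uniform = (lo: number, hi: number) => () => lo + Math.random() * (hi - lo);

// A toy cost model with two uncertain inputs (hypothetical numbers).
const costPerUnit = uniform(8, 14);       // pessimistic..optimistic, $
const unitsDelivered = uniform(900, 1300);

const totalCost = Array.from({ length: N }, () => costPerUnit() * unitsDelivered());

// Summarize the output distribution instead of a single point estimate.
totalCost.sort((a, b) => a - b);
console.log({
  p5: totalCost[Math.floor(0.05 * N)],
  median: totalCost[Math.floor(0.5 * N)],
  p95: totalCost[Math.floor(0.95 * N)],
});
```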
“In the early stages of a startup, it's not your monetization or competitors that can kill the company, but the team. Most of this is due to cofounders' conflicts. When you have tough times, it's up to you to actually survive through them, so finding a cofounder whom you get along with and who serves as your anchor is the most important thing to do.” - Phillip An “My mentors pushed me towards probabilistic thinking. When I have a really tough decision to make, I think about it as an expected value equation where if you take two choices, within each probability, what is the expected value of success that could drastically change your outcome?” - Phillip An “It's very hard to run a business. During the COVID-19 pandemic, we explored other business models that solved the needs of the market at the time and we survived through it. We have the kind of mentality that regardless of what happens, we can power through and we can be the ones that prove everyone wrong. It's almost like having that chip on your shoulder. It's such an important experience to have that conviction to just go for it and not look back.” - Phillip An Jeremy Au and Phillip An, the founder of Homebase, touched upon the challenges and opportunities in the Southeast Asian property sector, the journey of building a startup, and personal experiences in entrepreneurship. Phillip shared valuable insights on navigating complexities within the organization, balancing financial sustainability and market needs, and the importance of a strong founding team. He also discussed the significance of home ownership in Asian culture and how Homebase aims to address the growing home price crisis. Additionally, Phillip shared a personal story of overcoming adversity during the early stages of his company. Key Topics: Challenges and opportunities in the Southeast Asian property sector; Building a startup and the importance of a strong founding team; Balancing financial sustainability and market demands; Addressing the home price crisis and the significance of home ownership; Overcoming adversity and personal experiences in entrepreneurship. Watch, listen or read the full insight at https://www.bravesea.com/blog/phillip-an Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://chat.whatsapp.com/CeL3ywi7yOWFd8HTo6yzde Spotify: https://open.spotify.com/show/4TnqkaWpTT181lMA8xNu0T YouTube: https://www.youtube.com/@JeremyAu Apple Podcasts: https://podcasts.apple.com/sg/podcast/brave-southeast-asia-tech-singapore-indonesia-vietnam/id1506890464 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZC5jby9icmF2ZWR5bmFtaWNz TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea Learn more about Pollen here: https://www.pollen.tech
In this episode, we explain Apple's soon-to-be-introduced privacy manifests, which could be the beginning of the end for fingerprinting/probabilistic attribution, a change that could seriously impact performance for advertisers that still rely on fingerprinting/probabilistic networks. Check out the show notes here: https://mobileuseracquisitionshow.com/episode/privacy-manifests-death-fingerprinting-probablistic-matching/ Get more mobile user acquisition goodies here: http://RocketShipHQ.com http://RocketShipHQ.com/blog
Lex chats with Sasha Orloff, CEO and Co-founder of Puzzle - “the first smart accounting software,” which combines a streaming financial data platform that's connected to a general accounting ledger. Sasha starts the conversation by detailing his journey from traditional banking to fintech entrepreneurship, reflecting on the early days of fintech, contrasting the East and West Coast perspectives, and sharing his experiences from leading innovative fintech ventures like LendUp. The conversation dives into the future of fintech, discussing the transition into autonomous accounting, the rise and fall of Personal Financial Management (PFM) tools, and the integration of natural language processing into quantitative software. The duo also contemplates the nature of rules in language models, the dichotomy between deterministic and probabilistic models in AI, and the future of intelligent software. The episode concludes with Sasha sharing various channels to connect with him and learn more about Puzzle. MENTIONED IN THE CONVERSATION Puzzle's Website: https://bit.ly/3NKuO6Y Sasha's LinkedIn profile: https://bit.ly/3Pt3pHX Topics: Fintech, AI, artificial intelligence, machine learning, GenAI, Accounting, Software, LLM, embedded finance Companies: Puzzle, Puzzle Financial, Mission Lane, LendUp, ChatGPT, Brex, Stripe, Plaid ABOUT THE FINTECH BLUEPRINT
Probabilistic math makes the same assumption that shamans, Catholics, Mormons, Hindus and every other faith all make: that there's a fundamental force in the world that makes the world what it is. In this podcast I examine the assumption that physicists make that randomness correctly describes the most basic structure of the universe. Some people won't like this. Support this show on Patreon and get exclusive access to some amazing perks. Subscribe to my newsletter Scott Carney Investigates Podcast Books: The Wedge What Doesn't Kill Us The Enlightenment Trap The Vortex The Red Market Social Media: YouTube Instagram Facebook Twitter Bluesky ©PokeyBear LLC (2023)
Why do a small number of ads get most of the spend on Meta? In this episode, we'll talk about why this is good for your ads - and discuss the probabilistic Bayesian testing paradigm as well, so you understand how Meta's algos make these decisions and can construct your creative testing process more intelligently. We break this down in today's episode. We go into this - and every other aspect of creative testing post-ATT - in our new book, which you can download for free: The Definitive Guide to Meta Creative Testing post-ATT. Check out the show notes here: https://mobileuseracquisitionshow.com/episode/deconstructing-bayesian-paradigms/ Get more mobile user acquisition goodies here: http://RocketShipHQ.com http://RocketShipHQ.com/blog
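One way to build intuition for why spend concentrates: if ad selection behaves like a Bayesian multi-armed bandit, the algorithm keeps showing whichever ad's posterior looks best. Here is a toy Thompson-sampling sketch under that assumption, with made-up click-through rates; Meta's actual algorithm is not public:

```typescript
// Toy Thompson sampling over three ads, using Beta(wins+1, losses+1)
// posteriors approximated by normals for simplicity. Hypothetical numbers.
function normal(): number {
  // Box-Muller transform for a standard normal draw.
  const u = 1 - Math.random(), v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Approximate draw from Beta(a, b) via its mean and standard deviation.
function approxBetaDraw(a: number, b: number): number {
  const mean = a / (a + b);
  const sd = Math.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1)));
  return mean + sd * normal();
}

const trueCtr = [0.010, 0.012, 0.020]; // hidden "true" click-through rates
const a = [1, 1, 1], b = [1, 1, 1];    // per-ad Beta posterior parameters
const impressions = [0, 0, 0];

for (let t = 0; t < 50_000; t++) {
  // Draw one plausible CTR per ad from its posterior; show the best draw.
  const draws = trueCtr.map((_, i) => approxBetaDraw(a[i], b[i]));
  const ad = draws.indexOf(Math.max(...draws));
  impressions[ad]++;
  if (Math.random() < trueCtr[ad]) a[ad]++; else b[ad]++;
}

console.log(impressions); // impressions concentrate on the best ad
```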
This week's Pipeliners Podcast episode features Jill Watson continuing her discussion of probabilistic risk analysis, how to build a PRA model as a pipeliner, and the many ways a PRA model can benefit the pipeline industry. In this episode, you will learn what goes into a probabilistic risk analysis model, how humans can interfere with the process, and what makes a model strong. Visit PipelinePodcastNetwork.com for a full episode transcript, as well as detailed show notes with relevant links and insider term definitions.
Sarah Moss is the William Wilhartz Professor of Philosophy and Professor of Law by courtesy at the University of Michigan. She works primarily in epistemology and the philosophy of language, though in the case of this conversation her work has an important bearing on legal philosophy. Robinson and Sarah talk about her book Probabilistic Knowledge, which argues that you can know something that you believe even if you do not believe it fully, and as she quite aptly points out, “The central theses of the book have significant consequences for social and political questions concerning racial profiling, statistical evidence, and legal standards of proof,” all of which are discussed in this episode. Robinson and Sarah begin by introducing the concept of probabilistic belief before turning to Sarah's argument in favor of probabilistic knowledge. They then turn to some applications of her work to outstanding puzzles in philosophy and law. Keep up with Sarah on her website, http://www-personal.umich.edu/~ssmoss/, and check out Probabilistic Knowledge on Amazon, https://a.co/d/iobL8iZ. Robinson's Website: http://robinsonerhardt.com OUTLINE: 00:00 Introduction 3:58 Math and Epistemology 7:35 What is Probabilistic Belief? 11:22 Sarah, David Lewis, and Robert Stalnaker 28:26 Credence and Probabilistic Belief 33:40 Are All Beliefs Probabilistic? 56:57 Probabilistic Knowledge and Racial Profiling 1:20:25 Probabilistic Knowledge and Transformative Experience 1:29:30 Statistical Evidence and Legal Proof 1:48:39 Pragmatic Encroachment on Legal Proceedings 2:04:07 Is Belief a Strong or a Weak Attitude? 2:12:39 The Preface Paradox 2:21:06 Probabilistic Knowledge and the Newcomb Problem 2:27:18 Probabilistic Knowledge and the Philosophy of Action Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
In this week's episode of the Pipeliners Podcast, host Russel Treat is joined by Jill Watson of Xodus Group to discuss probabilistic risk analysis. This episode is part one of a two-part series focused on probabilistic risk analysis. Visit PipelinePodcastNetwork.com for a full episode transcript, as well as detailed show notes with relevant links and insider term definitions.
Probabilities play a big part in AI and machine learning. After all, AI systems are probabilistic systems that must learn what to do. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Bayes' Theorem, Bayesian Classifier, and Naive Bayes, and explain how they relate to AI and why it's important to know about them. Continue reading AI Today Podcast: AI Glossary Series – Bayes' Theorem, Bayesian Classifier, Naive Bayes at AI & Data Today.
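For reference, Bayes' theorem is the identity below; a naive Bayes classifier applies it under the simplifying assumption that features are conditionally independent given the class:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad \hat{y} = \arg\max_{y}\; P(y) \prod_{i=1}^{n} P(x_i \mid y)$$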
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Probabilistic & Deterministic and explain how they relate to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course; CPMAI Training and Certification; AI Glossary; AI Glossary Series – Artificial Intelligence. Continue reading AI Today Podcast: AI Glossary Series – Probabilistic & Deterministic at Cognilytica.
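The distinction fits in a few lines of code: a deterministic function always returns the same output for the same input, while a probabilistic one returns a sample from a distribution. The toy functions below are purely illustrative:

```typescript
// Deterministic: the same input always produces the same output.
const deterministic = (x: number): number => 2 * x;

// Probabilistic: the output is a sample, so repeated calls differ.
const probabilistic = (x: number): number => 2 * x + (Math.random() - 0.5);

console.log(deterministic(3), deterministic(3)); // 6 6, every time
console.log(probabilistic(3), probabilistic(3)); // two different values
```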