Podcast appearances and mentions of Judea Pearl

Computer scientist

  • 53PODCASTS
  • 83EPISODES
  • 40mAVG DURATION
  • 1MONTHLY NEW EPISODE
  • Apr 5, 2025LATEST
Judea Pearl

POPULARITY

20172018201920202021202220232024


Best podcasts about Judea Pearl

Latest podcast episodes about Judea Pearl

Artificiality
David Wolpert: The Thermodynamics of Meaning

Artificiality

Play Episode Listen Later Apr 5, 2025 76:19


In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity. * Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in person conversations like these! Learn more about the Summit at www.artificiality.world/summit.We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While mathematically rigorous in its foundation, our conversation explores these complex ideas in accessible terms.At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent).Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.Our conversation ventures into:How AI might help us understand meaning in ways we cannot perceive ourselvesWhat a mathematically rigorous definition of meaning could mean for AI alignmentHow contexts shape our understanding of what's meaningfulThe distinction between causal information and mere correlationWe finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here. For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers. About David Wolpert:David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, Wolpert held positions at NASA, Stanford, and the Santa Fe Institute as a professor. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.Thanks again to Jonathan Coulton for our music.

Value Driven Data Science
Episode 53: A Wake-Up Call from 3 Tech Leaders on Why You're Failing as a Data Scientist

Value Driven Data Science

Play Episode Listen Later Feb 26, 2025 58:26


Genevieve Hayes Consulting Episode 53: A Wake-Up Call from 3 Tech Leaders on Why You're Failing as a Data Scientist Are your data science projects failing to deliver real business value?What if the problem isn’t the technology or the organization, but your approach as a data scientist?With only 11% of data science models making it to deployment and close to 85% of big data projects failing, something clearly isn’t working.In this episode, three globally recognised analytics leaders, Bill Schmarzo, Mark Stouse and John Thompson, join Dr Genevieve Hayes to deliver a tough love wake-up call on why data scientists struggle to create business impact, and more importantly, how to fix it.This episode reveals:Why focusing purely on technical metrics like accuracy and precision is sabotaging your success — and what metrics actually matter to business leaders. [04:18]The critical mindset shift needed to transform from a back-room technical specialist into a valued business partner. [30:33]How to present data science insights in ways that drive action — and why your fancy graphs might be hurting rather than helping. [25:08]Why “data driven” isn’t enough, and how to adopt a “data informed” approach that delivers real business outcomes. [54:08] Guest Bio Bill Schmarzo, also known as “The Dean of Big Data,” is the AI and Data Customer Innovation Strategist for Dell Technologies' AI SPEAR team, and is the author of six books on blending data science, design thinking, and data economics from a value creation and delivery perspective. He is an avid blogger and is ranked as the #4 influencer worldwide in data science and big data by Onalytica and is also an adjunct professor at Iowa State University, where he teaches the “AI-Driven Innovation” class.Mark Stouse is the CEO of ProofAnalytics.ai, a causal AI company that helps companies understand and optimize their operational investments in light of their targeted objectives, time lag, and external factors. Known for his ability to bridge multiple business disciplines, he has successfully operationalized data science at scale across large enterprises, driven by his belief that data science’s primary purpose is enabling better business decisions.John Thompson is EY's Global Head of AI and is the author of four books on AI, data and analytics teams. He was named one of dataIQ's 100 most influential people in data in 2023 and is also an Adjunct Professor at the University of Michigan, where he teaches a course based on his book “Building Analytics Teams”. Links Connect with Bill on LinkedInConnect with Mark on LinkedInConnect with John on LinkedIn Connect with Genevieve on LinkedInBe among the first to hear about the release of each new podcast episode by signing up HERE Read Full Transcript [00:00:00] Dr Genevieve Hayes: Hello, and welcome to Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I’m Dr. Genevieve Hayes, and today I’m joined by three globally recognized innovators and leaders in AI, analytics, and data science.[00:00:24] Bill Schmarzo, Mark Stouse, and John Thompson. Bill? Also known as the Dean of Big Data, is the AI and Data Customer Innovation Strategist for Dell Technologies AI Spear Team, and is the author of six books on blending data science, design thinking, and data economics from a value creation and delivery perspective.[00:00:49] He is an avid blogger and is ranked as the number four influencer worldwide in data science and big data Analytica. And he’s also an adjunct professor at Iowa State University, where he teaches AI driven innovation. Mark is the CEO of proofanalytics. ai, a causal AI company that helps organizations understand and optimize their operational investments in light of their targeted objectives, time lag and external factors.[00:01:23] Known for his ability to bridge multiple business disciplines, he has successfully operationalized data science at scale across large enterprises. Driven by his belief that data science’s primary purpose is enabling better business decisions. And John is EY’s global head of AI and is the author of four books on AI data and analytics teams.[00:01:49] He was named one of DataIQ’s 100 most influential people in data in 2023. and is also an adjunct professor at the University of Michigan, where he teaches a course based on his book, Building Analytics Teams. Today’s episode will be a tough love wake up call for data scientists on why you are failing to deliver real business value and more importantly, what you can do about it.[00:02:17] So get ready to boost your impact. Earn what you’re worth and rewrite your career algorithm. Bill, Mark, John, welcome to the show.[00:02:25] Mark Stouse: Thank[00:02:26] Bill Schmarzo: Thanks for having us.[00:02:27] John Thompson: to be here.[00:02:28] Dr Genevieve Hayes: Only 11 percent of data scientists say their models always deploy. Only 10 percent of companies obtain significant financial benefits from AI technologies and close to 85 percent of big data projects fail. These statistics, taken from research conducted by Rexa Analytics, the Boston Consulting Group and Gartner respectively, paint a grim view of what it’s like working as a data scientist.[00:02:57] The reality is, you’re probably going to fail. And when that reality occurs, it’s not uncommon for data scientists to blame either the executive for not understanding the brilliance of their work, or the corporate culture for not being ready for data science. And maybe this is true for some organizations.[00:03:20] Particularly those relatively new to the AI adoption path. But it’s now been almost 25 years since William Cleveland first coined the term data science. And as the explosive uptake of generative AI tools, such as chat GPT demonstrate with the right use case. People are very willing to take on AI technologies.[00:03:42] So perhaps it’s finally time to look in the mirror and face the truth. Perhaps the problem is you, the data scientist. But if this is the case, then don’t despair. In many organizations, the leadership just don’t have the time to provide data scientists with the feedback necessary to improve. But today, I’m sitting here with three of the world’s best to provide that advice just for you.[00:04:09] So, let’s cut to the chase what are the biggest mistakes you see data scientists making when it comes to demonstrating their value?[00:04:18] Mark Stouse: I think that you have to start with the fact that they’re not demonstrating their value, right? I mean, if you’re a CEO, a CFO, head of sales really doesn’t matter if you’re trying to make better business decisions over and over and over again. As Bill talks about a lot, the whole idea here is economic,[00:04:39] and it is. About engaging, triggering the laws of compounding you’ve got to be able to do stuff that makes that happen. Data management, for example, even though we all agree that it’s really necessary, particularly if you’re launching, you know, big data solutions. You can’t do this sequentially and be successful.[00:05:04] You’re going to have to find some areas probably using, you know, old fashioned math around causal analytics, multivariable linear regression, things like that, to at least get the ball rolling. In terms of delivering better value, the kind of value that business leaders actually see as valuable[00:05:29] I mean, one of the things that I feel like I say a lot is, you have to have an understanding of your mission, the mission of data science. As somebody who, as a business leader champions it. Is to help people make those better and better and better decisions. And if you’re not doing that, you’re not creating value.[00:05:52] Full stop.[00:05:53] Bill Schmarzo: Totally agree with Mark. I think you’re going to find that all three of us are in violent agreement on a lot of this stuff. What I find interesting is it isn’t just a data scientist fault. Genevieve, you made a comment that leadership lacks the time to provide guidance to data scientists. So if leadership Is it treating data and analytics as an economics conversation if they think it’s a technology conversation is something that should be handled by the CIO, you’ve already lost, you’ve already failed, you already know you failed,[00:06:24] Mark mentioned the fact that this requires the blending of both sides of the aisle. It requires a data scientist to have the right mindset to ask questions like what it is that we’re trying to achieve. How do we create value? What are our desired outcomes? What are the KPIs metrics around which are going to make your success?[00:06:39] Who are our key stakeholders? There’s a series of questions that the data scientist must be empowered to ask and the business Leadership needs to provide the time and people and resources to understand what we’re trying to accomplish. It means we can go back old school with Stephen Covey, begin with an end in mind.[00:07:01] What is it we’re trying to do? Are we trying to improve customer retention? We try to do, you know, reduce unplanned operational downtime or improve patient outcomes. What is it we’re trying to accomplish? The conversation must, must start there. And it has to start with business leadership, setting the direction, setting the charter, putting the posts out where we want to go, and then the data science team collaborating with the stakeholders to unleash that organizational tribal knowledge to actually solve[00:07:32] Dr Genevieve Hayes: think a lot of the problem comes with the fact that many business leaders see data science as being like an IT project. So, if you’ve got your Windows upgrade, the leadership It gives the financing to IT, IT goes along and does it. And then one morning you’re told, when you come into work, your computer will magically upgrade to the latest version of Windows.[00:07:55] So no one really gets bothered by it. And I think many business leaders treat data science as just another IT project like that. They think they can just Give the funding, the data scientists will go away and then they’ll come in one morning and the data science will magically be on their computer.[00:08:15] Bill Schmarzo: Yeah, magic happens, right? No, no, magic doesn’t happen, it doesn’t happen. There has to be that leadership commitment to be at the forefront, not just on the boat, but at the front of the boat saying this is the direction we’re going to go.[00:08:29] John Thompson: That’s the whole reason this book was written. The whole point is that, analytics projects are not tech projects. Analytics projects are cultural transformation projects, is what they are. And if you’re expecting the CEO, CFO, CIO, COO, whoever it is, to go out there and set the vision.[00:08:50] That’s never going to happen because they don’t understand technology, and they don’t understand data. They’d rather be working on building the next factory or buying another company or something like that. What really has to happen is the analytics team has to provide leadership to the leadership for them to understand what they’re going to do.[00:09:12] So when I have a project that we’re trying to do, my team is trying to do, and if we’re working for, let’s say, marketing, I go to the CMO and I say, look, you have to dedicate and commit. that your subject matter experts are going to be in all the meetings. Not just the kickoff meetings, not just the quarterly business review, the weekly meetings.[00:09:36] Because when we go off as an analytics professionals and do things on our own, we have no idea what the business runs like. , we did analytics at one company that I work for. We brought it back and we showed it to the they said, the numbers are wildly wrong. And we said, well, why? And they said, well, you probably don’t understand that what we do is illegal in 10 US states.[00:10:00] So you probably have the data from all those 10 states in the analysis. And we did. So, we took it all out and they look down there and go, you got it right. It’s kind of surprising. You didn’t know what you were doing and you got it right. So, it has to be a marriage of the subject matter experts in the business.[00:10:17] And the data scientists, you can’t go to the leadership and say, tell us what you want. They don’t know what they want. They’d want another horse in Henry Ford’s time, or they glue a, a Walkman onto a radio or something in Steve Jobs time. They don’t know what they want. So you have to come together.[00:10:36] And define it together and you have to work through the entire project together.[00:10:42] Mark Stouse: Yeah, I would add to that, okay, that a lot of times the SMEs also have major holes in their knowledge that the analytics are going to challenge and give them new information. And so I totally agree. I mean, this is an iterative learning exchange. That has profound cultural implications.[00:11:11] One of the things that AI is doing right now is it is introducing a level of transparency and accountability into operations, corporate operations, my operations, your operations, that honestly, none of us are really prepared for. None of us are really prepared for the level of learning that we’re going to have to do.[00:11:36] And very few of us are aware of how polymathic. Most of our challenges, our problems, our objectives really are one of the things that I love to talk about in this regard is analytics made me a much better person. That I once was because it showed me the extent of my ignorance.[00:12:01] And when I kind of came to grips with that and I started to use really the modicum of knowledge that I have as a way of curating my ignorance. And I got humble about it made a big difference[00:12:16] John Thompson: Well, that’s the same when I was working shoulder to shoulder with Bill, I just realized how stupid I was. So, then I just, really had to, come back and, say, oh, God nowhere near the summit, I have a long way to go.[00:12:31] Bill Schmarzo: Hey, hey, Genevie. Let me throw something out there at you and it builds on what John has said and really takes off on what Mark is talking about is that there is a cultural preparation. It needs to take place across organizations in order to learn to master the economies of learning,[00:12:48] the economies of learning, because you could argue in knowledge based industries that what you are learning is more important than what you know. And so if what you know has declining value, and what you’re learning has increasing value, then what Mark talked about, and John as well, both city presenting data and people saying, I didn’t know that was going on, right?[00:13:09] They had a certain impression. And if they have the wrong cultural mindset. They’re going to fight that knowledge. They’re going to fight that learning, oh, I’m going to get fired. I’m going to get punished. No, we need to create cultures that says that we are trying to master the economies and learning and you can’t learn if you’re not willing to fail.[00:13:29] And that is what is powerful about what AI can do for us. And I like to talk about how I’m a big fan of design thinking. I integrate design thinking into all my workshops and all my training because it’s designed to. Cultivate that human learning aspect. AI models are great at cultivating algorithmic learning.[00:13:50] And when you bring those two things together around a learning culture that says you’re going to try things, you’re going to fail, you’re going to learn, those are the organizations that are going to win.[00:13:59] John Thompson: Yeah, you know, to tie together what Mark and Bill are saying there is that, you need people to understand that they’re working from an outmoded view of the business. Now, it’s hard for them to hear that. It’s hard for them to realize it. And what I ask data scientists to do that work for me is when we get a project and we have an operational area, sales, marketing, logistics, finance, manufacturing, whatever it is.[00:14:26] They agreed that they’re going to go on the journey with us. We do something really simple. We do an exploratory data analysis. We look at means and modes and distributions and things like that. And we come back and we say, this is what the business looks like today. And most of the time they go, I had no idea.[00:14:44] You know, I didn’t know that our customers were all, for the most part, between 70 and 50. I had no idea that our price point was really 299. I thought it was 3, 299. So you then end up coming together. You end up with a shared understanding of the business. Now one of two things is generally going to happen.[00:15:05] The business is going to freak out and leave the project and say, I don’t want anything to do with this, or they’re going to lean into it and say, I was working from something that was, as Bill said, declining value. Okay. Now, if they’re open, like a AI model that’s being trained, if they’re open to learning, they can learn what the business looks like today, and we can help them predict what the business should look like tomorrow.[00:15:31] So we have a real issue here that the three of us have talked about it from three different perspectives. We’ve all seen it. We’ve all experienced it. It’s a real issue, we know how people can come together. The question is, will they?[00:15:46] Dr Genevieve Hayes: think part of the issue is that, particularly in the area of data science, there’s a marked lack of leadership because I think a lot of people don’t understand how to lead these projects. So you’ve got Many data scientists who are trained heavily in the whole technical aspect of data science, and one thing I’ve come across is, you know, data scientists who’ll say to me, my job is to do the technical work, tell me what to do.[00:16:23] I’ll go away and do it. Give it to you. And then you manager can go and do whatever you like with it.[00:16:29] Mark Stouse: Model fitment.[00:16:31] Dr Genevieve Hayes: Yeah. And then one thing I’ve experienced is many managers in data science are, you know, It’s often the area that they find difficult to find managers for, so we’ll often get people who have no data science experience whatsoever[00:16:46] and so I think part of the solution is teaching the data scientists that they have to start managing up because they’re the ones who understand what they’re doing the best, but no one’s telling them that because the people above them often don’t know that they should be telling the data[00:17:08] John Thompson: Well, if that’s the situation, they should just fire everybody and save the money. Because it’s never going to go anywhere. But Bill, you were going to say something. Go ahead.[00:17:16] Bill Schmarzo: Yeah, I was going to say, what’s interesting about Genevieve, what you’re saying is that I see this a lot in not just data scientists, but in a lot of people who are scared to show their ignorance in new situations. I think Mark talked about this, is it because they’re, you think about if you’re a data scientist, you probably have a math background. And in math, there’s always a right answer. In data science, there isn’t. There’s all kinds of potential answers, depending on the situation and the circumstances. I see this all the time, by the way, with our sales folks. Who are afraid we’re selling technology. We’re afraid to talk to the line of business because I don’t understand their business Well, you don’t need to understand their business, but you do need to become like socrates and start asking questions What are you trying to accomplish?[00:18:04] What are your goals? What are your desired outcomes? How do you measure success? Who are your stakeholders ? You have to be genuinely interested In their success and ask those kind of questions if you’re doing it to just kind of check a box off Then just get chad gpt to rattle it off But if you’re genuinely trying to understand what they’re trying to accomplish And then thinking about all these marvelous different tools you have because they’re only tools And how you can weave them together to help solve that now you’ve got That collaboration that john’s book talks about about bringing these teams together Yeah[00:18:39] Mark Stouse: is, famously paraphrased probably did actually say something like this, . But he’s famously paraphrased as saying that he would rather have a really smart question than the best answer in the world. And. I actually experienced that two days ago,[00:18:57] in a conversation with a prospect where I literally, I mean, totally knew nothing about their business. Zero, but I asked evidently really good questions. And so his impression of me at the end of the meeting was, golly, you know, so much about our business. And I wanted to say, yeah, cause you just educated me.[00:19:21] Right. You know, I do now. And so I think there’s actually a pattern here that’s really worth elevating. So what we are seeing right now with regard to data science teams is scary similar to what happened with it after Y2K, the business turned around and looked at him and said, seriously, we spend all that money,[00:19:45] I mean, what the heck? And so what happened? The CIO got, demoted organizationally pretty far down in the company wasn’t a true C suite member anymore. Typically the whole thing reported up into finance. The issue was not. Finance, believing that they knew it better than the it people,[00:20:09] it was, we are going to transform this profession from being a technology first profession to a business outcomes. First profession, a money first profession, an economics organization, that has more oftentimes than not been the outcome in the last 25 years. But I think that that’s exactly what’s going on right now with a lot of data science teams.[00:20:39] You know, I used to sit in technology briefing rooms, listening to CIOs and other people talk about their problems. And. This one CIO said, you know, what I did is I asked every single person in my organization around the world to go take a finance for non financial managers course at their local university.[00:21:06] They want credit for it. We’ll pay the bill. If they just want to audit it, they can do that. And they started really cross pollinating. These teams to give them more perspective about the business. I totally ripped that off because it just struck me as a CMO as being like, so many of these problems, you could just do a search and replace and get to marketing.[00:21:32] And so I started doing the same thing and I’ve made that suggestion to different CDOs, some of whom have actually done it. So it’s just kind of one of those things where you have to say, I need to know more. So this whole culture of being a specialist is changing from.[00:21:53] This, which, this is enough, this is okay , I’m making a vertical sign with my hand, to a T shaped thing, where the T is all about context. It’s all about everything. That’s not part of your. Profession[00:22:09] John Thompson: Yeah, well, I’m going to say that here’s another book that you should have your hands on. This is Aristotle. We can forget about Socrates. Aristotle’s the name. But you know. But , Bill’s always talking about Socrates. I’m an Aristotle guy myself. So, you[00:22:23] Bill Schmarzo: Okay, well I Socrates had a better jump shot. I’m sorry. He could really nail that[00:22:28] John Thompson: true. It’s true. Absolutely. Well, getting back , to the theme of the discussion, in 1 of the teams that I had at CSL bearing, which is an Australian company there in Melbourne, I took my data science team and I brought in speech coaches.[00:22:45] Presentation coaches people who understand business, people who understood how to talk about different things. And I ran them through a battery of classes. And I told them, you’re going to be in front of the CEO, you’re going to be in front of the EVP of finance, you’re going to be in front of all these different people, and you need to have the confidence to speak their language.[00:23:07] Whenever we had meetings, we talk data science talk, we talk data and integration and vectors and, algorithms and all that kind of stuff. But when we were in the finance meeting, we talked finance. That’s all we talked. And whenever we talked to anybody, we denominated all our conversations in money.[00:23:25] Whether it was drachma, yen, euros, pounds, whatever it was, we never talked about speeds and feeds and accuracy and results. We always talked about money. And if it didn’t make money, we didn’t do it. So, the other thing that we did that really made a difference was that when the data scientists and data scientists hate this, When they went into a meeting, and I was there, and even if I wasn’t there, they were giving the end users and executives recommendations.[00:23:57] They weren’t going in and showing a model and a result and walking out the door and go, well, you’re smart enough to interpret it. No, they’re not smart enough to interpret it. They actually told the marketing people. These are the 3 things you should do. And if your data scientists are not being predictive and recommending actions, they’re not doing their job.[00:24:18] Dr Genevieve Hayes: What’s the, so what test At the end of everything, you have to be able to say, so what does this mean to whoever your audience is?[00:24:25] Mark Stouse: That’s right. I mean, you have to be able to say well, if the business team can’t look at your output, your data science output, and know what to do with it, and know how to make a better decision, it’s like everything else that you did didn’t happen. I mean it, early in proof, we were working on. UX, because it became really clear that what was good for a data scientist wasn’t working. For like everybody else. And so we did a lot of research into it. Would you believe that business teams are okay with charts? Most of them, if they see a graph, they just totally freeze and it’s not because they’re stupid.[00:25:08] It’s because so many people had a bad experience in school with math. This is a psychological, this is an intellectual and they freeze. So in causal analytics, one of the challenges is that, I mean, this is pretty much functioning most of the time anyway, on time series data, so there is a graph,[00:25:31] this is kind of like a non negotiable, but we had a customer that was feeding data so fast into proof that the automatic recalc of the model was happening like lickety split. And that graph all of a sudden looked exactly like a GPS. It worked like a GPS. In fact, it really is a GPS. And so as soon as we stylized.[00:26:01] That graph to look more like a GPS track, all of a sudden everybody went, Oh,[00:26:10] Dr Genevieve Hayes: So I got rid of all the PTSD from high school maths and made it something familiar.[00:26:16] Mark Stouse: right. And so it’s very interesting. Totally,[00:26:21] Bill Schmarzo: very much mirrors what mark talked about So when I was the new vice president of advertiser analytics at yahoo we were trying to solve a problem to help our advertisers optimize their spend across the yahoo ad network and because I didn’t know anything about that industry We went out and my team went out and interviewed all these advertisers and their agencies.[00:26:41] And I was given two UEX people and zero data. Well, I did have one data scientist. But I had mostly UX people on this project. My boss there said, you’re going to want UX people. I was like, no, no, I need analytics. He said, trust me in UX people and the process we went through and I could spend an hour talking about the grand failure of the start and the reclamation of how it was saved at a bar after too many drinks at the Waldorf there in New York.[00:27:07] But what we’ve realized is that. For us to be effective for our target audience was which was media planners and buyers and campaign managers. That was our stakeholders. It wasn’t the analysts, it was our stakeholders. Like Mark said, the last thing they wanted to see was a chart. And like John said, what they wanted the application to do was to tell them what to do.[00:27:27] So we designed this user interface that on one side, think of it as a newspaper, said, this is what’s going on with your campaign. This audience is responding. These sites are this, these keywords are doing this. And the right hand side gave recommendations. We think you should move spend from this to this.[00:27:42] We think you should do this. And it had three buttons on this thing. You could accept it and it would kick into our advertising network and kick in. And we’d measure how effective that was. They could reject it. They didn’t think I was confident and we’d measure effectiveness or they could change it. And we found through our research by putting that change button in there that they had control, that adoption went through the roof.[00:28:08] When it was either yes or no, adoption was really hard, they hardly ever used it. Give them a chance to actually change it. That adoption went through the roof of the technology. So what John was saying about, you have to be able to really deliver recommendations, but you can’t have the system feel like it’s your overlord.[00:28:27] You’ve got to be like it’s your Yoda on your shoulder whispering to your saying, Hey, I think you should do this. And you’re going, eh, I like that. No, I don’t like this. I want to do that instead. And when you give them control, then the adoption process happens much smoother. But for us to deliver those kinds of results, we had to know in detail, what decisions are they trying to make?[00:28:45] How are they going to measure success? We had to really understand their business. And then the data and the analytics stuff was really easy because we knew what we had to do, but we also knew what we didn’t have to do. We didn’t have to boil the ocean. We were trying to answer basically 21 questions.[00:29:01] The media planners and buyers and the campaign managers had 21 decisions to make and we built analytics and recommendations for each Of those 21[00:29:10] John Thompson: We did the same thing, you know, it blends the two stories from Mark and Bill, we were working at CSL and we were trying to give the people tools to find the best next location for plasma donation centers. And, like you said, there were 50, 60 different salient factors they had, and when we presented to them in charts and graphs, Information overload.[00:29:34] They melted down. You can just see their brains coming out of their ears. But once we put it on a map and hit it all and put little dials that they could fiddle with, they ran with it.[00:29:49] Bill Schmarzo: brilliant[00:29:50] Mark Stouse: totally, totally agree with that. 100% you have to know what to give people and you have to know how to give them, control over some of it, nobody wants to be an automaton. And yet also they will totally lock up if you just give them the keys to the kingdom. Yeah.[00:30:09] Dr Genevieve Hayes: on what you’ve been saying in the discussion so far, what I’m hearing is that the critical difference between what data scientists think their role is and what business leaders actually need is the data scientists is. Well, the ones who aren’t performing well think their role is to just sit there in a back room and do technical work like they would have done in their university assignments.[00:30:33] What the business leaders need is someone who can work with them, ask the right questions in order to understand the needs of the business. make recommendations that answer those questions. But in answering those questions, we’re taking a data informed approach rather than a data driven approach. So you need to deliver the answers to those questions in such a way that you’re informing the business leaders and you’re delivering it in a way that Delivers the right user experience for them, rather than the user experience that the data scientists might want, which would be your high school maths graphs.[00:31:17] Is that a good summary?[00:31:20] John Thompson: Yeah, I think that’s a really good summary. You know, one of the things that Bill and I, and I believe Mark understands is we’re all working to change, you know, Bill and I are teaching at universities in the United States. I’m on the advisory board of about five. Major universities. And whenever I go in and talk to these universities and they say, Oh, well, we teach them, these algorithms and these mathematical techniques and these data science and this statistics.[00:31:48] And I’m like, you are setting these people up for failure. You need to have them have presentation skills, communication skills, collaboration. You need to take about a third of these credits out and change them out for soft skills because you said it Genevieve, the way we train people, young people in undergraduate and graduate is that they have a belief that they’re going to go sit in a room and fiddle with numbers.[00:32:13] That’s not going to be successful.[00:32:16] Mark Stouse: I would give one more point of dimensionality to this, which is a little more human, in some respects, and that is that I think that a lot of data scientists love the fact that they are seen as Merlin’s as shamans. And the problem that I personally witnessed this about two years ago is when you let business leaders persist in seeing you in those terms.[00:32:46] And when all of a sudden there was a major meltdown of some kind, in this case, it was interest rates, and they turn around and they say, as this one CEO said in this meeting Hey, I know you’ve been doing all kinds of really cool stuff back there with AI and everything else. And now I need help.[00:33:08] Okay. And the clear expectation was. I need it now, I need some brilliant insight now. And the answer that he got was, we’re not ready yet. We’re still doing the data management piece. And this CEO dropped the loudest F bomb. That I think I have ever heard from anybody in almost any situation,[00:33:36] and that guy, that data science leader was gone the very next day. Now, was that fair? No. Was it stupid? For the data science leader to say what he said. Yeah, it was really dumb.[00:33:52] Bill Schmarzo: Don’t you call that the tyranny of perfection mark? Is that your term that you always use? is that There’s this idea that I gotta get the data all right first before I can start doing analysis And I think it’s you I hear you say the tyranny of perfection is what hurts You Progress over perfection, learning over absolutes, and that’s part of the challenge is it’s never going to be perfect.[00:34:13] Your data is never going to be perfect, you got to use good enough data[00:34:17] Mark Stouse: It’s like the ultimate negative version of the waterfall.[00:34:22] John Thompson: Yeah,[00:34:23] Mark Stouse: yet we’re all supposedly living in agile paradise. And yet very few people actually operate[00:34:30] John Thompson: that’s 1 thing. I want to make sure that we get in the recording is that I’ve been on record for years and I’ve gone in front of audiences and said this over and over again. Agile and analytics don’t mix that is. There’s no way that those 2 go together. Agile is a babysitting methodology. Data scientists don’t do well with it.[00:34:50] So, you know, I’ll get hate mail for that, but I will die on that hill. But, the 1 thing that, Mark, I agree with 100 percent of what you said, but the answer itself or the clue itself is in the title. We’ve been talking about. It’s data science. It’s not magic. I get people coming and asking me to do magical things all the time.[00:35:11] And I’m like. Well, have you chipped all the people? Do you have all their brain waves? If you have that data set, I can probably analyze it. But, given that you don’t understand what’s going on inside their cranium, that’s magic. I can’t do that. We had the same situation when COVID hit, people weren’t leaving their house.[00:35:29] So they’re not donating plasma. It’s kind of obvious, so, people came to us and said, Hey, the world’s gone to hell in a handbasket in the last two weeks. The models aren’t working and I’m like, yeah, the world’s changed, give us four weeks to get a little bit of data.[00:35:43] We’ll start to give you a glimmer of what this world’s going to look like two months later. We had the models working back in single digit error terms, but when the world goes haywire, you’re not going to have any data, and then when the executives are yelling at you, you just have to say, look, this is modeling.[00:36:01] This is analytics. We have no precedent here.[00:36:05] Bill Schmarzo: to build on what John was just saying that the challenge that I’ve always seen with data science organizations is if they’re led by somebody with a software development background, getting back to the agile analytics thing, the problem with software development. is that software development defines the requirements for success.[00:36:23] Data science discovers them. It’s hard to make that a linear process. And so, if you came to me and said, Hey, Schmarz, you got a big, giant data science team. I had a great data science team at Hitachi. Holy cow, they were great. You said, hey, we need to solve this problem. When can you have it done?[00:36:38] I would say, I need to look at the problem. I need to start exploring it. I can’t give you a hard date. And that drove software development folks nuts. I need a date for when I, I don’t know, cause I’ve got to explore. I’m going to try lots of things. I’m going to fail a lot.[00:36:51] I’m going to try things that I know are going to fail because I can learn when I fail. And so, when you have an organization that has a software development mindset, , like John was talking about, they don’t understand the discovery and learning process that the data science process has to go through to discover the criteria for success.[00:37:09] Mark Stouse: right. It’s the difference between science and engineering.[00:37:13] John Thompson: Yes, exactly. And 1 of the things, 1 of the things that I’ve created, it’s, you know, everybody does it, but I have a term for it. It’s a personal project portfolio for data scientists. And every time I’ve done this and every team. Every data scientist has come to me individually and said, this is too much work.[00:37:32] It’s too hard. I can’t[00:37:34] Bill Schmarzo: Ha, ha, ha,[00:37:35] John Thompson: three months later, they go, this is the only way I want to work. And what you do is you give them enough work so when they run into roadblocks, they can stop working on that project. They can go out and take a swim or work on something else or go walk their dog or whatever.[00:37:53] It’s not the end of the world because the only project they’re working on can’t go forward. if they’ve got a bunch of projects to time slice on. And this happens all the time. You’re in, team meetings and you’re talking and all of a sudden the data scientist isn’t talking about that forecasting problem.[00:38:09] It’s like they ran into a roadblock. They hit a wall. Then a week later, they come in and they’re like, Oh, my God, when I was in the shower, I figured it out. You have to make time for cogitation, introspection, and eureka moments. That has to happen in data science.[00:38:28] Bill Schmarzo: That is great, John. I love that. That is wonderful.[00:38:30] Mark Stouse: And of course the problem is. Yeah. Is that you can’t predict any of that, that’s the part of this. There’s so much we can predict. Can’t predict that.[00:38:42] Bill Schmarzo: you know what you could do though? You could do Mark, you could prescribe that your data science team takes multiple showers every day to have more of those shower moments. See, that’s the problem. I see a correlation. If showers drive eureka moments, dang it.[00:38:54] Let’s give him more showers.[00:38:56] John Thompson: Yep. Just like firemen cause fires[00:38:59] Mark Stouse: Yeah, that’s an interesting correlation there, man.[00:39:05] Dr Genevieve Hayes: So, if businesses need something different from what the data scientists are offering, why don’t they just articulate that in the data scientist’s role description?[00:39:16] John Thompson: because they don’t know they need it.[00:39:17] Mark Stouse: Yeah. And I think also you gotta really remember who you’re dealing with here. I mean, the background of the average C suite member is not highly intellectual. That’s not an insult, that’s just they’re not deep thinkers. They don’t think a lot. They don’t[00:39:37] John Thompson: that with tech phobia.[00:39:38] Mark Stouse: tech phobia and a short termism perspective.[00:39:43] That arguably is kind of the worst of all the pieces.[00:39:48] John Thompson: storm. It’s a[00:39:49] Mark Stouse: It is, it is a[00:39:50] John Thompson: know, I, I had, I’ve had CEOs come to me and say, we’re in a real crisis here and you guys aren’t helping. I was like, well, how do you know we’re not helping? You never talked to us. And, in this situation, we had to actually analyze the entire problem and we’re a week away from making recommendations.[00:40:08] And I said that I said, we have an answer in 7 days. He goes, I need an answer today. I said, well, then you should go talk to someone else because in 7 days, I’ll have it. But now I don’t. So, I met with him a week later. I showed them all the data, all the analytics, all the recommendations. And they said to me, we don’t really think you understand the business well enough.[00:40:27] We in the C suite have looked at it and we don’t think that this will solve it. And I’m like, okay, fine, cool. No problem. So I left, and 2 weeks later, they called me in and said, well, we don’t have a better idea. So, what was that you said? And I said, well, we’ve coded it all into the operational systems.[00:40:43] All you have to do is say yes. And we’ll turn it on and it was 1 of the 1st times and only times in my life when the chart was going like this, we made all the changes and it went like that. It was a perfect fit. It worked like a charm and then, a month later, I guess it was about 6 months later, the CEO came around and said, wow, you guys really knew your stuff.[00:41:07] You really were able to help us. Turn this around and make it a benefit and we turned it around faster than any of the competitors did. And then he said, well, what would you like to do next? And I said, well, I resigned last week. So, , I’m going to go do it somewhere else.[00:41:22] And he’s like, what? You just made a huge difference in the business. And I said, yeah, you didn’t pay me anymore. You didn’t recognize me. And I’ve been here for nearly 4 years, and I’ve had to fight you tooth and nail for everything. I’m tired of it.[00:41:34] Mark Stouse: Yeah. That’s what’s called knowing your value. One of the things that I think is so ironic about this entire conversation is that if any function has the skillsets necessary to forecast and demonstrate their value as multipliers. Of business decisions, decision quality, decision outcomes it’s data science.[00:42:05] And yet they just kind of. It’s like not there. And when you say that to them, they kind of look at you kind of like, did you really just say that, and so it is, one of the things that I’ve learned from analytics is that in the average corporation, you have linear functions that are by definition, linear value creators.[00:42:32] Sales would be a great example. And then you have others that are non linear multipliers. Marketing is one, data science is another, the list is long, it’s always the non linear multipliers that get into trouble because they don’t know how to show their value. In the same way that a linear creator can show it[00:42:55] John Thompson: And I think that’s absolutely true, Mark. And what I’ve been saying, and Bill’s heard this until he’s sick of it. Is that, , data science always has to be denominated in currency. Always, if you can’t tell them in 6 months, you’re going to double the sales or in 3 months, you’re going to cut cost or in, , 5 months, you’re going to have double the customers.[00:43:17] If you’re not denominating that in currency and whatever currency they care about, you’re wasting your time.[00:43:23] Dr Genevieve Hayes: The problem is, every single data science book tells you that the metrics to evaluate models by are, precision, recall, accuracy, et[00:43:31] John Thompson: Yeah, but that’s technology. That’s not business.[00:43:34] Dr Genevieve Hayes: exactly. I’ve only ever seen one textbook where they say, those are technical metrics, but the metrics that really count are the business metrics, which are basically dollars and cents.[00:43:44] John Thompson: well, here’s the second one that says it.[00:43:46] Dr Genevieve Hayes: I will read that. For the audience it’s Business Analytics Teams by John Thompson.[00:43:51] John Thompson: building analytics[00:43:52] Dr Genevieve Hayes: Oh, sorry, Building[00:43:54] Mark Stouse: But, but I got to tell you seriously, the book that John wrote that everybody needs to read in business. Okay. Not just data scientists, but pretty much everybody. Is about causal AI. And it’s because almost all of the questions. In business are about, why did that happen? How did it happen? How long did it take for that to happen?[00:44:20] It’s causal. And so, I mean, when you really look at it that way and you start to say, well, what effects am I causing? What effects is my function causing, all of a sudden the scales kind of have a way of falling away from your eyes and you see things. Differently.[00:44:43] John Thompson: of you to say that about that book. I appreciate that.[00:44:46] Mark Stouse: That kick ass book, kick[00:44:48] John Thompson: Well, thank you. But, most people don’t understand that we’ve had analytical or foundational AI for 70 years. We’ve had generative AI for two, and we’ve had causal for a while, but only people understand it are the people on this call and Judea Pearl and maybe 10 others in the world, but we’re moving in a direction where those 3 families of AI are going to be working together in what I’m calling composite AI, which is the path to artificial, or as Bill says, average general intelligence or AGI.[00:45:24] But there are lots of eight eyes people talk about it as if it’s one thing and it’s[00:45:29] Mark Stouse: Yeah, correct. That’s right.[00:45:31] Dr Genevieve Hayes: I think part of the problem with causal AI is it’s just not taught in data science courses.[00:45:37] John Thompson: it was not taught anywhere. The only place it’s taught is UCLA.[00:45:40] Mark Stouse: But the other problem, which I think is where you’re going with it Genevieve is even 10 years ago, they weren’t even teaching multivariable linear regression as a cornerstone element of a data science program. So , they basically over rotated and again, I’m not knocking it.[00:46:01] I’m not knocking machine learning or anything like that. Okay. But they over rotated it and they turned it into some sort of Omni tool, that could do it all. And it can’t do it all.[00:46:15] Dr Genevieve Hayes: think part of the problem is the technical side of data science is the amalgamation of statistics and computer science . But many data science university courses arose out of the computer science departments. So they focused on the machine learning courses whereas many of those things like.[00:46:34] multivariable linear analysis and hypothesis testing, which leads to things like causal AI. They’re taught in the statistics courses that just don’t pop up in the data science programs.[00:46:46] Mark Stouse: Well, that’s certainly my experience. I teach at USC in the grad school and that’s the problem in a nutshell right there. In fact, we’re getting ready to have kind of a little convocation in LA about this very thing in a couple of months because it’s not sustainable.[00:47:05] Bill Schmarzo: Well, if you don’t mind, I’m going to go back a second. We talked about, measuring success as currency. I’m going to challenge that a little bit. We certainly need to think about how we create value, and value isn’t just currency. John held up a book earlier, and I’m going to hold up one now, Wealth of Nations,[00:47:23] John Thompson: Oh yeah.[00:47:25] Bill Schmarzo: Page 28, Adam Smith talks about value he talks about value creation, and it isn’t just about ROI or net present value. Value is a broad category. You got customer value, employee value, a partner stakeholder. You have society value, community value of environmental value.[00:47:43] We have ethical value. And as we look at the models that we are building, that were guided or data science teams to build, we need to broaden the definition of value. It isn’t sufficient if we can drive ROI, if it’s destroying our environment and putting people out of work. We need to think more holistically.[00:48:04] Adam Smith talks about this. Yeah, 1776. Good year, by the way, it’s ultimate old school, but it’s important when we are As a data science team working with the business that we’re broadening their discussions, I’ve had conversations with hospitals and banks recently. We run these workshops and one of the things I always do, I end up pausing about halfway through the workshop and say, what are your desired outcomes from a community perspective?[00:48:27] You sit inside a community or hospital. You have a community around you, a bank, you have a community around you. What are your desired outcomes for that community? How are you going to measure success? What are those KPIs and metrics? And they look at me like I got lobsters crawling out of my ears.[00:48:40] The thing is is that it’s critical if we’re going to Be in champion data science, especially with these tools like these new ai tools causal predictive generative autonomous, these tools allow us to deliver a much broader range of what value is And so I really rail against when somebody says, you know, and not trying to really somebody here but You know, we gotta deliver a better ROI.[00:49:05] How do you codify environmental and community impact into an ROI? Because ROI and a lot of financial metrics tend to be lagging indicators. And if you’re going to build AI models, you want to build them on leading indicators.[00:49:22] Mark Stouse: It’s a lagging efficiency metric,[00:49:24] Bill Schmarzo: Yeah, exactly. And AI doesn’t do a very good job of optimizing what’s already happened.[00:49:29] That’s not what it does.[00:49:30] John Thompson: sure.[00:49:31] Bill Schmarzo: I think part of the challenge, you’re going to hear this from John and from Mark as well, is that we broaden this conversation. We open our eyes because AI doesn’t need to just deliver on what’s happened in the past, looks at the historical data and just replicates that going forward.[00:49:45] That leads to confirmation bias of other things. We have a chance in AI through the AI utility function to define what it is we want our AI models to do. from environmental, society, community, ethical perspective. That is the huge opportunity, and Adam Smith says that so.[00:50:03] John Thompson: There you go. Adam Smith. I love it. Socrates, Aristotle, Adam[00:50:08] Bill Schmarzo: By the way, Adam Smith motivated this book that I wrote called The Economics of Data Analytics and Digital Transformation I wrote this book because I got sick and tired of walking into a business conversation and saying, Data, that’s technology. No, data, that’s economics.[00:50:25] Mark Stouse: and I’ll tell you what, you know what, Genevieve, I’m so cognizant of the fact in this conversation that the summer can’t come fast enough when I too will have a book,[00:50:39] John Thompson: yay.[00:50:41] Mark Stouse: yeah, I will say this, One of the things that if you use proof, you’ll see this, is that there’s a place where you can monetize in and out of a model, but money itself is not causal. It’s what you spend it on. That’s either causal or in some cases, not[00:51:01] That’s a really, really important nuance. It’s not in conflict with what John was saying about monetizing it. And it’s also not in conflict with what. My friend Schmarrs was saying about, ROI is so misused as a term in business. It’s just kind of nuts.[00:51:25] It’s more like a shorthand way of conveying, did we get value[00:51:31] John Thompson: yeah. And the reason I say that we denominated everything in currency is that’s generally one of the only ways. to get executives interested. If you go in and say, Oh, we’re going to improve this. We’re going to improve that. They’re like, I don’t care. If I say this project is going to take 6 months and it’s going to give you 42 million and it’s going to cost you nothing, then they’re like, tell me more, and going back to what Bill had said earlier, we need to open our aperture on what we do with these projects when we were at Dell or Bill and I swapped our times at Dell, we actually did a project with a hospital system in the United States and over 2 years.[00:52:11] We knocked down the incidence of post surgical sepsis by 72%. We saved a number of lives. We saved a lot of money, too, but we saves people’s lives. So analytics can do a lot. Most of the people are focused on. Oh, how fast can we optimize the search engine algorithm? Or, how can we get the advertisers more yield or more money?[00:52:32] There’s a lot of things we can do to make this world better. We just have to do it.[00:52:36] Mark Stouse: The fastest way to be more efficient is to be more effective, right? I mean, and so when I hear. CEOs and CFOs, because those are the people who use this language a lot. Talk about efficiency. I say, whoa, whoa, hold on. You’re not really talking about efficiency. You’re talking about cost cutting.[00:52:58] Those two things are very different. And it’s not that you shouldn’t cut costs if you need to, but it’s not efficiency. And ultimately you’re not going to cut your way into better effectiveness. It’s just not the way things go.[00:53:14] John Thompson: Amen.[00:53:15] Mark Stouse: And so, this is kind of like the old statement about physicists,[00:53:18] if they’re physicists long enough, they turn into philosophers. I think all three of us, have that going on. Because we have seen reality through a analytical lens for so long that you do actually get a philosophy of things.[00:53:38] Dr Genevieve Hayes: So what I’m hearing from all of you is that for data scientists to create value for the businesses that they’re working for, they need to start shifting their approach to basically look at how can we make the businesses needs. And how can we do that in a way that can be expressed in the business’s language, which is dollars and cents, but also, as Bill pointed out value in terms of the community environment.[00:54:08] So less financially tangible points of view.[00:54:11] Bill Schmarzo: And if I could just slightly add to that, I would say first thing that they need to do is to understand how does our organization create value for our constituents and stakeholders.[00:54:22] Start there. Great conversation. What are our desired outcomes? What are the key decisions? How do we measure success? If we have that conversation, by the way, it isn’t unusual to have that conversation with the business stakeholders and they go I’m not exactly sure.[00:54:37] John Thompson: I don’t know how that works.[00:54:38] Bill Schmarzo: Yeah. So you need to find what are you trying to improve customer retention? You’re trying to increase market share. What are you trying to accomplish and why and how are you going to measure success? So the fact that the data science team is asking that question, because like John said, data science can solve a whole myriad of problems.[00:54:54] It isn’t that it can’t solve. It can solve all kinds. That’s kind of the challenge. So understanding what problems we want to solve starts by understanding how does your organization create value. If you’re a hospital, like John said, reducing hospital acquired infections, reducing long term stay, whatever it might be.[00:55:09] There are some clear goals. Processes initiatives around which organizations are trying to create value[00:55:18] Dr Genevieve Hayes: So on that note, what is the single most important change our listeners could make tomorrow to accelerate their data science impact and results?[00:55:28] John Thompson: I’ll go first. And it’s to take your data science teams and not merge them into operational teams, but to introduce the executives that are in charge of these areas and have them have an agreement that they’re going to work together. Start there.[00:55:46] Bill Schmarzo: Start with how do you how does the organization create value? I mean understand that fundamentally ask those questions and keep asking until you find somebody in the organization who can say we’re trying to do this[00:55:57] Mark Stouse: to which I would just only add, don’t forget the people are people and they all have egos and they all want to appear smarter and smarter and smarter. And so if you help them do that, you will be forever in there must have list, it’s a great truth that I have found if you want to kind of leverage bills construct, it’s the economies of ego.[00:56:24] Bill Schmarzo: I like[00:56:24] John Thompson: right, Mark, wrap this up. When’s your book coming out? What’s the title?[00:56:28] Mark Stouse: It’s in July and I’ll be shot at dawn. But if I tell you the title, but so I interviewed several hundred fortune, 2000 CEOs and CFOs about how they see go to market. The changes that need to be made in go to market. The accountability for it all that kind of stuff. And so the purpose of this book really in 150, 160 pages is to say, Hey, they’re not all correct, but this is why they’re talking to you the way that they’re talking to you, and this is why they’re firing.[00:57:05] People in go to market and particularly in B2B at an unprecedented rate. And you could, without too much deviation, do a search and replace on marketing and sales and replace it with data science and you’d get largely the same stuff. LinkedIn,[00:57:25] Dr Genevieve Hayes: for listeners who want to get in contact with each of you, what can they do?[00:57:29] John Thompson: LinkedIn. John Thompson. That’s where I’m at.[00:57:32] Mark Stouse: Mark Stouse,[00:57:34] Bill Schmarzo: And not only connect there, but we have conversations all the time. The three of us are part of an amazing community of people who have really bright by diverse perspectives. And we get into some really great conversations. So not only connect with us, but participate, jump in. Don’t be afraid.[00:57:51] Dr Genevieve Hayes: And there you have it, another value packed episode to help you turn your data skills into serious clout, cash, and career freedom. If you found today’s episode useful and think others could benefit, please leave us a rating and review on your podcast platform of choice. That way we’ll be able to reach more data scientists just like you.[00:58:11] Thanks for joining me today, Bill, Mark, and John.[00:58:16] Mark Stouse: Great being with[00:58:16] John Thompson: was fun.[00:58:18] Dr Genevieve Hayes: And for those in the audience, thanks for listening. I’m Dr. Genevieve Hayes, and this has been value driven data science. The post Episode 53: A Wake-Up Call from 3 Tech Leaders on Why You're Failing as a Data Scientist first appeared on Genevieve Hayes Consulting and is written by Dr Genevieve Hayes.

Master of Life Awareness
"The Book of Why" by Judea Pearl & Dana Mackenzie - Book PReview - The New Science of Cause and Effect

Master of Life Awareness

Play Episode Listen Later Dec 30, 2024 22:22


The Book of Why by Judea Pearl & Dana Mackenzie reminds us that correlation is not causation. The causal revolution, instigated by Judea Pearl and his colleagues, has cut through a century of confusion and established causality -- the study of cause and effect -- on a firm scientific basis. It lets us explore the world that is and the worlds that could have been. It shows us the essence of human thought and key to artificial intelligence.  The New Science of Cause and Effect "The Book of Why" by Judea Pearl & Dana Mackenzie - Book PReview Book of the Week - BOTW - Season 7 Book 52 Buy the book on Amazon https://amzn.to/3BSL2YB GET IT. READ :) #why #causality #awareness  FIND OUT which HUMAN NEED is driving all of your behavior http://6-human-needs.sfwalker.com/ Human Needs Psychology + Emotional Intelligence + Universal Laws of Nature = MASTER OF LIFE AWARENESS https://www.sfwalker.com/master-life-awareness --- Support this podcast: https://podcasters.spotify.com/pod/show/sfwalker/support

The Nonlinear Library
EA - Apply to Aether - Independent LLM Agent Safety Research Group by RohanS

The Nonlinear Library

Play Episode Listen Later Aug 24, 2024 13:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to Aether - Independent LLM Agent Safety Research Group, published by RohanS on August 24, 2024 on The Effective Altruism Forum. The basic idea Aether will be a small group of talented early-career AI safety researchers with a shared research vision who work full-time with mentorship on their best effort at making AI go well. That research vision will broadly revolve around the alignment, control, and evaluation of LLM agents. There is a lot of latent talent in the AI safety space, and this group will hopefully serve as a way to convert some of that talent into directly impactful work and great career capital. Get involved! 1. Submit a short expression of interest here by Fri, Aug 23rd at 11:59pm PT if you would like to contribute to the group as a full-time in-person researcher, part-time / remote collaborator, or advisor. (Note: Short turnaround time!) 2. Apply to join the group here by Sat, Aug 31st at 11:59pm PT. 3. Get in touch with Rohan at rs4126@columbia.edu with any questions. Who are we? Team members so far Rohan Subramani I recently completed my undergrad in CS and Math at Columbia, where I helped run an Effective Altruism group and an AI alignment group. I'm now interning at CHAI. I've done several technical AI safety research projects in the past couple years. I've worked on comparing the expressivities of objective-specification formalisms in RL (at AI Safety Hub Labs, now called LASR Labs), generalizing causal games to better capture safety-relevant properties of agents (in an independent group), corrigibility in partially observable assistance games (my current project at CHAI), and LLM instruction-following generalization (part of an independent research group). I've been thinking about LLM agent safety quite a bit for the past couple of months, and I am now also starting to work on this area as part of my CHAI internship. I think my (moderate) strengths include general intelligence, theoretical research, AI safety takes, and being fairly agentic. A relevant (moderate) weakness of mine is programming. I like indie rock music :). Max Heitmann I hold an undergraduate master's degree (MPhysPhil) in Physics and Philosophy and a postgraduate master's degree (BPhil) in Philosophy from Oxford University. I collaborated with Rohan on the ASH Labs project ( comparing the expressivities of objective-specification formalisms in RL), and have also worked for a short while at the Center for AI Safety (CAIS) under contract as a ghostwriter for the AI Safety, Ethics, and Society textbook. During my two years on the BPhil, I worked on a number of AI safety-relevant projects with Patrick Butlin from FHI. These were focussed on deep learning interpretability, the measurement of beliefs in LLMs, and the emergence of agency in AI systems. In my thesis, I tried to offer a theory of causation grounded in statistical mechanics, and then applied this theory to vindicate the presuppositions of Judea Pearl-style causal modeling and inference. Advisors Erik Jenner and Francis Rhys Ward have said they're happy to at least occasionally provide feedback for this research group. We will continue working to ensure this group receives regular mentorship from experienced researchers with relevant background. We are highly prioritizing working out of an AI safety office because of the informal mentorship benefits this brings. Research agenda We are interested in conducting research on the risks and opportunities for safety posed by LLM agents. LLM agents are goal-directed cognitive architectures powered by one or more large language models (LLMs). The following diagram (taken from On AutoGPT) depicts many of the basic components of LLM agents, such as task decomposition and memory. We think future generations of LLM agents might significantly alter the safety landscape, for two ...

Causal Bandits Podcast
Free Will, LLMs & Intelligence | Judea Pearl Ep 21 | CausalBanditsPodcast.com

Causal Bandits Podcast

Play Episode Listen Later Aug 12, 2024 55:42 Transcription Available


Send us a Text Message.Meet The Godfather of Modern Causal InferenceHis work has pretty literally changed the course of my life and I am honored and incredibly grateful we could meet for this great conversation in his home in Los AngelesTo anybody who knows something about modern causal inference, he needs no introduction.He loves history, philosophy and music, and I believe it's fair to say that he's the godfather of modern causality.Ladies & gentlemen, please welcome, professor Judea Pearl.Subscribe to never miss an episodeAbout The GuestJudea Pearl is a computer scientist, and a creator of the Structural Causal Model (SCM) framework for causal inference. In 2011, he has been awarded the Turing Award, the highest distinction in computer science, for his pioneering works on Bayesian networks and graphical causal models and "fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning".Connect with Judea:Judea on Twitter/XJudea's webpageAbout The HostAleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.Connect with Alex:Alex on the Internet LinksPearl, J. - "The Book of Why"Kahneman, D. - "ThinkiShould we build the Causal Experts Network?Share your thoughts in the surveyAnything But LawDiscover inspiring stories and insights from entrepreneurs, athletes, and thought leaders.Listen on: Apple Podcasts SpotifySupport the Show.Causal Bandits PodcastCausal AI || Causal Machine Learning || Causal Inference & DiscoveryWeb: https://causalbanditspodcast.comConnect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/Join Causal Python Weekly: https://causalpython.io The Causal Book: https://amzn.to/3QhsRz4

Causal Bandits Podcast
Causal AI & Individual Treatment Effects | Scott Mueller Ep. 20 | CausalBanditsPodcast.com

Causal Bandits Podcast

Play Episode Listen Later Jul 22, 2024 53:29


Send us a Text Message.Can we say something about YOUR personal treatment effect?The estimation of individual treatment effects is the Holy Grail of personalized medicine.It's also extremely difficult.Yet, Scott is not discouraged from studying this topic.In fact, he quit a pretty successful business to study it.In a series of papers, Scott describes how combining experimental and observational data can help us understand individual causal effects.Although this sounds enigmatic to many, the intuition behind this mechanism is simpler than you might think.In the episode we discuss:

Quantitude
S5E23 A Rosetta Stone for DAGs and SEM

Quantitude

Play Episode Listen Later Apr 30, 2024 48:56


In this week's episode Greg and Patrick talk about both structural equation modeling and directed acyclic graphs, or DAGs, where they are similar and where they are different, and try to provide a Rosetta Stone for translating back and forth between the two. Along the way they also discuss pop, garage sales, thinking about excessive thought, roly-polies, potato bugs, been to the cinema, sweet tea, smiley face sub-i, poop hat, the British Museum, fiberglass replicas, love languages, cave drawings, the space-time continuum, coffee shops, a DAGs Czar, We Are The World (with Cyndi Lauper), tennis shoes, and bubblers. Stay in contact with Quantitude! Twitter: @quantitudepod Web page: quantitudepod.org Merch: redbubble.com

Investigando la investigación
301. Más allá de las correlaciones: un viaje hacia el aprendizaje causal, con Jordi Vitrià y Álvaro Parafita

Investigando la investigación

Play Episode Listen Later Mar 29, 2024 49:28


Hoy abordamos un tema fascinante y de creciente importancia: el aprendizaje causal. Nos acompañan Jordi Vitrià, de la Universidad de Barcelona, y Álvaro Parafita, investigador senior en el Barcelona Supercomputing Center, dos expertos en el campo del machine learning y el aprendizaje causal, quienes nos guían por este complejo pero intrigante mundo. El aprendizaje causal se centra en entender las relaciones de causa y efecto más allá de las simples correlaciones estadísticas. Esta aproximación permite a las máquinas tomar decisiones más informadas y justas, impactando positivamente en nuestra vida cotidiana. Durante la conversación, nuestros invitados comparten su transición de trabajar en machine learning clásico a investigar en aprendizaje causal. Esta transición fue motivada por los límites del aprendizaje automático tradicional y la promesa del aprendizaje causal de ofrecer soluciones más robustas y éticas. Se destacó cómo el aprendizaje causal puede abordar problemas de sesgo y discriminación en algoritmos, contribuyendo significativamente a la justicia algorítmica y la toma de decisiones éticas. Exploramos también aplicaciones prácticas del aprendizaje causal en áreas tan variadas como la medicina, la política y la justicia algorítmica. Estas aplicaciones subrayan la importancia de comprender las causas reales detrás de los datos para tomar decisiones informadas y justas. Jordi y Álvaro compartieron desafíos y anécdotas de su investigación, resaltando la importancia de la resiliencia en la ciencia y cómo los aparentes fracasos pueden ser fuentes valiosas de aprendizaje y descubrimiento. Para aquellos interesados en adentrarse en el aprendizaje causal, nuestros invitados recomiendan comenzar con lecturas fundamentales como "The Book of Why" de Judea Pearl, y subrayan la necesidad de paciencia y resiliencia frente a los retos inherentes al campo. Este episodio no solo arroja luz sobre la complejidad y la belleza del aprendizaje causal, sino que también subraya su creciente relevancia en un mundo cada vez más guiado por la tecnología y la inteligencia artificial. Invitamos a nuestros oyentes a explorar más sobre este tema apasionante y a seguir cuestionando cómo las decisiones automatizadas impactan nuestras vidas. Para aquellos interesados en profundizar más, aquí los enlaces a las páginas web de nuestros invitados y sus direcciones de email: Jordi Vitria - https://algorismes.github.io/ - jordi.vitria@ub.edu Álvaro Parafita - https://www.linkedin.com/in/alvaroparafita/ - parafita.alvaro@gmail.com Más info y discusiones sobre este y otros episodios en nuestra comunidad de investigadores en: ⁠⁠⁠⁠⁠⁠⁠⁠https://horacio-ps.com/comunidad --- Send in a voice message: https://podcasters.spotify.com/pod/show/horacio-ps/message

The Mixtape with Scott
S3E3: Carlos Cinelli, Statistician, University of Washington

The Mixtape with Scott

Play Episode Listen Later Jan 23, 2024 64:13


Philosophy of the PodcastWelcome to the Mixtape with Scott, a podcast devoted to hearing the stories of living economists and a non-randomly selected oral history of the economics profession of the last 50 years. Before I introduce this week's guest, I wanted to start off with a quote from a book I'm reading that explains the philosophy of the podcast. “For the large m majority of people, hearing others' stories enables them to see their own experiences in a new, truthful light. They realize — usually instantaneously — that a story another has told is their own story, only with different details. This realization seems to sneak past their defenses. There is something almost irresistible about another person's facing and honoring the truth, without fanfare of any kind, but with courage and clarity and assurance. The other participants feel invited, even emboldened, to stand unflinching before the truth themselves. By opening ourselves even a little to the remarkable spectacle of other people reconsidering their lives, we begin to reconsider our own.” — Terry Warner, Bonds That Make Us FreeThe purpose of the podcast is not to tell the story of living economists. The purpose of the podcast is to hear the stories of living economists as they themselves tell it. It is to make an effort to without judgment just pay attention to the life lived of another person and not make them some non-playable character in the video game of our life. To immature people, others are not real, and the purpose of the podcast is, if for no one else, to listen to people so that they become real, and in that process of listening, for me to be changed.They may sound heavy or it may sound even a little silly. After all, isn't this first and foremost a conversation between two economists? But economists are people first, and the thing I just said is for people. And let's be frank — aren't man of us feeling, at least some of the time, alone in our work? And isn't, at least some of the time, the case that our work is all consuming? I think there are people in my family who still don't understand what my job is as a professor at a university, let alone what my actual research is about. There are colleagues like that too. Many of us are in departments where we may be the only ones in our field, and many of us are studying topics where our networks are thin. And so loneliness is very common. It is common for professors, it is common for students, it is common for people in industry, it is common for people non-profits and it is common for people in government. It is common for people in between jobs. And while the purpose of the podcast is not to alleviate loneliness, as that most likely is only something a person can do for themselves, the purpose is to share in the stories of other people on the hypothesis that that is a gift we give those whose stories we listen to, but it's also maybe moreso the gift we give the deepest part of ourselves. Scott's Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.Carlos Cinelli, PhD Statistics, University of Washington's Statistics DepartmentSo, with that said, let me introduce this week's guest. Carlos Cinelli may seem like a guest who does not quite fit, but his is the story of the economics profession in a couple of ways. First, he is someone who left economics. Carlos was an undergraduate major in economics who then did a masters in economics and after doing so left economics (and econometrics) to become a statistician. The leaving of economics is not the road less traveled. By talking to Carlos, and hearing his story, the hope is that the survivor bias of the podcast guests might be weakened if only a tad bit. But Carlos also fits into one of the broader themes of the podcast which is causal inference. Carlos studied at UCLA under two notable figures in the history of econometrics and causal inference: Ed Leamer in the economics department and Judea Pearl in the computer science department. And Carlos is now an assistant professor at University of Washington in the statistics department whose work consistently moved into domains of relevance in economics, such as his work in the linear of econometric theory and practice by Chris Taber, Emily Oster and others. That work is important and concerns sensitivity analysis with omitted variable bias. And he has also written an excellent paper with Judea Pearl and Andrew Forney detailing precisely the kinds of covariates we should be contemplating when trying to address the claims of unconfoundedness. So without further ado, I will turn it over to Carlos. Thank you again for your support of the podcast. Please like, share and follow!Scott's Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Get full access to Scott's Substack at causalinf.substack.com/subscribe

Many Minds
From the archive: The point of (animal) personality

Many Minds

Play Episode Listen Later Dec 27, 2023 45:06


Hi friends! We've been on hiatus for the fall, but we'll be back with new episodes in January 2024. In the meanwhile, enjoy another favorite from our archives! ---- [originally aired November 2, 2022] Some of us are a little shy; others are sociable. There are those that love to explore the new, and those happy to stick to the familiar. We're all a bit different, in other words—and when I say “we” I don't just mean humans. Over the last couple of decades there's been an explosion of research on personality differences in animals too—in birds, in dogs, in fish, all across the animal kingdom. This research is addressing questions like: What are the ways that individuals of the same species differ from each other? What drives these differences? And is this variation just randomness, some kind of inevitable biological noise, or could it have an evolved function? My guest today is Dr. Kate Laskowski. Kate is an Assistant Professor of Evolution and Ecology at the University of California, Davis. Her lab focuses on fish. They use fish, and especially one species of fish—the Amazon molly—as a model system for understanding animal personality (or as she sometimes calls it “consistent individual behavioral variation”).  In this episode, Kate and I discuss a paper she recently published with colleagues that reviews this booming subfield. We talk about how personality manifests in animals and how it may differ from human personality. We zoom in on what is perhaps the most puzzling question in this whole research area: Why do creatures have personality differences to begin with? Is there a point to all this individual variation, evolutionarily speaking? We discuss two leading frameworks that have tried to answer the question, and then consider some recent studies of Kate's that have added an unexpected twist. On the way, we touch on Darwinian demons, combative anemones, and a research method Kate calls "fish Big Brother." Alright friends, I had fun with this one, and I think you'll enjoy it, too. On to my conversation with Kate Laskowski!   A transcript of this episode is available here.    Notes and links 3:00 – A paper by Dr. Laskowski and a colleague on strong personalities in sticklebacks. 5:30 – The website for the lab that Dr. Laskowski directs at UC-Davis.   7:00 – The paper we focus on—‘Consistent Individual Behavioral Variation: What do we know and where are we going?'—is available here. 11:00 – A brief encyclopedia entry on sticklebacks. 13:00 – A video of two sea anemones fighting. A research article about fighting (and personality) in sea anemones. 15:00 – A classic article reviewing the “Big 5” model in human personality research. 17:00 – The original article proposing five personality factors in animals. 22:30 – A recent special issue on the “Pace-of-Life syndromes” framework. 27:00 – A recent paper on evidence for the “fluctuating selection” idea in great tits. 29:00 – A 2017 paper by Dr. Laskowski and colleagues on “behavioral individuality” in clonal fish raised in near-identical environments. 32:10 – A just-released paper by Dr. Laskowski and colleagues extending their earlier findings on clonal fish. 39:30 – The Twitter account of the Many Birds project. The website for the project.   Dr. Laskowski recommends: Innate, by Kevin Mitchell Why Fish Don't Exist, by Lulu Miller The Book of Why, by Judea Pearl and Dana Mackenzie   Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.  For updates about the show, visit our website or follow us on Twitter: @ManyMindsPod.

Selbstbewusste KI
#23 Christian Hugo Hoffmann: Verantwortung aufteilen in Mensch-Maschine-Teams

Selbstbewusste KI

Play Episode Listen Later Nov 27, 2023 69:52


Der Unternehmer, Philosoph, Ökonom und Publizist Christian Hugo Hoffmann sieht blinde Flecken in der KI-Entwicklung. Gemeinsam sprechen wir über Unterschiede der Intelligenz von Menschen, Tieren und Maschinen und wie diese Differenzen ihn zu seinem neuen Buch motiviert haben. Weiter geht es darum, inwiefern sich die Intelligenzforschung auf falschen Pfaden befindet, um Alleinstellungsmerkmale menschlicher Intelligenz sowie über die Simulationstheorie. Für die Zukunft sieht Christian Hugo Hoffmann die Zuschreibung von Verantwortung nicht nur beim Menschen oder nur der Maschine, sondern auch in Mensch-Maschine-Teams. Autor: Karsten WendlandRedaktion, Aufnahmeleitung und Produktion: Karsten WendlandRedaktionsassistenz: Robin Herrmann Licence: CC-BY In dieser Episode genannte Quellen:  Website von Christian Hugo Hoffmann: https://www.christian-hugo-hoffmann.com The Quest for a Universal Theory of Intelligence: https://www.degruyter.com/document/doi/10.1515/9783110756166/html Human Intelligence and Exceptionalism Revisited by a Philosopher: 100 Years After 'Intelligence and its Measurement': https://www.ingentaconnect.com/content/imp/jcs/2022/00000029/f0020011/art00003 Judea Pearl: http://bayes.cs.ucla.edu/jp_home.html Causality von Judea Pearl: http://bayes.cs.ucla.edu/BOOK-2K/ The Book of Why von Judea Pearl: http://bayes.cs.ucla.edu/WHY/ Yuval Noah Harari: https://www.ynharari.com/de/ Reality Plus von David Chalmers: https://www.suhrkamp.de/buch/david-j-chalmers-realitaet-t-9783518588000 Alpha Go: https://deepmind.google/technologies/alphago/ Technological Brave New World? Eschatological Narratives on Digitization and Their Flaws von Christian Hugo Hoffmann: https://scholarlypublishingcollective.org/psup/posthuman-studies/article-abstract/6/1/53/343530/Technological-Brave-New-World-Eschatological?redirectedFrom=fulltext

LessWrong Curated Podcast
"At 87, Pearl is still able to change his mind" by rotatingpaguro

LessWrong Curated Podcast

Play Episode Listen Later Oct 30, 2023 9:36


Judea Pearl is a famous researcher, known for Bayesian networks (the standard way of representing Bayesian models), and his statistical formalization of causality. Although he has always been recommended reading here, he's less of a staple compared to, say, Jaynes. So the need to re-introduce him. My purpose here is to highlight a soothing, unexpected show of rationality on his part.One year ago I reviewed his last book, The Book of Why, in a failed[1] submission to the ACX book review contest. There I spend a lot of time around what appears to me as a total paradox in a central message of the book, dear to Pearl: that you can't just use statistics and probabilities to understand causal relationships; you need a causal model, a fundamentally different beast. Yet, at the same time, Pearl shows how to implement a causal model in terms of a standard statistical model.Before giving me the time to properly raise all my eyebrows, he then sweepingly connects this insight to Everything Everywhere. In particular, he thinks that machine learning is "stuck on rung one", his own idiomatic expression to say that machine learning algorithms, only combing for correlations in the training data, are stuck at statistics-level reasoning, while causal reasoning resides at higher "rungs" on the "ladder of causation", which can't be reached unless you deliberately employ causal techniques.Source:https://www.lesswrong.com/posts/uFqnB6BG4bkMW23LR/at-87-pearl-is-still-able-to-change-his-mindNarrated for LessWrong by TYPE III AUDIO.Share feedback on this narration.[125+ Karma Post] ✓

Resources Radio
Electrifying Large Vehicles, with Nafisa Lohawala

Resources Radio

Play Episode Listen Later Jul 31, 2023 24:39


In this week's episode, host Kristin Hayes talks with Nafisa Lohawala, a fellow at Resources for the Future who researches the effects of government policies on the transportation sector. Lohawala discusses the findings of a recent report that explores efforts to electrify medium- and heavy-duty vehicle fleets, the opportunities and challenges of electrification as a pathway toward lower transportation-sector emissions, and policies that could aid electrification. References and recommendations: “Medium- and Heavy-Duty Vehicle Electrification: Challenges, Policy Solutions, and Open Research Questions” by Beia Spiller, Nafisa Lohawala, and Emma DeAngeli; https://www.rff.org/publications/reports/medium-and-heavy-duty-vehicle-electrification-challenges-policy-solutions-and-open-research-questions/ Special series on the Common Resources blog: Electrifying Large Vehicles by Emma DeAngeli, Nafisa Lohawala, and Beia Spiller; https://www.resources.org/special-series-electrifying-large-vehicles/ “The Book of Why: The New Science of Cause and Effect” by Judea Pearl and Dana Mackenzie; https://www.hachettebookgroup.com/titles/judea-pearl/the-book-of-why/9780465097616/

The Sam Taylor Podcast
Embracing the Unknown: Exploring philosophy, management and neuroscience with Colin Conrad, IDPhD

The Sam Taylor Podcast

Play Episode Listen Later Jun 29, 2023 70:29


Dr. Colin Conrad is an Assistant Professor at Dalhousie University and a friend of Sam's. He is one of a relatively small number of researchers using EEG and eye tracking effectively combining computer science, philosophy, neuroscience, and management. In this episode, he joined Sam to discuss about his research, give insights to the formation of a new college at Dal, some likes and dislikes of his field and his definition of success. Mentioned in this episode: The Book of Why by Judea Pearl: https://www.amazon.ca/Book-Why-Science-Cause-Effect/dp/046509760X Dr. Colin Conrad's Info: colin.conrad@dal.ca https://www.dal.ca/faculty/management/school-of-information-management/faculty-staff/faculty/colin-conrad.html Vishu's Info: vishu.handa@dal.ca https://www.linkedin.com/in/vishu-handa-0bbba7254/ Sam's Info: samantha.taylor@dal.ca https://www.linkedin.com/in/samantha-taylor-64b93558

Making Sense with Sam Harris
Making Sense of Free Will | Episode 5 of The Essential Sam Harris

Making Sense with Sam Harris

Play Episode Listen Later Feb 14, 2023 44:18


In this episode, we examine the timeless question of “free will”: what constitutes it, what is meant by it, what ought to be meant by it, and, of course, whether we have it at all. We start with the neuroscientist Robert Sapolsky who begins to deflate the widely held intuition and assumption of “libertarian free will” by drawing out a mechanistic and determined description of the universe. We then hear from the philosopher who has long been Sam's intellectual wrestling opponent on this subject, Daniel Dennett. Dennett and Sam spar about definitional and epistemological frameworks of what Dennett insists is “free will,” and what Sam contends could never be. The author and physicist Sean Carroll then engages Sam with more attempts to find a philosophically defensible notion of free will by leaning on the unknowable nature of the universe revealed by quantum mechanics. We then listen in on Sam's engagement with the mathematician and author Judea Pearl who focuses on matters of causation to tease out a freedom of will. After a historical review of Princess Elizabeth's famous exchanges with Rene Descartes, we hear from the biologist Jerry Coyne, who firmly agrees with Sam that a deterministic picture of reality leaves absolutely no room for anything like free will. We then hear from the curiously entertaining mind of comedian and producer Ricky Gervais who was thinking about free will while taking a bath when he decided to phone Sam. We conclude with Sam's own response to concerns that an erasure of free will inevitably result in fatalism, loss of meaning, and passive defeat. Sam insists that the loss of free will actually pushes us in the opposite direction where we begin to see hatred and vengeance as incoherent and start to connect with a deeper and truer sense of genuine compassion.   About the Series Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.

Making Sense with Sam Harris - Subscriber Content

In this episode, we examine the timeless question of “free will”: what constitutes it, what is meant by it, what ought to be meant by it, and, of course, whether we have it at all. We start with the neuroscientist Robert Sapolsky who begins to deflate the widely held intuition and assumption of “libertarian free will” by drawing out a mechanistic and determined description of the universe. We then hear from the philosopher who has long been Sam’s intellectual wrestling opponent on this subject, Daniel Dennett. Dennett and Sam spar about definitional and epistemological frameworks of what Dennett insists is “free will,” and what Sam contends could never be. The author and physicist Sean Carroll then engages Sam with more attempts to find a philosophically defensible notion of free will by leaning on the unknowable nature of the universe revealed by quantum mechanics. We then listen in on Sam’s engagement with the mathematician and author Judea Pearl who focuses on matters of causation to tease out a freedom of will. After a historical review of Princess Elizabeth’s famous exchanges with Rene Descartes, we hear from the biologist Jerry Coyne, who firmly agrees with Sam that a deterministic picture of reality leaves absolutely no room for anything like free will. We then hear from the curiously entertaining mind of comedian and producer Ricky Gervais who was thinking about free will while taking a bath when he decided to phone Sam. We conclude with Sam’s own response to concerns that an erasure of free will inevitably result in fatalism, loss of meaning, and passive defeat. Sam insists that the loss of free will actually pushes us in the opposite direction where we begin to see hatred and vengeance as incoherent and start to connect with a deeper and truer sense of genuine compassion. About the Series Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.

Lexman Artificial
Judea Pearl: Ringgit Retirees, Tanganyikas Psychophysics, and Oddball Novelties

Lexman Artificial

Play Episode Listen Later Feb 6, 2023 5:32


Judea Pearl discusses the pros and cons of retirements in different countries and discusses the factors that contribute to happiness.

Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 3 - When will AGI arrive? - Jack Kendall (CTO, Rain.AI, maker of neural net chips)

Artificial General Intelligence (AGI) Show with Soroush Pour

Play Episode Listen Later Feb 1, 2023 61:34


In this episode, we speak with Rain.AI CTO Jack Kendall about his timelines for the arrival of AGI. He also speaks to how we might get there and some of the implications.Hosted by Soroush Pour. Follow me for more AGI content:Twitter: https://twitter.com/soroushjpLinkedIn: https://www.linkedin.com/in/soroushjp/Show linksJack KendallBio: Jack invented a new method for connecting artificial silicon neurons using coaxial nanowires at the U. Florida before starting Rain as co-founder and CTO.LinkedIn: https://www.linkedin.com/in/jack-kendall-21072887/Website: https://rain.aiFurther resourcesTry out ChatGPT: https://openai.com/blog/chatgpt/Judea Pearl's book, "The Book of Why"[Paper] https://www.deepmind.com/publications/causal-reasoning-from-meta-reinforcement-learning[Paper] Backpropagation and the Brain: https://www.nature.com/articles/s41583-020-0277-3

RECONSIDER with Bill Hartman
Reconsider... Weight & Fat Loss with Bill Hartman | Episode #3

RECONSIDER with Bill Hartman

Play Episode Listen Later Jan 29, 2023 39:42


In this episode Bill and Chris talk about the misconceptions of weight loss and fat loss in the health, wellness and fitness world. Many people go to the gym and try to train themselves for better body composition under poor guidance and through workout models that will more often than not result in injury. If we can understand the first principles of fat loss and what is necessary in order to experience success, we can get a much greater return on our investment of energy and resources. #weightloss #fatlossworkouts #exerciseLIKE the podcast to help other see it COMMENT on YT with any questions you might have and let's start a discussion SHARE it with anyone who you know is stuck in the land of fitness confusionSUBSCRIBE for even more helpful content:YT: https://www.youtube.com/@BillHartmanPT IG: https://www.instagram.com/bill_hartman_pt/ FB: https://www.facebook.com/BillHartmanPT WEB: https://billhartmanpt.com/Podcast audio: https://open.spotify.com/show/7cJM6v5S38RLroac6BQjrd?si=eca3b211dafc4202 https://podcasts.apple.com/us/podcast/reconsider-with-bill-hartman/id1662268221 or download with YTPremium Books mentioned in this episode: The Book of Why by Judea Pearl https://amzn.to/3CNz6Va The One Thing by Greg Keller https://amzn.to/3iGyJ8g All Gain, No Pain by Bill Hartman https://amzn.to/3kHpJjTImportant questions asked in this episode: (0:00) What are we even talking about here? (3:00) What are second and third order consequences? (4:35) Why do we need to continually expand our perspectives? (6:00) What are leading and lagging measures and how do we use them? (10:09) Why are systems better than goals? (11:56) What is the “one thing” that we can do to help with our progress today? (14:05) What is some of the history behind weight loss, fat loss diets and exercise? (18:40) Isn't it possible for us to find whatever result we might want if we look hard enough in the research? (19:45) What is the current confusion and misinformation about weight/fat loss? (23:05) Can we shift away from the idea of weight loss please? (25:24) What is one of the biggest first principles of fat loss training? (26:25) What are two very useful lag measures for fat loss? (27:37) How can thinking algorithmically apply to fat loss workouts? (29:17) What is the problem with just focusing on losing weight? (33:01) Can we get a recap and wrap this up guys? (35:42) Is exercise even a good way to lose fat? (37:29) What about walking and step counting? (39:00) Will Bill say something profound to close this out?Reconsider… is sponsored by Substance Nutrition https://substancenutrition.com/ A healthy brain requires a healthy body. Why not take care of both all at once by using Synthesis protein and Neuro Coffee? Use code RECON at checkout to get free shipping on all of your orders

RECONSIDER with Bill Hartman
RECONsider... No Pain, No Gain with Bill Hartman | Episode #2

RECONSIDER with Bill Hartman

Play Episode Listen Later Jan 15, 2023 27:18


In this episode Bill and Chris talk about the common aphorism “no pain, no gain” as it relates to progress you might see by going to the gym or training. Do we really need blood, sweat and tears for results?As a continuation of the topics discussed in the first episode about mental models and reconsiderations for getting into shape; the guys explore the history of the saying, what the current myths are surrounding it, and offer better questions to ask in order to reach any health/fitness outcome desired. #askbetterquestions(3:15) Who are these guys and what are they talking about?(4:18) Is pain even necessary for progress? (6:48) How do we manage expectations for success?(9:02) What is some history of the saying?(13:24) What sort of test does Bill use to help determine if the sensation someone is feeling is pain/injury or not?(18:43) Is there ever a good type of pain?(21:41) How can we establish safe to fail ranges for ourselves?Books mentioned in this episode:The Book of Why by Judea Pearl https://amzn.to/3CNz6VaThe One Thing by Greg Keller https://amzn.to/3iGyJ8gReconsider… is sponsored by Substance Nutrition www.substancenutrition.com A healthy brain requires a healthy body. Why not take care of both all at once by using Synthesis protein and Neuro Coffee?Use code RECON at checkout to get free shipping on all of your orders Make sure to like, comment and subscribe to the podcast to help us continue to grow and evolve.If you know anyone else struggling to navigate the murky waters of health, wellness and fitness please share this episode.If you would like us to cover a certain topic that you find surrounded with confusion please email us at AskBillHartman@gmail.com and put “Reconsider” in the subject line. You can also leave us a comment if you are watching us on the YouTubes.

Lexman Artificial
Judea Pearl on Remover Lexman Artificial podcast is the new, innovative

Lexman Artificial

Play Episode Listen Later Dec 19, 2022 4:34


Judea Pearl, a renowned computer scientist and artificial intelligence expert, discusses his new book, "Remover". The book discusses methods for removing tarmacadam from road surfaces and other surfaces.

Forecasting Impact
Prof. Galit Shmueli, On causal inference, behavioural modifications, and role of ethics.

Forecasting Impact

Play Episode Listen Later Dec 13, 2022 61:10


In this episode, we spoke to Prof Galit Shmueli, Tsing Hua Distinguished Professor at the Institute of Service Science, and Institute Director at the College of Technology Management, National Tsing Hua University. Galit talked with us about the multi-disciplinary work she has done over the years, as well as the differences between statistical models that are purposed for predicting as opposed to explaining. We also discussed causal inference and how it can be used to estimate behaviour modification by the tech giants. We continued and talked about the ethics and the complexity of that landscape.  Galit's recommended books:  1.    The age of surveillance capitalism, Shoshana Zuboff 2.     Books on causality:     • The book of Why, Dana Mackenzie and Judea Pearl      • Causal Inference in Statistics: A Primer, Judea Pearl, Madelyn Glymour, and Nicholas P. Jewell      • Causality, Judea Pearl  3.     Mostly Harmless Econometrics: An Empiricist's Companion, Joshua D. Angrist, ‎Jörn-Steffen Pischke 

Lexman Artificial
Judea Pearl on Puddling: A Philosophical Investigation into the nature and Value of Imagination

Lexman Artificial

Play Episode Listen Later Nov 9, 2022 4:49


Judea Pearl is a cognitive neuroscientist, philosopher and author who has written extensively on the experience of thought. She talks about her book Puddling: A Philosophical Investigation into the nature and Value of Imagination, and how it connects to our experience as cognitive neuro explorers.

Lexman Artificial
Judea Pearl on the Patriciate

Lexman Artificial

Play Episode Listen Later Nov 7, 2022 4:25


Judea Pearl describes the patriciate and their weekly meetings, as well as the world they live in.

Many Minds
The point of (animal) personality

Many Minds

Play Episode Listen Later Nov 2, 2022 45:06


Some of us are a little shy; others are sociable. There are those that love to explore the new, and those happy to stick to the familiar. We're all a bit different, in other words—and when I say “we” I don't just mean humans. Over the last couple of decades there's been an explosion of research on personality differences in animals too—in birds, in dogs, in fish, all across the animal kingdom. This research is addressing questions like: What are the ways that individuals of the same species differ from each other? What drives these differences? And is this variation just randomness, some kind of inevitable biological noise, or could it have an evolved function? My guest today is Dr. Kate Laskowski. Kate is an Assistant Professor of Evolution and Ecology at the University of California, Davis. Her lab focuses on fish. They use fish, and especially one species of fish—the Amazon molly—as a model system for understanding animal personality (or as she sometimes calls it “consistent individual behavioral variation”).  In this episode, Kate and I discuss a paper she recently published with colleagues that reviews this booming subfield. We talk about how personality manifests in animals and how it may differ from human personality. We zoom in on what is perhaps the most puzzling question in this whole research area: Why do creatures have personality differences to begin with? Is there a point to all this individual variation, evolutionarily speaking? We discuss two leading frameworks that have tried to answer the question, and then consider some recent studies of Kate's that have added an unexpected twist. On the way, we touch on Darwinian demons, combative anemones, and a research method Kate calls "fish Big Brother." Alright friends, I had fun with this one, and I think you'll enjoy it, too. On to my conversation with Kate Laskowski!   A transcript of this episode will be available soon.   Notes and links 3:00 – A paper by Dr. Laskowski and a colleague on strong personalities in sticklebacks. 5:30 – The website for the lab that Dr. Laskowski directs at UC-Davis.   7:00 – The paper we focus on—‘Consistent Individual Behavioral Variation: What do we know and where are we going?'—is available here. 11:00 – A brief encyclopedia entry on sticklebacks. 13:00 – A video of two sea anemones fighting. A research article about fighting (and personality) in sea anemones. 15:00 – A classic article reviewing the “Big 5” model in human personality research. 17:00 – The original article proposing five personality factors in animals. 22:30 – A recent special issue on the “Pace-of-Life syndromes” framework. 27:00 – A recent paper on evidence for the “fluctuating selection” idea in great tits. 29:00 – A 2017 paper by Dr. Laskowski and colleagues on “behavioral individuality” in clonal fish raised in near-identical environments. 32:10 – A just-released paper by Dr. Laskowski and colleagues extending their earlier findings on clonal fish. 39:30 – The Twitter account of the Many Birds project. The website for the project.   Dr. Laskowski recommends: Innate, by Kevin Mitchell Why Fish Don't Exist, by Lulu Miller The Book of Why, by Judea Pearl and Dana Mackenzie   You can read more about Dr. Laskowksi's work on her website and follow her on Twitter.   Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/). You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts. **You can now subscribe to the Many Minds newsletter here!** We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.

Lexman Artificial
Judea Pearl of Solidarity LLC on Camp & Solidity

Lexman Artificial

Play Episode Listen Later Nov 1, 2022 4:49


Judea Pearl of Solidarity LLC talks about liking bloodstones and Quintilian.

Lexman Artificial
Judea Pearl on Triggers and Quellers

Lexman Artificial

Play Episode Listen Later Oct 15, 2022 4:01


Judea Pearl, a world-renowned expert on hate crimes and terrorism, discusses the psychological effects of fear and violence. He explains how these events can lead to PTSD, or post-traumatic stress disorder. In this episode, Pearl talks about the ways in which we can quell the fear that terrorism can generate.

Lexman Artificial
Judea Pearl on Chance and Life

Lexman Artificial

Play Episode Listen Later Oct 11, 2022 4:41


In this episode, Lexman interviews Judea Pearl about the role of chance in human life. They discuss Betjeman and the stages of life, as well as Pearl's unique take on the disposability of human beings.

Lexman Artificial
Judea Pearl on Gastrin, Homer, and Arundel Lampions

Lexman Artificial

Play Episode Listen Later Oct 2, 2022 4:14


Lexman Artificial interviews Judea Pearl, a gastroenterologist and senior associate editor at "The Gastroenterology Journal". They discuss Judea's research on gastrin and how it relates to the Iliad.

The Honest Report
The Violent Consequences of Anti-Israel Hatred: A Fireside Chat with Professor Judea Pearl, Father of Journalist Daniel Pearl

The Honest Report

Play Episode Listen Later Sep 29, 2022 20:45


Few people know the consequences of antisemitism and anti-Israel hatred more than Judea Pearl. His son Daniel was a journalist for the Wall Street Journal newspaper, working in Pakistan, when he was kidnapped and murdered by Al Qaeda in 2002. In his last moments, Daniel was forced by his captors to say on camera "My father is Jewish, my mother is Jewish, I am Jewish," showing the world the brutal face of antisemitism. In this week's podcast, we sit down with Daniel's father Dr. Judea Pearl, a respected computer science professor, for his personal reflections as well as wider perspectives about the state of anti-Israel propaganda and antisemitism in the world today. Welcome to The Honest Report podcast --- Send in a voice message: https://anchor.fm/thehonestreport/message

The Saad Truth with Dr. Saad
My Chat with Computer Scientist Dr. Judea Pearl, Co-Author of The Book of Why (The Saad Truth with Dr. Saad_441)

The Saad Truth with Dr. Saad

Play Episode Listen Later Aug 19, 2022 73:45


Topics covered include causality (causal inferencing), heuristics, artificial intelligence, Bayesian statistics, operations research, optimization, Alan Turing, Daniel Pearl, purpose and meaning in life, and regret. Judea's website: http://bayes.cs.ucla.edu/jp_home.html Note: Apologies for the low audio stemming from my guest. It is tough to always ensure maximal production quality. _______________________________________ If you appreciate my work and would like to support it: https://subscribestar.com/the-saad-truth https://patreon.com/GadSaad https://paypal.me/GadSaad _______________________________________ This clip was posted earlier today (August 19, 2022) on my YouTube channel as THE SAAD TRUTH_1444: https://youtu.be/SVEAZDWQ_lc _______________________________________ The Parasitic Mind: How Infectious Ideas Are Killing Common Sense (paperback edition) was released on October 5, 2021. Order your copy now. https://www.amazon.com/Parasitic-Mind-Infectious-Killing-Common/dp/162157959X/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr= https://www.amazon.ca/Parasitic-Mind-Infectious-Killing-Common/dp/162157959X https://www.amazon.co.uk/Parasitic-Mind-Infectious-Killing-Common/dp/162157959X _______________________________________ Please visit my website gadsaad.com, and sign up for alerts. If you appreciate my content, click on the "Support My Work" button. I count on my fans to support my efforts. You can donate via Patreon, PayPal, and/or SubscribeStar. _______________________________________ Dr. Gad Saad is a professor, evolutionary behavioral scientist, and author who pioneered the use of evolutionary psychology in marketing and consumer behavior. In addition to his scientific work, Dr. Saad is a leading public intellectual who often writes and speaks about idea pathogens that are destroying logic, science, reason, and common sense. _______________________________________

JBS: Jewish Broadcasting Service
In the News: Judea Pearl

JBS: Jewish Broadcasting Service

Play Episode Listen Later Aug 9, 2022 29:23 Very Popular


Judea Pearl, father of murdered journalist, Daniel Pearl z"l, talks about what the recent killing of al-Qaeda leader Ayman al-Zawahiri means for him personally, the concept of justice, as well as his efforts to call attention to 'Zionophobia' and why he says the fight against it is so critical.  With Teisha Bader.

Lexman Artificial
Judea Pearl on Eden and the Espada

Lexman Artificial

Play Episode Listen Later Jul 22, 2022 4:54


Judea Pearl, a Labourite and caregiver, tells the fascinating story of how she and her cavallas came to reside in Eden.

Lexman Artificial
Judea Pearl: Decolorisations, Dorcas, and Ichnite

Lexman Artificial

Play Episode Listen Later Jul 18, 2022 3:28


In this episode, Lexman interviews Judea Pearl, a geologist and historian who has written extensively on the decolorisations of Dorcas. They discuss the process of decolorisations and the origins of ichnite.

Lexman Artificial
Judea Pearl on Egos, Fluorescein, and Satan

Lexman Artificial

Play Episode Listen Later Jul 13, 2022 6:52


Judea Pearl, a philosopher and MIT professor, discusses the concept of egoism. Egos refer to the notion that people have a limited self-interest that affects their interactions with others. He discusses issues such as unobtrusiveness and wheeziness, which can be manifestations of a person's ego. Pearl discusses the relationship between egoism and Satan, and how doles can be used to manipulate others.

mit satan egos judea pearl fluorescein unobtrusiveness
Lexman Artificial
Judea Pearl on the Role of the Theologizer in the Modern Age

Lexman Artificial

Play Episode Listen Later Jun 22, 2022 3:19


In this episode, Judea Pearl discusses the role of the theologizer in the modern age. He talks about how theologizers must be able to manage the tension between historicism and modernity, and how theologizing can be used to make sense of the world.

The Nonlinear Library
AF - Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jun 4, 2022 3:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc, published by johnswentworth on June 4, 2022 on The AI Alignment Forum. There's a common perception that various non-deep-learning ML paradigms - like logic, probability, causality, etc - are very interpretable, whereas neural nets aren't. I claim this is wrong. It's easy to see where the idea comes from. Look at the sort of models in, say, Judea Pearl's work. Like this: It says that either the sprinkler or the rain could cause a wet sidewalk, season is upstream of both of those (e.g. more rain in spring, more sprinkler use in summer), and sidewalk slipperiness is caused by wetness. The Pearl-style framework lets us do all sorts of probabilistic and causal reasoning on this system, and it all lines up quite neatly with our intuitions. It looks very interpretable. The problem, I claim, is that a whole bunch of work is being done by the labels. “Season”, “sprinkler”, “rain”, etc. The math does not depend on those labels at all. If we code an ML system to use this sort of model, its behavior will also not depend on the labels at all. They're just suggestively-named LISP tokens. We could use the exact same math/code to model some entirely different system, like my sleep quality being caused by room temperature and exercise, with both of those downstream of season, and my productivity the next day downstream of sleep. We could just replace all the labels with random strings, and the model would have the same content: Now it looks a lot less interpretable. Perhaps that seems like an unfair criticism? Like, the causal model is doing some nontrivial work, but connecting the labels to real-world objects just isn't the problem it solves? . I think that's true, actually. But connecting the internal symbols/quantities/data structures of a model to external stuff is (I claim) exactly what interpretability is all about. Think about interpretability for deep learning systems. A prototypical example for what successful interpretability might look like is e.g. we find a neuron which robustly lights up specifically in response to trees. It's a tree-detector! That's highly interpretable: we know what that neuron “means”, what it corresponds to in the world. (Of course in practice single neurons are probably not the thing to look at, and also the word “robustly” is doing a lot of subtle work, but those points are not really relevant to this post.) The corresponding problem for a logic/probability/causality-based model would be: take a variable or node, and figure out what thing in the world it corresponds to, ignoring the not-actually-functionally-relevant label. Take the whole system, remove the labels, and try to rederive their meanings. . which sounds basically-identical to the corresponding problem for deep learning systems. We are no more able to solve that problem for logic/probability/causality systems than we are for deep learning systems. We can have a node in our model labeled “tree”, but we are no more (or less) able to check that it actually robustly represents trees than we are for a given neuron in a neural network. Similarly, if we find that it does represent trees and we want to understand how/why the tree-representation works, all those labels are a distraction. One could argue that we're lucky deep learning is winning the capabilities race. At least this way it's obvious that our systems are uninterpretable, that we have no idea what's going on inside the black box, rather than our brains seeing the decorative natural-language name “sprinkler” on a variable/node and then thinking that we know what the variable/node means. Instead, we just have unlabeled nodes - an accurate representation of our actual knowledge of the node's “meaning”. Thanks for listening. To help us out with The Nonlinear Lib...

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
196 | Judea Pearl on Cause and Effect

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later May 9, 2022 76:50 Very Popular


To say that event A causes event B is to not only make a claim about our actual world, but about other possible worlds — in worlds where A didn't happen but everything else was the same, B would not have happened. This leads to an obvious difficulty if we want to infer causes from sets of data — we generally only have data about the actual world. Happily, there are ways around this difficulty, and the study of causal relations is of central importance in modern social science and artificial intelligence research. Judea Pearl has been the leader of the “causal revolution,” and we talk about what that means and what questions remain unanswered.Support Mindscape on Patreon.Judea Pearl received a Ph.D. in electrical engineering from the Polytechnic Institute of Brooklyn. He is currently a professor of computer science and statistics and director of the Cognitive Systems Laboratory at UCLA. He is a founding editor of the Journal of Causal Inference. Among his awards are the Lakatos Award in the philosophy of science, The Allen Newell Award from the Association for Computing Machinery, the Benjamin Franklin Medal, the Rumelhart Prize from the Cognitive Science Society, the ACM Turing Award, and the Grenander Prize from the American Mathematical Society. He is the co-author (with Dana MacKenzie) of The Book of Why: The New Science of Cause and Effect.Web siteGoogle Scholar publicationsWikipediaAmazon author pageTwitterSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Libros y Dinero
EL LIBRO DEL PORQUÉ de Judea Pearl

Libros y Dinero

Play Episode Listen Later May 3, 2022 26:29


Hoy estamos recibiendo el 2do regalo de la inteligencia artificial para la humanidad. Una Inteligencia Artificial que podrá imaginar y reflexionar sobre sus acciones… UN ROBOT CON CONCIENCIA. Sin reglas y que emita juicios morales. Todo esto es consecuencia del entendimiento estadístico y los límites de la BIG DATA. Judea Pearl, experto en ciencias de la computación y filósofo, ha desarrollado el lenguaje matemático para que las máquinas entiendan CAUSALIDAD. Recibe una probadita de este tema y si te interesa más… lee el libro! . SI ERES AMANTE DE LOS LIBROS… Sígueme en mis otras REDES! . YOUTUBE https://bit.ly/3FqJhPX . INSTAGRAM https://bit.ly/39AZDcP . TIKTOK https://bit.ly/3LHr4jj

Collège de France (Général)
Leçon inaugurale - Rémy Slama : Causes et conditions extérieures des maladies et de la santé - VIDEO

Collège de France (Général)

Play Episode Listen Later Mar 31, 2022 64:55


Rémy Slama Collège de France Santé publique (chaire annuelle 2021-2022) Leçon inaugurale : Causes et conditions extérieures des maladies et de la santé La leçon inaugurale brossera un tableau chronologique des risques ayant pesé ou pesant encore sur la santé humaine, de la trilogie épidémies-guerres-famines, qui s'est progressivement et partiellement estompée pour faire place aux facteurs dits de style de vie (tabac, alcool, déséquilibres alimentaires, sédentarité) et aux agents physico-chimiques. Notre mouvement, centrifuge par rapport au patient et au déclenchement de la maladie, consistera à passer de l'énumération des causes de décès à l'identification des causes des causes. Les pathologies infectieuses, dont le développement a pu être favorisé entre autres par l'invention de l'agriculture, qui a rapproché humains et animaux domestiques, favorisant les zoonoses, ont été jusqu'au début du XXe siècle la cause majeure de mortalité en Europe. Avec leur contrôle progressif dans les pays du nord a eu lieu une transition épidémiologique, consistant en une diminution de la mortalité, qui a permis un allongement spectaculaire de l'espérance de vie, multipliée par trois en trois siècles (d'environ 25 ans avant la Révolution à 82 ans aujourd'hui en France), expliquée par la diminution de la mortalité par maladies infectieuses, survenant souvent tôt, progressivement remplacée par les maladies chroniques, survenant généralement à un âge plus avancé. Nous rappellerons la contribution des polymorphismes génétiques, des facteurs de style de vie à la survenue des maladies chroniques. Puis nous évoquerons les modifications de notre environnement au cours de l'Anthropocène, les éléments généraux en faveur d'un effet de l'environnement physico-chimique sur la survenue de ces maladies chroniques, et les arguments plus spécifiques allant dans le même sens, s'appuyant sur les développements méthodologiques récents, à la fois dans le champ de la toxicologie et, chez l'humain, des biomarqueurs d'exposition et de l'inférence causale (Judea Pearl), qui fournit un cadre rigoureux pour l'identification des causes des maladies dans une approche non expérimentale.

Collège de France (Général)
Leçon inaugurale - Rémy Slama : Causes et conditions extérieures des maladies et de la santé - VIDEO

Collège de France (Général)

Play Episode Listen Later Mar 31, 2022 64:55


Rémy SlamaCollège de FranceSanté publique (chaire annuelle 2021-2022)Leçon inaugurale : Causes et conditions extérieures des maladies et de la santéLa leçon inaugurale brossera un tableau chronologique des risques ayant pesé ou pesant encore sur la santé humaine, de la trilogie épidémies-guerres-famines, qui s'est progressivement et partiellement estompée pour faire place aux facteurs dits de style de vie (tabac, alcool, déséquilibres alimentaires, sédentarité) et aux agents physico-chimiques. Notre mouvement, centrifuge par rapport au patient et au déclenchement de la maladie, consistera à passer de l'énumération des causes de décès à l'identification des causes des causes. Les pathologies infectieuses, dont le développement a pu être favorisé entre autres par l'invention de l'agriculture, qui a rapproché humains et animaux domestiques, favorisant les zoonoses, ont été jusqu'au début du XXe siècle la cause majeure de mortalité en Europe. Avec leur contrôle progressif dans les pays du nord a eu lieu une transition épidémiologique, consistant en une diminution de la mortalité, qui a permis un allongement spectaculaire de l'espérance de vie, multipliée par trois en trois siècles (d'environ 25 ans avant la Révolution à 82 ans aujourd'hui en France), expliquée par la diminution de la mortalité par maladies infectieuses, survenant souvent tôt, progressivement remplacée par les maladies chroniques, survenant généralement à un âge plus avancé. Nous rappellerons la contribution des polymorphismes génétiques, des facteurs de style de vie à la survenue des maladies chroniques. Puis nous évoquerons les modifications de notre environnement au cours de l'Anthropocène, les éléments généraux en faveur d'un effet de l'environnement physico-chimique sur la survenue de ces maladies chroniques, et les arguments plus spécifiques allant dans le même sens, s'appuyant sur les développements méthodologiques récents, à la fois dans le champ de la toxicologie et, chez l'humain, des biomarqueurs d'exposition et de l'inférence causale (Judea Pearl), qui fournit un cadre rigoureux pour l'identification des causes des maladies dans une approche non expérimentale.

Collège de France (Général)
Leçon inaugurale : Causes et conditions extérieures des maladies et de la santé

Collège de France (Général)

Play Episode Listen Later Mar 31, 2022 64:55


Rémy Slama Collège de France Santé publique (chaire annuelle 2021-2022) Leçon inaugurale : Causes et conditions extérieures des maladies et de la santé La leçon inaugurale brossera un tableau chronologique des risques ayant pesé ou pesant encore sur la santé humaine, de la trilogie épidémies-guerres-famines, qui s'est progressivement et partiellement estompée pour faire place aux facteurs dits de style de vie (tabac, alcool, déséquilibres alimentaires, sédentarité) et aux agents physico-chimiques. Notre mouvement, centrifuge par rapport au patient et au déclenchement de la maladie, consistera à passer de l'énumération des causes de décès à l'identification des causes des causes. Les pathologies infectieuses, dont le développement a pu être favorisé entre autres par l'invention de l'agriculture, qui a rapproché humains et animaux domestiques, favorisant les zoonoses, ont été jusqu'au début du XXe siècle la cause majeure de mortalité en Europe. Avec leur contrôle progressif dans les pays du nord a eu lieu une transition épidémiologique, consistant en une diminution de la mortalité, qui a permis un allongement spectaculaire de l'espérance de vie, multipliée par trois en trois siècles (d'environ 25 ans avant la Révolution à 82 ans aujourd'hui en France), expliquée par la diminution de la mortalité par maladies infectieuses, survenant souvent tôt, progressivement remplacée par les maladies chroniques, survenant généralement à un âge plus avancé. Nous rappellerons la contribution des polymorphismes génétiques, des facteurs de style de vie à la survenue des maladies chroniques. Puis nous évoquerons les modifications de notre environnement au cours de l'Anthropocène, les éléments généraux en faveur d'un effet de l'environnement physico-chimique sur la survenue de ces maladies chroniques, et les arguments plus spécifiques allant dans le même sens, s'appuyant sur les développements méthodologiques récents, à la fois dans le champ de la toxicologie et, chez l'humain, des biomarqueurs d'exposition et de l'inférence causale (Judea Pearl), qui fournit un cadre rigoureux pour l'identification des causes des maladies dans une approche non expérimentale.

Collège de France (Général)
Leçon inaugurale : Causes et conditions extérieures des maladies et de la santé

Collège de France (Général)

Play Episode Listen Later Mar 31, 2022 64:55


Rémy SlamaCollège de FranceSanté publique (chaire annuelle 2021-2022)Leçon inaugurale : Causes et conditions extérieures des maladies et de la santéLa leçon inaugurale brossera un tableau chronologique des risques ayant pesé ou pesant encore sur la santé humaine, de la trilogie épidémies-guerres-famines, qui s'est progressivement et partiellement estompée pour faire place aux facteurs dits de style de vie (tabac, alcool, déséquilibres alimentaires, sédentarité) et aux agents physico-chimiques. Notre mouvement, centrifuge par rapport au patient et au déclenchement de la maladie, consistera à passer de l'énumération des causes de décès à l'identification des causes des causes. Les pathologies infectieuses, dont le développement a pu être favorisé entre autres par l'invention de l'agriculture, qui a rapproché humains et animaux domestiques, favorisant les zoonoses, ont été jusqu'au début du XXe siècle la cause majeure de mortalité en Europe. Avec leur contrôle progressif dans les pays du nord a eu lieu une transition épidémiologique, consistant en une diminution de la mortalité, qui a permis un allongement spectaculaire de l'espérance de vie, multipliée par trois en trois siècles (d'environ 25 ans avant la Révolution à 82 ans aujourd'hui en France), expliquée par la diminution de la mortalité par maladies infectieuses, survenant souvent tôt, progressivement remplacée par les maladies chroniques, survenant généralement à un âge plus avancé. Nous rappellerons la contribution des polymorphismes génétiques, des facteurs de style de vie à la survenue des maladies chroniques. Puis nous évoquerons les modifications de notre environnement au cours de l'Anthropocène, les éléments généraux en faveur d'un effet de l'environnement physico-chimique sur la survenue de ces maladies chroniques, et les arguments plus spécifiques allant dans le même sens, s'appuyant sur les développements méthodologiques récents, à la fois dans le champ de la toxicologie et, chez l'humain, des biomarqueurs d'exposition et de l'inférence causale (Judea Pearl), qui fournit un cadre rigoureux pour l'identification des causes des maladies dans une approche non expérimentale.

AI Live & Unbiased
Causality and Artificial Intelligence with Arni Steingrimsson

AI Live & Unbiased

Play Episode Listen Later Mar 11, 2022 36:40 Very Popular


Dr. Jerry Smith welcomes you to another episode of AI Live and Unbiased to explore the breadth and depth of Artificial Intelligence and to encourage you to change the world, not just observe it!   Dr. Jerry is joined today by Arni Steingrimsson, a Data Science Machine Learning and Artificial Intelligence in the U.S. and Mexico. He is a senior-level Data Scientist, who comes from a biomedical field. Arnie and Dr. Jerry are talking today about Causality and the crucial role it plays in the AI space.   Key Takeaways: What is Causality? Why is it important to Artificial Intelligence? Causality is what is causing the outcome; from a data perspective there are certain features that will be causal to the outcome but there is no guarantee that you can change the outcome by changing those features. Defining causality is less important than knowing what is capable. Granger causality is defined as a statistical dependence. Judea Pearl proposes three levels of causality: Association, Intervention, and Counterfactual. Why it is important to actually know the cause of something? People who want to be ahead and business leaders need to know how they can influence their decisions and make a change, that is why knowing the causality is crucial. Counterfactual Causality explains the connection between x and y, but y does not really change the possibility for x to occur or not to occur. What are counterfactuals? They are a comparison of different states in the same world, but how do you quantitatively compute these two states? It is done by holding to a variable. Simpson's paradox: Something observed at a high level is counter to the thing observed at a low level. Simpson's paradox is usually overlooked. The study of data is an important part of the causality world. Using machine learning in the world of causality: There are some data scientists that didn't study causality, and they think that they can just use classical machine learning, isolating features, and feature reduction and that means using causality… but that is not the way of “changing the world”; you need to know why certain inputs changed and what caused this change. A reported driver is different than a causal driver. The application of Evolutionary principles in the AI world: The predictors are the blocks that put those inputs which are causal; this way we know the causal input to then create the machine learning model that will tell what will happen as a result of the given inputs but it does not tell us what we should set those inputs to. First, we figure out what is causal and make a model for that, then once we have this model of the world, we tell people what conditions need to be set to get the best chances of achieving your outcome. What kinds of tools are used for evolutionary computing? Python and their library called Deep. What can be done after simulation? What is next? After simulation, we need to take the inputs that represent causal drivers and put them into action in the field to monitor the change. If you want to improve your product you need to put programs (such as marketing and sales efforts) out and collect the data on them, how they are improving and what are the changes.   Stay Connected with AI Live and Unbiased: Visit our website AgileThought.com Email your thoughts or suggestions to Podcast@AgileThought.com or Tweet @AgileThought using #AgileThoughtPodcast!   Learn more about Dr. Jerry Smith   Mentioned in this episode: Causality: Models, Reasoning, and Inference, by Judea Pearl

Casual Inference
Artificial Intelligence, Personalized Medicine, and Causal Bounds with Judea Pearl | Season 3 Episode 9

Casual Inference

Play Episode Listen Later Feb 28, 2022 55:04 Very Popular


In this episode Lucy D'Agostino McGowan and Ellie Murray chat with Judea Pearl, Chancellor professor of computer science and statistics at the University of California, Los Angeles.

AI Live & Unbiased
Four Most Commonly Asked Questions About AI with Dr. Jerry Smith

AI Live & Unbiased

Play Episode Listen Later Feb 25, 2022 43:02 Very Popular


Dr. Jerry Smith welcomes you to another episode of AI Live and Unbiased to explore the breadth and depth of Artificial Intelligence and to encourage you to change the world, not just observe it!   Dr. Jerry is talking today about questions and answers in the world of data science machinery and artificial intelligence.   Key Takeaways: What are Dr. Jerry's favorite AI design tools? Dr, Jerry shares his four primary tools: MATLAB. Is a commercial product. It has a home, academic, and enterprise version. MATLAB has toolkits and applications. The Predictive Maintenance Toolbox at MATLAB, especially the preventive failure model is of great value when we want to know why things fail, also by measuring systems performance and predicting the useful life of a product. Mathematical Modeling with Symbolic Math Toolbox is useful for algorithm-based environments. It is built on solid mathematics. R Programming is Dr. Jerry's favorite free tool for programming with statistical and math perspectives. R is an open and free source and comes with a lot of applications. Python is a great tool for programming and is as capable as R programming to assist us in problem-solving. Python is very useful when you know your work is directed to an enterprise level. Does Dr. Jerry have any recommended books for causality? The Book of Why is foundational for both the businessperson and the data scientist. It provides a historical review of what causality is and why it is important. For a deeper understanding of causality, Dr. Jerry recommends Causal Inference in Statistics: A Primer.   Counterfactuals and Causal Inferences: Methods and Principles it is a great tool to think through the counterfactual analysis.   Behavioral Data Analysis with R and Python is an awesome book for the practitioner who wants to know what behaviors are, how they show up in data, the causal characteristics, and how to abstract behavioral aspects from data. Dr. Jerry recommends Designing for Behavior Change, it talks about the three main strategies that we use to help people change their behaviors. The seven rules of human behavior can be found in Eddie Rafii's latest book: Behaviology, New Science of Human Behavior. Dr. Jerry shares his favorite tools for casual analysis: Compellon allows us to do performance analysis, showing the fundamental causal chains in your target of interest. It can be used by analysts. It allows users to do “what-if” analysis. Compellon is a commercial product.   Causal Nexus is an open-source package in Python that has a much deeper look at causal models than Compellon. BayesiaLab is a commercial tool that is one of the higher-end tools an organization can have. It allows you to work on casual networks and counterfactual events. It is used in AI research.   What skills are needed for data science machinery and AI developers? Capabilities can be segmented into Data-oriented, Information-oriented, Knowledge, and Intelligence. These different capabilities are used in many roles according to several levels of maturity.   Stay Connected with AI Live and Unbiased: Visit our website AgileThought.com Email your thoughts or suggestions to Podcast@AgileThought.com or Tweet @AgileThought using #AgileThoughtPodcast!   Learn more about Dr. Jerry Smith   Mentioned in this episode: MATLAB MATLAB Mathematical Modeling Python Artificial Intelligence with R Compellon Causal Nex BayesiaLab   Dr. Jerry's Book Recommendations: The Book of Why: The New Science of Cause and Effect, Judea Pearl, Dana Mackenzie   Causal Inference in Statistics: A Primer, Madelyn Glymour, Judea Pearl, and Nicholas P. Jewell   Counterfactuals and Causal Inferences: Methods and Principles,  Stephen L. Morgan and Christopher Winship   Behavioral Data Analysis with R and Python: Customer-Driven Data for Real Business Results, Florent Buisson   Designing for Behavior Change: Applying Psychology and Behavioral Economics, Stephen Wendel   Behaviology, New Science of Human Behavior, Eddie Rafii

La tecnología de Hoy por Hoy
La Tecnología | A los gigantes les cuesta bailar

La tecnología de Hoy por Hoy

Play Episode Listen Later Feb 15, 2022 14:11


Comentamos el estreno de la nueva red de Retina, que incluye, por ejemplo, este podcast, y también el Observatorio Retina, que se presenta todos los años con las predicciones y el análisis de las tendencias más importantes del mundo tecnológico. Entre las que se destacan este año estaría la tecnología con propósito: para qué sirve toda esta innovación, cómo va a mejorar la vida de las personas. La reputación del sector tecnológico ha sufrido en los últimos años precisamente por esto, y la implementación de cambios no es tan fácil porque, como dice Jaime García Cantero, a los gigantes les cuesta bailar. Dentro de una de las tendencias principales, la inteligencia artificial, destacamos la figura de Judea Pearl, un sabio de nuestro tiempo que ha sido galardonado con el Premio Fronteras del Conocimiento de la Fundación BBVA. Es uno de los padres de la inteligencia artificial, en la que lleva trabajando desde los años 70, con la creación de las redes bayesianas. Cuando vemos una representación gráfica de un algoritmo, es gracias a Pearl que, además de todos sus logros, tiene una biografía fascinante. Nos ayuda a acercarnos a este gran personaje el catedrático de ciencias de la computación de la Universidad Politécnica de Madrid Pedro Larrañaga.  

A hombros de gigantes
A hombros de gigantes - Tres parapléjicos logran caminar gracias a un implante eléctrico - 13/02/22

A hombros de gigantes

Play Episode Listen Later Feb 13, 2022 55:17


Tres personas paralíticas, con la médula espinal completamente seccionada, han conseguido andar con la ayuda de un dispositivo eléctrico que es capaz de estimular y controlar los distintos grupos musculares. Un gran avance desarrollado por científicos suizos que hemos analizado con Juan de los Reyes Aguilar, investigador del Grupo de Neurofisiología Experimental y Circuitos Neuronales del Hospital Nacional de Parapléjicos de Toledo. -Hemos informado de un nuevo récord de energía de fusión logrado por un experimento europeo en el reactor JET, en Oxford; de la concesión del Premio Fundación BBVA Fronteras del Conocimiento en Tecnologías de la Información y la Comunicación al ingeniero israelí Judea Pearl por sentar las bases de la Inteligencia Artificial moderna; de la decisión de la UE de invertir 45.000 millones de euros hasta 2030 para cuadruplicar su producción de chips; de un estudio que revela que los chimpancés no solo se curan sus heridas sino también la de los otros congéneres, y de una investigación que reduce en un 20% el contenido de agua dulce almacenado en los glaciares del planeta. Ana Iglesias Mialaret nos ha contado que las olas de calor marinas están arrasando los corales del Mediterráneo según un estudio del CSIC (con testimonios de Joaquim Garrabou, del Instituto de Ciencias del Mar). En su recorrido por la obra de Ramón y Cajal, su escuela y su legado, Fernando de Castro nos ha hablado de los discípulos de Pío del Río Hortega, discípulo a su vez de nuestro premio Nobel. Con Montse Villar nos hemos asomado a las galaxias activas, aquellas que contienen agujeros negros supermasivos en su interior, para conocer cómo se comportan estos objetos y cómo influyen en su vecindario estelar. Escuchar audio

The Nonlinear Library
AF - Causality, Transformative AI and alignment - part I by Marius Hobbhahn

The Nonlinear Library

Play Episode Listen Later Jan 27, 2022 13:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Causality, Transformative AI and alignment - part I, published by Marius Hobbhahn on January 27, 2022 on The AI Alignment Forum. TL;DR: transformative AI(TAI) plausibly requires causal models of the world. Thus, a component of AI safety is ensuring secure paths to generating these causal models. We think the lens of causal models might be undervalued within the current alignment research landscape and suggest possible research directions. This post was written by Marius Hobbhahn and David Seiler. MH would like Richard Ngo for encouragement and feedback. If you think these are interesting questions and want to work on them, write us. We will probably start to play around with GPT-3 soonish. If you want to join the project, just reach out. There is certainly stuff we missed. Feel free to send us references if you think they are relevant. There are already a small number of people working on causality within the EA community. They include Victor Veitch, Zhijing Jin and PabloAMC. Check them out for further insights. Causality - a working definition: Just to get this out of the way: we follow a broad definition of causality, i.e. we assume it can be learned from (some) data and doesn't have to be put into the model by humans. Furthermore, we don't think the representation has to be explicit, e.g. in a probabilistic model, but could be represented in other ways, e.g. in the weights of neural networks. But what is it? In a loose sense, you already know: things make other things happen. When you touch a light switch and a light comes on, that's causality. There is a more technical sense in which no one understands causality, not even Judea Pearl (where does causal information ultimately come from if you have to make causal assumptions to get it? For that matter, how do we get variables out of undifferentiated sense data?). But it's possible to get useful results without understanding causality precisely, and for our purposes, it's enough to approach the question at the level of causal models. Concretely: you can draw circles around phenomena in the world (like "a switch" and "a lightbulb") to make them into nodes in a graph, and draw arrows between those nodes to represent their causal relationships (from the switch to the lightbulb if you think the switch causes the lightbulb to turn on, or from the lightbulb to the switch if you think it's the other way around). There's an old Sequences post that covers the background in more detail. The key points for practical purposes are that causal models: Are sparse, and thus easy to reason about and make predictions with (or at least, easier to reason about than the joint distribution over all your life experiences). Can be segmented by observations. Suppose you know that the light switch controls the flow of current to the bulb and that the current determines whether the bulb is on or off. Then, if you observe that there's no current in the wire (maybe there's a blackout), then you don't need to know anything about the state of the switch to know the state of the bulb. Able to evaluate counterfactuals. If the light switch is presently off, but you want to imagine what would happen if it were on, your causal model can tell you (insofar as it's correct). Why does causality matter? Causal, compared to correlational, information has two main advantages. For the following section, I got help from a fellow Ph.D. student. 1. Data efficiency Markov factorization: Mathematically speaking, Markov factorization ensures conditional independence between some nodes given other nodes. In practice, this means that we can write a joint probability distribution as a sparse graph where only some nodes are connected if we assume causality. It introduces sparsity. “Namely, if we have a joint with n binary random variables, it would have 2^n - 1 indepen...

The Nonlinear Library
LW - Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives by AnnaSalamon fromDecision Theory: Newcomb's Problem

The Nonlinear Library

Play Episode Listen Later Dec 25, 2021 8:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Decision Theory: Newcomb's Problem, Part 4: Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives, published by AnnaSalamon. (This is the third post in a planned sequence.) My last post left us with the questions: Just what are humans, and other common CSAs, calculating when we imagine what “would” happen “if” we took actions we won't take? Is there more than one natural way to calculate these counterfactual “would”s? If so, what are the alternatives, and which alternative works best? Today, I'll take an initial swing at these questions. I'll review Judea Pearl's causal Bayes nets; show how Bayes nets offer a general methodology for computing counterfactual “would”s; and note three plausible alternatives for how to use Pearl's Bayes nets to set up a CSA. One of these alternatives will be the “timeless” counterfactuals of Eliezer's Timeless Decision Theory. The problem of counterfactuals is the problem what we do and should mean when we we discuss what “would” have happened, “if” something impossible had happened. In its general form, this problem has proved to be quite gnarly. It has been bothering philosophers of science for at least 57 years, since the publication of Nelson Goodman's book “Fact, Fiction, and Forecast” in 1952: Let us confine ourselves for the moment to [counterfactual conditionals] in which the antecedent and consequent are inalterably false--as, for example, when I say of a piece of butter that was eaten yesterday, and that had never been heated, `If that piece of butter had been heated to 150°F, it would have melted.' Considered as truth-functional compounds, all counterfactuals are of course true, since their antecedents are false. Hence `If that piece of butter had been heated to 150°F, it would not have melted.' would also hold. Obviously something different is intended, and the problem is to define the circumstances under which a given counterfactual holds while the opposing conditional with the contradictory consequent fails to hold. Recall that we seem to need counterfactuals in order to build agents that do useful decision theory -- we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences. So we need to know how to compute those counterfactuals. As Goodman puts it, “[t]he analysis of counterfactual conditionals is no fussy little grammatical exercise.” Judea Pearl's Bayes nets offer a method for computing counterfactuals. As noted, it is hard to reduce human counterfactuals in general: it is hard to build an algorithm that explains what (humans will say) really “would” have happened, “if” an impossible event had occurred. But it is easier to construct specific formalisms within which counterfactuals have well-specified meanings. Judea Pearl's causal Bayes nets offer perhaps the best such formalism. Pearl's idea is to model the world as based on some set of causal variables, which may be observed or unobserved. In Pearl's model, each variable is determined by a conditional probability distribution on the state of its parents (or by a simple probability distribution, if it has no parents). For example, in the following Bayes net, the beach's probability of being “Sunny” depends only on the “Season”, and the probability that there is each particular “Number of beach-goers” depends only on the “Day of the week” and on the “Sunniness”. Since the “Season” and the “Day of the week” have no parents, they simply have fixed probability distributions. Once we have a Bayes net set up to model a given domain, computing counterfactuals is easy. We just: Take the usual conditional and unconditional probability distributions, that come with the Bayes net; Do “surgery” on the Bayes net to plug in the...

The Nonlinear Library: LessWrong
LW - Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives by AnnaSalamon fromDecision Theory: Newcomb's Problem

The Nonlinear Library: LessWrong

Play Episode Listen Later Dec 25, 2021 8:55


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Decision Theory: Newcomb's Problem, Part 4: Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives, published by AnnaSalamon. (This is the third post in a planned sequence.) My last post left us with the questions: Just what are humans, and other common CSAs, calculating when we imagine what “would” happen “if” we took actions we won't take? Is there more than one natural way to calculate these counterfactual “would”s? If so, what are the alternatives, and which alternative works best? Today, I'll take an initial swing at these questions. I'll review Judea Pearl's causal Bayes nets; show how Bayes nets offer a general methodology for computing counterfactual “would”s; and note three plausible alternatives for how to use Pearl's Bayes nets to set up a CSA. One of these alternatives will be the “timeless” counterfactuals of Eliezer's Timeless Decision Theory. The problem of counterfactuals is the problem what we do and should mean when we we discuss what “would” have happened, “if” something impossible had happened. In its general form, this problem has proved to be quite gnarly. It has been bothering philosophers of science for at least 57 years, since the publication of Nelson Goodman's book “Fact, Fiction, and Forecast” in 1952: Let us confine ourselves for the moment to [counterfactual conditionals] in which the antecedent and consequent are inalterably false--as, for example, when I say of a piece of butter that was eaten yesterday, and that had never been heated, `If that piece of butter had been heated to 150°F, it would have melted.' Considered as truth-functional compounds, all counterfactuals are of course true, since their antecedents are false. Hence `If that piece of butter had been heated to 150°F, it would not have melted.' would also hold. Obviously something different is intended, and the problem is to define the circumstances under which a given counterfactual holds while the opposing conditional with the contradictory consequent fails to hold. Recall that we seem to need counterfactuals in order to build agents that do useful decision theory -- we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences. So we need to know how to compute those counterfactuals. As Goodman puts it, “[t]he analysis of counterfactual conditionals is no fussy little grammatical exercise.” Judea Pearl's Bayes nets offer a method for computing counterfactuals. As noted, it is hard to reduce human counterfactuals in general: it is hard to build an algorithm that explains what (humans will say) really “would” have happened, “if” an impossible event had occurred. But it is easier to construct specific formalisms within which counterfactuals have well-specified meanings. Judea Pearl's causal Bayes nets offer perhaps the best such formalism. Pearl's idea is to model the world as based on some set of causal variables, which may be observed or unobserved. In Pearl's model, each variable is determined by a conditional probability distribution on the state of its parents (or by a simple probability distribution, if it has no parents). For example, in the following Bayes net, the beach's probability of being “Sunny” depends only on the “Season”, and the probability that there is each particular “Number of beach-goers” depends only on the “Day of the week” and on the “Sunniness”. Since the “Season” and the “Day of the week” have no parents, they simply have fixed probability distributions. Once we have a Bayes net set up to model a given domain, computing counterfactuals is easy. We just: Take the usual conditional and unconditional probability distributions, that come with the Bayes net; Do “surgery” on the Bayes net to plug in the...

The Nonlinear Library: LessWrong Top Posts
Eliezer's Sequences and Mainstream Academia by lukeprog

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 6:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eliezer's Sequences and Mainstream Academia, published by lukeprog on the LessWrong. Due in part to Eliezer's writing style (e.g. not many citations), and in part to Eliezer's scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences don't accurately reflect the close agreement between the content of The Sequences and work previously done in mainstream academia. I predict several effects from this: Some readers will mistakenly think that common Less Wrong views are more parochial than they really are. Some readers will mistakenly think Eliezer's Sequences are more original than they really are. If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article. I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.) I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature. I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work. (This is only a preliminary list of connections.) Obviously connected to mainstream academic work Eliezer's posts on evolution mostly cover material you can find in any good evolutionary biology textbook, e.g. Freeman & Herron (2007). Likewise, much of the Quantum Physics sequence can be found in quantum physics textbooks, e.g. Sakurai & Napolitano (2010). An Intuitive Explanation of Bayes' Theorem, How Much Evidence Does it Take, Probability is in the Mind, Absence of Evidence Is Evidence of Absence, Conservation of Expected Evidence, Trust in Bayes: see any textbook on Bayesian probability theory, e.g. Jaynes (2003) or Friedman & Koller (2009). What's a Bias, again?, Hindsight Bias, Correspondence Bias; Positive Bias: Look into the Dark, Doublethink: Choosing to be Biased, Rationalization, Motivated Stopping and Motivated Continuation, We Change Our Minds Less Often Than We Think, Knowing About Biases Can Hurt People, Asch's Conformity Experiment, The Affect Heuristic, The Halo Effect, Anchoring and Adjustment, Priming and Contamination, Do We Believe Everything We're Told, Scope Insensitivity: see standard works in the heuristics & biases tradition, e.g. Kahneman et al. (1982), Gilovich et al. 2002, Kahneman 2011. According to Eliezer, The Simple Truth is Tarskian and Making Beliefs Pay Rent is Peircian. The notion of Belief in Belief comes from Dennett (2007). Fake Causality and Timeless Causality report on work summarized in Pearl (2000). Fake Selfishness argues that humans aren't purely selfish, a point argued more forcefully in Batson (2011). Less obviously connected to mainstream academic work Eliezer's metaethics sequences includes dozens of lemmas previously discussed by philosophers (see Miller 2003 for an overview), and the resulting metaethical theory shares much in common with the metaethical theories of Jackson (1998) and Railton (2003), and must face some of the same critiques as those theories do (e.g. Sobel 1994). Eliezer's free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl's work on causality), but the conclusion is standard compatibilism. How an Algorithm Feels F...

The Edu Futures Podcast
An Interview with Dana Mackenzie About The Book of Why

The Edu Futures Podcast

Play Episode Listen Later Jul 14, 2021 40:31


Dr. Dana MacKenzie, a mathematician turned science writer and co-author of The Book of Why (written with Turning Award winner, Judea Pearl), join us to talk about correlation versus causation and other key concepts that have relevance as we seek to create and understand the future of AI in education.

ai judea pearl dana mackenzie
Tired Women
#41: Was macht eigentlich eine Data Scientist?

Tired Women

Play Episode Listen Later Mar 20, 2021 53:47


Sponsor dieser Episode ist der Haymon Verlag. Ich durfte euch heute das Buch „Immer noch wach" vorstellen. Neugierig? Der Roman ist überall erhältlich bzw. bestellbar, wo es Bücher gibt. Auf der Website des Haymon Verlags findet ihr außerdem noch viele andere spannende Bücher: https://www.haymonverlag.at/. Eins meiner Buch-Highlights 2020 war "Invisible Women - Exposing Data Bias In A World Designed for Men" von Caroline Criado-Perez. Um mehr über das Thema aus erster Hand zu erfahren, habe ich mich mit der Data Scientist Angelika Schmid getroffen und sie gefragt, was man in dem Job überhaupt macht und auf welche Herausforderungen man als Quereinsteigerin stößt. Angelika erklärt mir, was es mit Korrelation und Kausalität von Daten auf sich hat und warum wir schon in der Schule einen gesunden Umgang mit Daten lernen sollten. Außerdem machen wir einen Ausflug in netzpolitische Themen wie Datenschutz und -transparenz und landen schließlich beim Imposter-Syndrom von Frauen in technischen Berufen. Shownotes: Intro-Vocals: Vanessa Kogler Inhaltliche Gestaltung: Esther Ecke Cut: Iris Böhm Cover-Design: Julia Feller The Book of Why von Judea Pearl & Mackenzie The Circle von Dave Eggers Cousera Udemy Kettle

Amanpour
Amanpour: Judea Pearl, Mariana Mazzucato, Stuart Stevens and Jon Batiste

Amanpour

Play Episode Listen Later Jan 28, 2021 55:31


As Pakistan's Supreme Court orders the release of the four men accused and previously convicted of the murder of American journalist Daniel Pearl in 2002, Christiane Amanpour is joined by his father Judea Pearl to reflect on the loss of his son and getting justice. Economist Mariana Mazzucato talks about her new book "Mission Economy" and how the public and private sectors must collaborate to tackle the world's big problems. Then former Republican consultant and author of "It Was All a Lie", Stuart Stevens , says unless the GOP changes, the long term trend is bad for the Republican party. Our Michel Martin talks to Grammy nominated composer, singer and pianist Jon Batiste about making the music behind “Soul”, his activism and new album "We Are." To learn more about how CNN protects listener privacy, visit cnn.com/privacy

QuickRead.com Podcast - Free book summaries
Summary of "The Book of Why" by Judea Pearl and Dana MacKenzie | Free Audiobook

QuickRead.com Podcast - Free book summaries

Play Episode Listen Later Jan 23, 2021 25:32


As humans, our instinct is to ask the questions “why” and “what if?” As you go about your day, you might ask yourself, “If I take this aspirin, will my headache go away?” or “What did I eat that made my stomach hurt?” You might even ask questions about the past too like, “What if I left my house just a few minutes earlier, would I have made my flight?” Whenever we ask questions like these, we are dealing with cause and effect relationships, or how certain factors lead to various results. In the scientific community, “Correlation is not causation” has been the mantra chanted by scientists for more than a century, prohibiting causal talk in many classrooms and scientific studies. Today, however, we have gone through a Causal Revolution instigated by author Judea Pearl and his colleagues. Through The Book of Why, Pearl shows us how his work in causal relationships will allow us to explore the world in more ways than one. It also shows us that the key to artificial intelligence is human thought and creating machines that can determine causes and effects. As you read, you’ll learn how the human brain is the most advanced tool in the world, how misunderstood data can lead to protests of the smallpox vaccine, and how controlled experiments have been around for as long as humans. Do you want more free audiobook summaries like this? Download our app for free at QuickRead.com/App and get access to hundreds of free book and audiobook summaries.

AI with AI
Always Look on the Bright Side of Life

AI with AI

Play Episode Listen Later Jan 15, 2021 35:53


In COVID-related news, Andy and Dave discuss a commercial AI model from Biocogniv that predicts COVID-19 infection using only blood tests, with a 95% sensitivity and a 49% specificity. In a story that highlights the general challenge with algorithms, Stanford reported challenges in using a rules-based algorithm to determine priority of vaccine distribution, when it omitted front-line doctors from initial distribution. In non-COVID AI news, Vincent Boucher and Gary Marcus organize a second “AI Debate” on the topic of Moving AI Forward: An Interdisciplinary Approach, which included Daniel Kahneman, Christof Koch, Judea Pearl, Fei-Fei Li, Margaret Mitchel, and many others. Reuters reports that Google’s PR, policy, and legal teams have been editing AI research papers in order to give them a more positive tone, and to reduce discussions of the potential drawbacks of the technology. And Microsoft patents a “chat bot technology” that would seek to reincarnate deceased people. In research, Google announces MuZero, which masters chess, Go, shogi, and the Atari Learning Environment by planning with a learned model (and no information on the rules). Jeff Heaton provides the book of the week, with Applications of Deep Neural Networks. A survey paper from four universities looks at Data Security for Machine Learning. Another survey paper examines how researchers develop and use datasets for machine learning research. And the ConwayLife.com community celebrates the 50th anniversary of the Game of Life, to include an online simulator called the Exploratorium. Click here to visit our website and explore the links mentioned in the episode. 

Filosofía de bolsillo
Episodio 38. Una moral de la simpatía [*Versión gratuita*]

Filosofía de bolsillo

Play Episode Listen Later Dec 3, 2020 25:17


FdB 2x11 | Además del escándalo de proponer una moral sin metafísica, Hume invirtió la ética tradicional que sometía el sentimiento a la razón. A través de una lectura atenta y detallada de su Tratado de la naturaleza humana, veremos que para él el sentimiento moral, y por encima de todos, el de la simpatía, es el que ofrece los contenidos morales y son estos los que dan valor a nuestros actos. La sección "Un libro en el bolsillo" nos trae El libro del porqué, un libro intelectualmente audaz y fruto de una mente privilegiada que se nutre de muchas disciplinas como la de Judea Pearl. Un texto capaz de extraer profundas implicaciones filosóficas de la estadística, las ciencias computacionales o el presente y futuro de la Inteligencia Artificial. ❗ FILOSOFÍA DE BOLSILLO sólo es y será posible gracias a ti. Hazte mecenas en https://www.filosofiadebolsillo.com/patreon y accede a este y otros EPISODIOS COMPLETOS y a tus recompensas. Si quieres apoyar el proyecto y tienes cualquier duda, escribe a correo@filosofiadebolsillo.com ➡️ Puedes seguir FILOSOFÍA DE BOLSILLO en las principales plataformas como Spotify, iVoox, Apple Podcasts, Google Podcasts, Lecton o Youtube, o visitar filosofiadebolsillo.com. --- Send in a voice message: https://anchor.fm/diego-civilotti/message

CTRL ENTER | Data Science IDP
CTRL ENTER #02 | Fronteiras da Pesquisa em Estatística, com Carlos Cinelli

CTRL ENTER | Data Science IDP

Play Episode Listen Later Oct 13, 2020 29:39


Carlos Cinelli, PhD candidate em Estatística na UCLA, discute as fronteiras da pesquisa sobre causalidade e a revolução da credibilidade. Além disso, ele fala da experiência de trabalhar com o cientista da computação Judea Pearl, a lenda viva que revolucionou a reflexão sobre causalidade. Artigos citados:Cinelli, Hazlett. Making sense of sensitivity: Extending omitted variable bias. https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12348Cinelli. Inferência estatística e a prática econômica no Brasil : os (ab)usos dos testes de significância. https://repositorio.unb.br/handle/10482/11230Cinelli, Forney, Pearl. A Crash Course in Good and Bad Controls. https://ftp.cs.ucla.edu/pub/stat_ser/r493.pdf Recomendações de livros:Angrist, Pischke. Mostly harmless econometrics: An empiricist's companion https://amzn.to/2GONZNeMorgan, Winship. Counterfactuals and causal inference. https://amzn.to/3dlRiHRPearl, Mackenzie. Book of Why https://amzn.to/3dmpH9kPearl , Glymour. Causal Inference in Statistics: A Primer https://amzn.to/36SZXAe Peters, Janzing, Schölkopf https://amzn.to/36ZN7jQ Apresentação: Leonardo MonasterioEdição: Felipe Mux Ouça também em https://www.idp.edu.br/podcasts/

QuickRead.com Podcast - Free book summaries
The Book of Why by Judea Pearl and Dana MacKenzie | Summary | Free Audiobook

QuickRead.com Podcast - Free book summaries

Play Episode Listen Later Jun 28, 2020 27:29


The New Science of Cause and Effect. As humans, our instinct is to ask the questions “why” and “what if?” As you go about your day, you might ask yourself, “If I take this aspirin, will my headache go away?” or “What did I eat that made my stomach hurt?” You might even ask questions about the past too like, “What if I left my house just a few minutes earlier, would I have made my flight?” Whenever we ask questions like these, we are dealing with cause and effect relationships, or how certain factors lead to various results. In the scientific community, “Correlation is not causation” has been the mantra chanted by scientists for more than a century, prohibiting causal talk in many classrooms and scientific studies. Today, however, we have gone through a Causal Revolution instigated by author Judea Pearl and his colleagues. Through The Book of Why, Pearl shows us how his work in causal relationships will allow us to explore the world in more ways than one. It also shows us that the key to artificial intelligence is human thought and creating machines that can determine causes and effects. As you read, you’ll learn how the human brain is the most advanced tool in the world, how misunderstood data can lead to protests of the smallpox vaccine, and how controlled experiments have been around for as long as humans. *** Do you want more free audiobook summaries like this? Download our app for free at QuickRead.com/App and get access to hundreds of free book and audiobook summaries.

COMPLEXITY
Melanie Mitchell on Artificial Intelligence: What We Still Don't Know

COMPLEXITY

Play Episode Listen Later Mar 4, 2020 77:16


Since the term was coined in 1956, artificial intelligence has been a kind of mirror that tells us more about our theories of intelligence, and our hopes and fears about technology, than about whether we can make computers think. AI requires us to formulate and specify: what do we mean by computation and cognition, intelligence and thought? It is a topic rife with hype and strong opinions, driven more by funding and commercial goals than almost any other field of science...with the curious effect of making massive, world-changing technological advancements even as we lack a unifying theoretical framework to explain and guide the change. So-called machine intelligences are more and more a part of everyday human life, but we still don’t know if it is possible to make computers think, because we have no universal, satisfying definition of what thinking is. Meanwhile, we deploy technologies that we don’t fully understand to make decisions for us, sometimes with tragic consequences. To build machines with common sense, we have to answer fundamental questions such as, “How do humans learn?” “What is innate and what is taught?” “How much do sociality and evolution play a part in our intelligence, and are they necessary for AI?”This week’s guest is computer scientist Melanie Mitchell, Davis Professor of Complexity at SFI, Professor of Computer Science at Portland State University, founder of ComplexityExplorer.org, and author or editor of six books, including the acclaimed Complexity: A Guided Tour and her latest, Artificial Intelligence: A Guide for Thinking Humans. In this episode, we discuss how much left there is to learn about artificial intelligence, and how research in evolution, neuroscience, childhood development, and other disciplines might help shed light on what AI still lacks: the ability to truly think.Visit Melanie Mitchell’s Website for research papers and to buy her book, Artificial Intelligence: A Guide for Thinking Humans. Follow Melanie on Twitter.Watch Melanie's SFI Community Lecture on AI.Join our Facebook discussion group to meet like minds and talk about each episode.Podcast Theme Music by Mitch Mignano.Follow us on social media:Twitter • YouTube • Facebook • Instagram • LinkedInMore discussions with Melanie:Lex FridmanEconTalkJim RuttWBUR On PointMelanie's AMA on The Next Web

COMPLEXITY
David B. Kinney on the Philosophy of Science

COMPLEXITY

Play Episode Listen Later Feb 19, 2020 55:43


Science is often seen as a pure, objective discipline — as if it all rests neatly on cause and effect. As if the universe acknowledges a difference between ideal categories like “biology” and “physics.” But lately, the authority of science has had to reckon with critiques that it is practiced by flawed human actors inside social institutions. How much can its methods really disclose? Somewhere between the two extremes of scientism and the assertion that all knowledge is a social construct, real scientists continue to explore the world under conditions of uncertainty, ready to revise it all with deeper rigor.For this great project to continue in spite of our known biases, it’s helpful to step back and ask some crucial questions about the nature, limits, and reliability of science. To answer the most fundamental questions of our cosmos, it is time to bring back the philosophers to articulate a better understanding of how it is that we know what we know in the first place. Some questions — like the nature of causation, where we should look for aliens, and why we might rationally choose not to know important information — might not be answerable without bringing science and philosophy back into conversation with each other.This week’s guest is David Kinney, an Omidyar Postdoctoral Fellow here at SFI whose research focuses on the philosophy of science and formal epistemology. We talk about his work on rational ignorance, explanatory depth, causation, and more on a tour of a philosophy unlike what most of us may be familiar with from school — one thriving in collaboration with the sciences.DavidBKinney.comOn the Explanatory Depth and Pragmatic Value of Coarse-Grained, Probabilistic, Causal Explanations.Philosophy of Science. 86(1): 145-167.Is Causation Scientific?Visit our website for more information or to support our science and communication efforts.Join our Facebook discussion group to meet like minds and talk about each episode.Podcast Theme Music by Mitch Mignano.Follow us on social media:Twitter • YouTube • Facebook • Instagram • LinkedIn

Coffee with Amir
Keynote: Judea Pearl - The New Science of Cause and Effect

Coffee with Amir

Play Episode Listen Later Feb 15, 2020 63:41


PyData LA 2018 The talk will explain why data science should embrace an engine for processing cause-effect relationships. I will describe the structure of this engine, how it has revolutionized the data-intensive sciences and how it is about to revolutiones machine learning.

Lex Fridman Podcast
Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI

Lex Fridman Podcast

Play Episode Listen Later Dec 11, 2019 83:21


Judea Pearl is a professor at UCLA and a winner of the Turing Award, that’s generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the field of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian Networks and profound ideas in causality in general. These ideas are important not just for AI, but to our understanding and practice of science. But in the field of AI, the idea of causality, cause and effect, to many, lies at the core of what is currently missing and

Podcast about Artificial Creativity
11 Sam Harris And Judea Pearl

Podcast about Artificial Creativity

Play Episode Listen Later Aug 9, 2019 14:20


References: “Making Sense” podcast episode with Sam Harris and Judea Pearl: https://www.youtube.com/watch?v=NNDvhFbMD0s Popper, “Objective Knowledge” - Chapter 1 “Conjectural Knowledge: My Solution of the Problem of Induction” - Chapter 7 “Evolution and the Tree of Knowledge” Popper, “Conjectures and Refutations”, chapter 1 “Science: Conjectures and Refutations” David Deutsch, “The Beginning of Infinity”, chapter 2 “Closer to Reality”

Making Sense with Sam Harris - Subscriber Content

Sam Harris speaks with Judea Pearl about his work on the mathematics of causality and artificial intelligence. They discuss how science has generally failed to understand causation, different levels of causal inference, counterfactuals, the foundations of knowledge, the nature of possibility, the illusion of free will, artificial intelligence, the nature of consciousness, and other topics. Judea Pearl is a computer scientist and philosopher, known for his work in AI and the development of Bayesian networks, as well as his theory of causal and counterfactual inference. He is a professor of computer science and statistics and director of the Cognitive Systems Laboratory at UCLA. In 2011, he was awarded with the Turing Award, the highest distinction in computer science. He is the author of The Book of Why: The New Science of Cause and Effect (coauthored with Dana McKenzie) among other titles. Twitter: @yudapearl

Making Sense with Sam Harris
#164 — Cause & Effect

Making Sense with Sam Harris

Play Episode Listen Later Aug 5, 2019 49:15


In this episode of the podcast, Sam Harris speaks with Judea Pearl about his work on the mathematics of causality and artificial intelligence. They discuss how science has generally failed to understand causation, different levels of causal inference, counterfactuals, the foundations of knowledge, the nature of possibility, the illusion of free will, artificial intelligence, the nature of consciousness, and other topics. SUBSCRIBE to continue listening and gain access to all content on samharris.org/subscribe.

Zurück zur Zukunft
#20 | AI-Special von der Rise of AI Konferenz

Zurück zur Zukunft

Play Episode Listen Later May 19, 2019 44:18


In dieser Folge führt Agnieszka Interviews mit den Speakern der Rise of AI Konferenz durch zu Themen von KI-Ethik bis zur Hardware https://riseof.ai Interview 1: Ein Gespräch mit Prof. P. DDr. Dipl.-Kfm. Lic. Justinus Pech OCist, einem katholischen Ordensgeistlichen und Wirtschaftswissenschaftler über die Ethik der Künstlichen Intelligenz. https://de.wikipedia.org/wiki/Justinus_Christoph_Pech Interview 2: Ein Gespräch mit Dr. Tina Klüwer, Unternehmerin und Mitglied der Enquete-Kommission „Künstliche Intelligenz“ des Deutschen Bundestages, über die Technologie hinter ihrem Startup parlamind und die Herausforderungen für die Tech-Startups in Deutschland. https://parlamind.com https://www.bundestag.de/ausschuesse/weitere_gremien/enquete_ki Interview 3: Ein Gespräch mit Albert Wenger, PhD, "Investor auf der Suche nach dem Sinn des Lebens", wie ihn der Handelsblatt in seinem Beitrag bezeichnete, über die Herausforderungen für die Menschheit im Zeitalter der Künstlicher Intelligenz und sein Buch "World after Capital". https://en.wikipedia.org/wiki/Albert_Wenger https://www.usv.com/about/albert-wenger https://www.handelsblatt.com/unternehmen/management/albert-wenger-ein-investor-auf-der-suche-nach-dem-sinn-des-lebens/22640062.html http://worldaftercapital.org Interview 4: Ein Gespräch mit Ulrich Schmidt, Segment Manager bei EBV Elektronik, darüber warum wir im Kontext von AI auch über Hardware sprechen müssen. https://www.avnet.com/wps/portal/ebv/ Interview 5: Ein Gespräch mit Sofie Quidenus-Wahlforss, CEO & Founder bei omni:us über die Versicherungsindustrie und die Vorteile der Zusammenarbeit von Mensch und Maschine https://omnius.com Interview 6: Ein Gespräch mit Dr. Tarek R. Besold AI, Lead von Alpha Health AI Lab und Mitglied der Arbeitsgruppe KI des DIN-Normenausschusses Informationstechnik und Anwendungen, über die Erklärbarkeit der Algorithmen. http://alpha.company https://www.din.de/de/mitwirken/normenausschuesse/nia/nationale-gremien/wdc-grem:din21:284801493?sourceLanguage&destinationLanguage Buchempfehlungen: "World after Capital" von Albert Wenger: http://worldaftercapital.org "The Book of Why" von Judea Pearl: https://www.amazon.de/Book-Why-Science-Cause-Effect/dp/046509760X/ Many thanks for the music by Lee Rosevere freemusicarchive.org/music/Lee_Rose…_Start_the_Day

MCMP – Philosophy of Mind
The Mind-Brain Entanglement

MCMP – Philosophy of Mind

Play Episode Listen Later Apr 18, 2019 51:36


Roland Poellinger (MCMP/LMU) gives a talk at the MCMP Colloquium (14 May, 2014) titled "The Mind-Brain Entanglement". Abstract: Listing "The Nonreductivist’s Troubles with Mental Causation" (1993) Jaegwon Kim suggested that the only remaining alternatives are the eliminativist’s standpoint or plain denial of the mind’s causal powers if we want to uphold the closure of the physical and reject causal overdetermination at the same time. Nevertheless, explaining stock market trends by referring to investors’ fear of loss is a very familiar example of attributing reality to both domains and acknowledging the mind’s interaction with the world: "if you pick a physical event and trace its causal ancestry or posterity, you may run into mental events" (Kim 1993). In this talk I will use the formal framework of Bayes net causal models in an interventionist understanding (as devised, e.g., by Judea Pearl in "Causality", 2000) to make the concept of causal influence precise. Investigating structurally similar cases of conflicting causal intuitions will motivate a natural extension of the interventionist Bayes net framework, Causal Knowledge Patterns, in which our intuition that the mind makes a difference finds an expression.

The David Suissa Podcast
Judea Pearl: Defending Israel on College Campuses

The David Suissa Podcast

Play Episode Listen Later Mar 8, 2019 70:19


UCLA Professor and renowned scholar on Artificial Intelligence Judea Pearl weighs in on the struggle to defend Israel and his mission to honor his son Daniel's memory.    Follow David Suissa on Facebook, Twitter and Instagram.   

Mark Leonard's World in 30 Minutes
A new European payment system?

Mark Leonard's World in 30 Minutes

Play Episode Listen Later Sep 6, 2018 37:18


Mark Leonard speaks with Mark Schieritz from Die Zeit and ECFR's Sebastian Dullien about a new framework for transatlantic relations. The podcast was recorded on 6 September 2018. Bookshelf: Crashed by Adam Tooze https://www.penguinrandomhouse.com/books/301357/crashed-by-adam-tooze/9780670024933/ The Book of Why: The New Science of Cause and Effect by Judea Pearl https://www.penguin.co.uk/books/289825/the-book-of-why/#mJDZe5QqZFKGwC7k.99 The German barrier to a global euro by Sebastian Dullien https://www.ecfr.eu/article/commentary_german_barrier_global_euro_maas Weg vom Dollar by Mark Schieritz https://www.zeit.de/wirtschaft/2018-09/transatlantische-beziehungen-zahlungsverkehr-europa-usa-heiko-maas Es reicht! by Tina Hildebrandt, Kerstin Kohlenberg, Jörg Lau, Mark Schieritz und Michael Thumann https://www.zeit.de/2018/36/aussenpolitik-handelsstreit-donald-trump-heiko-maas Picture credit: Dollars and euros background by Petr Krachtovil via Public Domain Pictures https://www.publicdomainpictures.net/en/view-image.php?image=20851&picture=dollars-and-euros-background, CC-BY-0.

Science Signaling Podcast
<i>Science</i> and <i>Nature</i> get their social science studies replicated—or not, the mechanisms behind human-induced earthquakes, and the taboo of claiming causality in science

Science Signaling Podcast

Play Episode Listen Later Aug 30, 2018 29:10


A new project out of the Center for Open Science in Charlottesville, Virginia, found that of all the experimental social science papers published in Science and Nature from 2010–15, 62% successfully replicated, even when larger sample sizes were used. What does this say about peer review? Host Sarah Crespi talks with Staff Writer Kelly Servick about how this project stacks up against similar replication efforts, and whether we can achieve similar results by merely asking people to guess whether a study can be replicated. Podcast producer Meagan Cantwell interviews Emily Brodsky of the University of California, Santa Cruz, about her research report examining why earthquakes occur as far as 10 kilometers from wastewater injection and fracking sites. Emily discusses why the well-established mechanism for human-induced earthquakes doesn't explain this distance, and how these findings may influence where we place injection wells in the future. In this month's book podcast, Jen Golbeck interviews Judea Pearl and Dana McKenzie, authors of The Book of Why: The New Science of Cause and Effect. They propose that researchers have for too long shied away from claiming causality and provide a road map for bringing cause and effect back into science. This week's episode was edited by Podigy. Download a transcript of this episode (PDF) Listen to previous podcasts. About the Science Podcast [Image: Jens Lambert, Shutterstock; Music: Jeffrey Cook]

Science Magazine Podcast
<i>Science</i> and <i>Nature</i> get their social science studies replicated—or not, the mechanisms behind human-induced earthquakes, and the taboo of claiming causality in science

Science Magazine Podcast

Play Episode Listen Later Aug 30, 2018 27:48


A new project out of the Center for Open Science in Charlottesville, Virginia, found that of all the experimental social science papers published in Science and Nature from 2010–15, 62% successfully replicated, even when larger sample sizes were used. What does this say about peer review? Host Sarah Crespi talks with Staff Writer Kelly Servick about how this project stacks up against similar replication efforts, and whether we can achieve similar results by merely asking people to guess whether a study can be replicated. Podcast producer Meagan Cantwell interviews Emily Brodsky of the University of California, Santa Cruz, about her research report examining why earthquakes occur as far as 10 kilometers from wastewater injection and fracking sites. Emily discusses why the well-established mechanism for human-induced earthquakes doesn’t explain this distance, and how these findings may influence where we place injection wells in the future. In this month’s book podcast, Jen Golbeck interviews Judea Pearl and Dana McKenzie, authors of The Book of Why: The New Science of Cause and Effect. They propose that researchers have for too long shied away from claiming causality and provide a road map for bringing cause and effect back into science. This week’s episode was edited by Podigy. Download a transcript of this episode (PDF) Listen to previous podcasts. About the Science Podcast [Image: Jens Lambert, Shutterstock; Music: Jeffrey Cook]

Science Magazine Podcast
Science and Nature get their social science studies replicated—or not, the mechanisms behind human-induced earthquakes, and the taboo of claiming causality in science

Science Magazine Podcast

Play Episode Listen Later Aug 30, 2018 27:56


A new project out of the Center for Open Science in Charlottesville, Virginia, found that of all the experimental social science papers published in Science and Nature from 2010–15, 62% successfully replicated, even when larger sample sizes were used. What does this say about peer review? Host Sarah Crespi talks with Staff Writer Kelly Servick about how this project stacks up against similar replication efforts, and whether we can achieve similar results by merely asking people to guess whether a study can be replicated. Podcast producer Meagan Cantwell interviews Emily Brodsky of the University of California, Santa Cruz, about her research report examining why earthquakes occur as far as 10 kilometers from wastewater injection and fracking sites. Emily discusses why the well-established mechanism for human-induced earthquakes doesn't explain this distance, and how these findings may influence where we place injection wells in the future. In this month's book podcast, Jen Golbeck interviews Judea Pearl and Dana McKenzie, authors of The Book of Why: The New Science of Cause and Effect. They propose that researchers have for too long shied away from claiming causality and provide a road map for bringing cause and effect back into science. This week's episode was edited by Podigy.

Science Signaling Podcast
Liquid water on Mars, athletic performance in transgender women, and the lost colony of Roanoke

Science Signaling Podcast

Play Episode Listen Later Jul 26, 2018 26:55


Billions of years ago, Mars probably hosted many water features: streams, rivers, gullies, etc. But until recently, water detected on the Red Planet was either locked up in ice or flitting about as a gas in the atmosphere. Now, researchers analyzing radar data from the Mars Express mission have found evidence for an enormous salty lake under the southern polar ice cap of Mars. Daniel Clery joins host Sarah Crespi to discuss how the water was found and how it can still be liquid—despite temperatures and pressures typically inhospitable to water in its liquid form. Read the research. Sarah also talks with science journalist Katherine Kornei about her story on changing athletic performance after gender transition. The feature profiles researcher Joanna Harper on the work she has done to understand the impacts of hormone replacement therapy and testosterone levels in transgender women involved in running and other sports. It turns out within a year of beginning hormone replacement therapy, transgender women plateau at their new performance level and stay in a similar rank with respect to the top performers in the sport. Her work has influenced sports oversight bodies like the International Olympic Committee. In this month's book segment, Jen Golbeck interviews Andrew Lawler about his book The Secret Token: Myth, Obsession, and the Search for the Lost Colony of Roanoke. Next month's book will be The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie. Write us at sciencepodcast@aaas.org or tweet to us @sciencemagazine with your questions for the authors. This week's episode was edited by Podigy. Download a transcript of this episode (PDF) Listen to previous podcasts. [Image: Henry Howe; Music: Jeffrey Cook]

Science Magazine Podcast
Liquid water on Mars, athletic performance in transgender women, and the lost colony of Roanoke

Science Magazine Podcast

Play Episode Listen Later Jul 26, 2018 25:40


Billions of years ago, Mars probably hosted many water features: streams, rivers, gullies, etc. But until recently, water detected on the Red Planet was either locked up in ice or flitting about as a gas in the atmosphere. Now, researchers analyzing radar data from the Mars Express mission have found evidence for an enormous salty lake under the southern polar ice cap of Mars. Daniel Clery joins host Sarah Crespi to discuss how the water was found and how it can still be liquid—despite temperatures and pressures typically inhospitable to water in its liquid form. Read the research. Sarah also talks with science journalist Katherine Kornei about her story on changing athletic performance after gender transition. The feature profiles researcher Joanna Harper on the work she has done to understand the impacts of hormone replacement therapy and testosterone levels in transgender women involved in running and other sports. It turns out within a year of beginning hormone replacement therapy, transgender women plateau at their new performance level and stay in a similar rank with respect to the top performers in the sport. Her work has influenced sports oversight bodies like the International Olympic Committee. In this month’s book segment, Jen Golbeck interviews Andrew Lawler about his book The Secret Token: Myth, Obsession, and the Search for the Lost Colony of Roanoke. Next month’s book will be The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie. Write us at sciencepodcast@aaas.org or tweet to us @sciencemagazine with your questions for the authors. This week’s episode was edited by Podigy. Download a transcript of this episode (PDF) Listen to previous podcasts. [Image: Henry Howe; Music: Jeffrey Cook]

EdgeCast
Judea Pearl: Engines of Evidence [10.24.16]

EdgeCast

Play Episode Listen Later Oct 24, 2016 28:03


JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence. Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is the winner of the Alan Turing Award. Judea Pearl's Edge Bio Page (https://www.edge.org/memberbio/judea_pearl) The conversation: https://www.edge.org/conversation/judea_pearl-engines-of-evidence

Concrete Causation
The Mind-Brain Entanglement

Concrete Causation

Play Episode Listen Later May 24, 2014 51:36


Roland Poellinger (MCMP/LMU) gives a talk at the MCMP Colloquium (14 May, 2014) titled "The Mind-Brain Entanglement". Abstract: Listing "The Nonreductivist’s Troubles with Mental Causation" (1993) Jaegwon Kim suggested that the only remaining alternatives are the eliminativist’s standpoint or plain denial of the mind’s causal powers if we want to uphold the closure of the physical and reject causal overdetermination at the same time. Nevertheless, explaining stock market trends by referring to investors’ fear of loss is a very familiar example of attributing reality to both domains and acknowledging the mind’s interaction with the world: "if you pick a physical event and trace its causal ancestry or posterity, you may run into mental events" (Kim 1993). In this talk I will use the formal framework of Bayes net causal models in an interventionist understanding (as devised, e.g., by Judea Pearl in "Causality", 2000) to make the concept of causal influence precise. Investigating structurally similar cases of conflicting causal intuitions will motivate a natural extension of the interventionist Bayes net framework, Causal Knowledge Patterns, in which our intuition that the mind makes a difference finds an expression.

Fakultät für Philosophie, Wissenschaftstheorie und Religionswissenschaft - Digitale Hochschulschriften der LMU

Concrete Causation beschäftigt sich mit Theorien der Verursachung, ihrer Interpretation und ihrer Einbettung in metaphysisch-ontologische Fragestellungen sowie der Anwendung solcher Theorien in naturwissenschaftlichem und entscheidungstheoretischem Kontext. Die Arbeit gliedert sich in vier Kapitel, die eine historisch-systematische Verortung der zentralen Probleme vornehmen (Kapitel 1), um dann eine begriffliche und technische Darstellung der Theorien von David Lewis und Judea Pearl zu liefern (Kapitel 2). Der mathematisch-technische Rahmen von Pearl (in Bayes'schen Netzen) wird nach philosophisch motivierten begrifflichen Überlegungen für eine epistemische Interpretation von Kausalität und in einer Erweiterung des interventionistischen Ansatzes für die Betonung des wissensordnenden Aspekts von Kausalrelationen herangezogen (Kapitel 3). Die Integration von kausalem und nicht-kausalem Wissen in einheitlichen Strukturen stellt einen Ansatz zur Lösung von Problemen der (kausalen) Entscheidungstheorie dar und ermöglicht gleichzeitig die Abbildung von logisch-mathematischen, synonymischen sowie reduktiven Zusammenhängen in operationalisierbaren Netzen der Belief Propagation (Kapitel 4).

Concrete Causation
Graphs as Models of Interventions

Concrete Causation

Play Episode Listen Later Jul 9, 2010 41:04


In this talk Roland Poellinger (Munich) gives an outline of Judea Pearl's deterministic approach towards causation (workshop "Concrete Causation", 9 July, 2010). The title of the talk is taken from the programmatic section 2.2 of Pearl's paper "Causal Diagrams for Empirical Research" (Biometrika, Vol. 82, No. 4, 669-709, 1995) which is briefly sketched and commented on as an introduction to Pearl's interventionist account of causal analysis. Further topics: The problems of simple causal networks, interventions as variables, Humphreys' paradox, and causal decision making.

Kenan Institute for Ethics: Speeches and Panels
The Daniel Pearl Dialogue for Muslim-Jewish Understanding, featuring Akbar Ahmed & Judea Pearl

Kenan Institute for Ethics: Speeches and Panels

Play Episode Listen Later Jun 21, 2007 104:46


Kenan Institute for Ethics: Speeches and Panels
The Daniel Pearl Dialogue for Muslim-Jewish Understanding, featuring Akbar Ahmed & Judea Pearl

Kenan Institute for Ethics: Speeches and Panels

Play Episode Listen Later Jun 21, 2007 104:46