Arvind Narayanan and Sayash Kapoor are well-regarded computer scientists at Princeton University and have just published a book with a provocative title, AI Snake Oil. Here I've interviewed Sayash and challenged him on this dismal title, for which he provides solid examples of predictive AI's failures. Then we get into the promise of generative AI.

Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.

Transcript with links to audio and external links to key publications

Eric Topol (00:06):
Hello, it's Eric Topol with Ground Truths, and I'm delighted to welcome the co-author of a new book, AI Snake Oil, and it's Sayash Kapoor, who has written this book with Arvind Narayanan of Princeton. And so welcome, Sayash. It's wonderful to have you on Ground Truths.

Sayash Kapoor (00:28):
Thank you so much. It's a pleasure to be here.

Eric Topol (00:31):
Well, congratulations on this book. What's interesting is how much you've achieved at such a young age. Here you are named in the inaugural TIME100 AI list as one of those eminent contributors to the field. And you're currently a PhD candidate at Princeton, is that right?

Sayash Kapoor (00:54):
That's correct, yes. I work at the Center for Information Technology Policy, which is a joint program between the computer science department and the School of Public and International Affairs.

Eric Topol (01:05):
So before you started working on your PhD in computer science, you already were doing this stuff, I guess, right?

Sayash Kapoor (01:14):
That's right. So before I started my PhD, I used to work at Facebook as a machine learning engineer.

Eric Topol (01:20):
Yeah, well you're taking it to a more formal level here. Before I get into the book itself, what was the background?
I mean, you did describe in the book why you decided to write a book, especially one entitled AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

Background to Writing the Book

Sayash Kapoor (01:44):
Yeah, absolutely. So I think for the longest time both Arvind and I had been looking at how AI works and how it doesn't work, what are the cases where people are somewhat fooled by the potential for this technology and fail to apply it in meaningful ways in their lives. As an engineer at Facebook, I had seen how easy it is to slip up or make mistakes when deploying machine learning and AI tools in the real world. And I had also seen that, especially when it comes to research, it's really easy to make mistakes, even unknowingly, that inflate the accuracy of a machine learning model. So as an example, one of the first research projects I did when I started my PhD was to look at the field of political science, in the subfield of civil war prediction. This is a field which tries to predict where the next civil war will happen, in order to be better prepared for civil conflict.

(02:39):
And what we found was that there were a number of papers that claimed almost perfect accuracy at predicting when a civil war will take place. At first this seemed astounding. If AI can really help us predict when a civil war will start, sometimes years in advance, it could be game changing. But when we dug in, it turned out that every single one of these claims, where people claimed that AI was better than two-decades-old logistic regression models, was not reproducible. And so, that set the alarm bells ringing for the both of us, and we dug in a little bit deeper and found that this was a pervasive issue across fields that were quickly adopting AI and machine learning.
We found, I think, over 300 papers, and the last time I compiled this list it was over 600 papers, that suffer from data leakage. That is when you train on the same data that you're evaluating your models on. It's sort of like teaching to the test. And so, the machine learning model seems like it does much better when you evaluate it on your own data compared to how it would really work in the real world.

Eric Topol (03:48):
Right. You say in the book, "the goal of this book is to identify AI snake oil - and to distinguish it from AI that can work well if used in the right ways." Now I have to tell you, it's kind of a downer book if you're an AI enthusiast, because there's not a whole lot of positive here. We'll get to that in a minute. But you break down the types of AI, which I'm going to challenge a bit, into three discrete areas: predictive AI, which you take a really harsh stance on, saying it will never work; generative AI, obviously the large language models that took the world by storm, although they were incubating for several years before ChatGPT came along; and content moderation AI. So maybe you could tell us about your breakdown into these three different domains of AI.

Three Types of AI: Predictive, Generative, Content Moderation

Sayash Kapoor (04:49):
Absolutely. I think one of our main messages across the book is that when we are talking about AI, often what we are really interested in are deeper questions about society. And so, our breakdown of predictive, generative, and content moderation AI reflects how these tools are being used in the real world today. So for predictive AI, one of the motivations for including it in the book as a separate category was that we found it often has nothing to do with modern machine learning methods. In some cases it can be as simple as decades-old linear regression or logistic regression tools. And yet these tools are sold under the package of AI.
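The data leakage Sayash described a moment ago, "teaching to the test," can be sketched in a few lines. This is a hypothetical toy with purely random labels (no real signal), not the pipeline of any study he mentions:

```python
import random

random.seed(0)

# 200 samples with 20 random features and coin-flip labels:
# there is no real signal, so honest accuracy should sit near 50%.
data = [([random.random() for _ in range(20)], random.randint(0, 1))
        for _ in range(200)]

def nearest_neighbor_predict(train, x):
    """1-nearest-neighbor: return the label of the closest training row."""
    best = min(train, key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], x)))
    return best[1]

train, test = data[:100], data[100:]

# Leaky evaluation: score the model on the very rows it was trained on.
leaky_acc = sum(nearest_neighbor_predict(train, x) == y
                for x, y in train) / len(train)

# Honest evaluation: score the model on held-out rows it has never seen.
honest_acc = sum(nearest_neighbor_predict(train, x) == y
                 for x, y in test) / len(test)

print(leaky_acc)   # 1.0: each training row's nearest neighbor is itself
print(honest_acc)  # near 0.5: there was never any signal to learn
```

Because the labels are pure noise, honest held-out accuracy hovers around chance, while the leaky evaluation reports a perfect score; that gap is exactly the inflation leakage produces.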
Advances that are being made in generative AI are sold as if they apply to predictive AI as well. Perhaps as a result, what we are seeing across dozens of different domains, including insurance, healthcare, education, criminal justice, you name it, is that companies have been selling predictive AI with the promise that we can use it to replace human decision making.

(05:51):
And I think that last part is where a lot of our issues really come down to, because these tools are being sold as far more than they're actually capable of. These tools are being sold as if they can enable better decision making for criminal justice. And at the same time, when people have tried to interrogate these tools, what we found is that they essentially often work no better than random, especially when it comes to consequential decisions such as automated hiring: deciding who gets called to the next round of a job interview, or whose CV is rejected as soon as they submit it. And so, these are very, very consequential decisions, and we felt like there is a lot of snake oil, in part because people don't distinguish between applications where we have seen tremendous advances, such as generative AI, and applications where essentially we've stalled for a number of decades and these tools don't really work as claimed by the developers.

Eric Topol (06:55):
I mean the way you partition that, the snake oil, which is a tough metaphor, and you even show the ad from 1905 of snake oil in the book. You're really getting at predictive AI and how it is using old tools and selling itself as some kind of breakthrough. Before I challenge that, and ask whether we're going to be able to predict things using generative AI, I would like to go through a few examples of how bad this has been. And since a lot of our listeners and readers are in the medical or biomedical world, I'll try to get to those.
So one of the first ones you mentioned, which I completely agree with, is the prediction of Covid from chest x-rays, and there were thousands of these studies that came out throughout the pandemic. Maybe you could comment about that one.

Some Flagrant Examples

Sayash Kapoor (08:04):
Absolutely. Yeah, so this is one of my favorite examples as well. So essentially Michael Roberts and his team at the University of Cambridge, a year or so into the pandemic, looked back at what had happened. I think at the time there were around 500 studies that they included in the sample. And they looked back to see how many of these would be useful in a clinical setting, beyond just the scope of writing a research paper. And they started out by using a simple checklist to see, okay, are these tools well validated? Are the training and testing data separate? And so on. So they ran through the simple checklist, and that excluded all but 60 of these studies from consideration. So apart from 60 studies, none of the others even passed very basic criteria for being included in the analysis. Now for these 60, if you had to guess how many were useful, I'm pretty confident most guesses would be wrong.

(09:03):
There were exactly zero studies that were useful in a clinically relevant setting. And the reasons for this, I mean in some cases the reasons were as bizarre as training a machine learning model to predict Covid where all of the positive samples, of people who had Covid, were from adults, but all of the negative samples, of people who didn't have Covid, were from children. And so, claiming that the resulting classifier can predict who has Covid is bizarre, because all the classifier is doing is looking at the chest x-ray and basically predicting which x-ray belongs to a child versus an adult. And so, this is the sort of error we saw. In some cases we saw duplicates in the training and test set.
So you have the same person being used both for training the model and for evaluating the model. So simply memorizing a given sample of x-rays would be enough to achieve very high performance. And so, for issues like these, I think all 60 of these studies proved to be not useful in a clinically relevant setting. And I think this is the type of pattern that we've seen over and over again.

Eric Topol (10:14):
Yeah, and I agree with you on that point. I mean that was really a flagrant example, and it would fulfill the title of your book, which as I said is a very tough title. But on page 29, and we'll have this in the post, you have a figure, the landscape of AI snake oil, hype, and harm. And the problem is there is nothing good in this landscape. So on the y-axis you have works, hype, snake oil going up, and on the x-axis you have benign and harmful. So the only thing you have that works and is benign is autocomplete, and I wouldn't say that works. And then under "works" you have facial recognition for surveillance, which is harmful. This is a pretty sobering view of AI. Obviously, there are many things that are working that aren't on this landscape. So I just would like to challenge, are you a bit skewed here, only fixating on bad things? Because this diagram is really rough. I mean, there's so much progress in AI, and you have in here the predicting of civil wars, and obviously cheating detection, criminal risk prediction, video interviews that are deepfakes, I mean a lot of problems, but you don't present any good things.

Optimism on Generative AI

Sayash Kapoor (11:51):
So to be clear, I think both Arvind and I are, somewhat paradoxically, optimistic about the future of generative AI. And so, the decision to focus on snake oil was a very intentional one from our end.
So in particular, at various places in the book we outline why we're optimistic and what types of applications we're optimistic about as well. And the reason we don't focus on them basically comes down to the fact that no one wants to read a book with 300 pages about the virtues of spellcheck or AI for code generation or something like that. But I completely agree and acknowledge that there are lots of positive applications that didn't make the cut for the book as well. That was because we wanted people to come to this from a place of skepticism so that they're not fooled by the hype.

(12:43):
Because essentially we see even these positive uses of AI being lost if people have unrealistic expectations of what an AI tool should do. And so, pointing out snake oil is almost a prerequisite for being able to use AI productively in your work environment. I can give a couple of examples of how we've manifested this optimism. One is AI for coding. Writing code is an application where I use AI a lot. I think almost half of the code I write these days is generated, at least the first draft, using AI. And yet if I did not know how to program, it would be a completely different question, right? Because for me, pointing out that, oh, this syntax looks incorrect, or this is not handling the data in the correct way, is as simple as looking at a piece of code, because I've done this a few times. But if I weren't an expert programmer, it would be completely disastrous, because even if the error rate is 5%, I would have dozens of errors in my code if I'm using AI to generate it.

(13:51):
Another example of how we've been using it in our daily lives is that Arvind has two little kids, and he's built a number of applications for his kids using AI. So he's a big proponent of incorporating AI into children's lives as a force for good, rather than having a completely hands-off approach.
And I think both of these are just two examples, but I would say a large amount of our work these days occurs with the assistance of AI. So we are very much optimistic. And at the same time, one of the biggest hindrances to actually adopting AI in the real world is not understanding its limitations.

Eric Topol (14:31):
Right. Yeah, you say in the book, quote, "the two of us are enthusiastic users of generative AI, both in our work and our personal lives." It just doesn't come through as far as the examples. But before I leave the troubles of predictive AI, I'd like to get into a few more examples, because that's where your book shines in convincing us that we have some trouble here and we need to be completely aware. So one of the most famous, well, there are a couple we're going to get into, but one I'd like to review with you, it's in the book, is the prediction of sepsis in the Epic model. So as you know very well, Epic is the most widely used electronic health record system in US health systems, and it launched, without ever publishing the algorithm, a model that would predict whether a hospitalized patient had sepsis or was at risk of sepsis. Maybe you could take us through that, what you do in the book, and it truly was a fiasco.

The Sepsis Debacle

Sayash Kapoor (15:43):
Absolutely. So I think back in 2016/2017, Epic came up with a system that would help healthcare providers predict which patients are most at risk of sepsis. And again, this is a very important problem. Sepsis is one of the leading causes of death worldwide, and even in the US. And so, if we could fix that, it would be a game changer. The problem was that there were no external validations of this algorithm for the next four years. So for four years, between 2017 and 2021, the algorithm was used by hundreds of hospitals in the US without external validation. And in 2021, a team from the University of Michigan did a study in their own hospital to see what the efficacy of the sepsis prediction model is.
They found that Epic had claimed an AUC of between 0.76 and 0.83, while the actual AUC was closer to 0.6; an AUC of 0.5 is equivalent to making guesses at random.

(16:42):
So this was much, much worse than the company's claims. And even after that, it still took a year for Epic to roll back this algorithm. So at first, Epic's claim was that this model works well and that's why hospitals are adopting it. But then it turned out that Epic was actually incentivizing hospitals to adopt sepsis prediction models. I think they were giving credits of hundreds of thousands of dollars in some cases if a hospital satisfied a certain set of conditions, and one of these conditions was using a sepsis prediction model. And so, we couldn't really take their claims at face value. And finally, in October 2022, Epic essentially rolled back this algorithm. So they went from a one-size-fits-all sepsis prediction model to a model that each hospital has to train on its own data, an approach which I think is more likely to work, because each hospital's data is different. But it's also more time consuming and expensive for the hospitals, because all of a sudden you now need your own data analysts to roll out this model and to monitor it.

(17:47):
I think this study also highlights many of the more general issues with predictive AI. These tools are often sold as if they're replacements for an existing system, but then when things go bad, essentially they're replaced with tools that do far less. And companies often go back to the fine print, saying that, oh, we should always deploy it with a human in the loop, or, oh, it needs to have these extra protections that are not our responsibility, by the way.
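For readers who do not work with AUC daily, a minimal sketch shows why an AUC of 0.5 corresponds to random guessing. The scores below are invented for illustration and have nothing to do with Epic's actual model or data:

```python
import random

random.seed(1)

def auc(scores, labels):
    """Probability that a randomly chosen positive case is scored above a
    randomly chosen negative one (ties count as half), which is the
    standard reading of AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [random.randint(0, 1) for _ in range(2000)]

# Scores that carry no information about the label: AUC lands near 0.5.
random_scores = [random.random() for _ in labels]
print(round(auc(random_scores, labels), 2))

# Scores that track the label, with noise: AUC rises well above 0.5.
informative_scores = [y + random.gauss(0, 1) for y in labels]
print(round(auc(informative_scores, labels), 2))
```

Read this way, the drop from a claimed 0.76 to 0.83 down to an observed 0.6 means the deployed model ranked a true sepsis case above a non-case only modestly more often than a coin flip would.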
And I think that gap between what developers claim and how the tool actually works is what is most problematic.

Eric Topol (18:21):
Yeah, no, I mean it's an egregious example, and again, it fulfills what we discussed with statistics, but even worse, because it was marketed and it was financially incentivized, and there's no doubt that some patients were completely miscategorized and potentially hurt. The other one, another classic example that went south, is the Optum UnitedHealth algorithm. Maybe you could take us through that one as well, because that is yet another just horrible case of how people were discriminated against.

The Infamous Optum Algorithm

Sayash Kapoor (18:59):
Absolutely. So Optum, another health tech company, created an algorithm to prioritize high-risk patients for preemptive care. So I think it was around when Obamacare was being introduced that insurance networks started looking into how they could reduce costs. And one of the main ways they identified to reduce costs is preemptively caring for patients who are extremely high risk. So in this case, they decided to place 3% of the patients in the high-risk category, and they built a classifier to decide who is highest risk, because once you have identified these patients, you can proactively treat them. There might be fewer emergency room visits, there might be fewer hospitalizations, and so on. So that's all fine and good. But what happened when they implemented the algorithm? Every machine learning model needs a target variable, what is being predicted at the end of the day. What they decided to predict was cost: how much a patient would pay, how much they would be charged, what cost the hospital would incur if it admitted the patient.

(20:07):
And they essentially used that to predict who should be prioritized for healthcare. Now unsurprisingly, it turned out that white patients often pay a lot more, or are able to pay a lot more, when it comes to hospital visits.
Maybe it's because of better insurance, or better conditions at work that allow them to take leave, and so on. But whatever the mechanism, what ended up happening with this algorithm was that Black patients with the same health status were roughly half as likely as white patients to be enrolled in this high-risk program. So they were much less likely to get this proactive care. And this was a fantastic study by Obermeyer et al., published in Science in 2019. Now, what I think is the most disappointing part of this is that Optum did not stop using the algorithm after the study was released. And that was because, in some sense, the algorithm was working precisely as expected. It was an algorithm that was meant to lower healthcare costs. It wasn't an algorithm that was meant to provide better care for the patients who need it most. And so, even after this study came out, I think Optum continued using this algorithm as is. And as far as I know, even today this algorithm, or some version of it, is still in use across the network of hospitals that Optum serves.

Eric Topol (21:31):
No, it's horrible. The fact that it was exposed by Ziad Obermeyer's paper in Science and that nothing has been done to change it is extraordinary. I mean, it's just hard to imagine. Now, you do summarize the five reasons predictive AI fails in a nice table; we'll put that up on the post as well. And I think you've kind of reviewed those through these case examples. So now I get to challenge you about predictive AI, because I don't know that there's such a fine line between that and generative AI or large language models. So as you know, the group at DeepMind, and now others, have done weather forecasting with multimodal large language models and have come up with some of the most accurate weather forecasting we've ever seen. And I've written a piece in Science about medical forecasting.
Again, taking all the layers of a person's data and trying to predict if they're at high risk for a particular condition, including not just their electronic record, but their genomics, proteomics, their scans and labs, and on and on, environmental exposures.

Multimodal A.I. in Medicine

(22:44):
So I want to get your sense about that, because this is now a coalescence of where you took down predictive AI, for good reasons, and now these much more sophisticated models that are integrating not just large data sets, but truly multimodal ones. Now, some people think multimodal means only text, audio, speech, and video images, but here we're talking about multimodal layers of data, as in the weather forecasting model or earthquake prediction or other things. So let's get your views on that, because they weren't really presented in the book. I think they're a positive step, but I want to see what you think.

Sayash Kapoor (23:37):
No, absolutely. I think maybe the two questions are slightly separate, in my view. So for things like weather forecasting, I think weather forecasting is a problem that's extremely tenable for generative AI, or for making predictions about the future. And one of the key differences there is that we don't have the problem of feedback loops with humans. We are not making predictions about individual human beings; we are rather making predictions about meteorological outcomes. We have good differential equations that we've used to predict them in the past, and those are already pretty good. But I do think deep learning has taken us one step further. So in that sense, that's an extremely good example of what doesn't really fit within the context of the chapter, because there we are thinking about decisions about individual human beings.
And you rightly point out that that's not really covered within the chapter.

(24:36):
For the second part, about incorporating multimodal data, genomics data, everything about an individual, I think that approach is promising. What I will say, though, is that so far we haven't seen it used for making individual decisions, and especially consequential decisions about human beings, because oftentimes what ends up happening is we can make very good predictions. That's not in question at all. But even with these good predictions about what will happen to a person, sometimes intervening on the decision is hard, because we treat prediction as a problem of correlations, but making decisions is a problem of causal estimation. And that's where those two approaches disentangle a little bit. So one of my favorite examples of this is a model that was used to predict who should be released when someone comes in with symptoms of pneumonia. So let's say a patient comes in with symptoms of pneumonia. Should you release them the same day?

(25:39):
Should you keep them in the hospital, or should you transfer them to the ICU? And these ML researchers were basically trying to solve this problem. They found that the neural network model they developed, this was two decades ago, by the way, was extremely accurate at predicting who would be at high risk of complications from pneumonia. But it turned out that the model was essentially saying that anyone who comes in with asthma and symptoms of pneumonia is the lowest-risk patient. Now, why was this? It was because in the past training data, when such patients would come into the hospital, they would be transferred directly to the ICU, because the healthcare professionals realized it could be a serious condition.
And so, it turned out that patients who had asthma and came in with symptoms of pneumonia were actually the lowest risk in the population, because they were taken such good care of.

(26:38):
But now if you use this prediction, that a patient who comes in with symptoms of pneumonia and has asthma is low risk, to make a decision to send them back home, that could be catastrophic. And I think that's the danger of using predictive models to make decisions about people. Now, again, the scope and consequences of decisions vary. So you could think of using this to surface interesting patterns in the data, especially at a larger statistical level, to see how certain subpopulations behave or how certain groups of people are likely to develop symptoms, or whatever. But as soon as it comes to making decisions about people, the paradigm of problem solving changes, because as long as we are using correlational models, it's very hard to say what will happen if we change the conditions, what will happen if the decision-making mechanism is very different from the one under which the data was collected.

Eric Topol (27:37):
Right. No, I mean, where we agree on this is that at the individual level, using multimodal AI with all these layers of data that have now recently become available, or should be available, has to be compared, ideally in a randomized trial, with the standard of care today, which doesn't use any of that, to see whether the decisions made change the natural history and provide an advantage. That's yet to be done. And I agree, it's a very promising pathway for the future. Now, I think you have done what is a very comprehensive sweep of the predictive AI failures. You've mentioned here in our discussion, and in the book, your enthusiasm about generative AI's positive features, and hope, and perhaps even excitement.
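Before the conversation turns to content moderation, the asthma and pneumonia story above can be made concrete with a tiny simulation. All numbers here are invented for illustration; the point is only the mechanism Sayash describes, where a past treatment policy makes the truly highest-risk group look safest in the historical data:

```python
import random

random.seed(2)

def simulate_patient():
    """One patient under the HISTORICAL policy: asthma cases go to the ICU."""
    asthma = random.random() < 0.2
    icu = asthma  # old policy: asthma plus pneumonia symptoms means straight to the ICU
    base_risk = 0.4 if asthma else 0.1       # untreated risk is higher with asthma
    risk = base_risk * (0.1 if icu else 1.0)  # ICU care sharply cuts that risk
    return asthma, random.random() < risk

patients = [simulate_patient() for _ in range(20000)]

def bad_outcome_rate(group):
    return sum(bad for _, bad in group) / len(group)

observed_asthma = bad_outcome_rate([p for p in patients if p[0]])
observed_no_asthma = bad_outcome_rate([p for p in patients if not p[0]])

# In the historical data, the truly high-risk group looks SAFEST,
# purely because the old policy treated them aggressively.
print(observed_asthma)     # around 0.04
print(observed_no_asthma)  # around 0.10
```

A correlational model trained on this data would label asthma patients low risk, yet sending them home would expose them to the much higher untreated risk; the correlation reflects the old ICU policy, not what would happen under a new send-home policy.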
We haven't yet discussed much of the content moderation AI that you have discretely categorized. Maybe you could just give us the skinny on your sense of that.

Content Moderation AI

Sayash Kapoor (28:46):
Absolutely. So content moderation AI is AI that's used to clean up social media feeds. Social media platforms have a number of policies about what's allowed and not allowed on the platforms. Simple things such as spam are obviously not allowed, because if people start spamming the platform, it becomes useless for everyone. But then there are other things, like hate speech or nudity or pornography, which are also disallowed on most if not all social media platforms today. And a lot of the ways in which these policies are enforced today is using AI. So you might have an AI model that runs every single time you upload a photo to Facebook, for instance. And not just one, perhaps hundreds of such models, to detect whether it has nudity or hate speech or any of these other things that might violate the platform's terms of service.

(29:40):
So content moderation AI is AI that's used to make these decisions. And very often in the last few years, we've seen that when something gets taken down, for instance Facebook deletes a post, people often blame the AI for having a poor understanding, let's say, of satire, or for not understanding what's in the image; they basically say that their post was taken down because of bad AI. Now, there have been many claims that content moderation AI will solve social media's problems. In particular, we've heard claims from Mark Zuckerberg, who in a Senate testimony back in 2018 said that AI is going to solve most if not all of their content moderation problems. So our take on content moderation AI is basically this: AI is very, very useful for solving the simple parts of content moderation. What is a simple part?
So basically the simple parts of content moderation are where, let's say, you have large training data of the same type of policy violation on a platform like Facebook.

(30:44):
If you have large data sets, and if these data sets have a clear line in the sand, for instance with nudity or pornography, it's very easy to create classifiers that will automate this. On the other hand, the hard part of content moderation is not actually creating these AI models. The hard part is drawing the line. So when it comes to what is allowed and not allowed on platforms, these platforms are essentially making decisions about speech. And that is a topic that's extremely fraught. It's fraught in the US, and it's also fraught globally. And essentially these platforms are trying to solve this really hard problem at scale. So they're trying to come up with rules that apply to every single user of the platform, over 3 billion users in the case of Facebook. And this inevitably involves trade-offs about what speech is allowed versus disallowed that are hard to settle one way or the other.

(31:42):
They're not black and white. And what we think is that AI has no place in this hard part of content moderation, which is essentially human. It's essentially about adjudicating between competing interests. And so, when people claim that AI will solve the many problems of content moderation, what they're often missing is that there's an extremely large number of things you need to do to get content moderation right. AI solves one of these dozen or so things, which is detecting and taking down content automatically, but all of the rest of it involves essentially human decisions. And so, this is the brief gist of it. There are also other problems. For example, AI doesn't really work so well for low-resource languages, and it doesn't really work so well when it comes to nuance, as we discussed in the book.
But we think some of these challenges are solvable in the medium to long term. These questions around competing interests and power, though, I think are beyond the domain of AI, even in the medium to long term.

Age 28! and Career Advice

Eric Topol (32:50):
No, I think you nailed that. I think this is an area where you've really aptly characterized and shown the shortcomings of AI, and how the human factor is so critically important. So what's extraordinary here is that you're just 28 and you are rocking it, with publications all over the place on reproducibility, transparency, evaluating generative AI, AI safety. You have a website on AI snake oil where you're collecting more things, writing more things, and of course you have the experience of having worked in the IT world at Facebook, and also, I guess, Columbia. So you're off to the races as one of the really young leaders in the field. And I am struck by that, and maybe you could comment on the inspiration you might provide to other young people. You're the youngest person I've interviewed for Ground Truths, by the way, by a pretty substantial margin, I would say. And this is a field that attracts so many young people. So maybe you could just talk a bit about your career path and your advice for people. They may be the kids of some of our listeners, but they also may be some of the people listening as well.

Sayash Kapoor (34:16):
Absolutely. First, thank you so much for the kind words. I think a lot of this work is with collaborators, without whom, of course, I would never be able to do it. I think Arvind is a great co-author and supporter. In terms of my career path, it was sort of a zigzag, I would say.
It wasn't clear to me when I was an undergrad if I wanted to do grad school or go into the industry, and I sort of on a whim went to work at Facebook, and it was because I'd been working on machine learning for a little bit of time, and I just thought, it's worth seeing what the other side has to offer beyond academia. And I think that experience was very, very helpful. One of the things, I talk to a lot of undergrads here at Princeton, and one of the things I've seen people be very concerned about is, what is the grad school they're going to get into right after undergrad?(35:04):And I think it's not really a question you need to answer now. I mean, in some cases I would say it's even very helpful to have a few years of industry experience before getting into grad school. That has definitely, at least, been my experience. Beyond that, I think working in a field like AI, I think it's very easy to be caught up with all of the new things that are happening each day. So I'm not sure if you know, but AI has, I think, over 500-1,000 new arXiv papers every single day. And with this rush, I think there's this expectation that you might put on yourself that being successful requires a certain number of publications or a certain threshold of things. And I think more often than not, that is counterproductive. So it has been very helpful for me, for example, to have collaborators who are thinking long term, so this book, for instance, is not something that would be very valued within the CS community, I would say. I think the CS community values peer-reviewed papers a lot more than they do books, and yet we chose to write it because I think the staying power of a book or the longevity of a book is much more than any single paper could have.
So the other concrete thing I found very helpful is optimizing for a different metric compared to what the rest of the community seems to be doing, especially when it comes to fast moving fields like AI.Eric Topol (36:29):Well, that last piece of advice is important because I think too often people, whether it's computer scientists, life scientists, whoever, they don't realize that their audience is much broader. And that reaching the public with things like a book or op-eds or essays, varied ways that are intended for public consumption, not for, in this case, computer scientists. So that's why I think the book is a nice contribution. I don't like the title because it's so skewed. And also the content is really trying to hammer it at home. I hope you write a sequel book on the positive sides of AI. I did want to ask you, when I read the book, I thought I heard your voice. I thought you had written the book, and Arvind maybe did some editing. You wrote about Arvind this and Arvind that. Did you write the first draft of the book and then he kind of came along?Sayash Kapoor (37:28):No, absolutely not. So the way we wrote the book was we basically started writing it in parallel, and I wrote the first draft of half the chapters and he wrote the first draft of the other half, and that was essentially all the way through. So we would sort of write a draft, pass it to the other person, and then keep doing this until we sent it to our publishers.Eric Topol (37:51):Okay. So I guess I was thinking of the chapters you wrote where it came through. I'm glad that it was a shared piece of work because that's good, because that's what co-authoring is all about, right? Well, Sayash, it's really been a joy to meet you and congratulations on this book. I obviously have expressed my objections and my disagreements, but that's okay because this book will feed the skeptics of AI. They'll love this. 
And I hope that the positive side, which I think is under expressed, will not be lost and that you'll continue to work on this and be a conscience. You may know I've interviewed a few other people in the AI space who, like you, are trying to assure its safety, its transparency, the ethical issues. And I think we need folks like you. I mean, this is what helps get it on track, keeping it from getting off the rails or doing what it shouldn't be doing. So keep up the great work and thanks so much for joining.Sayash Kapoor (39:09):Thank you so much. It was a real pleasure.************************************************Thanks for listening, reading or watching!The Ground Truths newsletters and podcasts are all free, open-access, without ads.Please share this post/podcast with your friends and network if you found it informative!Voluntary paid subscriptions all go to support Scripps Research. Many thanks for that—they greatly helped fund our summer internship programs for 2023 and 2024.Thanks to my producer Jessica Nguyen and to Sinjun Balabanoff for audio and video support at Scripps Research.Note: you can select preferences to receive emails about newsletters, podcasts, or all of them; I don't want to bother you with an email for content that you're not interested in. Get full access to Ground Truths at erictopol.substack.com/subscribe
On this episode of Eclipse on Tap, we welcome first-time guest Don Lee to Pub 39A Studios and welcome back Bryan Obermeyer to discuss their 2024 totality experience in Ohio. From seeking out mountain bike trails in advance, to finding the perfect spot for viewing totality, their stories provide great conversation on Episode 77. Available now on your favorite podcast platforms. Give us a follow on our social media pages at @eclipseontap [Episode recorded live from Pub 39A on 6/6/24. Produced by Matt Deighton]
Great interview with Andreas Obermeyer about employing and retaining the right person in transport and logistics. Mr. Obermeyer is an expert with a wide horizon on the global labor market in logistics; a first-class consultant who works with la crème of logistics in Europe. Born and raised in Basel, he completed his apprenticeship in freight forwarding after his “Matura”, i.e. the secondary-school leaving certificate awarded in Switzerland. After positions at Schenker and Danzas, world famous in those years, Andreas started to work in consulting in 1997, and since 2001 in HR executive search and personnel consultancy. The conversation with Marco Sorgetti, read by Geoffrey Arend, is 20 minutes of power & interest.
Karl Obermeyer is from the Twin Cities rock band Capital Sons and he joins the show to share some memories and pay tribute to the late Mike Jueneman (former drummer of the band), and Karl will also talk about the busy summer ahead for the band. Then, Trevor is joined by friend Tim Lyngen and the two recap the Minnesota Timberwolves season. To close the show, Trevor has his Parting Gift Commentary where he offers some encouragement.
Peter Olenick changed halfpipe skiing when he landed the first double in competition. His “Whiskey Flip” defined Peter's life on and off snow; he went big, took chances, and partied his face off during his legendary career. He was on top of the contest world during one of the most fun eras. On the podcast, we talk about not being cool growing up, Aspen, finding confidence at High North Ski Camp, success, partying, and a lot more. Colby James West asks the ‘Inappropriate Questions.' Peter Olenick Show Notes: 4:00: Biggest birthday party, JOSS, Playboy Mansion, Aspen, different ski coaches, the joy of divorce, Steele rivalry, and the twin tip revolution 21:00: Liquid Force: Since 95, Liquid Force has outperformed the competition and turned a sport into a lifestyle. Use the code POWELL15 for 15% off LF orders at LiquidForce.com Stanley: Save 30% off at Stanley1913.com Using the code SNOW30 at checkout Best Day Brewing: All of the flavor of your favorite IPA or Kolsch, without the alcohol, the calories and sugar. 24:00: High North Ski Camp changed his life, Obermeyer, X-Games to Aspen, trying to qualify, partying, and filming. Elan Skis: Over 75 years of innovation that makes you better. 41:00: Peter Glenn Ski and Sports: Over 60 years of getting you out there. Outdoor Research: Click here for 25% off Outdoor Research products (not valid on sale items or pro products) 44:00: His Crew, finally qualifying for X in slope and pipe, X Games, contest mentality, sponsors, and money 50:00: Pranks, biggest 10% rule, X Games Gold, changing the sport, Sarah Burke, and the end of professional skiing 77:00: Inappropriate Questions with Colby James West
Using AI in healthcare comes with a lot of promise - but access to data, lack of clarity about who will pay for these tools and the challenge of creating algorithms without bias are holding us back. In 2023, TIME named Dr. Ziad Obermeyer one of the 100 most influential people working in AI. As a professor at UC Berkeley School of Public Health, and the co-founder of a non-profit and a startup in the AI healthcare space, his work centers on how to leverage AI to improve health and avoid racial bias. We discuss:The idea of a safe harbor for companies to discuss and resolve AI challengesHow his company Dandelion Health is helping solve the data log jam for AI product testingWhy academics need to spend time “on the shop floor”The simple framework for avoiding AI bias he shared in his recent testimony to the Senate Finance Committee Ziad says without access to the right data, AI systems can't offer equitable solutions: “I think data is the biggest bottleneck to these things, and that bottleneck is even more binding in less well-resourced hospitals… When we look around and we see, ‘well, there are all these health algorithms that are in medical journals and people are publishing about them'. The majority of those things come from Palo Alto, Rochester, Minnesota [and] Boston. And, those patients are wonderful and they deserve to have algorithms trained on them and learning about them, but they are not representative of the rest of the country – let alone the rest of the world. And so, we have these huge disparities in the data from which algorithms are learning. And then those mirror the disparities and where algorithms can be applied.”Relevant LinksDr. Obermeyer's profile at UC Berkeley School of Public HealthZiad Obermeyer's testimony to the Senate Finance Committee on how AI can help healthcareMore about Nightingale Open ScienceMore about Dandelion HealthArticle on dissecting racial bias in algorithmsArticle On the Inequity of Predicting A While Hoping for B. 
AER: P&P 2021 (with Sendhil Mullainathan)About Our GuestDr. Ziad Obermeyer is the Blue Cross of California Distinguished Associate Professor of Health Policy and Management at UC Berkeley School of Public Health. His research uses machine learning to help doctors make better decisions, and help researchers make new discoveries—by ‘seeing' the world the way algorithms do. His work on algorithmic racial bias has impacted how many organizations build and use algorithms, and how lawmakers and regulators hold AI accountable. He is a cofounder of Nightingale Open Science and Dandelion Health, a Chan Zuckerberg Biohub Investigator, a Faculty Research Fellow at the National Bureau of Economic Research, and was named one of the 100 most influential people in AI by TIME. Previously, he was...
On this episode, we welcome back a special guest to the podcast: Bryan Obermeyer, organizer of the Grattan Race Series and the Eclipse on Tap cycling team's Director Sportif. The Grattan Race Series is entering its 44th year this May of bringing premier exhibition road racing to west Michigan at Grattan Raceway in Grattan Twp, MI. In the first half, we hype up the upcoming 2024 Grattan Race Series and share a few space-themed beers. Most importantly, we provide updates to our fluid 2024 totality plan. After further consideration, we have ultimately decided to rent an RV! In the second half, we reminisce over fun cycling experiences over the years at Grattan, talk about the 2099 eclipse that will pass right over the raceway, and close out with a tasty Underbru. Available now on your favorite podcast platforms. Give us a follow on our social media pages at @eclipseontap [Episode recorded live at Pub39A Studios on 2/26/24. Produced by Matt Deighton]
In this episode of the Indiana Pioneer Agronomy podcast, hosts Carl Joern and Ben Jacob discuss entomology and pest management with Dr. John Obermeyer. Dr. Obermeyer is an entomologist and Integrated Pest Management Supervisor with Purdue University Extension. The trio dive into cold weather and the impact it has on soil and crop pests.

Resources: Insect Survival in Cold and/or Saturated Conditions: https://extension.entm.purdue.edu/newsletters/pestandcrop/article/insect-survival-in-cold-and-or-saturated-conditions-chill-and-dont-breathe/
James Evans' life is one resplendent with ideas. His trajectory into research and learning in areas as wide as network science, collective intelligence, computational social science, and even how knowledge is created, is as irreducible as it is exhilarating, and is a beacon in disorienting times marked by seemingly accelerating paces of change. Origins Podcast WebsiteFlourishing Commons NewsletterShow Notes:cultural and knowledge observatories (05:30)Mark Granovetter (09:15)Steve Barley (10:30)Woody Powell (10:30)Chris Summerfield (11:00)Some papers mentioned:Metaknowledge (17:10)Weaving the fabric of science: Dynamic network models of science's unfolding structure (18:30)Abduction (21:30)epistemic space (22:40)Claude Lévi-Strauss (24:20)Clifford Geertz (24:30)"Dissecting racial bias in an algorithm used to manage the health of populations" Obermeyer et al. (30:00)Scarcity Sendhil Mullainathan (35:00)The Knowledge Lab (36:00)"Quantifying the dynamics of failure across science, startups and security" Yin et al. (45:00)Charles Sanders Peirce (51:00)Pirkei Avot (56:00)Alison Gopnik on explore-exploit (01:02:30)Elise Boulding "the 200-year present" (01:03:00)Jo Guldi (01:06:00)Lightning Round (01:06:30):Book: The Enigma of ReasonPassion: physical exploration and spiritual callingHeart sing: 'social science fiction' and Hod LipsonScrewed up: management style at timesJames online:@profjamesevansThe Knowledge Lab'Five-Cut Fridays' five-song music playlist series James' playlistLogo artwork Cristina GonzalezMusic by swelo
On September 8th 2023, Trevor had the opportunity to talk with musician Karl Obermeyer from the Twin Cities rock band Capital Sons. In the interview the two discuss Karl's early musical inspiration, lineup changes within the band, the future of the band and also the current state of the music scene in the Twin Cities. Also on September 8th 2023, Trevor debuted a new commentary segment as an ode to Andy Rooney and also Jerry Springer. In this commentary, Trevor discusses the summer of the fragile, white, millennial and gen x'er man.
In this thought-provoking episode, Dr. Ziad Obermeyer delves into the complex issues of bias, safety, and generalizability of medical AI. Dr. Obermeyer emphasizes the importance of machine learning researchers' task formulation, an often-overlooked yet significant determinant of bias in AI algorithms. Highlighting the dual impact of machine learning, he compares two of his works that demonstrate how AI can either exacerbate or help mitigate health care disparities. Lastly, he discusses the significant challenges encountered in the development of AI models due to siloed and inaccessible data, sharing his own experiences and solutions in tackling this issue. Dr. Obermeyer is the Blue Cross of California Distinguished Professor at the Berkeley School of Public Health, Co-Founder of Nightingale Open Science, and Co-Founder of Dandelion Health. Transcript
On this week's episode of The Dose, host Joel Bervell speaks with Dr. Ziad Obermeyer, from the University of California Berkeley's School of Public Health, about the potential of AI in informing health outcomes — for better and for worse. Obermeyer is the author of groundbreaking research on algorithms, which are used on a massive scale in health care systems — for instance, to predict who is likely to get sick and then direct resources to those populations. But they can also entrench racism and inequality into the system. “We've accumulated so much data in our electronic medical records, in our insurance claims, in lots of other parts of society, and that's really powerful,” Obermeyer says. “But if we aren't super careful in what lessons we learn from that history, we're going to teach algorithms bad lessons, too.” Citations Dr. Ziad Obermeyer Dissecting racial bias in an algorithm used to manage the health of populations Nightingale Open Science
When Ziad Obermeyer was a resident in an emergency medicine program, he found himself lying awake at night worrying about the complex elements of patient diagnoses that physicians could miss. He subsequently found his way to data science and research and has since coauthored numerous papers on algorithmic bias and the use of AI and machine learning in predictive analytics in health care. Ziad joins Sam and Shervin to talk about his career trajectory and highlight some of the potentially breakthrough research he has conducted that's aimed at preventing death from cardiac events, preventing Alzheimer's disease, and treating other acute and chronic conditions. Read the episode transcript here. For more about Ziad: http://ziadobermeyer.com/research Nightingale Open Science: https://www.nightingalescience.org/ Dandelion Health: https://dandelionhealth.ai/ Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger. Stay in touch with us by joining our LinkedIn group, AI for Leaders at mitsmr.com/AIforLeaders or by following Me, Myself, and AI on LinkedIn. Guest bio: Dr. Ziad Obermeyer works at the intersection of machine learning and health. He is an associate professor and the Blue Cross of California Distinguished Professor at the University of California, Berkeley; a Chan Zuckerberg Biohub Investigator; and a faculty research fellow at the National Bureau of Economic Research. His papers have appeared in a wide range of journals, including Science, Nature Medicine, and The New England Journal of Medicine; his work on algorithmic bias is frequently cited in the public debate about artificial intelligence. 
He is a cofounder of Nightingale Open Science, a nonprofit that makes massive new medical imaging data sets available for research, and Dandelion, a platform for AI innovation in health. Obermeyer continues to practice emergency medicine in underserved communities. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials. We want to know how you feel about Me, Myself, and AI. Please take a short, two-question survey.
Don't worry, you haven't gone mad, this really is an upload from the Best Beer Podcast to EVER exist. Being well over a year since the last time the show aired, it's only fitting that we have the most special & highly regarded guest. Former host and Co-Founder of Three Beers Inn, it's none other than Robert Obermeyer ladies and gentlemen. This one is WAY off the rails. Best listened to whilst drunk *DRINK RESPONSIBLY AND AT YOUR OWN RISK*Welcome to Part One. Hosted on Acast. See acast.com/privacy for more information.
Randy Obermeyer is flipping the script on finding new diesel technicians. The newest Technology and Maintenance Council chair knows that being a truck mechanic is much more than a dirty job. Today, it requires a deep understanding of diagnostics, electric truck software, circuit boards, telematics and control modules. He also knows this multi-faceted type of talent is hard to come by, but for the people with that talent, he wagers, the job is worth it. Here's a look at the man who is reinventing technician recruiting. For information visit: https://roadsigns.ttnews.com/roadshow-episode-ten/ How'd we do? Give us your listening experience feedback here: https://docs.google.com/forms/d/e/1FAIpQLSdE2YN79GA4zB5BdD7qJoL11xYEqrVrXpZcwhARZgY03D9ntA/viewform?usp=sf_link Follow the RoadSigns: Twitter: @ttroadsigns LinkedIn: RoadSignspodcast Instagram: @roadsignspodcast Join RoadSigns mailing list: roadsigns.ttnews.com/join-the-mailing-list/ For sponsorship and guest inquiries please visit: https://roadsigns.ttnews.com/roadsigns-contact/
On this episode of Eclipse on Tap we talk space, beer, and cycling with Bryan Obermeyer. Bryan has been a longtime fixture of the Grand Rapids cycling community and organizes the annual Grattan Race Series: cycling's longest grand tour. We had a blast chatting about this season's exciting finale and transitioned into discussion surrounding NASA's recent DART mission in the second half. We wrap with some promotion for our 3rd annual UNDERGROUND-MAN event. Be sure to give us a follow on our social media pages at @eclipseontap and check out our website at www.eclipseontap.space [Episode recorded live at Pub39A Studios on 9/28/22. Produced by Matt Deighton]
In celebration of Mandela Day, Breakfast with Martin Bester and Jacaranda will be assisting Hospice with the help of Good Morning Angels. Vesna Chanel Obermeyer shares how Hospice assisted her mother!
Alexander Rosendahl and Daniel Obermeyer in our latest episode of #Lauschvisite!
Laura Obermeyer is a skier, photographer, filmmaker and artist. You may recognize her last name, as she is the granddaughter of Klaus Obermeyer, who founded the revolutionary outerwear brand that bears their name. She's very involved in the company but keeps that part of her life in the background. We have a great discussion about creativity vs. productivity, and about shifting gender dynamics within skiing. Rounding things out, we chat about her youth, splitting time between the mountains of New Hampshire and Colorado, growing up with horses, a big cowboy project with TGR, and lots more. Watch on YouTube - https://youtu.be/yzs55FPOPN4 All THE LPP - https://linktr.ee/LowPressurePodcast
Ziad Obermeyer is a Professor of Health Policy and Management at the UC Berkeley School of Public Health, where he conducts research at the intersection of machine learning, medicine, and health policy. Previously, he was a professor at Harvard Medical School and a consultant at McKinsey & Co. He continues to practice emergency medicine in underserved parts of the US and is also a co-founder of Nightingale Open Science, a computing platform giving researchers access to massive new health imaging datasets. In this episode, you'll hear how he ended up co-authoring the seminal study to identify bias in AI health systems, published in Science in 2019, and whether you should be using his Algorithmic Bias Playbook.

Links to referenced articles and playbook: http://ziadobermeyer.com/research/ and https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias
Glam & Grow - Fashion, Beauty, and Lifestyle Brand Interviews
On this week's episode of the Glam & Grow podcast, we chat with Tanya Obermeyer, Chief Operating Officer of gorjana. Since 2004, gorjana has been providing fine jewelry designed to mix, match, and layer - for endless self-expression. Being one of the first to introduce a wholesale website to their customers, gorjana has continued to focus on forward-thinking strategies to grow their brand. As you'll hear in today's episode, this has been crucial in providing their customers with the best experience possible both online and in stores.

In this episode, Tanya discusses:
gorjana's history and focus as a jewelry brand and her journey with the brand
The behind the scenes of strategically growing the gorjana brand using both e-commerce and retail stores
How the brand has leveraged the layering trend and established itself as ‘the layering authority'
What's next for gorjana

You'll also find out the challenges gorjana has faced during the pandemic and how their sales strategies have proven successful.

This episode is sponsored by Attentive. Attentive is a personalized text message marketing platform that lets you communicate with your customers in real-time, engage them with timely campaigns, and help your business drive revenue. Thousands of brands like CB2, Pura Vida, and Coach have created magical customer experiences and driven over 20% of their online revenue using Attentive-powered personalized text messages. And you, too, can turn SMS into one of your top-three revenue channels in just a few months. Visit attentivemobile.com/wavebreak to learn how you can try it for free.

This episode is also brought to you by Wavebreak. Leading direct-to-consumer brands hire Wavebreak to turn email marketing into a top revenue driver. Most eCommerce brands don't email right... and it costs them.
At Wavebreak, our eCommerce email marketing agency helps qualified stores recapture 6-7 figures of lost revenue each year. From abandoned cart emails to Black Friday campaigns, our best-in-class team of email specialists manage the entire process: strategy, design, copywriting, coding, and testing. All aimed at driving growth, profit, brand recognition, and most importantly, ROI. Curious if Wavebreak is right for you? Reach out at Wavebreak.co
How can we design AI systems which remove human bias — rather than perpetuate it? Ziad Obermeyer is Associate Professor and Blue Cross of California Distinguished Professor of Health Policy and Management at the Berkeley School of Public Health, where he researches and teaches at the intersection of machine learning and healthcare. Some of his most interesting research focuses on algorithmic bias, and how we can better build AI systems which avoid perpetuating and falling into these traps. We talk about his fascinating story, how he created an AI algorithm which actually reduced bias and superseded human performance, and some of the things he's learnt along the way. I hope you enjoy. Prof Obermeyer's new initiative: Nightingale Open Science. You can find me on Twitter @MustafaSultan and subscribe to my newsletter on www.musty.io
Interview and live performance from singer/songwriter Travis Obermeyer.
On this episode we talk with Heather Raney, a Product Manager for Obermeyer. We talk about how working retail helped her as a designer and manager, the secrets to a good portfolio, and her journey into the industry. Connect with Heather on LinkedIn and check out her portfolio. https://www.linkedin.com/in/heatherraney/ https://heatherraney.carbonmade.com/ Watch these conversations on YouTube! https://bit.ly/33SVb2O Listen to these conversations on the Highlander Podcast. https://opdd.usu.edu/podcast
Laura Obermeyer is an artist currently based out of Salt Lake City. When she's not taking photos of your favorite skiers, she's sketching pictures of cowboys, or making movies out in Japan. In this episode, we talked about growing up in Connecticut, moving out west after high school, linking up with Taylor Lundquist, getting into photography & drawing, and a bunch of other stuff. We wrap up with viewer questions, which can be submitted on our Instagram. @TwoPlankerPod https://www.instagram.com/twoplankerpod/ Spotify: https://open.spotify.com/show/4DoaAVYv69xAV50r8ezybK Apple Podcast: https://podcasts.apple.com/us/podcast/two-planker-podcast/id1546428207 Show Notes: 0:00:00 Ad read 0:00:30 Intro, Who are you and what do you do? 0:01:24 Early life, Klaus Obermeyer history, Ski Sundown 0:11:34 Moving to Aspen after high school, Working for Obermeyer and Newschoolers 0:22:56 Meeting Taylor Lundquist, Shooting Jyosei in Japan 0:49:54 Shooting a second movie 0:53:49 Drawing, Collaborations 1:07:04 Moving to Salt Lake City from Aspen 1:13:54 Listener questions, Goals and advice, Closing
Get .1 ASHA CEU here

Episode Summary:
Hey cog-com SLPs, looking for an episode that's just right? This one's for you! This week, the talented outpatient adult SLP and host of the Speech Uncensored podcast, Leigh Ann Porter, joins Kate to talk about modifying CARTs and spot-on ARCS in the creation of home programs for patients with mild aphasia. I promise, it will all make sense when you listen; Leigh Ann does an awesome job explaining so that even Kate (and I) can understand (haha, a little peds SLP humor, this stuff starts out as Greek to us!). You'll learn some tangible, holistic strategies to tackle patient needs across reading, writing, and speaking and get a good sense of how to stay within that magic “Goldilocks Zone” - not too hard, not too easy, just right for each individual client. Leigh Ann lays out a few down-to-earth home program ideas that build on a patient's strengths and foster the autonomy, independence, and the intrinsic motivation required for the hard work of rehab. And of course, there are great resources to explore as you implement these ideas, because Leigh Ann's got your back! Find a chair that's not too soft, some coffee that's not too hot, and cozy up for some nerdy aphasia learning!

Learn more about Leigh Ann here.

Learning Outcomes
1. Identify two evidence-based practices to use with patients with mild aphasia.
2. Describe how to modify treatment protocols to increase complexity level for mild aphasia.
3. Identify at least three resources for implementing treatment approaches with mild aphasia.

Online Resources:
The Speech Uncensored Podcast: Episode 107: Discourse Treatment in Aphasia Therapy: Attentive Reading Constrained Summarization (ARCS) with Yvonne Rogalski PhD, CCC-SLP
Episode 14: Meaningful Aphasia Therapy with Sarah Baar MA, CCC-SLP
Additional aphasia episodes on Speech Uncensored Podcast

References:
Beeson, P.M. (1999). Treating acquired writing impairment: Strengthening graphemic representations. Aphasiology, 13, 367-386.
Beeson, P.M., Hirsch, F., & Rewega, M. (2002). Successful single-word writing treatment: Experimental analysis of four cases. Aphasiology, 16, 473-491.
Beeson, P.M., Rising, K., & Volk, J. (2003). Writing treatment for severe aphasia. Journal of Speech, Language, and Hearing Research, 46, 1038-1060.
Beeson, P.M. & Egnor, H. (2006). Combining treatment for written and spoken naming. Journal of the International Neuropsychological Society, 12, 816-827.
Obermeyer, J. A., & Edmonds, L. A. (2018). Attentive Reading With Constrained Summarization Adapted to Address Written Discourse in People With Mild Aphasia. American Journal of Speech-Language Pathology, 27(1S), 392.
Obermeyer, J. A., Rogalski, Y., & Edmonds, L. A. (2019). Attentive Reading with Constrained Summarization-Written, a multi-modality discourse-level treatment for mild aphasia. Aphasiology, 1-26.
Rogalski, Y., Altmann, L., & Rosenbek, J. (2014). Retrieval practice and testing improve memory in older adults. Aphasiology, 28(4), 381-400.
Rogalski, Y. & Edmonds, L. (2008). Attentive Reading and Constrained Summarisation (ARCS) treatment in primary progressive aphasia: A case study. Aphasiology, 22, 763-775.
Rogalski, Y., Edmonds, L., Daly, V., & Gardner, M. (2013). Attentive Reading and Constrained Summarisation (ARCS) discourse treatment for chronic Wernicke's aphasia. Aphasiology, 27(10), 1232-125

Disclosures:
Leigh Ann Financial Disclosures: Leigh Ann is employed by The University of Kansas Health System and receives honoraria from SpeechTherapyPD.com.
Non-financial Disclosures: Leigh Ann is the host of the Speech Uncensored Podcast.
Kate Grandbois financial disclosures: Kate is the owner/founder of Grandbois Therapy + Consulting, LLC and co-founder of SLP Nerdcast.
Kate Grandbois non-financial disclosures: Kate is a member of ASHA, SIG 12, and serves on the AAC Advisory Group for Massachusetts Advocates for Children.
She is also a member of the Berkshire Association for Behavior Analysis and Therapy (BABAT), MassABA, the Association for Behavior Analysis International (ABAI) and the corresponding Speech Pathology and Applied Behavior Analysis SIG. Time Ordered Agenda:10 minutes: Introduction, Disclaimers and Disclosures20 minutes: Descriptions of evidence-based practices to use with patients with mild aphasia. 15 minutes: Descriptions of modified treatment protocols to increase complexity level for mild aphasia.10 minutes: Descriptions of resources for implementing treatment approaches with mild aphasia5 minutes: Summary and ClosingDisclaimerThe contents of this episode are not meant to replace clinical advice. SLP Nerdcast, its hosts and guests do not represent or endorse specific products or procedures mentioned during our episodes unless otherwise stated. We are NOT PhDs, but we do research our material. We do our best to provide a thorough review and fair representation of each topic that we tackle. That being said, it is always likely that there is an article we've missed, or another perspective that isn't shared. If you have something to add to the conversation, please email us! We'd love to hear from you!__SLP Nerdcast is a podcast for busy SLPs and teachers who need ASHA continuing education credits, CMHs, or professional development. We do the reading so you don't have to! Leave us a review if you feel so inclined!We love hearing from our listeners. Email us at info@slpnerdcast.com anytime! You can find our complaint policy here. You can also:Follow us on instagramFollow us on facebookWe are thrilled to be listed in the Top 25 SLP Podcasts!Thank you FeedSpot!
Dr. Thomas Obermeyer, an orthopedic surgeon with Barrington (Ill.) Orthopedic Specialists, joined the podcast to share his career journey and discuss the shift to ASCs, bundled payments and more.
In honor of International Women's Month, today's episode of About Your Mother brings you a story highlighting the strength and power of maternal lineage. In this dual interview, we celebrate the life of Vera Obermeyer, who recently passed away due to COVID. Our guests are here to talk about her long and colorful life full of purpose. The conversation is with Vera's daughter and granddaughter, Sarai Obermeyer and Amy Kelly. Listen as Sarai and Amy share stories of Vera and her strong, maternal influence on them. They also share some of the family's traumatic past and how it inspired them to lend their voice to those who need it the most. Breaking the Mold Sarai remembers her mother's advocacy of women's rights when there were hardly any. Vera broke the mold of her time as a mother, career woman, and strong voice for equality. Yet, she did not aim to bring anyone down but to lift everyone to equal status. "There was an understanding that women should have the right and access to fulfill their potential. But that did not mean that when I didn't mind the rights of men, you would want men and boys also to fulfill their potential." - Sarai Obermeyer Vera's views and the virtues she had instilled in them have also led them to a life of helping others and fighting for the marginalized and oppressed. Relationships Over Everything Sarai takes us through her memories with her mother and how she raised her children and nurtured a career. While it was a big undertaking, Sarai understood that for Vera, having a job was an essential thing in her life. She also reveals that her mother valued relationships over everything. She formed powerful bonds with every person she held dear, as Sarai found out when she talked to one of her friends: "When I was speaking to her after my mother passed away, she was just tearful. It was so sad. You can just feel the beautiful friendship they had and she then lost by my mother passing away. When you think about it: from 10 to 91... 
an 81-year-old friendship. How many people have an 81-year-old friendship? Not many." - Sarai Obermeyer Follow Your Instincts "I think being critical, following your own instincts, and making your own choices is really important." - Amy Kelly Amy shares her grandmother's experiences when raising her children in the 1950s. Women were expected to follow a particular way of life, but Vera didn't go with the flow. She relied on her instincts and what she thought was right. Naturally, people who expect others to conform did not like that. "People thought she was crazy. They really thought she was just beating to her own drum. Yet she just knew the whole time, she just followed her own instincts and made her own decisions with what she felt was right versus what society tells you is right." - Amy Kelly Despite being a woman with strong opinions, Vera never forcefully imposed her own views on her children. She let them choose their own course in life and supported them wholeheartedly. Yet, she was always there to ask the right questions and help them consider their options and think critically at all times. To learn more about Sarai Obermeyer & Amy Kelly and how one woman inspired them to be better, download and listen to this episode. Bio: About Amy Kelly Amy Kelly is a licensed Marriage and Family Therapist specializing in Child, Adolescent, and Reunification therapy. She graduated from UC Davis with a BA in Psychology, SF State with an MFT in Clinical Psychology and completed CE with the American Academy of Pediatrics. Amy is a member of CAMFT and is featured on Psychology Today Profile and GoodTherapy.org. She has been published on TherapyToday writing on social media use and Reunification therapy. About Sarai Obermeyer Sarai Obermeyer was a Deputy District Attorney at Solano County District Attorney's Office. Sarai focused on preventing violence and stopping discrimination in order to better humani...
This episode is an audio version of a video interview conducted by the Journal’s Editor in Chief, Dr Audiey Kao, with Dr Ziad Obermeyer, the Blue Cross of California Distinguished Associate Professor of Health Policy and Management at the UC Berkeley School of Public Health. Dr Obermeyer joined us to talk about the potential impact on mortality of cost-sharing practices of health insurers. To watch the full video interview, head to our site, JournalOfEthics.org, or visit our YouTube channel.
Walter Obermeier is Managing Director of UiPath GmbH in Germany. Walter has deep know-how in the area of business processes. In our podcast we talk about Walter's career path, RPA use cases, and the professional opportunities that are emerging now and in the future. Do you have questions? Would you like to give us feedback? Are you interested in training on the topic of automation? Are you an automation expert and would like to join us on the podcast? Just get in touch: E-mail: podcast@botsandpeople.com LinkedIn Olli: https://bit.ly/30a5f4I LinkedIn Nico: https://bit.ly/2QJtSCf
This is our second podcast in a two-part series about race. In this podcast, we discuss the role of race in medicine. We review the pros and cons of considering race in medicine. We also talk about the origins of some of the most common race-based medical stereotypes. Finally, beyond human interactions, we reveal how implicit biases can become ingrained in the very algorithms and systems that decide our care and further increase health disparities.Sources:Science “Race and Medicine” – Constance Holden 2003BMC Health Services Research “Is race medically relevant? A qualitative study of physicians’ attitudes about the role of race in treatment decision-making” - Shedra Amy Snipes et al. 2011Skin tone: https://www.washingtonpost.com/lifestyle/2020/07/22/malone-mukwende-medical-handbook/The problem with race-based medicine:https://www.ted.com/talks/dorothy_roberts_the_problem_with_race_based_medicine?language=enRedheads and pain: https://www.pnas.org/content/100/8/4867Black patients and pain:https://www.aamc.org/news-insights/how-we-fail-black-patients-painhttps://www.pnas.org/content/113/16/4296 Meta-analysis of disparities in pain meds:https://pubmed.ncbi.nlm.nih.gov/22239747/In-group bias:http://www.psych.nyu.edu/vanbavel/lab/documents/Mende-Siedlecki.etal.2019.JEPG.pdfSpirometers: “Breathing Race into the Machine” by Lundy BraunOrigin of myths about physiological racial differences: https://www.nytimes.com/interactive/2019/08/14/magazine/racial-differences-doctors.htmlJohn Brown’s personal accounts:https://docsouth.unc.edu/neh/jbrown/jbrown.htmlGlomerular Filtration rates and 
race:https://www.kidney.org/sites/default/files/docs/12-10-4004_abe_faqs_aboutgfrrev1b_singleb.pdfhttps://pubmed.ncbi.nlm.nih.gov/9214396/https://medicine.uw.edu/news/uw-medicine-exclude-race-calculation-egfr-measure-kidney-function#:~:text=As%20of%20June%201%2C%202020,excludes%20race%20as%20a%20variable.https://www.kidney.org/news/establishing-task-force-to-reassess-inclusion-race-diagnosing-kidney-diseases Experience of Black physicians:https://www.npr.org/sections/health-shots/2020/07/01/880373604/to-be-young-a-doctor-and-black-overcoming-racial-barriers-in-medical-trainingBiDil: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(12)60052-X/fulltexthttps://www.nature.com/articles/6500489 CYP2D6 gene: https://www.nature.com/articles/gim201680Race and pharmacogenetics: https://www.nature.com/scitable/topicpage/pharmacogenetics-personalized-medicine-and-race-744/Sickle cell disease: https://www.cdc.gov/ncbddd/sicklecell/features/keyfinding-trait.htmlhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC2093356/https://digitalwindow.vassar.edu/cgi/viewcontent.cgi?article=1334&context=senior_capstoneBias in Healthcare algorithm:Science “Dissecting racial bias in an algorithm used to manage the health of populations” Obermeyer et al. 2019Nature “Millions affected by racial bias in health-care algorithm” Heidi Ledford 2019https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
Carmen & Mike talk skiing in Colorado, losing creativity as we age and the ghetto in Cleveland.Contact us at: ciricillo@comcast.netGo to Carmen's Website: http://www.carmenciricillo.com/
Taylor Lundquist and Laura Obermeyer join Freedle Coty and Conor Smith for a conversation about their short film Jyosei, the value of both filming with all-ladies groups and mixing it up with the boys, Taylor's hectic 2020 season, future endeavors, and more. Visit www.podcast.level1.ski to watch Jyosei as well as a handful of other relevant content collected for your convenience. First aired 8-17-20. Intro track: "REALiTi (Demo)" - Grimes
Welcome to Souls Outside!In this episode…We open by chatting about the things we put aside when we think we should be focusing elsewhere. Next, our featured guest Dr. Jackie Obermeyer joins us for a “fireside” chat before sharing how we can follow in her soulprints by guiding us through a sound meditation!! Headphones highly encouraged!---As always, we have AUDIO, VIDEO & PRINT versions of this content – choose how you prefer to engage!And, join us on Facebook or LinkedIn to join in the conversation, dive deeper into each episode and share what you’ve got on the topic! Plus, be one of the first viewers of each episode by joining us on Facebook for a watch party, every Thursday as each new episode is released! ---Show Notes + Links to Gifts & More!0:00 Welcome & Overview of Today’s Episode0:59 Intro to Souls Outside1:22 Let’s start by chatting about the things we put aside when we think we should be focusing elsewhere. 7:56 We’re joined by Dr. Jackie Obermeyer to learn more about her journey to now!18:50 And let’s learn how to follow in Dr. Jackie Obermeyer’s soulprints as she guides us through a sound meditation!! Headphones highly encouraged!30:06 Thanks to our Founding Sponsors!31:17 P.S. Continue to journey in Dr. Jackie Obermeyer’s Soulprints with 50% off a 30-minute Chakra Activation Session! Use Code: soulsoutside when you book!Head to her website to stay in touch for upcoming events & workshops. Plus, check out her Sound Therapy Sessions! Currently offered virtually due to COVID! Jackie Obermeyer, PHD, Founder of Tuned Sound TherapyCombining her expertise in neuroplasticity with her 25 years of musical training and performance experience, Dr. 
Jackie Obermeyer has developed a unique strategy for optimizing energetic health and wellbeing.Working with specific frequencies and combinations of sound, she helps others disrupt limiting subconscious programming and implement new neuro-architecture to support the growth and evolution of their consciousness.Jackie believes that we all have the capacity to thrive in this world, and that all it takes is finding the right frequencies to support the life experiences that we seek.
Our guest is Frank Obermeyer, who played in the 1960s on a team with Jupp Heynckes and Hans Siemensmeyer. For over 20 years he has been team manager of the Hannover 96 Traditionself (legends squad). First, Christian Herde and Dennis Draber talk through the many facets of the defeat in Bielefeld: the dreadful weather, the long-awaited return of center-back Timo Hübers, the missing beard of his defensive partner Josip Elez, the huge chance wasted by Cedric Teuchert, and the extremely frustrating goal conceded shortly before the end. Then Frank Obermeyer joins in and gives us his assessment of the match. The 96 stalwart has a concrete vision for the future of the Reds and has already shared it with Martin Kind: Frank wants more former 96 players to take on responsibility and hold leadership positions. Among the possible candidates he names are legends of the recent past such as Steven Cherundolo and Altin Lala. The two, by the way, still take the field for Hannover 96 to this day: in the Traditionself, whose team manager is Frank Obermeyer. The squad comes together about 10 times a year to play benefit matches and major tournaments abroad. Frank tells us what makes the Traditionself so special in episode 29 of 96Freunde, the Hannover podcast. P.S.: We also talk, of course, about the Monday match against Holstein Kiel.
Earthworms are easy … to find. But despite their prevalence and importance to ecosystems around the world, there hasn't been a comprehensive survey of earthworm diversity or population size. This week in Science, Helen Philips, a postdoctoral fellow at the German Centre for Integrative Biodiversity Research and the Institute of Biology at Leipzig University, and colleagues published the results of their worldwide earthworm study, composed of data sets from many worm researchers around the globe. Host Sarah Crespi gets the lowdown from Philips on earthworm myths, collaborating with worm researchers, and links between worm populations and climate. Read a related commentary here. Sarah also talks with Ziad Obermeyer, a professor in the School of Public Health at the University of California, Berkeley, about dissecting out bias in an algorithm used by health care systems in the United States to recommend patients for additional health services. With unusual access to a proprietary algorithm, inputs, and outputs, Obermeyer and his colleagues found that the low amount of health care dollars spent on black patients in the past caused the algorithm to underestimate their risk for poor health in the future. Obermeyer and Sarah discuss how this happened and remedies that are already in progress. Read a related commentary here. Finally, in the monthly books segment, books host Kiki Sanford interviews author Alice Gorman about her book Dr. Space Junk vs The Universe: Archaeology and the Future. Listen to more book segments on the Science books blog: Books, et al. This week's episode was edited by Podigy. Ads on this week's show: The Tangled Tree: A Radical New History of Life by David Quammen; MEL Science Download the transcript (PDF) Listen to previous podcasts. About the Science Podcast [Image: Public domain; Music: Jeffrey Cook]
Touch HD — Erika Obermeyer is the owner and winemaker of Erika Obermeyer Wines, her very own venture following a career spanning 15-odd years between Kleine Zalze and Graham Beck. We chat to her and partner in crime, Neil Germishuis, who owns up to being “Chief Cook and Bottlewasher” in the business.
Porpoise Crispy Podcast Volume #8 Episode #15 Trazodone Curated by Bleepo Sarcophagus/Ryan Obermeyer September 17, 2019 Windowlicker Aphex Twin Aphex Twin Sesame Syrup Cigarettes After Sex Crush The Game of Love (Good BPM Edition) Daft Punk Daft Punk All Flowers In Time Jeff Buckley & Elizabeth Fraser B-sides Do You Know Where Your Children Are Michael Jackson zz - various artists SOS (Theatre Of Delays Remix) Portishead Third The Wild Ones The Push Kings zz - various artists Barracuda (live) Rasputina The Lost & Found Same Ol’ Mistakes Rihanna Rihanna Lullaby (Acoustic) The Cure The Cure (Acoustic) The Porpoise Crispy is only an hour of music so I know you’ve got time to enjoy these bad asses of the Internets: The Westerino Show Funkytown Bayerclan Squirreling Podcast Secretly Timid Getting It Out
Fashion is a tough scene. Even when you manage to break into the fashion industry, how do you get into your dream category? With so much competition for design jobs, it seems like luck can play the biggest factor in many designers’ careers. But what if you could make luck work for you? In this episode, we spoke to Allison Juhasz. Allison has spent ten years in the industry, designing for big outdoor apparel names like Scott, Under Armour, Obermeyer, and Ultimate Direction. These opportunities were open to Allison because she made the right moves at the right times. She readily admits that she’s been lucky--but she shares tons of ways that you can become lucky too! Follow her lead, and boost your chances of getting to design for the category YOU want most. In the interview (which you’ll love), we will cover: How she got into fashion--with a bachelor’s in marine biology How she got “lucky” with jobs--again and again! Why she left her first dream job (and what she would have done differently) Why she quit another job many designers would kill for--without a job lined up! Her tips for networking when it doesn’t come naturally to you How she has scored more great opportunities over the years Why working for a big brand isn’t always the best option How she spends her days in a smaller company with diverse aspects to her role Details about the product design and development process And more! Resources & People Mentioned Allison on Instagram Allison on LinkedIn Ultimate Direction Successful Fashion Designer: Free Resources for Fashion Designers! Enjoy the show? Help us out by: Rating us on iTunes – it really helps! Subscribing on iTunes Subscribing on YouTube Subscribing on Stitcher Subscribing on Google Play Subscribing on Spotify
Eric Rhoads welcomes California landscape painter Michael Obermeyer, whose paintings are in the U.S. Air Force Historical Art Collection in the Smithsonian Institution and the Pentagon.
Gary Obermeyer: Looking at Education | Steve Hargadon | Jan 29 2013 by Steve Hargadon
That’s Angie Obermeyer, née Johnson. Semi-professional dancer, reluctant advertiser, wife, and mother of three. She’s a factory sleeper - you wouldn’t know she was a sucker for technique while chatting on the playground after school, but once you get her talking about it, her passion for dance becomes immediately apparent, as is her drive to make dance performances more accessible to the general public.
Klaus Obermeyer is a living legend. He has had the amazing privilege to see every technological advancement in skiing from the very beginning of the sport. He is 98 years old and still has a great passion for the sport. If you’re in Aspen, you may even run into him on the mountain. Tune in to hear Klaus discuss the early days of skiing, his method for teaching beginners, and his secret to a long and healthy life. Topics: [01:55] Klaus made his first pair of skis at two years old. [02:08] He used the chestnut boards from some orange crates. [03:06] He built a small jump out of snow and generally had a great time sliding around on snow. [03:30] When he was around 4 or 5 years old, a Norwegian man made him a pair of real skis. [04:45] A doctor in Hamburg made the first metal ski edges. [06:05] People used different types of wood to make skis, but Americans used hickory. Hickory is tough, but flexible. [08:58] Klaus made sure that when teaching beginners, he wouldn’t do anything to scare them; scared skiers are stiff skiers. [10:25] When snowboarding came around, it influenced the shape of skis. The shorter and wider skis are great for skiing in heavy, chunky snow. [13:00] Klaus worked to create ski clothing that enhanced the skiing experience; they wanted to make warm, comfortable clothing. [14:25] Klaus still skis, but won’t ski in a storm or when it’s icy. [14:58] At his age, he finds it easier to ski than it is to walk. [15:32] Klaus says the key is to not eat more calories than you burn, work out every day, keep your bones under pressure, and make sure your body is always used to working. [16:15] Never give up working out; Klaus likes swimming. [17:25] Klaus learned a lot about skiing from a sheep herder, who was the first person who knew how to make parallel turns. [18:10] The sheep herder skied to school every day. [22:00] Norwegians skied for reasons of survival. 
[24:55] In terms of keeping skiing popular, Klaus says to “just let it happen” and “enjoy the feeling of sliding on snow.” Quotes: “It was a pleasure to see how these skis got...a little bit better. And the sport of skiing kept changing…” -Klaus Obermeyer “...In 1947, there was practically no ski clothing...We developed a lot of it and then got copied by people. The aim was to make ski clothing that makes skiing more enjoyable…” -Klaus Obermeyer “At this point of my age, at 98 and a half years old, it’s easier to ski than it is to walk.” -Klaus Obermeyer Resources: Wagner Custom Skis Klaus’ Biography on Obermeyer’s Website
Hi everyone! Episode #31 is on and Laura Obermeyer is back. Laura is a photographer, skier, and a member of the Obermeyer family. We talked about all of the awesome things she has coming down... The post E31 – Laura Obermeyer #2 appeared first on Out of Bounds Podcast.
This week on the Exercise Is Health podcast, Julie and Charlie interview Dr. Thomas Obermeyer of Barrington Orthopedic Specialists to discuss one of the most common shoulder issues plaguing many people today - rotator cuff disease. What is rotator cuff disease? What are contributing factors to it? What can be done about it? Check out all of this and more in this week's episode!
Adam Jaber talks with Laura Obermeyer about a wide range of topics including her involvement with the family business, Obermeyer. She also discusses her love of photography and how she ties that in with her... The post E12 Laura Obermeyer – Skiing, Photography, and The Family Business appeared first on Out of Bounds Podcast.
A strange story of time and redemption to close the year. Wilson Fowlie narrates.
Dr. Ziad Obermeyer is an emergency medicine physician at Brigham and Women's Hospital and an assistant professor at Harvard Medical School. Stephen Morrissey, the interviewer, is the Managing Editor of the Journal. Z. Obermeyer and T.H. Lee. Lost in Thought - The Limits of the Human Mind and the Future of Medicine. N Engl J Med 2017;377:1209-11.
Travis Martin's Weight Loss Ministry and Shibboleth Lifestyle
Alisa Obermeyer is down more than 40 pounds. She looks 10-15 years younger and is feeling fantastic. She has even started a weight loss support group in her home, helping others lose weight and transform their lives and health just like she has. Hear more in this exciting live interview.
So have you heard about the Red Thread Project yet? On this episode you'll hear from Chicago fiber artist Lindsay Obermeyer, creator of the fantastic Red Thread Project, which is under way in Grand Rapids, Michigan.